Mint.com: a perfect banking use case for OAuth

Mint.com provides a free service that, with your authorization, connects to your bank(s), retrieves your account information and all your transactions, and provides value-added services on top of that data. In particular, it lets you see where your money is spent and gives you hints on how you could save. I personally think it is a superbly designed UI to the user data held at banks, one that shows how much value there is to unlock, and also how much more efficient startups can sometimes be than the banks themselves at delivering innovative services. Yodlee, a partner of Mint.com, was a dot-com-era example of this, and Mint.com might be its Web 2.0 equivalent.

One problem is that Mint.com requires you to provide the actual username and password for your bank's online banking service, the same credentials you use not just to view transactions, but also to make payments, transfers, etc. This approach has several drawbacks. One is that the user can be legitimately concerned about what would happen if this information were compromised. Right now, Mint.com reassures its customers by saying: “We don’t store your info, Yodlee does”.

Another problem is that every time your online banking username or password changes, Mint.com stops working until you reconfigure it:

Mint.com screenshot showing: wrong username/password

Not very convenient. Add to this the fact that you have no guarantee about what happens to your username/password when you want to terminate your relationship with Mint.com/Yodlee, and you may well decide not to use this application for now.

An implementation of the OAuth protocol between Yodlee/Mint.com, the user and the bank could solve all of these problems at once, and would certainly further drive user adoption.

OAuth describes itself as a protocol for “secure API authentication”, but a better way to put it is that OAuth is a way for users to grant one online service (B) controlled access to their data hosted at another online service (A). To use the car metaphor: if your data at online service A were a car, and online service B were the valet, OAuth would be the way for the valet to ask you for your car’s valet key, with which he can only drive a few miles and can’t open the trunk, and the way for you to hand him that key. In security jargon, OAuth lets you delegate capabilities on your data to other applications, in the form of signed tokens, i.e. authorizations to do specific things with specific data, signed with your identity. The beauty of it is that, because these capabilities are signed by you, online service B can present them to access your data at online service A without you ever providing your identity credentials (typically a username/password) to online service B.

Coming back to the Mint.com/Yodlee use case, here is how it would work (a code sketch follows the list):

  • The user would go to Mint.com to request access to his data at the bank.
  • Mint.com would request from his bank a token for a specific capability, for instance retrieving transaction data.
  • Upon receiving this request token, Mint.com would redirect the user to the bank’s token authorization page.
  • The user then authorizes the token (logging in first if he is not already logged in).
  • Mint.com can then exchange the request token for an access token, and access the user’s data as requested and as the user authorized, until the token is invalidated or expires.
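To make the flow concrete, here is a minimal sketch of the consumer (Mint.com) side of this three-legged dance, written in today's terms. The bank's endpoint URLs, the scope parameter and the post helper are all hypothetical, and a real consumer would also sign every request (e.g. with HMAC-SHA1) as the OAuth spec requires:

```typescript
// Hypothetical helper: POSTs form-encoded params and parses the
// form-encoded response. A real OAuth consumer would also add
// oauth_consumer_key, a timestamp, a nonce and an HMAC-SHA1 signature.
async function post(url: string, params: Record<string, string>) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams(params).toString(),
  });
  return Object.fromEntries(new URLSearchParams(await res.text()));
}

const BANK = "https://bank.example.com/oauth"; // hypothetical endpoints

async function startAuthorization(): Promise<string> {
  // Step 1: ask the bank for an unauthorized request token, scoped
  // to a specific capability (read-only access to transactions).
  const { oauth_token } = await post(`${BANK}/request_token`, {
    scope: "transactions:read", // a capability, not full account control
  });
  // Step 2: send the user to the bank so that *he* authorizes the token.
  return `${BANK}/authorize?oauth_token=${oauth_token}`;
}

async function completeAuthorization(requestToken: string) {
  // Step 3: once the user has approved, exchange the request token for
  // an access token; this token, never a password, is what gets stored.
  return post(`${BANK}/access_token`, { oauth_token: requestToken });
}
```

The key point is that the only secrets the consumer ends up storing are the access token and its secret, both of which the bank can invalidate at any time without the user ever changing his password.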

Here are the benefits of OAuth for the user in this use case:

  • Mint.com/Yodlee never know the username/password used to log in at the bank.
  • When the user changes his username/password, Mint.com/Yodlee can still retrieve transaction data.
  • When the user decides to terminate the relationship with Mint.com/Yodlee, he knows they never had his username/password, and he knows they can’t access his data anymore.
  • When he doesn’t want to use Mint.com/Yodlee anymore, he can simply invalidate the token at the bank.

The big question is: how much work would be involved at banks and Yodlee to support OAuth, and in particular, what would they have to change?

Bank of America Online Banking’s user-friendly password strength indicator

Like many Web services, Bank of America Online Banking gives you real-time feedback about the strength of your new password when you change it. What’s great about their implementation is that it shows a thumbs-up indicator for each security rule the password must comply with, updated live as the user types. This technique is the best I’ve seen so far at guiding the user to a secure password in a short amount of time, improving an experience that is generally frustrating given how little users value these ever-increasing security requirements (as with anything security-related, users don’t value it until something bad happens to them).

This is a nice evolution from indicators that merely tell you whether the password is low or high strength, or, even worse, password management systems like P-synch that only tell you what’s wrong with your new password after you have hit the reset button, forcing you to enter the password and hit that button multiple times.

Before the new password is entered, only 2 of the 4 rules show a thumbs up (one might argue that all of them should be disabled until the user starts to type)

Password strength indicators prior to passcode change

As the new password is typed, the thumbs-up indicators turn green or red, until all are green and the user knows he can hit the reset button without fear that the new password will be rejected.

Password strength indicators during passcode change
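The technique looks trivial to implement, which makes its rarity all the more surprising. Here is a minimal sketch of per-rule live feedback; the four rules and the element IDs are hypothetical examples, not Bank of America's actual policy:

```typescript
// Hypothetical password rules, each with a label shown next to its indicator.
const rules: { label: string; test: (pw: string) => boolean }[] = [
  { label: "8 or more characters", test: (pw) => pw.length >= 8 },
  { label: "contains a number", test: (pw) => /\d/.test(pw) },
  { label: "contains an upper-case letter", test: (pw) => /[A-Z]/.test(pw) },
  { label: "contains no spaces", test: (pw) => !/\s/.test(pw) },
];

// Re-evaluate every rule and redraw its thumbs up/down indicator.
function renderIndicators(pw: string): string[] {
  return rules.map((r) => `${r.test(pw) ? "👍" : "👎"} ${r.label}`);
}

// Update the indicators on every keystroke, so the user knows the reset
// button will succeed before he even presses it.
const input = document.querySelector<HTMLInputElement>("#new-password")!;
const list = document.querySelector<HTMLUListElement>("#rules")!;
input.addEventListener("input", () => {
  list.innerHTML = renderIndicators(input.value)
    .map((line) => `<li>${line}</li>`)
    .join("");
});
```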

Hosting your very own OpenID with phpMyID

Why host your own OpenID? For me, it’s a way to stay committed to a fully decentralized Web.

phpMyID is a very simple standalone, single-user OpenID identity provider developed by CJ Niemira. It does not require access to a database, as everything is stored in the .php file.

It is absolutely perfect for a personal Web site.

Installation only took me about 30 minutes by following the very detailed installation instructions. My OpenID is http://lebleu.org/openid (I am going to move it to https as soon as Laughing Squid enables SSL on my site).

I have already tried my new very own OpenID with SourceForge.net and OpenIdDirectory.com (to add a vote for phpMyID) and it works great. SourceForge.net only recently (May 5) added support for OpenID. If you already have an account with SF, you bind your OpenID to your existing account by logging in with your OpenID and then logging in again with your existing account name/password. I found that logging in with OpenID on SF takes a bit more time than with a username/password, but I guess this will improve over time, and it is a minor pain compared to the convenience of having a single registration and sign-on facility that I control 100%.

Impressive JavaScript Interactive Graphics Library: processing.js

For fans of interactive graphics programming and data visualization out there (like me): Processing.js is a 10 KB JavaScript implementation of the open source Processing interactive graphics programming language. It is a side project of John Resig (of jQuery fame).

Start the latest Firefox 3 beta and look at the bottom of the Processing.js page for demonstrations that will give you an idea of this library's potential.

To open source or not? or to do both? Open source as market segmentation tool.

Several years ago, I found myself confronted with the decision of whether or not to open source the software of a company I co-founded. While I could find considerable literature on the strategic benefits of open-source freeware as a modern version of the razor-and-blades strategy (give away the client or the development tool, i.e. the razor; sell the server or the runtime, i.e. the blades), our software did not lend itself very well to such a separation.

My question was rather: if we had to choose between open sourcing both razors and blades or nothing at all, which model would maximize our revenues, given our target market?

By elaborating on the simple question of “why give away something you can charge for?”, I developed the chart below to help me discuss the decision with my colleagues. The idea is not to view open source as an all-or-nothing strategy, but rather as a marketing technique to segment your market and maximize revenue, except that in the open source case, the revenue is mostly intangible.

Diagram of the different segments of customers depending on their budget size and perceived value of the software

According to traditional market segmentation strategies, customers with large budgets and a high perceived value of our software should be charged full price and get all the features and rights, while customers with small budgets and a lower perceived value of our software should be charged less for a slimmed-down version of the software.

What’s new here is the category of users who don’t have large budgets themselves (though the boss of their boss may, as is often the case with developers) but who see a lot of value in the software. These users may not be able to sign a check, but they can bring more eyeballs to fix bugs, post feature requirements that are common to all users, checkbook or not, or simply spread the word about your software at a fraction of the cost, and far more credibly, than any PR firm. That’s not direct revenue, but it certainly contributes to higher margins by reducing the cost of goods sold.

In most cases, the distribution of target customers is such that a single model (commercial or open source) will dominate, as was the case for us:

Diagrams representing 3 common cases of the distribution of customers according to their budget size and perceived value of the software offered

But if the distribution of customer types is such that no single model dominates, this segmentation can actually be implemented using dual licensing, where one category of customers is separated from another and charged differently, by offering both a commercial license and an open-source, usually GPL-like, license. What’s even better is that customers get to choose which category they belong to (in practice, their lawyers tell them).

Designing a successful Web API

Designing the most widely adopted Web API for a particular functional domain takes more than offering the best and most specialized functionality at the best price, as a reading of Adam Smith would suggest.

It takes two additional things:

  1. reducing the cognitive load on the developer using the Web API, and
  2. reducing the risks he takes in using it.

Reducing the cognitive load includes:

  • Making it easy to find on search engines
  • Not requiring registration for an API key to get access to a version of the API that is not production grade (for instance,  a version that is much slower than the production version)
  • Providing multiple representations (XML, XHTML, JSON, plain text, binary, etc.) for messages, and multiple language bindings for the most popular languages in the target developer community (see the content-negotiation sketch after this list)
  • Good documentation, including a step-by-step tutorial
  • Examples with source code
  • On-line test tools that allow developers to test the functionality prior to coding
  • Building upon what target developers already know. Depending on the target developer community, this may mean starting with a REST/WOA API or a SOAP/SOA API.
  • Building upon what API users have already learnt so far about the API: learning one functionality should make the next one easier to learn. This includes using precise semantics and re-using them consistently, defining and sticking to naming and message structure conventions.
  • A variety of one-to-many media (wikis, archived mailing lists, chat channels, etc.) for user-generated support and documentation.
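On the multiple-representations point, here is a minimal sketch (with a hypothetical resource and data) of serving several formats of the same message through HTTP content negotiation, using Express as an example framework:

```typescript
import express from "express";

const app = express();
const rate = { currency: "USD", value: 1.25 }; // hypothetical resource

// One URL, several representations: the client picks via the Accept header.
app.get("/webapi/1.0/rate", (_req, res) => {
  res.format({
    "application/json": () => res.json(rate),
    "application/xml": () =>
      res
        .type("application/xml")
        .send(`<rate currency="${rate.currency}">${rate.value}</rate>`),
    "text/plain": () => res.send(`${rate.currency} ${rate.value}`),
  });
});

app.listen(8080);
```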

Reducing the risks involves:

  • Being very clear upfront about what the API does and does not do, and how it compares with others (to reduce the perceived risk that the developer will spend time on something only to discover down the road that it does not address his requirements).
  • Ensuring the highest quality, up-time, availability and performance for the production version, and providing evidence of your commitment to these goals by using long betas (1-2 years) for new versions.
  • Ensuring backward compatibility: code written against version 1.0 should still work when version 3.0 of the API is out. This means that your application should support 1.0 through 3.0 concurrently until no one uses 1.0 any more, and that versions should be explicit in URLs (ex. http://mycompany.com/webapi/1.0/…); see the versioning sketch after this list.
  • Providing evidence that the Web API will be around for some time, because it is backed by a sound business model that can sustain its growth, maintenance and developments.
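On the versioning point, here is a minimal sketch of explicit URL versioning, again with Express and hypothetical endpoints: the old and new versions are mounted side by side, so 1.0 clients keep working after 3.0 ships.

```typescript
import express from "express";

const app = express();

const v1 = express.Router();
v1.get("/transactions", (_req, res) => {
  res.json({ transactions: [] }); // the 1.0 response shape, frozen forever
});

const v3 = express.Router();
v3.get("/transactions", (_req, res) => {
  // 3.0 may change the response shape, but only under its own URL prefix.
  res.json({ items: [], nextPage: null });
});

// The version is explicit in the URL, so both generations coexist:
// GET /webapi/1.0/transactions and GET /webapi/3.0/transactions.
app.use("/webapi/1.0", v1);
app.use("/webapi/3.0", v3);

app.listen(8080);
```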

The Bankwatch: “Community currency enhances community value”

The Bankwatch has a post about a community currency this morning. The benefits described are in line with my earlier post on community currencies. This is the first time I have seen a community currency-related post on The Bankwatch. I expect to see more of these, from the specialized press first and then from the general press, as the trust crisis in national paper currencies, especially the U.S. dollar, develops.

Portfolio update

Since my last post on April 1st, I have been adding shorts to my portfolio (SDS, DOG, SKF, SCC, SZK) little by little and liquidating my long stock positions. I sold GOOG at $533 on the 18th and AAPL today at $161 ahead of earnings. I think the short-term rally of recent weeks is coming to an end as investors start to realize that good first-quarter earnings are not necessarily a reflection of things to come.

Regarding AAPL specifically, I think the analyst consensus has moved too quickly to Apple beating expectations by a huge margin, and most of those expectations are already built into the price. To me, this is the only thing that can explain why a single downgrade to NEUTRAL by AMR was enough to take more than 4% in one day off the recent high reached yesterday. Everyone is expecting a big profit jump, so any piece of bad news may have a huge impact. I’m expecting good results but a conservative view of the rest of the year, i.e. Apple will say that they are not recession-proof, which is precisely what I believe is priced in right now. In other words, I think the risk of going down (to $140/$130) is much higher than the chance of going up ($170/$180) tomorrow. Depending on what I read, I may re-establish my long position in AAPL, but for now, the downside risk is too high.

I share MacroMan’s thesis of long large caps, short mid caps, to the extent that large caps are typically the ones making the lion’s share of their money outside the U.S., and I am planning to adjust my mix of shorts accordingly. I’m also looking into buying a Brazilian stock ETF to bet on continued growth, leverage on the commodity boom, and the overall decline of the dollar. I will try to buy EWZ in the lower $80s if the end of the short-term U.S. rally takes it down too.

I’ve also restored positions in IAU (gold) and FXY (yen) as the dollar’s fall seems to never end. I’m losing the last of my naivety and starting to accept that neither the Fed, nor the Treasury, nor any other bank in the world can do much against what I can only describe as a growing global belief that the U.S. has lost its shine, and that it will take a long, long time, or an industrial miracle, before it wins it back, if it ever does.

I am now ~22% short, 13% cash (JPY), 47% cash (USD), and 18% gold. My portfolio is overall +8.11% YTD in USD before taxes, but -0.45% YTD in EUR. Pretty sad…

Community currencies: The future of money?

Did you know that it is entirely legal to print your own money in the U.S., as long as it does not resemble the U.S. dollar bill? I recently learned about this little-known fact. Many communities in the U.S. and around the world have their own local currency that complements the dollar: Ithaca Hours, BayBucks and Deli Dollars are among the best known. The stories behind each of these currencies are fascinating and inspiring, because they remind us of what money truly is: a unit of a trusted social contract.

During the late 80s, Taft Farms couldn’t raise money from banks to get through the winter. To solve its cash problem, the farm issued its own money: notes worth 10 U.S. dollars on which you could read “In Farms we Trust”, with a cabbage in place of Lincoln. People would buy a note for $9 in the fall, which gave them the right to buy $10 worth of produce in the spring. As you can hear from the owner himself in this archived video, the scheme worked because customers fundamentally trusted that these notes would have value in the spring, since they knew the notes would enable the farm to survive the winter. The Taft Farms note was simply a unit of that trusted social contract, and as such it had a much higher value than a nine-dollar note: nine U.S. dollars in autumn got you ten U.S. dollars worth of produce in the spring.

Taft Farm note

Depending on how you look at it, that’s an 11.11% interest rate over 6 months (compounded over a year, (10/9)² − 1 ≈ 23.46% annualized, which is a very good deal even if you assume a 5% yearly inflation rate of prices in U.S. dollars), or a 10% rebate. Fast forward almost 20 years to this NetBanker post, which describes the value of the social contract in the context of Prosper.com’s peer-to-peer lending community:

Prosper has found that people who receive at least one bid from friends or family have significantly lower default rates than those who only borrow from strangers. By leveraging this social capital, the entire community acts more honestly, even if lending to friends and family is a small part of the overall equation.

In other words, money lent this way has more value than, say, a brokered, securitized, and sliced package of hundreds of thousands of loans made to poor-credit, zero-down-payment house buyers in the booming housing market of the 2000s. No one who hasn’t been living under a rock for the last year should be surprised by that. Now, some of you may be surprised to learn that the money banks lend us is very different from the money you may lend on Prosper.com: most of the money lent by banks does not exist beforehand; they create it out of thin air. As John Kenneth Galbraith puts it:

The process by which banks create money is so simple that the mind is repelled.

I encourage you to read Galbraith’s fascinating book on the history of money, Money: Whence It Came, Where It Went, but to keep things simple: banks lend more than they actually hold in reserves at the central bank, while charging interest on all the money they lend. This is known as fractional reserve banking; it is how most money is created, and it is why the interest rate set by the Fed is so important. In comparison, Prosper.com and other peer-to-peer lending communities can be seen as a 100% reserve banking system. Like Taft Farms, banks take promises to pay (i.e. to provide value) in the future and exchange them for promises to pay now, charging interest for it. While Taft Farms did it in the form of a currency that could be redeemed for vegetables, banks do it in the form of a currency that is legal tender nationwide. Why would a bank not lend Taft Farms money even at such a rate? Because that money would not be a reflection of the trusted social contract between the farm and its customers, but simply of one between the farm and the bank.
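To see the mechanism at work, consider the textbook money-multiplier arithmetic with a reserve requirement of r = 10%: a $100 deposit lets a bank lend $90, which gets deposited and re-lent as $81, and so on:

100 + 90 + 81 + … = 100 × (1 + 0.9 + 0.9² + …) = 100 / 0.10 = $1,000

So $100 of base money can end up supporting as much as $1,000 of deposits, with all but the original $100 created by lending.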

Banks are useful simply because they have been given the right to issue contracts that become legal tender. Which leads to my guess: in difficult times of declining trust, community money and community currencies very likely have a much higher value than national currencies like the dollar, because they are based on a very tangible social contract that minimizes moral hazard.