[EDITED 05/08/2009: see here] The majority of people I’ve talked to like the idea of revolutionizing angel funding. Among the skeptical minority, there are several common objections. Perhaps the weakest is that individual angels can pick winners at the seed stage.
Now, those who make this objection usually don’t state it that bluntly. They might say that investors need technical expertise to evaluate the feasibility of a technology, or industry expertise to evaluate the likelihood of demand materializing, or business expertise to evaluate the plausibility of the revenue model. But whatever the detailed form of the assertion, it is predicated upon angels possessing specialized knowledge that allows them to reliably predict the future success of seed-stage companies in which they invest.
It should be no surprise to readers that I find this assertion hard to defend. Given the difficulty, in principle, of predicting the future state of a complex system from its initial state, one would need very strong evidence to support such a claim, and I haven’t seen any from proponents of angels’ abilities. Moreover, the general evidence on humans’ ability to predict these sorts of outcomes makes it unlikely that any person has a significant degree of forecasting skill in this area.
First, there are simply too many random variables. Remember, startups at this stage typically don’t have a finished product, significant customers, or even a well-defined market. It’s not a stable institution by any means. Unless a lot of things go right, it will fall apart. Consider just a few of the major hurdles a seed-stage startup must clear to succeed.
- The team has to be able to work together effectively under difficult conditions for a long period of time. No insurmountable personality conflicts. No major divergences in vision. No adverse life events.
- The fundamental idea has to work in the future technology ecology. No insurmountable technical barriers. No other startups with obviously superior approaches. No shifts in the landscape that undermine the infrastructure upon which it relies.
- The first wave of employees must execute the initial plan. They must have the technical skills to follow developments in the technical ecology. They must avoid destructive interpersonal conflicts. They must have the right contacts to reach potential early adopters.
- Demand must materialize. Early adopters in the near term must be willing to take a risk on an unproven solution. Broader customers in the mid-term must get enough benefit to overcome their tendency towards inaction. A repeatable sales model must emerge.
- Expansion must occur. The company must close future rounds of funding. The professional executive team must work together effectively. Operations must scale up reasonably smoothly.
As you can see, I listed three examples of minor hurdles associated with each major hurdle. This fan-out would expand to 5-10 if I made a serious attempt at exhaustive lists. Then there are at least a dozen or so events associated with each minor hurdle, e.g., identifying and closing an individual hire. Moreover, most micro-events occur repeatedly. Compound all the instances together and you have an unstable system bombarded by thousands of random events.
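The arithmetic of compounding makes the point concrete. Here’s a back-of-the-envelope sketch in Python; the per-event probability and event count are invented purely for illustration:

```python
# Back-of-the-envelope illustration; both numbers are invented.
p_event = 0.99    # assumed chance any single micro-event goes right
n_events = 500    # assumed count of compounded random events

p_survive = p_event ** n_events
print(f"P(all {n_events} events go right) = {p_survive:.4f}")
```

Even when each individual event goes right 99% of the time, compounding 500 of them leaves well under a 1% chance that everything goes right, and the real event count is plausibly in the thousands.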
Enter Nassim Taleb. In Chapter 11 of The Black Swan, he summarizes a famous calculation by physicist Michael Berry: to predict the 56th impact among a set of billiard balls on a pool table, you need to take into account the position of every single elementary particle in the universe. Now, the people in a startup have substantially more degrees of freedom than billiard balls on a pool table and, as my list above illustrates, they participate in vastly more than 56 interactions over the early life of a startup. I think it’s clear that there is too much uncertainty to make reliable predictions based on knowledge of a seed-stage startup’s current state.
“Wait!” you may be thinking, “Perhaps there are some higher level statistical patterns that angels can detect through experience.” True. But I’ve pored over the academic literature and haven’t found any predictive models, let alone seen a real live angel use one to evaluate a seed-stage startup. “Not so fast!” you say, “What if they are intuitively identifying the underlying patterns?” I suppose it’s possible. But most angels don’t make enough investments to get a representative sample (1 per year on average). Moreover, none of them that I know systematically track the startups they don’t invest in to see if their decision making is biased towards false negatives. Even if there were a few angels who cleared the hundred mark and made a reasonable effort to keep track of successful companies they passed on, I’d still be leery.
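The sample-size point can be made concrete with a standard-error calculation. A rough sketch, with hypothetical hit rates:

```python
import math

# Hypothetical numbers for illustration: suppose a "hit" (big
# success) has a 10% base rate, and genuine skill could lift it
# to 20%. How noisy is an observed hit rate after n investments?
base_rate = 0.10
ses = {n: math.sqrt(base_rate * (1 - base_rate) / n)
       for n in (10, 100, 1000)}
for n, se in ses.items():
    print(f"after {n:4d} investments: observed hit rate = truth ± {2*se:.2f}")
```

At one investment a year, a decade of data pins down an angel’s hit rate only to within roughly ±19 percentage points, noise wider than the entire hypothesized skill edge. Only well past the hundred-investment mark does the noise shrink enough to say anything about skill.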
You see, there’s actually been a lot of research on just how bad human brains are at identifying and applying statistical patterns. Hastie and Dawes summarize the state of knowledge quite well in Sections 3.2-3.6 of Rational Choice in an Uncertain World. In over a hundred comparisons of human judgment to simple statistical models, humans have never won. Moreover, Dawes went one better. He actually generated random linear models that beat humans in all the subject areas he tried. No statistical mojo to determine optimal weights. Just fed in a priori reasonable predictor variables and a random guess at what their weights should be.
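Dawes’s result is easy to reproduce in miniature. The sketch below uses entirely synthetic data (all numbers invented): an outcome driven by a few valid cues, a “judge” who weighs those cues inconsistently from case to case, and random linear models with fixed, randomly chosen positive weights and no fitting at all:

```python
import random

random.seed(0)

# Synthetic data: outcomes depend linearly on K valid cues plus noise.
N, K = 2000, 5
true_w = [1.0, 0.8, 0.6, 0.4, 0.2]          # how cues really matter
cases = [[random.gauss(0, 1) for _ in range(K)] for _ in range(N)]
outcome = [sum(w * x for w, x in zip(true_w, c)) + random.gauss(0, 1)
           for c in cases]

# "Human" judgment: right direction on every cue, but weights applied
# inconsistently from case to case, plus extraneous noise.
human = [sum((w + random.gauss(0, 1)) * x for w, x in zip(true_w, c))
         + random.gauss(0, 2) for c in cases]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Twenty random linear models: fixed positive weights, no fitting.
model_corrs = []
for _ in range(20):
    rw = [random.uniform(0, 1) for _ in range(K)]
    preds = [sum(w * x for w, x in zip(rw, c)) for c in cases]
    model_corrs.append(corr(preds, outcome))

human_corr = corr(human, outcome)
avg_model_corr = sum(model_corrs) / len(model_corrs)
print(f"human judge vs outcome:      r = {human_corr:.2f}")
print(f"avg random linear model:     r = {avg_model_corr:.2f}")
```

The random-weight models reliably out-predict the inconsistent judge. The trick isn’t clever weighting; it’s that a model, unlike a human, applies the same weights every time.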
Without some sort of hard data amenable to objective analysis, subjective human judgment just isn’t very good. And at the seed stage, there is no hard data. The evidence seems clear. You are better off making a simple list of pluses and minuses than relying on a “gut feel”.
The final line of defense I commonly encounter from people who think personal evaluations are important in making seed investments goes something like, “Angels don’t predict the success of the company, they evaluate the quality of the people. Good people will respond to uncertainty better and that’s why the personal touch yields better results.” Sorry, but again, the evidence is against it.
This statement is equivalent to saying that angels can tell how good a person will be at the job of being an entrepreneur. As it turns out, there is a mountain of evidence that unstructured interviews have little value in predicting job performance. See, for example, “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings” [EDITED 10/17/2011: New link to paper because old one was stale]. Once you have enough data to determine how smart someone is, performance in an unstructured interview explains very little additional variance in job performance. I would argue this finding applies especially to entrepreneurs, whose job tasks aren’t clearly defined. Moreover, given how many random factors beyond the founders’ performance affect startup success, I think it’s hard to justify making interviews the limiting factor in how many investments you can make.
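The “very little additional variance” claim is, at bottom, arithmetic about correlated predictors. A sketch with invented (though plausible) correlations; the real meta-analytic values are in the paper cited above:

```python
# Invented correlations for illustration only.
r_gma = 0.5   # assumed: general intelligence vs. job performance
r_int = 0.4   # assumed: unstructured interview vs. job performance
r_gi = 0.7    # assumed: interview score vs. intelligence (they overlap)

# R-squared from intelligence alone, then from both predictors
# (standard two-predictor multiple-correlation formula).
r2_gma = r_gma ** 2
r2_both = (r_gma**2 + r_int**2 - 2 * r_gma * r_int * r_gi) / (1 - r_gi**2)
print(f"variance explained by intelligence alone: {r2_gma:.3f}")
print(f"variance explained adding the interview:  {r2_both:.3f}")
```

Because the interview largely re-measures intelligence, adding it moves explained variance from 0.250 to roughly 0.255, i.e., almost nothing.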
Why then are some people so insistent that personal evaluation is important? Could we be missing something? Always a possibility, but I think the explanation here is simply the illusion of control fallacy. People think they can control random events like coin flips and dice rolls. Lest you think this is merely a laboratory curiosity, check out the abstract of this Fenton-O’Creevy et al. study of financial traders. The higher their illusion of control scores, the lower their returns.
I’m always open to new evidence that angels have forecasting skill. But given the overwhelming general evidence against the possibility, it had better be specific and conclusive.