Is it Just the Idea that Matters? A Randomized Field
Experiment on Early Stage Investments
Shai Bernstein, Arthur Korteweg, and Kevin Laws*
Abstract
Which start-up characteristics are most important to investors in early-stage firms? This paper
uses a randomized field experiment involving 4,500 active, high profile, early stage investors,
implemented through AngelList, an online platform that matches investors with start-ups that are
seeking capital. The experiment randomizes investors’ information sets on “featured” start-ups
through the use of nearly 17,000 emails. Investors respond strongly to information about the
founding team, whereas they do not respond to information about either firm traction or existing
lead investors. This result is driven by the most experienced and successful investors. The least
experienced investors respond to all categories of information. We present evidence that the
information materially impacts investment rates in start-up companies. The results suggest that,
conditional on the quality of the idea, information about human assets is highly important for the
success of early stage firms.
JEL classification: G32, L26, D23
Keywords: Angels, Early stage firms, Entrepreneurship, Crowdfunding, Theory of the firm
Current Draft: March, 2014
* Shai Bernstein ([email protected]) and Arthur Korteweg ([email protected]) are from
Stanford Graduate School of Business, and Kevin Laws is from AngelList, LLC. We thank
Wayne Ferson, Steve Kaplan, seminar participants at Harvard Business School, UCLA,
University of Illinois at Urbana-Champaign, University of Maryland, University of Southern
California, University of Texas at Austin, and brown bag participants at the UC Berkeley Fung
Institute and Stanford for helpful comments.
Early stage investors provide an important source of capital enabling the birth and growth of
start-up companies, which play a key role in promoting innovation and growth in the economy
(Solow (1957)). A large and growing literature analyzes the implications of early stage
investments (e.g., Kortum and Lerner (2000)) and the factors that affect the terms of financing
(e.g., Kaplan and Stromberg (2003)). There is, however, much less understanding of the investment decision-making of early stage investors: how do they choose which start-ups to fund? What leads investors to pick one company over another? This stands in sharp contrast
to the wealth of evidence on the behavior of retail and institutional investors who target
investments in publicly traded firms.
Studying the investment decision-making of early stage investors is challenging for
several reasons. First, existing databases consist of completed deals, rather than the pool of start-
ups considered by the investor, thus preventing assessment of the investor selection process.
Second, researchers possess far less data than investors do. Two early stage ventures may seem
similar to a researcher, but may look completely different to an investor. Third, even if such data
were available, how can one separate the causal effect of different start-up characteristics? For
example, if we observe that a serial entrepreneur is more likely to attract financing, is it because
of the importance of her past experience, or because serial entrepreneurs are just more likely to
generate high-quality ideas in the first place? This endogenous relationship between ideas and
start-up characteristics is even more problematic given that the quality of ideas is subjective,
highly uncertain, and unobserved to the researcher, leading to omitted variable bias.
We study which start-up characteristics early-stage investors causally respond to using a
randomized field experiment that builds on the correspondence testing methodology that has
been successfully used in labor economics.[1] The experiment takes place on AngelList, an online platform that matches start-ups and angel investors.[2]
We observe the start-up companies at the
stage in which they approach investors and seek capital. The platform includes many well-known
investors that are experienced in investing in, and building, early-stage firms. These individuals
are highly involved in startup creation, taking various roles in these firms such as investors,
board members, advisors and founders, and are therefore well-suited to inform research about
early-stage investor decision-making, and to reveal which start-up characteristics are most
important for early-stage firm success.
AngelList regularly sends out emails to investors to feature start-ups who are attempting
to raise capital. We randomize the information shown in these emails, and measure investors’
responses and level of interest to causally infer what factors drive investors’ decisions. Investors’
interest is gauged by measuring whether they choose to learn more about the firm on the
platform. While all investors see similar information on the start-up idea and potential market,
we randomly expose investors to information in three categories that “package” the company:
the founding team’s background (e.g., college, prior work experience, or entrepreneurial
background), the start-up’s traction (e.g., revenues and user growth), and the identity of current
investors. Broadly speaking, the team and traction categories correspond to information about
human and non-human assets, respectively. Kaplan, Sensoy, and Stromberg (2009) show the
differential importance of these two types of capital as the firm grows. The current investors category can be interpreted as "social proof", or a different type of human asset, as prominent investors may apply their expertise or network to the firm.[3] These categories are randomly revealed to each investor, and we exploit the variation across angels' reactions within each start-up. In other words, conditioning on the start-up's "idea", we exogenously vary the aforementioned categories of information.

[1] The correspondence testing methodology has been frequently used in labor economics by exploring responses to fictitious job applications. For example, Bertrand and Mullainathan (2004) explore racial discrimination in the labor market, Weichselbaumer (2003) studies the impact of sex stereotypes and sexual orientation, and Nisbett and Cohen (1996) study employers' responses to past criminal activity.
[2] Examples of well-known companies that have raised money through AngelList are Uber, Pinterest, BranchOut, and Leap Motion. Around 1,300 confirmed financings had been made through AngelList, raising over $200 million of seed funding. Most of these investments were concentrated in 2012 and 2013. The companies funded through AngelList have gone on to raise over $2.9 billion in later rounds of venture capital and exit money.
[3] The investor category does not, however, reveal information about the level of funding, as this is separately disclosed in the email.
We sent about 17,000 emails to nearly 4,500 angels, spanning 21 different capital-
seeking start-ups, over the summer of 2013. Our target investors include many of the most
prominent angel investors, who are very active in the startup scene. Among these investors, 82%
have past investments, with a median of eight companies in their portfolio. Almost half of the
investors serve as an advisor to one or more start-up companies. Interestingly, 60% of the
investors have an entrepreneurial background themselves, having founded at least one venture in the past.
The start-ups in the experiment are at a very early stage. The median company seeks to
raise $1.3 million, and approximately half the companies have already raised some capital, with
a median amount raised of $290,000. The median startup has two founders and employs three
additional workers. Only 23% of these firms already formed a board, and 57% graduated from an
incubator.
The randomized experiment reveals that angels are highly responsive to information
about the founding team, whereas information about the traction and current investors does not
lead to a significantly higher response rate. This shows that the information about the human
capital of the firm is very important to potential investors. Interestingly, we find significant
heterogeneity among angels. On one hand, the most highly experienced investors, as measured
by a battery of metrics (including the number of prior investments, a success metric, and their
network centrality), react only to the team information. On the other hand, the least experienced
investors react to all categories of information. This suggests that the significance of team
information is not simply due to a better signal-to-noise ratio.
A unique feature of our setting is that it allows us to empirically confirm that absent the
randomization of information revelation, we would overestimate the importance of the various
categories, as we can run the same models on the subset of emails with the information set that
would have been sent outside of the experiment. This is consistent with the notion that start-up
characteristics may be mistakenly viewed as strongly influencing investor decision-making because of their positive correlation with high-quality ideas, which underscores the importance of randomization for establishing causality.
We mitigate external validity concerns by showing that the start-ups in the experiment
are fairly representative of the more than 5,500 start-ups raising capital on AngelList that attracted a minimal level of attention from investors, across a long list of observable characteristics.
A common drawback of the correspondence testing methodology is that the use of
fictitious correspondence does not allow the researcher to observe real outcomes. For example,
Bertrand and Mullainathan (2004) sent fictitious resumes with randomized names to recruiters in
order to study discrimination, but they could not observe actual hiring outcomes. Since our
experiment involves real start-ups and real investors, we can in fact observe real outcomes such
as investor introduction requests and investments. We show that when investors are interested in
learning more about the firm (henceforth "click"), this interest converts to introductions between founders and investors, as well as actual investments, at rates of 15.1% and 3.0%, respectively.
The click-to-investment rate may be as high as 6.0% due to underreporting of investments. These conversion rates indicate that clicks have a material impact on real outcomes.
As discussed in Kaplan, Sensoy, and Stromberg (2009), existing theories of the firm
yield different predictions about the importance of the key assets around which the organization is built. The property rights theory (Grossman and Hart (1986), Hart and Moore (1990),
Holmstrom (1999), amongst others) places the ownership of non-human assets at the core of the
firm, whereas the contrasting view puts the human assets of the firm at its core (e.g., Wernerfelt
(1984), Rajan and Zingales (1998, 2001) and Rajan (2012)).[4]

[4] For example, Rajan and Zingales (1998, 2001) view the firm as a hierarchy of people who gain different degrees of access to critical resources in the firm. The critical resources can be a person, a business idea, or key customers. These resources provide an incentive to specialize human capital towards the firm's goals. In newer firms, therefore, the competitive advantage comes from specific human capital rather than from non-human assets, which can be bought or sold easily (Zingales (2000)).
Our results present suggestive evidence of the importance of human capital assets at the
earliest stages of the firm, around the firm's birth. Our results do not, however, imply that non-human assets are inessential. Kaplan, Sensoy, and Stromberg (2009) explore the evolution of
50 venture capital backed companies from the business plan stage to initial public offering (IPO).
They find that business lines remain stable from birth to IPO, while management turnover is
substantial. Combined with this evidence, the evidence in our paper tells a story that is consistent
with the model in Rajan (2012). Rajan argues that the entrepreneur’s human capital is important
early on to differentiate her enterprise. However, to raise substantial funds (for instance, when
going public), the entrepreneur needs to go through a standardization phase that will make
human capital in the firm replaceable, so outside financiers can obtain control rights. Our results
indeed suggest the importance of human assets at the earliest stages of a firm's life. Kaplan,
Sensoy and Stromberg (2009) show that at the later stages, human capital is frequently replaced,
as different human capital skills are needed to run a larger, more mature firm, and to ready the
firm for the injection of significant outside capital.
This paper adds to the literature on early stage investments. While some papers attempt to
illustrate causality by linking early stage investors’ actions to firm success (e.g., Kerr, Lerner and
Schoar (2013), Kortum and Lerner (1998), Sorensen (2007), Samila and Sorenson (2010), and
Bernstein, Giroud, and Townsend (2013)), there is little disagreement that early stage investors
are skilled pickers of successful companies (e.g., Kaplan and Schoar (2005), Sorensen (2007),
Puri and Zarutskie (2012), Korteweg and Sorensen (2013)). Yet, little is known about how
exactly this class of investors selects the companies to which they provide funding. Several
papers explore investors’ behavior using surveys and interviews (e.g., Pence (1982), MacMillan,
Siegel, and Narasimha (1986), MacMillan, Zemann, and Subbanarasimha (1987), and Fried and
Hisrich (1994)), but our paper provides the first large sample systematic evidence on this issue,
spanning thousands of investors.
The paper is structured as follows. In Section 1, we give a brief overview of the AngelList platform. Section 2 describes the randomized experiment, and Section 3 presents descriptive statistics. In Section 4 we analyze investors' reactions to the emails. Section 5 discusses issues of internal and external validity. In Section 6 we dive deeper into the real effects of disclosed information on investment and introductions, and Section 7 concludes.
1. The AngelList platform
AngelList is a platform that connects start-ups with potential angel investors. The
platform was founded in 2010 by Naval Ravikant (the co-founder of Epinions) and Babak Nivi
(a former Entrepreneur-in-Residence at Bessemer Ventures and Atlas Capital), and has
experienced rapid growth since. Start-up companies looking for funding may list themselves on
the platform and post information about the company, its product, traction (e.g., revenues or
users), current investors, the amount of money they aim to raise and at which terms, and any
other information they would like to present to potential investors. Examples of well-known
companies that have raised money through AngelList are Uber, Pinterest, BranchOut, and Leap
Motion.
On the angel side, investors who are accredited under the rules set by the U.S. Securities and Exchange Commission[5] can join the platform to look for potential investments.
Investors typically list information on their background as well as their portfolio of past and
current investments. The platform is host to many prominent and active angels with extensive
experience investing in, building, and operating early stage companies. Examples are Marc
Andreessen and Ben Horowitz (of Andreessen-Horowitz), Reid Hoffman (co-founder of
LinkedIn), Yuri Milner (founder of Digital Sky Technologies), Marissa Mayer (president and
CEO of Yahoo), Max Levchin (co-founder of Paypal), and Dave McClure (of 500 Startups).
Using AngelList, interested investors request an introduction to the start-up’s founders.
From there the parties can negotiate their way to a final investment. Usually, investors decide to
invest following a phone call with the founders or, depending on geographical closeness, a face-
to-face meeting.
There is a strong social networking component to the platform: Investors can “follow”
each other as well as start-ups, they can post comments and updates, and they can “like”
comments made by others.
[5] For individuals, an accredited investor is a natural person with either at least $1 million in net worth (either
individually or jointly with their spouse, but excluding the value of their primary residence) or with income of at
least $200,000 (or $300,000 jointly with a spouse) in each of the two most recent years and a reasonable expectation
of such income in the current year.
By the fall of 2013, about 1,300 confirmed financings had been made through AngelList,
raising over $200 million. Most of these investments were concentrated in 2012 and 2013. The
companies funded through AngelList have gone on to raise over $2.9 billion in later rounds of
venture capital and exit money. There is no exact benchmark to compare these numbers to, but to
give a rough comparison, the University of New Hampshire’s Center of Venture Research
estimates total 2012 angel investments at $22.9 billion, while seed rounds of start-ups totaled
$731 million in 2012 and $893 million in 2013.[6]

[6] Data on total seed funding is from CB Insights: http://www.cbinsights.com/blog/trends/2013-seed-venture-capital, accessed February 17, 2014. The University of New Hampshire's angel market analysis reports are accessible at https://paulcollege.unh.edu/research/center-venture-research/cvr-analysis-reports.
2. Randomized field experiment
The field experiment builds on the correspondence testing methodology[7] and uses so-called
“featured” emails about start-ups that AngelList regularly sends out to investors listed on its
platform. These start-ups are chosen by AngelList for being promising companies that could be
appealing to a broad set of investors that have previously indicated an interest in the industry or
the location of the start-up.

[7] This approach has been used in the context of job application recruiting in various studies. Written applications are sent to job openings, and the applications are constructed such that they differ only in the aspect of interest. These studies explore employer reactions to the fictitious job applications. A few recent examples include Bertrand and Mullainathan (2004), Weichselbaumer (2003), and Nisbett and Cohen (1996). Our study does not rely on fictitious subjects, but rather uses real investors and real early stage ventures seeking capital.
An example of a featured email is shown in Figure 1. The email starts with a description
of the start-up and its product. Next, the email shows up to three categories of information about:
i) the start-up team’s background; ii) current investors; iii) traction. Outside of the experiment, a
category is shown if it passes a certain threshold as defined by AngelList. The thresholds are
AngelList’s determination of what information investors might be most interested in. For
example, the team category is shown if the founders were educated at a top university such as
Stanford, Harvard, or MIT, or if they worked at a top company such as Google or Paypal prior to
starting the company. As we discuss below, this algorithm is important for the interpretation of the experiment's results. To provide a sense of the information content in featured emails, the appendix shows the information that passed the disclosure threshold for each start-up in the experiment. Finally, the email shows information about the amount of money that the company
aims to raise, and how much has been raised to date.
In the experiment, we randomly choose which of the team, current investors, or traction
categories are shown in each email, from the set of categories that exceed their threshold. For
example, suppose 3,000 angels receive a featured email about a given start-up. Outside of the
experiment, all investors would receive the same email, and let’s assume that this email would
show information about the team and traction, while the current investors category for this
company does not meet the threshold to be included in the email. In the experiment, 1,000
investors receive the original email with both team and traction shown, 1,000 receive the
identical email except that it does not show the team category, and another 1,000 receive the
email that shows the team information but with the traction category hidden. We do not send any
emails with all categories hidden, as this would not happen outside of the experiment, and could
raise suspicion among investors.
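To make this assignment concrete, the sketch below illustrates one way such a randomization could be implemented. It is our own illustration, not AngelList's production code: the rule for generating email versions (all non-empty subsets of the categories that pass the threshold, matching the two-category example above) and all variable names are assumptions.

```python
import random
from itertools import combinations

CATEGORIES = ["team", "investors", "traction"]

def email_versions(passing):
    """All non-empty subsets of the categories that passed the threshold.

    With two passing categories this yields the three versions described in
    the text; the empty set is never used, so no email hides everything.
    """
    versions = []
    for k in range(1, len(passing) + 1):
        versions.extend(combinations(sorted(passing), k))
    return versions

def assign_versions(investor_ids, passing, seed=0):
    """Randomly assign each recipient of a start-up's featured email to one
    version, keeping the group sizes roughly equal."""
    rng = random.Random(seed)
    ids = list(investor_ids)
    rng.shuffle(ids)
    versions = email_versions(passing)
    return {inv: versions[i % len(versions)] for i, inv in enumerate(ids)}

# Example: team and traction pass the threshold for this start-up, so each of
# the 3,000 hypothetical recipients sees one of the three possible versions.
assignment = assign_versions(range(3000), {"team", "traction"})
```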
Investors respond to the emails using the "View" and "Get an Intro" buttons that are
included in each email (see Figure 1). If an investor is interested in the start-up, she can click on
the “View” button to be taken to the AngelList website and view the detailed company profile.
We record if this happens. If the investor is particularly interested, she can click the “Get an
Intro” button to request an introduction to the company straight away. However, this is a very
rare event as nearly all investors take a look at the full company profile on the AngelList website
before asking for an introduction. Hence, instead of clicks on the "Get an Intro" button, we
record whether the angel asks for an introduction within three days of viewing the email through
either the email or the website. Naturally, we need to exercise caution in interpreting the results
on introductions, as investors will likely have gleaned more information from the website.
3. Summary statistics
A. Emails
We ran the experiment over an eight week period in the summer of 2013. Table 1 reports
descriptive statistics. Panel A shows that a total of 16,981 emails were sent to 4,494 active
investors, spanning 21 unique start-ups. Active investors are angels that have requested at least
one introduction to a start-up while they have been enrolled in AngelList. Investors come to the
platform for a variety of purposes: to research, to confirm their affiliation with a startup that is
fundraising, or to invest. Restricting the sample to active investors excludes those who are not on the platform to seek new investments.
For each start-up, we sent an average (median) of 2.76 (3) versions of the email, each
with an exogenously different information set. This means that in total we sent 58 unique emails
(2.76 emails per start-up times 21 start-ups). Each unique email was sent to 293 recipients on
average (median 264). Within a start-up, the number of recipients per unique email is roughly
equal, but there is some variation across start-ups in how many angels receive the featured
emails, as some start-ups are in more popular industries or locations than others. On average, 809
investors receive a featured email about a given start-up, with a minimum of 202 and a
maximum of 1,782 recipients per start-up. An investor in the sample receives on average 3.78
emails (median: 3) about different featured start-ups, and importantly, no investor receives
more than one email for a given start-up.
In terms of response, recipients opened nearly half (48.3%) of their emails. Some
investors open none of their emails, but 2,925 investors open at least one. Of the opened emails, 16.45% led to a click on the "View" button to see more information about the start-up.
This click rate provides the first hint that investors pay attention to the emails: they do not click
on every company, but they also do not ignore the emails and the information therein altogether.
Of the investors who clicked on the email, we see that 15.1% requested an introduction within
three days of viewing the email. However, this includes not only direct introductions from the
email but also introductions that were made later, when investors have seen more information
about the start-up. Finally, 2.98% of investors who clicked on one of the email buttons ended up
investing in the company. Since investors are not required to report their investment to
AngelList, this statistic may underestimate the real click-to-investment conversion. AngelList
estimates that only half of the investments are being reported, potentially leading to a click-to-
investment rate of approximately 6%.
Panel B of Table 1 shows that there is no statistically significant difference in the
frequency with which each information category passes the threshold set by AngelList. This
means that the salience of the presence of an information category in an email is roughly equal
across information categories. Outside of the experiment, categories that pass the threshold
would always be shown in the email. Within the experiment, these categories are randomly
excluded. Conditional on passing the threshold, the information regarding team, current
investors, and traction is shown about 73% of the time, with no material difference in
frequencies across categories. Note that these frequencies are different from 50% because we
randomize across different versions of the emails. For example, if team and traction pass the
threshold, there are three versions of the email: one that shows team only, one that shows traction
only, and one that shows both (we don’t use the empty set to avoid raising suspicion amongst
investors). In that case, if each email is shown at random then team and traction would each be
shown 67% of the time.
B. Start-ups
Table 2 presents detailed descriptive statistics of the 21 start-ups in the randomized email
experiment. Panel A shows the geographical distribution of firms. The most popular location is
Silicon Valley with six firms, but the dispersion is quite wide, with firms spread across the
United States, Canada, the United Kingdom, and Australia. Panel B shows that most firms
operate in the Information Technology and the Consumers sectors. Other represented sectors are
Business-to-business, Cleantech, Education, Healthcare, and Media. Note that the sector
designations are not mutually exclusive. For example, a Consumer Internet firm such as Google
would be classified as belonging to both the Information Technology and Consumers sectors. In
terms of company structure, panel C shows that the median start-up has two founders, and 17
start-ups (81%) have (non-founder) employees. The median firm with employees has three
workers, though there is some variation, with the largest company having as many as nine
employees. Counting both founders and employees, the largest start-up consists of 11 people.
Only a quarter of firms have a board of directors at this stage of fund-raising. Of those that do
have a board, the median board size is two, and no board is larger than three members.[8] Almost all companies (19 out of 21) have advisors,[9] and the median number of advisors, for the companies that have any, is three.

[8] It is not clear how much of an outside governance role the board fulfills at this stage of the firm, rather than simply fulfilling a legal requirement of incorporation.
[9] Advisors are typically high profile individuals, and are compensated with stocks and options.
Panel D reports details on the financing of the sample firms. Twelve companies (57%)
had previously gone through an incubator or accelerator program. Eleven companies (52%)
received funding prior to coming to AngelList for further financing, and had raised an average
(median) of $581 thousand ($290 thousand). For the sixteen companies for which a pre-money
valuation is available, the average (median) valuation is $5.5 million ($5 million), and ranges
between a minimum of $1.2 million and a maximum of $10 million. Eighteen companies
explicitly state their fundraising goal, which ranges from $500 thousand to $2 million (not
tabulated), with an average (median) of $1.2 million ($1.3 million). Most companies (76%) are
selling shares, with the remaining 24% selling convertible notes.
C. Investors
Table 3 reports descriptive statistics of the 2,925 angel investors who received the
featured emails in the field experiment, and who opened at least one email. This is the set of
investors that is the focus of our empirical analysis in the next section. Panel A shows that
virtually all investors are interested in investing in the Information Technology and Consumers
sectors, while other key sectors of interest are Business-to-business, Healthcare and Media. Panel
B reveals that investors are very active on the platform, with the average (median) investor
requesting ten (three) introductions to start-ups from the time that they joined the platform until
we harvested the data in the late summer of 2013. However, there is considerable heterogeneity
in the number of introductions requested, with the lowest decile of investors requesting only one
introduction, while the top decile requested over twenty.
In order to provide an indication of the past success of investors, AngelList computes a
“signal” for each investor and start-up that ranges from zero to ten. The algorithm that assigns
signals works recursively, and is seeded with high exit value companies (from Crunchbase), such as Google or Facebook, which are assigned a value of ten, as well as a set of highly credible investors hand-picked by AngelList. The signal then spreads to start-ups and investors through
past investments: any start-up that has a high signal investor gets a boost in its own signal.
Likewise, an investor who invests in a high signal company gets a boost in his or her signal. This
signal construction, rather than crediting investors only for realized past successes, also gives
credit for investing in very young but highly promising firms that may have great exits in the
future, but are still too young to have made it to the exit stage. The average (median) investor
signal is 6.4 (6.3), with a standard deviation of 2.3. The wide distribution of the signal in Figure
2 shows that there is significant heterogeneity in signal across investors.[10][11]

[10] There are few signal scores below three, because we limit the set of investors to those that have requested at least one introduction through the platform.
[11] The signal calculations use all declared investments on the AngelList platform. These data represent self-declared investments by both angels and start-ups on the platform that were subsequently verified by AngelList with the party on the other end of the transaction (i.e., investments declared by start-ups are verified with the investors and vice versa). Importantly, the data are not limited to companies that (tried to) raise money through the platform. There are many thousands of companies, such as Facebook, that are on the platform but have never raised, and never intend to raise, money through AngelList. Instead, they are there only because an investor declared to have invested in them (or to have served another role in the firm, such as founder or advisor), a declaration that was subsequently verified by AngelList. In addition, the signal calculation includes investment data available from Crunchbase.
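AngelList's actual signal algorithm is proprietary; the following is only a stylized sketch of the recursive seeding-and-propagation idea described above. The damping factor, the number of iterations, the use of the best-scoring neighbor, and the cap at ten are our own assumptions.

```python
def propagate_signal(investments, seeds, rounds=10, damping=0.6):
    """Stylized score propagation over a bipartite investor/start-up graph.

    investments: iterable of (investor, startup) pairs of verified investments.
    seeds: dict mapping seeded nodes (high-exit companies and hand-picked
           credible investors) to a score of 10.
    In each round, every node's score is raised to at least a fraction
    (`damping`) of its highest-scoring neighbor, so backing a high-signal
    company, or being backed by a high-signal investor, boosts a node's own
    signal; scores decay with distance from the seeds and are capped at 10.
    """
    neighbors = {}
    for investor, startup in investments:
        neighbors.setdefault(investor, set()).add(startup)
        neighbors.setdefault(startup, set()).add(investor)

    score = {node: 0.0 for node in neighbors}
    score.update(seeds)

    for _ in range(rounds):
        new_score = dict(score)
        for node, nbrs in neighbors.items():
            best = max((score.get(n, 0.0) for n in nbrs), default=0.0)
            new_score[node] = min(10.0, max(score.get(node, 0.0), damping * best))
        new_score.update(seeds)  # seeded nodes keep their assigned score
        score = new_score
    return score

# Example: a seeded exit boosts the investor who backed it, and that investor
# in turn passes a smaller boost to another start-up in her portfolio.
scores = propagate_signal(
    [("angel_a", "Facebook"), ("angel_a", "startup_x")],
    seeds={"Facebook": 10.0},
)
```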
The social network on the platform is extensive, and the investors in the sample are well-
connected: The average (median) investor had 591 (202) followers at the time of data collection.
Again, we see large heterogeneity in investors, with the 10th percentile having only 26 followers while the 90th percentile investor has 1,346 followers.
Over 90% of investors are actively involved with start-ups (as with the signal calculation,
these numbers are not limited to start-ups that tried to raise money through AngelList). Panel B
shows that most (82%) have a track record as investors. Conditional on making an investment,
the average (median) number of investments is 13 (8), though some investors invest in as many
as 30 companies. Roughly 44% of angels are active as advisors to start-ups, with the median
advisor advising two firms. Also, 17% of investors served as a board member on a start-up. Last,
but certainly not least, 60% of investors were at one point founders themselves.[12] The median founder-investor founded two companies.

[12] Declarations of advisor, board member, or founder roles are verified using the same procedure as was followed for investments.
The investors in Table 3, those that opened the emails, tend to be more active and
involved than the investors that received featured emails but did not open any of them: they
request more introductions (9.72 on average versus 4.97 for the investors who did not open any
emails), have a higher signal (average 6.44 versus 5.89), more followers (average 591 versus
480), more of them are involved with start-ups (91.93% versus 85.15%), and conditional on
being involved, they are involved with more start-ups (average 12.55 versus 10.56). These
differences are all statistically significant at the 1% level (results not tabulated).
Taken together, the evidence presented here shows that the investors in our sample are active, successful, connected, and highly experienced not only in investing in very
early-stage firms, but also in building companies from the ground up. As such, these individuals
form a sample that is ideally suited to inform about the assets that are most important to very
early stage firms. Moreover, there is significant heterogeneity within this group that may help to
distinguish between theories.
4. Analysis of investors’ responses in the randomized experiment
Table 4 shows results of regressions that explore how the three randomized categories of
information (team, traction, and current investors) affect angels’ interest level and trigger
response. The dependent variable equals one when an investor clicked on the “View” button in
the email, and zero otherwise. All models in Table 4 have standard errors clustered at the
investor level, to account for investors making correlated decisions across the emails they receive
for various start-ups.[13] In column (1) we run a simple ordinary least squares (OLS) regression that explores how the three information categories affect click rates.[14] Revealing information
about the team significantly changes the click rate, raising the unconditional click rate by 2.6 percentage points.
Given a base click rate of 16.5% (Table 1), this represents a 16% increase. Recall that investors
are calibrated to think that if the information is not shown, it has not crossed the threshold and is
therefore of insufficient significance for AngelList to report. This helps for interpretation, as the
increase in the click rate is thus the effect of the team’s background being above the importance
threshold. Showing information about the current investors or traction does not significantly alter
the click rate. This means that knowing whether a notable investor (by AngelList’s definition) is
investing in the company, or if the start-up has material traction, does not make investors more
likely to click.
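As a rough illustration of the specification in column (1), and of the start-up fixed effects variant discussed below, the following sketch estimates a linear probability model of clicks on the three shown-information indicators with standard errors clustered by investor. The data file and variable names (clicked, team_shown, investor_id, startup_id, and so on) are hypothetical; this is our sketch, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per opened featured email.
# clicked: 1 if the investor clicked the "View" button, else 0.
# team_shown, investors_shown, traction_shown: 1 if that category was
# (randomly) shown in the email. investor_id and startup_id are identifiers.
df = pd.read_csv("featured_emails.csv")

# Column (1): linear probability model of clicks on the shown-information
# dummies, with standard errors clustered at the investor level.
ols = smf.ols(
    "clicked ~ team_shown + investors_shown + traction_shown", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["investor_id"]})

# Column (3): add start-up fixed effects to absorb anything common to a given
# start-up (the descriptive paragraph, amount sought, amount already raised).
fe = smf.ols(
    "clicked ~ team_shown + investors_shown + traction_shown + C(startup_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["investor_id"]})

print(ols.params)
print(fe.params[["team_shown", "investors_shown", "traction_shown"]])
```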
In column (2) we introduce controls for investors’ pre-existing knowledge of the start-up
company, and the number of emails an investor has already received in the experiment. We will
discuss these results in more detail in the next section. What is important at this stage of the
analysis is that adding these controls does not change the coefficients on the randomized
information categories.
[13] The results are nearly identical if we include investor fixed effects. We do not cluster standard errors at the start-up level because there are too few clusters to produce reliable estimates (see Angrist and Pischke (2009), chapters 8.2.1 and 8.2.3). The start-up fixed effects in the regressions remove unobserved heterogeneity in click rates for each start-up.
[14] The regressor indicator variables equal one when the information is shown in the email, and zero otherwise. It is not necessary to interact these indicators with dummy variables for whether the disclosure threshold was passed, as the results are mechanically the same.
Columns (3) and (4) replicate the regressions of the first two columns with the addition of
start-up fixed effects. These fixed effects control for the effect on click rates of any information
conveyed in the descriptive paragraph, the amount that the company aims to raise, has already
raised, or any other common knowledge about the specific start-up. The coefficients on the
information categories are slightly lower, but remain significant at the 5% level. The final four
columns show that the results are robust to using a logit model instead of OLS regressions.
A unique feature of our setting is that we can show the importance of the randomized
experiment for identification, by re-running the regressions of Table 4 on the subset of 2,992
opened emails that show every piece of information that crossed the threshold. These are the
only emails that would have been sent outside of the experiment. Note that with this subsample
of emails we cannot include start-up fixed effects as there is by construction no variation across
emails for a given start-up. Moreover, given the random allocation of these “full-information”
emails, the regression results should reflect the non-randomized population estimates.
Focusing on the OLS regression with the information categories as the only explanatory
variables, Table 5 shows that the coefficients on revealed information about the team, investors,
and traction are 0.046, 0.013, and 0.037, respectively, where team is significant at the 5% level,
investors is insignificant and traction is significant at the 10% level. These coefficients are
uniformly higher than the coefficients of 0.026, 0.011 and 0.010 using the full set of randomized
emails (replicated for ease of comparison in the four right-most columns of Table 5), and where
the coefficients on both investors and traction are insignificant. Clearly, the randomization of
information is important: without the experiment, we would overestimate the importance of
traction, and to some extent, team. In fact, one would expect to overestimate the importance of
good teams, investors, and traction if they are positively correlated with good ideas, which is
likely to be the case. The results for the other models are similar, as seen in Table 5.
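Continuing the hypothetical setup from the previous sketch, the Table 5 comparison amounts to re-estimating the same model on the subset of opened emails in which every category that passed the disclosure threshold was actually shown; the *_passes flag names are again our own assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("featured_emails.csv")  # same hypothetical data as above

# "Full-information" emails: every category that passed the disclosure
# threshold was actually shown. Since a category can only be shown if it
# passed, this is equivalent to shown == passes for all three categories,
# i.e., the email that would have been sent outside of the experiment.
full_info = df[
    (df["team_shown"] == df["team_passes"])
    & (df["investors_shown"] == df["investors_passes"])
    & (df["traction_shown"] == df["traction_passes"])
]

# Start-up fixed effects are omitted: within this subsample there is no
# variation in the shown information for a given start-up.
naive = smf.ols(
    "clicked ~ team_shown + investors_shown + traction_shown", data=full_info
).fit(cov_type="cluster", cov_kwds={"groups": full_info["investor_id"]})
print(naive.params)
```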
A key question at this point is whether team matters because it is a measure of the quality of the business idea, or whether there is something special about the team as a human asset that is critical to the firm. For example, if a team was trained at MIT, does this signal high human
capital necessary for execution, or does it serve as a signal of the quality of the technology or the
business plan separate from management and future implementation? If it is the latter case, and
investors care only about observing signals of idea quality, then one would expect traction and
current investor information to also correlate with click rates. This is not what we observe,
suggesting that there is something special about team information. Still, it is possible in theory
that the team information carries a higher signal-to-noise ratio than the other information
categories. We exploit the rich heterogeneity in angel investors in the sample, and in particular
heterogeneity in investment experience, in order to disentangle these stories.
The regression results in Table 6 show the difference in response between experienced
and inexperienced investors, where we use investors’ total number of investments as a measure
of experience. The first column shows that investors who have made at least one investment
behave similarly to the overall sample, and react only to the team information. The relatively
inexperienced investors, with no prior investments, who make up about 18% of the sample, not
only react to the team information, but also to the traction and current investor information.
Columns (2) and (3) redefine the cutoff between inexperienced and experienced investors at the
25th and 50th percentiles of investors, ranked by their number of investments, respectively. The results for the experienced investors remain the same, as they respond only to information about
the team. The significance of the response to traction and current investors categories among
inexperienced investors weakens somewhat as we broaden the definition of inexperience.
The results of Table 6 are consistent with the most inexperienced investors interpreting
all information categories as signals of the quality of the start-up. This has important
implications for the interpretation of the reaction of the experienced investors. In particular, it
means that the absence of a reaction to the traction and current investors categories is not due to the
fact that the quality signal contained in this information is too low relative to the noise. Rather, it
suggests that the experienced investors believe these categories are simply less relevant to the
success of the company, and that there is something special about the information regarding the
team.
We should be careful to point out that the fact that human capital appears to matter more to experienced investors than information about traction does not mean that the business idea of the start-up is irrelevant. We exploit variation in the information shown about human capital, conditional on the information about the company that is shown in the descriptive paragraph of
the email. This description contains information on the market, technology and other aspects of
the idea that may be important to investors. We do claim that, conditional on this information,
that is, conditional on the idea, the information about human capital matters to investors. In other
words, our results point to the jockey being important at this stage of the firm, irrespective of
whether the horse matters or not.
In Tables 7 to 9 we explore other measures of experience as well as measures of an
investor’s importance in the network. In Table 7 we use investors’ signal as an alternative
measure of investor experience and importance. In Table 8 we use the number of followers as a
measure of an investor’s importance, and in Table 9 we use the weighted number of followers
(weighted by investors’ signal quality). All these measures are as defined in Table 3. Overall, the
results are very robust: investors in the lowest quartile of experience or importance respond to all
categories of information, whereas investors in the top quartile only respond to the information
in the team category.
5. Internal and external validity
The experiment is run in a highly controlled information environment, where angel
investors are making decisions about the same start-up company at the same time, with
exogenously varying information sets. Still, we should be careful to consider any concerns about
the internal and external validity of the experiment.
One potential internal validity concern is that the coefficients on the disclosed
information in the regressions in Tables 4 to 9 may be affected by investors who already know
the information in the emails, especially if these are “hot” and promising start-ups. We control
for this using an indicator variable that captures whether investors already follow the start-up
before receiving the email, and a variable that counts prior connections between the investor and
the start-up, measured as the number of people on the profile of the startup (in any role) that the
investor already follows prior to receiving the email. Not surprisingly, investors are more likely
to click if they already follow the start-up, or have pre-existing connections. To the extent that
these proxies are not perfect, our results are biased towards not finding an effect of the disclosed
information, and our estimates should be interpreted as lower bounds on the importance of the
information categories. Still, the fact that even the most experienced and well-connected
investors react to the information in the emails suggests that this is not a first-order concern.
Another common concern with experiments that involve repeated measurements on
subjects (here: investors) is that subjects may learn about the existence of the experiment,
contaminating the results. This concern is mitigated by three features of the experiment: First, the
experiment window of eight weeks is short. Second, the randomized information categories are
not always shown outside of the experiment, so a missing category is not out of the ordinary.
Third, no investor received more than one email for any given featured start-up, so there is no
risk of the same investor receiving and comparing emails across the same start-up and noticing
different information sets. Still, to check whether investors realize that the experiment is going
on, we also include the number of prior experiment emails that the investor received as a control
in the regressions of Tables 4 to 9. The insignificant coefficients imply that click rates do not
change as the investor receives more emails in the experiment. Unreported regression results
show that including interactions of this control with the information category dummies also
yields insignificant results, showing that investor responsiveness to the information categories
also does not change as the experiment progresses.
AngelList chooses which companies to feature through email, and this could raise
validity concerns. Since our inference exploits the variation within each start-up, internal validity
is not an issue. Similarly, the choice of recipients does not violate internal validity, as
information is varied randomly across the recipients. However, the endogenous choice of start-
ups and investors does raise questions regarding external validity (i.e., generalization) of the
results.
The experiment covers a large proportion of the active angels on the AngelList platform:
of the 5,869 angels who are active on the platform, 4,494 (77%) received at least one featured
email over the course of the experiment, and 2,925 (50%) opened at least one of these emails. To
get a sense of representativeness of the sample of 21 start-ups in the randomized field
experiment, Table 10 compares them to a larger sample of 5,538 firms raising money on the
AngelList platform. This larger sample consists of “serious” firms in the sense that these
companies received at least one introduction request while listed on AngelList. Table 10 shows
that the field experiment firms are slightly larger in terms of the number of founders (2.6 versus
2.1 on average), pre-money valuation ($5.6 million versus $4.9 million), funding targets for the
AngelList round ($1.2 million versus $0.9 million), are more likely to have employees (81%
versus 53%), and are more likely to have attended an incubator or accelerator program (57%
versus 30%). Still, for the most part the differences are small on economic grounds, and the
samples are comparable on other dimensions such as board size, the fraction of companies that
get funding prior to AngelList, and the prior amount raised. Also, in both samples about three out
of four firms sell equity, while the remainder sells convertible notes. Altogether, the two samples
do not look vastly different, which mitigates the concern about generalization of the results of the
field experiment.
6. Real outcomes
The analysis up to this point has focused on the click rates. It is reasonable to ask how
meaningful these clicks really are. We address this concern in three ways. First, consider the base
click rate of 16.45%. This shows that investors do not ignore the information in the emails, nor
do they click on every featured company that lands in their inbox. Second, if investors did not
care about the information in the emails, clicks would be random and we would not see strong
reactions to any information. The fact that we find economically and statistically significant
results suggests that investors do care, and pay attention to the information provided at this stage.
Third, we can measure the conversion rate from clicks into actual investments. This addresses a
common weakness of the correspondence testing literature, where real outcomes can usually not
be observed. For example, in their study on racial discrimination in hiring, Bertrand and
Mullainathan (2004) cannot measure the conversion rate of call backs into actual hiring, since
the randomized resumes that they send out are fictitious. Since our experiment involves real
companies and real investors, we can observe the real outcomes.
Table 1 reports that the conversion rate from clicks to ultimate investment is 2.98%. This
is fairly high compared to venture capital, as venture capitalists invest in about one in every 50 to
100 deals that they look into. Moreover, no investments were made without an initial click in
these emails. Thus, clicks are a prerequisite for investment.
The investment results should be interpreted with some degree of caution, as investors are
not required to report their investment to AngelList. The company’s best estimate is that only
half of investments are reported.[15] This means that the click-to-investment rate could be as high
as 6%.
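The 6% figure is simply the observed click-to-investment rate scaled up by the assumed reporting rate:

\[
\frac{\text{observed click-to-investment rate}}{\text{assumed reporting rate}} = \frac{2.98\%}{0.5} \approx 6\%.
\]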
We look at introductions as an alternative measure of real outcomes that is less subject to
the underreporting issue. AngelList records whenever an investor requests an introduction with a
start-up through the website, and introductions are therefore more precisely measured than
investments. Table 1 shows that the conversion rate of clicks to introductions is 15.14%. This
rate is higher than the investment rate not only due to the underreporting of investments, but also
because introductions are a lower hurdle than investments, as only a certain proportion of
introductions lead to investments. Still, the click-to-introduction conversion rate shows that
clicks are meaningful for real outcomes as they lead to a significant number of introductions.
[15] We have tried to supplement the investments data using Crunchbase but did not identify any further investors in
the angel round.
7. Conclusion
This paper uses a field experiment to study early stage investors’ responses to
information about start-up firms. We randomly vary investors’ information sets in a tightly
controlled information environment that uses emails regarding featured start-ups, sent through
AngelList’s platform. We find that investors react most strongly to the information about the
start-up’s founding team. However, there is considerable heterogeneity among investors, and
while experienced and successful investors react only to the team information, inexperienced
investors also react to information about the firm’s traction and current investors.
We also present evidence that clicks on emails, our response measure indicating investor interest in the start-up, matter for real outcomes such as investors requesting introductions to start-ups and ultimately making investments.
Our results suggest that, conditional on the quality of the idea, human assets are
important to the success of the early stage firm. This is important in light of the debate in the
literature about which assets are central to the firm at an early stage. Our results do not, however, imply that non-human assets are inessential.
Finally, this paper opens up a set of new questions for future work. In particular, the
study of long-run outcomes for the companies that obtain funding is an important avenue for
future work.
References
Angrist, Joshua, and Jörn-Steffen Pischke, 2009, Mostly harmless econometrics, Princeton
University Press, Princeton, NJ.
Bernstein, Shai, Xavier Giroud, Richard Townsend, 2013, The impact of venture capital
monitoring: Evidence from a natural experiment, Working paper.
Bertrand, Marianne, and Sendhil Mullainathan, 2004, Are Emily and Greg more employable
than Lakisha and Jamal? A field experiment on labor market discrimination, American
Economic Review 94, 991-1013.
Coase, Ronald, 1937, The nature of the firm, Economica 4, 386–405.
Ewens, Michael, Ramana Nanda, and Matthew Rhodes-Kropf, 2014, Entrepreneurship and the
cost of experimentation, Working paper, Carnegie Mellon University and Harvard
University.
Fried, Vance H, and Robert D. Hisrich, 1994, Toward a model of venture capital investment
decision making, Financial Management 23, 28-37.
Gompers, Paul, and Josh Lerner, 2001, The money of invention, Harvard Business School Press,
Boston, MA.
Grossman, Sanford J., and Oliver D. Hart, 1986, The costs and benefits of ownership: A theory
of vertical and lateral integration, Journal of Political Economy 94, 691-719.
Hart, Oliver D., and John Moore, 1990, Property rights and the nature of the firm, Journal of
Political Economy 98, 1119-1158.
Holmstrom, Bengt R., 1999, The firm as a subeconomy, Journal of Law, Economics, and
Organization 15, 74-102.
Kaplan, Steven N., and Antoinette Schoar, 2005, Private equity performance: Returns,
persistence, and capital flows, Journal of Finance 60, 1791-1823.
Kaplan, Steven N., Berk A. Sensoy, and Per Stromberg, 2009, Should investors bet on the jockey
or the horse? Evidence from the evolution of firms from early business plans to public
companies, Journal of Finance 64, 75-115.
Kerr, William R., Josh Lerner, and Antoinette Schoar, 2011, The consequences of entrepreneurial finance: Evidence from angel financings, Review of Financial Studies.
Korteweg, Arthur G., and Morten Sorensen, 2013, Skill and luck in private equity performance,
Working paper, Stanford University and Columbia University.
Kortum, Samuel, and Josh Lerner, 1998, Stronger protection or technological revolution: What is
behind the recent surge in patenting?, Carnegie-Rochester Conference Series on Public
Policy 48, 247-304.
MacMillan, Ian C., Robin Siegel, and P.N. Narasimha, 1986, Criteria used by venture capitalists
to evaluate new venture proposals, Journal of Business Venturing 1, 119-128.
MacMillan, Ian C., Lauriann Zemann, and P.N. Subbanarasimha, 1987, Criteria distinguishing
successful from unsuccessful ventures in the venture screening process, Journal of Business
Venturing 2, 123-137.
Metrick, Andrew, 2007, Venture capital and the finance of innovation, John Wiley & Sons,
Hoboken, NJ.
Nisbett, Richard E., and Dov Cohen, 1996, Culture of honor: The psychology of violence in the South, Westview Press, Boulder, CO.
Pence, Christine Cope, 1982, How venture capitalists make investment decisions, UMI Research
Press.
Puri, Manju, and Rebecca Zarutskie, 2012, On the lifecycle dynamics of venture-capital- and
non-venture-capital-financed firms, Journal of Finance 67, 2247-2293.
Quindlen, Ruthann, 2000, Confessions of a venture capitalist, Warner Books, New York, NY.
Rajan, Raghuram G., 2012, Presidential address: The corporation in finance, Journal of Finance 67, 1173-1217.
Rajan, Raghuram G., and Luigi Zingales, 1998, Financial dependence and growth, American
Economic Review 88, 559-586.
Rajan, Raghuram G., and Luigi Zingales, 2001, The firm as a dedicated hierarchy: A theory of
the origins and growth of firms, Quarterly Journal of Economics 116, 805-851.
Rin, Marco Da, Thomas F. Hellmann, and Manju Puri, 2013, A survey of venture capital
research, in: Handbook of the Economics of Finance Volume 2 Part A, eds. George M.
Constantinides, Milton Harris and Rene M. Stulz, Elsevier, 573-648.
Samila, Sampsa, and Olav Sorenson, 2010, Venture capital, entrepreneurship, and economic
growth, Review of Economics and Statistics 93, 338–349.
Solow, Robert M., 1957, Technical change and the aggregate production function, Review of
Economics and Statistics 39, 312-320.
Sorensen, Morten, 2007, How smart is smart money? A two-sided matching model of venture
capital, Journal of Finance 62, 2725–2762.
Weichselbaumer, Doris, 2003, Sexual orientation discrimination in hiring, Labour Economics
10, 629-642.
Wernerfelt, Birger, 1984, A resource-based view of the firm, Strategic Management Journal 5,
171-180.
Zingales, Luigi, 2000, In search of new foundations, Journal of Finance 55, 1623-1653.
Appendix: Information disclosed in featured emails
For each start-up in the randomized experiment, we show what information passed AngelList’s
disclosure threshold. This information would be shown in the featured emails outside of the experiment.
To protect the companies' identities, the entries within each category are listed in a different order, so that entries across the three categories do not correspond to the same company. The team, investors, and traction information passed AngelList's disclosure threshold for 19, 17, and 18 start-ups, respectively.
Team information:
- Team worked at Microsoft, Google and Ask.com.
- Team worked at Starbucks and Nabisco.
- Team worked at Royal Bank of Canada and went to University of Toronto.
- Team worked at IBM and went to the University of Waterloo.
- Team worked at Accel. Went to Cambridge and Oxford.
- Team went to Stanford and Berkeley.
- Team worked at Microsoft, Groupon and went to Stanford GSB.
- Team members worked at Accenture.
- Team worked at E*TRADE and studied at Stanford.
- Team includes the founder of SIMMS - radiology software used by 2 million patients.
- Team worked at Intel and went to The University of Chicago.
- Team worked at JPMorgan and went to MIT.
- Team worked at Yahoo!, Oracle and went to Stanford.
- Team members went to Harvard.
- Team worked at Google and went to The University of Cambridge. Includes 2 Artificial Intelligence PhDs.
- Team worked at Microsoft, GE and went to Cornell.
- Team founded well.ca ($40M/year revenue), worked at IBM and RIM.
- Team went to University of British Columbia.
- The founders last company designed and built 12 composting facilities in the US.

Investors information:
- Great Oaks and Josh Abramowitz are investing in this round.
- 500 Startups is investing. Incubated by Startmate.
- Summit Partners are investing in this round.
- Hadi Partovi, Keith Rabois and Tony Hsieh are investing in this round.
- Quest Venture Partners are investing in this round. Incubated by Y Combinator.
- Laurent Drion is investing $500K in this round.
- Grishin Robotics is investing $500K in this round.
- Lightbank is investing in this round.
- Adventure Capital is investing in this round.
- Incubated by AngelPad.
- SoftTech VC and Matt Mullenweg are investing in this round.
- Jeff Fluhr and Great Oaks are investing in this round.
- Dave McClure is investing in this round.
- Sandbox Industries are investing.
- Golden Gate Ventures is investing in this round.
- Patrick Condon (co-founder of Rackspace) investing. Incubated by TechStars.
- Boris Wertz is investing in this round. Incubated by Y Combinator.

Traction information:
- $125K revenue in first 5 months, 15 companies, 1.7K testers.
- $30M in transaction volume.
- 3.2K health providers, 15% monthly growth.
- $1M/year revenue, 70K users, 20% monthly growth.
- $800K revenue/year, 60% annual growth, 1K customers.
- $250K in pre-sales, 960 pre-orders.
- 40 vending machines, 3 pilot contracts.
- 350 subscribing businesses, 700 active users, 125% monthly growth.
- $10K/month revenue, 120K users, 7.5K courses.
- $20K/month revenue, 25% monthly growth, 12K monthly active users.
- $1K revenue, 12 customers.
- 130K users, 10% monthly growth, $6K/month revenue.
- 80 users. Waiting list includes BHP Billiton, USGS and the WWF.
- 90K items for sale, 10K monthly active users, 30% monthly growth.
- $20K revenue/month, 10K engineers.
- $1.4M revenue/year, 10K units sold.
- $10M/year revenue, 60% annual growth, 13 stores, 25% profit margin.
- $70K revenue/month, 500K monthly active users, 100K daily active users.
Table 1: Descriptive Statistics of Emails in Randomized Field Experiment
This table reports summary statistics for the sample of emails about featured start-ups in the randomized
field experiment. Each featured start-up has up to three information categories (team, traction, and current
investors) that would normally be shown in the email if the information for that category reaches a
threshold as defined by AngelList (see Figure 1 for an example). For each start-up, various unique
versions of each email are generated that randomly hide these pieces of information. These emails are
sent to investors registered on the AngelList platform. The sample is limited to active investors who have
in the past requested at least one introduction to a start-up on AngelList. Panel A shows basic descriptive
statistics regarding the emails, the investors who received the emails, and the start-ups covered by the
experiment. Each email contains a button that, when clicked, takes the investor to the AngelList platform
where more information about the company is shown, and introductions to the company’s founders can
be requested. Intro means an investor requested an introduction to the start-up’s founders within three
days of viewing the email. Investment means the investor invested in the company at some time after
receipt of the email. Panel B shows the frequency with which each information category passed the
threshold where it would normally be shown, and how often this information was actually shown in the
emails conditional on the threshold being passed. The rightmost column shows the p-value for Pearson’s
chi-squared test of the null hypothesis that the proportions in the first three columns are all equal.
Panel A: Experiment descriptive statistics

                                                      mean   st. dev.      p10      p50      p90
Emails
  Total                                             16,981
  Unique                                                58
  Investors / unique email                             293        149       86      264      468
Active investors emailed                             4,494
Active investors who opened at least one email       2,925
Start-ups                                               21
  Investors / start-up                                 809        468      338      676    1,451
  Unique emails / start-up                            2.76       0.62        2        3        4
  Start-ups / investor                                3.78       2.45        1        3        7
Emails opened (%)                                    48.28
Of opened emails: Clicked (%)                        16.45
Of clicked emails: Intro (%)                         15.14
Of clicked emails: Investment (%)                     2.98

Panel B: Information in emails

                                                             Team   Investors   Traction   p-value
Information passed threshold (% of start-ups)               90.48       80.95      85.71     0.678
Information shown if passed threshold (% of unique emails)  73.24       73.02      72.06     0.987
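The p-values in Panel B can be reproduced directly from the reported counts. The sketch below is illustrative only (it is not the code used to produce the table); the first-row test compares the number of start-ups whose information passed the disclosure threshold in each category, 19, 17, and 18 of 21 start-ups, as listed in the Appendix.

```python
# Illustrative sketch only: Pearson's chi-squared test that the share of start-ups
# passing the disclosure threshold is equal across the three information categories,
# using the counts reported in the Appendix.
from scipy.stats import chi2_contingency

passed = [19, 17, 18]              # team, investors, traction (out of 21 start-ups)
failed = [21 - n for n in passed]  # start-ups that did not pass the threshold

chi2, p_value, dof, expected = chi2_contingency([passed, failed])
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_value:.3f}")  # p = 0.678, as in Panel B
```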
Table 2: Descriptive Statistics of Start-ups
This table shows descriptive statistics of the 21 start-ups in the randomized field experiment at the time of
fundraising. Panel A shows the distribution across cities and countries. Panel B reports the distribution
across sectors, where sectors are not mutually exclusive. Panel C shows the structure of the start-up in
terms of number of founders, employees, board size, advisors, and whether or not the company has an
attorney. Employees (%) is the fraction of start-ups that have non-founder employees. The If > 0, #
employees variable shows how many employees are working for those start-ups that have employees. The
variables for board members, advisors and attorney follow a similar pattern. Panel D reports the
percentage of start-ups that had funding prior to the current round (Pre-round funding (%)), and if any
prior money was raised, the amount raised (If > 0, pre-round funding raised). Incubator (%) is the
fraction of start-ups that have been part of an incubator or accelerator program in the past, and Equity
financing (%) is the percentage of firms selling stock, with the remainder selling convertible notes.
Panel A: Start-up Distribution across Cities

                            N   fraction (%)
Austin, TX                  1           4.76
Chicago, IL                 1           4.76
Kitchener, Canada           1           4.76
London, United Kingdom      1           4.76
Melbourne, Australia        1           4.76
New York City, NY           3          14.28
San Antonio, TX             1           4.76
Silicon Valley, CA          6          28.57
Singapore                   1           4.76
Sydney, Australia           1           4.76
Toronto, Canada             3          14.28
Vancouver, Canada           1           4.76

Panel B: Start-up Distribution across Sectors

                            N   fraction (%)
Information Technology     18          85.71
Consumers                  13          61.90
Clean Technology            1           4.76
Healthcare                  3          14.28
Business-to-business        8          38.10
Media                       2           9.52
Education                   2           9.52

Panel C: Start-up Structure

                            N    mean   st. dev.   p10   p50   p90
# Founders                 21    2.62       0.92     2     2     4
Employees (%)              21   80.95
If > 0, # employees        17    3.35       2.21     1     3     7
Board members (%)          21   23.81
If > 0, # board members     5    1.80       0.84     1     2     3
Advisor (%)                21   90.48
If > 0, # advisors         19    4.74       6.00     1     3     7
Attorney (%)               21   71.43

Panel D: Start-up Funding

                                            N      mean   st. dev.       p10       p50       p90
Incubator (%)                              21     57.14
Pre-round funding (%)                      21     52.38
If > 0, pre-round funding raised ($000s)   11    580.95     855.33     50.00    290.00    950.00
Pre-money valuation ($000s)                16  5,465.63   2,133.60  3,000.00  5,000.00  8,000.00
Fundraising goal ($000s)                   18  1,183.06     462.88    570.00  1,250.00  2,000.00
Equity financing (%)                       21     76.19
Table 3: Descriptive Statistics of Investors
This table reports descriptive statistics of the active investors (defined as having requested at least one
introduction through the AngelList platform) who received featured emails about the start-ups in the
randomized field experiment, and opened at least one such email. Panel A shows in which sectors
investors have stated they are interested in investing. A single investor can indicate multiple sectors of
interest. Panel B shows the number of introductions requested by investors, the signal of an investor’s
success as computed by AngelList (see the main text for a description of the algorithm), the number of
followers that investors have on the platform, both the raw number and weighted by the followers’ signals,
the percentage of investors that were involved with start-ups in the past, and for those involved with start-
ups, the number of start-ups the investor was involved with. Panel C breaks down these involvements into
various roles. Investor (%) shows the percentage of angels who have invested in start-ups. For the subset
of angels who invested in start-ups, If > 0, # start-ups funded reports the number of start-ups that they
invested in. The variable definitions for advisor, board member, and founder follow a similar pattern.
Panel A: Investor Stated Interest across Sectors

Sector                        N   fraction (%)
Information Technology    2,884          98.59
Consumers                 2,769          94.66
Clean Technology            861          29.43
Healthcare                1,239          42.35
Business-to-business      2,328          79.58
Finance                     949          32.44
Media                     1,420          48.54
Energy                      165           5.64
Education                   685          23.41
Life Sciences               414          14.15
Transportation              307          10.49
Other                        26           0.8

Panel B: Investor Characteristics

                                        N      mean    st. dev.      p10     p50       p90
# Introductions requested           2,925      9.72       31.09        1       3        21
Signal                              2,925      6.44        2.26     3.28    6.30      9.87
# Followers                         2,925    591.12    1,493.10       26     202     1,346
Weighted number of followers        2,925  2,527.30    5,763.70   108.97  915.70  5,896.90
Involved in start-ups (%)           2,925     91.93
If > 0, # start-ups involved with   2,689     12.55       17.18        2       8        27

Panel C: Investor Roles in Start-up Companies

                                        N    mean   st. dev.   p10   p50   p90
Investor (%)                        2,925   82.36
If > 0, # start-ups funded          2,409   13.10      16.81     2     8    28
Advisor (%)                         2,925   43.49
If > 0, # start-ups as advisor      1,272    3.47       4.54     1     2     7
Board member (%)                    2,925   16.92
If > 0, # start-ups as board member   495    1.93       1.82     1     1     4
Start-up founder (%)                2,925   60.00
If > 0, # start-ups founded         1,755    2.05       1.44     1     2     4
Table 4: Investor Response to Randomized Emails
This table reports regression results of investor responses to the featured emails in the randomized field experiment. The dependent variable is one
when an angel investor clicked on the “View” button in the featured email, and zero otherwise. Only opened emails are included in the sample.
Team = 1 is an indicator variable that equals one if the team information is shown in the email, and zero otherwise. Similarly, Investors = 1 and
Traction = 1 are indicator variables for the current investors, and traction information, respectively. Connections counts the number of people on
the start-up’s profile (in any role) that the investor already follows prior to receiving the email. Prior follow = 1 is an indicator variable that
equals one if the investor was already following the start-up on AngelList prior to receiving the featured email. Prior emails is the number of
emails that the investor has received in the experiment prior to the present email. R2 is the adjusted R2 for OLS regressions, and the pseudo R2
for logit models. Standard errors are in parentheses, and are clustered at the investor level. ***, ** and * indicate statistical significance at the 1, 5 and
10 percent level, respectively.
                          (1)        (2)        (3)        (4)        (5)        (6)        (7)        (8)
Model                     OLS        OLS        OLS        OLS        Logit      Logit      Logit      Logit
Team = 1                0.026***   0.028***   0.022**    0.023**    0.193***   0.208***   0.162**    0.172**
                       (0.009)    (0.009)    (0.010)    (0.010)    (0.064)    (0.064)    (0.073)    (0.074)
Investors = 1           0.011      0.013      0.010      0.009      0.077      0.100      0.070      0.067
                       (0.010)    (0.010)    (0.013)    (0.013)    (0.072)    (0.073)    (0.097)    (0.097)
Traction = 1            0.010      0.013      0.016      0.017      0.073      0.097      0.122      0.123
                       (0.010)    (0.010)    (0.014)    (0.014)    (0.075)    (0.076)    (0.106)    (0.106)
Connections                        0.016***              0.010                 0.093***              0.064*
                                  (0.006)               (0.006)               (0.034)               (0.038)
Prior follow = 1                   0.129***              0.143***              0.739***              0.835***
                                  (0.033)               (0.033)               (0.163)               (0.166)
Prior emails                      -0.007***              0.001                -0.051***              0.006
                                  (0.002)               (0.003)               (0.014)               (0.022)
Start-up fixed effects    N          N          Y          Y          N          N          Y          Y
Number of observations  8,189      8,189      8,189      8,189      8,189      8,189      8,189      8,189
R2                      0.001      0.008      0.001      0.005      0.001      0.009      0.028      0.033
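For readers who want to map the table to an estimating equation, the specifications in columns (4) and (8) can be sketched as follows. This is an illustrative sketch only, not the authors’ code: the input file and column names (clicked, team_shown, investors_shown, traction_shown, prior_follow as 0/1 indicators; connections and prior_emails as counts; startup_id and investor_id as identifiers) are hypothetical stand-ins for the variables defined in the table notes.

```python
# Illustrative sketch only (hypothetical data file and column names, not the
# authors' code) of the Table 4 click regressions with investor-clustered SEs.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("opened_emails.csv")  # one row per opened email (assumed input)

formula = ("clicked ~ team_shown + investors_shown + traction_shown"
           " + connections + prior_follow + prior_emails + C(startup_id)")

# Column (4): linear probability model with controls and start-up fixed effects,
# standard errors clustered at the investor level.
ols_fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["investor_id"]})

# Column (8): logit analogue with the same covariates and clustering.
logit_fit = smf.logit(formula, data=df).fit(
    disp=0, cov_type="cluster", cov_kwds={"groups": df["investor_id"]})

print(ols_fit.params[["team_shown", "investors_shown", "traction_shown"]])
print(logit_fit.params[["team_shown", "investors_shown", "traction_shown"]])
```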
Table 5: Investor Response to Non-randomized Emails
This table replicates the regressions in Table 4 for the subset of featured emails that show all information that has crossed the disclosure threshold,
in the columns labeled “Full-information emails only”. The model numbers in the second row correspond to the model numbers in Table 4. For
ease of comparison, the columns labeled “Randomized sample” show the results from Table 4 for the same set of models. The dependent variable
is one when an angel investor clicked on the “View” button in the featured email, and zero otherwise. The explanatory variables are as defined in
Table 4. R2 is the adjusted R2 for OLS regressions, and the pseudo R2 for logit models. Standard errors are in parentheses, and are clustered at the
investor level. ***, ** and * indicate statistical significance at the 1, 5 and 10 percent level, respectively.
                          Full-information emails only                 Randomized sample
                          (1)        (2)        (5)        (6)         (1)        (2)        (5)        (6)
                          OLS        OLS        Logit      Logit       OLS        OLS        Logit      Logit
Team = 1                0.046**    0.045**    0.336**    0.337*      0.026***   0.028***   0.193***   0.208***
                       (0.022)    (0.022)    (0.171)    (0.172)     (0.009)    (0.009)    (0.064)    (0.064)
Investors = 1           0.013      0.022      0.091      0.155       0.011      0.013      0.077      0.100
                       (0.018)    (0.019)    (0.127)    (0.133)     (0.010)    (0.010)    (0.072)    (0.073)
Traction = 1            0.037*     0.043**    0.265*     0.311**     0.010      0.013      0.073      0.097
                       (0.020)    (0.020)    (0.149)    (0.154)     (0.010)    (0.010)    (0.075)    (0.076)
Connections                        0.010                 0.058                  0.016***              0.093***
                                  (0.010)               (0.054)                (0.006)               (0.034)
Prior follow = 1                   0.150**               0.822***               0.129***              0.739***
                                  (0.059)               (0.277)                (0.033)               (0.163)
Prior emails                      -0.006*               -0.042*                -0.007***             -0.051***
                                  (0.003)               (0.025)                (0.002)               (0.014)
Start-up fixed effects    N          N          N          N           N          N          N          N
Number of observations  2,992      2,992      2,992      2,992       8,189      8,189      8,189      8,189
R2                      0.001      0.006      0.002      0.008       0.001      0.008      0.001      0.009
Table 6: Investor Response by Number of Investments
This table reports regression results of investor responses to the featured emails in the randomized field
experiment. The dependent variable is one when an angel investor clicked on the “View” button in the
featured email, and zero otherwise. Only opened emails are included in the sample. Team = 1, Investors =
1 and Traction = 1 are indicator variables that equal one if the team, current investors, or traction
information, respectively, are shown in the email. # Investments <= cutoff is an indicator variable that
equals one if the number of investments by a given investor is less than or equal to the percentile of the
investments count distribution shown in the row labeled Cutoff. The other variables are as defined in
Table 4. R2 is the adjusted R2 for OLS regressions, and the pseudo R2 for logit models. Standard errors are in
parentheses, and are clustered at the investor level. ***, ** and * indicate statistical significance at the 1,
5 and 10 percent level, respectively.
                                                 (1)        (2)        (3)        (4)        (5)        (6)
                                                 OLS        OLS        OLS        Logit      Logit      Logit
Team shown = 1                                 0.017*     0.021*     0.026**    0.130*     0.162*     0.203**
                                              (0.010)    (0.011)    (0.013)    (0.079)    (0.087)    (0.099)
Investors shown = 1                           -0.001      0.004      0.003     -0.012      0.025      0.016
                                              (0.013)    (0.014)    (0.016)    (0.104)    (0.115)    (0.125)
Traction shown = 1                             0.009      0.003      0.010      0.066      0.024      0.076
                                              (0.014)    (0.016)    (0.017)    (0.108)    (0.117)    (0.129)
# Investments <= cutoff x Team shown = 1       0.037      0.007     -0.007      0.235      0.029     -0.061
                                              (0.025)    (0.018)    (0.017)    (0.176)    (0.137)    (0.131)
# Investments <= cutoff x Investors shown = 1  0.070**    0.021      0.014      0.476**    0.146      0.103
                                              (0.028)    (0.021)    (0.020)    (0.189)    (0.151)    (0.149)
# Investments <= cutoff x Traction shown = 1   0.063**    0.047**    0.015      0.427*     0.351**    0.116
                                              (0.031)    (0.021)    (0.020)    (0.220)    (0.163)    (0.154)
# Investments <= cutoff                       -0.080*    -0.028     -0.001     -0.530*    -0.193      0.001
                                              (0.043)    (0.031)    (0.030)    (0.320)    (0.241)    (0.231)
Connections                                    0.010      0.010      0.010      0.066*     0.068*     0.067*
                                              (0.006)    (0.006)    (0.006)    (0.038)    (0.038)    (0.038)
Prior follow                                   0.145***   0.144***   0.145***   0.847***   0.849***   0.852***
                                              (0.033)    (0.033)    (0.033)    (0.166)    (0.166)    (0.166)
Prior emails                                   0.002      0.002      0.001      0.013      0.014      0.010
                                              (0.003)    (0.003)    (0.003)    (0.022)    (0.022)    (0.022)
Startup fixed effects                            Y          Y          Y          Y          Y          Y
Cutoff                                         Zero       25%        50%        Zero       25%        50%
Number of observations                        8,189      8,189      8,189      8,189      8,189      8,189
R2                                             0.007      0.006      0.005      0.035      0.035      0.034
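The interaction design in this table, and in Tables 7 through 9, which swap in the other investor proxies, can be sketched as follows. This is an illustrative sketch with hypothetical column names, not the authors’ code; the example builds the 25% column, interacting the information dummies with an indicator for investors at or below the 25th percentile of the investment-count distribution.

```python
# Illustrative sketch only (hypothetical column names): cutoff indicator for
# low-experience investors, interacted with the randomized information dummies.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("opened_emails.csv")   # assumed input, one row per opened email

cutoff = df["n_investments"].quantile(0.25)              # e.g., the 25% column
df["below_cutoff"] = (df["n_investments"] <= cutoff).astype(int)

# (a + b + c) * d expands to the main effects plus the three interaction terms.
formula = ("clicked ~ (team_shown + investors_shown + traction_shown) * below_cutoff"
           " + connections + prior_follow + prior_emails + C(startup_id)")

fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["investor_id"]})

# The interaction coefficients (e.g., traction_shown:below_cutoff) measure how much
# more strongly the low-experience group responds to each information category.
print(fit.params.filter(like=":below_cutoff"))
```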
Table 7: Investor Response by Signal
This table reports regression results of investor responses to the featured emails in the randomized field
experiment. The dependent variable is one when an angel investor clicked on the “View” button in the
featured email, and zero otherwise. Only opened emails are included in the sample. Team = 1, Investors =
1 and Traction = 1 are indicator variables that equal one if the team, current investors, or traction
information, respectively, are shown in the email. Signal < cutoff is an indicator variable that equals one
if the investor signal is below the percentile of the signal distribution shown in the row labeled Signal
cutoff. See the main text for the algorithm used to compute the signals. The other variables are as defined
in Table 4. R2 is the adjusted R2 for OLS regressions, and the pseudo R2 for logit models. Standard errors are
in parentheses, and are clustered at the investor level. ***, ** and * indicate statistical significance at the
1, 5 and 10 percent level, respectively.
                                                 (1)        (2)        (3)        (4)        (5)        (6)
                                                 OLS        OLS        OLS        Logit      Logit      Logit
Team shown = 1                                 0.019*     0.024*     0.035*     0.147*     0.184*     0.265*
                                              (0.011)    (0.013)    (0.019)    (0.085)    (0.097)    (0.139)
Investors shown = 1                           -0.003      0.001     -0.005     -0.036      0.005     -0.045
                                              (0.014)    (0.016)    (0.022)    (0.112)    (0.127)    (0.170)
Traction shown = 1                             0.005      0.010      0.009      0.037      0.077      0.067
                                              (0.015)    (0.017)    (0.022)    (0.112)    (0.123)    (0.156)
Signal < cutoff x Team shown = 1               0.013     -0.003     -0.016      0.067     -0.027     -0.127
                                              (0.020)    (0.017)    (0.020)    (0.145)    (0.131)    (0.154)
Signal < cutoff x Investors shown = 1          0.055**    0.016      0.019      0.385**    0.121      0.141
                                              (0.022)    (0.020)    (0.024)    (0.159)    (0.151)    (0.184)
Signal < cutoff x Traction shown = 1           0.063**    0.015      0.012      0.440**    0.122      0.093
                                              (0.024)    (0.020)    (0.023)    (0.180)    (0.155)    (0.171)
Signal < cutoff                               -0.049     -0.013      0.001     -0.320     -0.102      0.010
                                              (0.034)    (0.030)    (0.036)    (0.261)    (0.231)    (0.276)
Connections                                    0.011*     0.010      0.010      0.075*     0.066*     0.067*
                                              (0.006)    (0.006)    (0.006)    (0.038)    (0.038)    (0.038)
Prior follow                                   0.144***   0.144***   0.145***   0.843***   0.841***   0.846***
                                              (0.033)    (0.033)    (0.033)    (0.166)    (0.166)    (0.166)
Prior emails                                   0.003      0.001      0.001      0.020      0.008      0.006
                                              (0.003)    (0.003)    (0.003)    (0.022)    (0.022)    (0.022)
Startup fixed effects                            Y          Y          Y          Y          Y          Y
Signal cutoff                                  25%        50%        75%        25%        50%        75%
Number of observations                        8,189      8,189      8,189      8,189      8,189      8,189
R2                                             0.007      0.005      0.005      0.036      0.033      0.034
Table 8: Investor Response by Number of Followers
This table reports regression results of investor responses to the featured emails in the randomized field
experiment. The dependent variable is one when an angel investor clicked on the “View” button in the
featured email, and zero otherwise. Only opened emails are included in the sample. Team = 1, Investors =
1 and Traction = 1 are indicator variables that equal one if the team, current investors, or traction
information, respectively, are shown in the email. # Followers < cutoff is an indicator variable that equals
one if the number of followers of a given investor is less than the percentile of the followers count
distribution shown in the row labeled Cutoff. The other variables are as defined in Table 4. R2 is the
adjusted R2 for OLS regressions, and the pseudo R2 for logit models. Standard errors are in parentheses, and
are clustered at the investor level. ***, ** and * indicate statistical significance at the 1, 5 and 10 percent
level, respectively.
                                                 (1)        (2)        (3)        (4)        (5)        (6)
                                                 OLS        OLS        OLS        Logit      Logit      Logit
Team shown = 1                                 0.017      0.023*     0.036**    0.135      0.182*     0.293**
                                              (0.011)    (0.013)    (0.017)    (0.085)    (0.104)    (0.141)
Investors shown = 1                           -0.007     -0.018     -0.018     -0.064     -0.163     -0.155
                                              (0.013)    (0.016)    (0.020)    (0.109)    (0.127)    (0.164)
Traction shown = 1                             0.006      0.003      0.010      0.046      0.022      0.078
                                              (0.015)    (0.017)    (0.021)    (0.114)    (0.127)    (0.158)
# Followers < cutoff x Team shown = 1          0.021     -0.001     -0.018      0.115     -0.031     -0.163
                                              (0.020)    (0.017)    (0.020)    (0.144)    (0.132)    (0.158)
# Followers < cutoff x Investors shown = 1     0.064***   0.053***   0.035      0.443***   0.412***   0.277
                                              (0.024)    (0.020)    (0.022)    (0.165)    (0.151)    (0.177)
# Followers < cutoff x Traction shown = 1      0.049**    0.030      0.010      0.324*     0.232      0.081
                                              (0.025)    (0.020)    (0.022)    (0.174)    (0.156)    (0.177)
# Followers < cutoff                          -0.041     -0.023      0.014     -0.247     -0.163      0.132
                                              (0.034)    (0.029)    (0.034)    (0.252)    (0.230)    (0.277)
Connections                                    0.013**    0.013**    0.013**    0.088**    0.089**    0.085**
                                              (0.006)    (0.006)    (0.006)    (0.040)    (0.040)    (0.039)
Prior follow                                   0.145***   0.146***   0.145***   0.855***   0.860***   0.854***
                                              (0.033)    (0.033)    (0.033)    (0.165)    (0.166)    (0.166)
Prior emails                                   0.003      0.001      0.001      0.020      0.008      0.004
                                              (0.003)    (0.003)    (0.003)    (0.022)    (0.022)    (0.021)
Startup fixed effects                            Y          Y          Y          Y          Y          Y
Cutoff                                         25%        50%        75%        25%        50%        75%
Number of observations                        8,189      8,189      8,189      8,189      8,189      8,189
R2                                             0.008      0.007      0.007      0.037      0.037      0.036
Table 9: Investor Response by Weighted Number of Followers
This table reports regression results of investor responses to the featured emails in the randomized field
experiment. The dependent variable is one when an angel investor clicked on the “View” button in the
featured email, and zero otherwise. Only opened emails are included in the sample. Team = 1, Investors =
1 and Traction = 1 are indicator variables that equal one if the team, current investors, or traction
information, respectively, are shown in the email. Weighted # followers < cutoff is an indicator variable
that equals one if the number of followers of a given investor, weighted by their signals, is less than the
percentile of the weighted followers count distribution shown in the row labeled Cutoff. The other
variables are as defined in Table 4. R2 is the adjusted R2 for OLS regressions, and the pseudo R2 for logit
models. Standard errors are in parentheses, and are clustered at the investor level. ***, ** and * indicate
statistical significance at the 1, 5 and 10 percent level, respectively.
                                                      (1)        (2)        (3)        (4)        (5)        (6)
                                                      OLS        OLS        OLS        Logit      Logit      Logit
Team shown = 1                                      0.020*     0.026**    0.033*     0.158*     0.207**    0.268*
                                                   (0.011)    (0.013)    (0.017)    (0.085)    (0.104)    (0.140)
Investors shown = 1                                -0.003     -0.017     -0.013     -0.036     -0.156     -0.111
                                                   (0.013)    (0.016)    (0.020)    (0.109)    (0.127)    (0.164)
Traction shown = 1                                  0.006      0.003      0.014      0.048      0.023      0.112
                                                   (0.015)    (0.017)    (0.021)    (0.113)    (0.126)    (0.158)
Weighted # followers < cutoff x Team shown = 1      0.009     -0.008     -0.014      0.035     -0.082     -0.129
                                                   (0.020)    (0.017)    (0.020)    (0.144)    (0.132)    (0.157)
Weighted # followers < cutoff x Investors shown = 1 0.051**    0.051***   0.028      0.354**    0.399***   0.221
                                                   (0.024)    (0.020)    (0.022)    (0.165)    (0.152)    (0.176)
Weighted # followers < cutoff x Traction shown = 1  0.049**    0.030      0.003      0.338*     0.229      0.023
                                                   (0.024)    (0.020)    (0.022)    (0.177)    (0.157)    (0.176)
Weighted # followers < cutoff                      -0.035     -0.018      0.021     -0.216     -0.117      0.183
                                                   (0.034)    (0.029)    (0.034)    (0.254)    (0.231)    (0.276)
Connections                                         0.013**    0.013**    0.013**    0.083**    0.090**    0.084**
                                                   (0.006)    (0.006)    (0.006)    (0.039)    (0.040)    (0.039)
Prior follow                                        0.145***   0.146***   0.145***   0.855***   0.861***   0.854***
                                                   (0.033)    (0.033)    (0.033)    (0.166)    (0.166)    (0.166)
Prior emails                                        0.002      0.001      0.001      0.018      0.008      0.005
                                                   (0.003)    (0.003)    (0.003)    (0.022)    (0.022)    (0.021)
Startup fixed effects                                 Y          Y          Y          Y          Y          Y
Cutoff                                              25%        50%        75%        25%        50%        75%
Number of observations                             8,189      8,189      8,189      8,189      8,189      8,189
R2                                                  0.007      0.008      0.006      0.036      0.037      0.035
Table 10: Start-ups in Field Experiment Sample versus Broad Sample
This table compares the sample of 21 start-ups in the randomized field experiment (the “experiment firms”) with a broad sample of 5,538 firms
raising funding on AngelList (the “non-experiment firms”). The non-experiment firms are those firms that attempted to raise money through
AngelList and received at least one introduction request. The variables are as defined in Table 2. The rightmost column shows the p-value for a
differences-in-means test between the experiment and non-experiment samples.
                                  Experiment firms (N = 21)                 Non-experiment firms (N = 5,538)           Means
                                  N      mean      median    st. dev.      N        mean      median     st. dev.      test p
# Founders                       21      2.62         2        0.92       5,538     2.11         2          1.06       0.028
Employees (%)                    21     80.95                              5,538    52.56                               0.009
If > 0, # employees              17      3.35         3        2.21       2,911     2.91         2          2.45       0.453
Board members (%)                21     23.81                              5,538    16.78                               0.390
If > 0, # board members           5      1.80         2        0.84         929     1.96         2          1.14       0.749
Advisor (%)                      21     90.48                              5,538    60.74                               0.005
If > 0, # advisors               19      4.74         3        6.00       3,364     2.94         2          2.18       0.000
Incubator (%)                    21     57.14                              5,538    29.70                               0.006
Pre-round funding (%)            21     47.62                              5,538    45.76                               0.865
If > 0, amount raised ($000s)    10    605.05      234.00    897.66       2,534   674.27      250.00     1,874.28      0.904
Pre-money valuation ($000s)      12  5,579.17    5,000.00  2,383.22       2,616 4,857.83    3,500.00    15,747.91      0.873
Fundraising goal ($000s)         15  1,226.33    1,325.00    488.96       4,321   923.99      500.00     1,135.56      0.303
Equity financing (%)             21     76.19                              4,912    69.04                               0.603
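The rightmost column can be illustrated with a simple two-sample comparison of means. The sketch below is illustrative only: the file and column names are hypothetical, and it assumes an unequal-variance t-test, a detail the table notes do not specify.

```python
# Illustrative sketch only (hypothetical file and column names) of the means test
# reported in the rightmost column, comparing experiment and non-experiment firms.
import pandas as pd
from scipy.stats import ttest_ind

firms = pd.read_csv("angellist_firms.csv")       # one row per start-up (assumed)
experiment = firms[firms["in_experiment"] == 1]
broad = firms[firms["in_experiment"] == 0]

t_stat, p_value = ttest_ind(experiment["n_founders"], broad["n_founders"],
                            equal_var=False, nan_policy="omit")
print(f"# Founders: t = {t_stat:.2f}, p = {p_value:.3f}")
```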
Figure 1: Sample featured start-up email to investors
This figure shows an example of a featured start-up email that is sent to investors. Each featured start-up
has up to three information categories (team, traction, and current investors) that would normally be
shown in the email if the information for that category reaches a threshold as defined by AngelList. For
each start-up, various unique versions of each email are generated that randomly hide these pieces of
information (the Randomization categories). Each email contains a View button that, when clicked,
takes the investor to the AngelList platform where more information about the company is shown, and
introductions to the company’s founders can be requested. The Get an Intro button requests such an
introduction straight from the email.
[Figure 1 shows a screenshot of a featured start-up email, with the team, traction, and current investors sections labeled as the “Randomization categories.”]
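To make the design in Figure 1 concrete, the sketch below shows one way unique email versions could be generated. It is purely illustrative and not AngelList’s implementation; the display probability is arbitrary.

```python
# Illustrative sketch only, not AngelList's implementation: each information
# category that passed the disclosure threshold is independently hidden at random
# when a unique email version is generated. The 0.5 probability is arbitrary.
import random

def email_versions(passed_categories, n_versions, seed=0):
    """Return randomized subsets of the categories that passed the threshold."""
    rng = random.Random(seed)
    versions = []
    for _ in range(n_versions):
        shown = {c for c in passed_categories if rng.random() < 0.5}
        versions.append(shown)
    return versions

# Example: a start-up whose team and traction information passed the threshold.
for shown in email_versions({"team", "traction"}, n_versions=4):
    print(sorted(shown))
```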
Figure 2: Distribution of investor signal measure
This figure shows the histogram of the investor signal for the 2,925 active investors that received emails
about featured start-ups in the randomized field experiment, and opened at least one such email. The
signal ranges from zero to ten. See the main text for a description of the algorithm used to compute the
signal.
[Histogram: density (0 to 0.4) on the vertical axis; signal (0 to 10) on the horizontal axis.]