Category Archives: Uncategorized

Empirics in the very long-run

Much of my empirical work uses proprietary data from firms. I’m fully aware of the problem this creates for science—my work is, by and large, not (easily) reproducible. There are some things I do to try to enhance the credibility of my work despite this limitation, which I’ll save for another blog post, but I have an idea that I want to try, and that I think other similarly situated researchers should try: ask the data provider to agree to some release date in the future—potentially far in the future.

Researcher: Can I release this data next year?

Lawyer: No way.

Researcher: How about 5 years?

Lawyer: No.

Researcher: How about 15?

Lawyer: Hmm (thinks they might be around in 15 years). Still no.

Researcher: How about 40 years from now?

Lawyer: (50-year-old lawyer contemplates own mortality) Uh, sure, OK.

Given my age, I’m not likely to be the one to work with this 40-year-old data, but I’m pretty sure there will be empirical economists in 40 years who might like to revisit some aspect of the data I’ve worked with, hopefully with much more sophisticated methods and theories.

How could you actually make this happen? Well, obviously picking a storage medium that can stand the test of time could be challenging. I’ve heard good things about M-DISC and I’ve been burning 100 GB back-ups with a burner I bought. The discs supposedly can last for 1,000+ years.

The second part is the software and data. It’s too much of a burden to rewrite all your code in some future-proof language, and picking winners will be hard anyway. I think the most promising approach is to just use whatever you normally use, but save not just your code and data but all the dependencies for the whole OS. In other words, use something like a Docker image of the code and data needed to produce your paper, with a script that orchestrates the production of the paper from raw data to final PDF. I feel pretty confident that 50 years from now, some variant of Linux will still be in use that can run these images—or at least easily run a virtual environment that mimics what we use today.
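To make that concrete, here is a minimal sketch of what such an orchestration script could look like in R (the file names are hypothetical placeholders):

# run_all.R -- rebuild the paper from raw data with one command
# (a minimal sketch; the file names below are hypothetical)
source("01_clean_raw_data.R")     # raw data -> analysis data sets
source("02_run_analysis.R")       # analysis data -> tables and figures
rmarkdown::render("paper.Rmd", output_file = "paper.pdf")   # assemble the final PDF

Archiving the Docker image that can run this script, together with the raw data, is what would let someone re-run everything decades from now.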

In addition to the technical issue, there is also a social one: how do you get your data out there in 50 or 100 years? If the timeline is long but not too long, I think adding a section to your will with instructions to your executor could be sufficient. I’ve been working on a “research will,” which describes how I’d like my various projects wrapped up if I were to pass suddenly. If there were some time-delayed data releases, you could add sections to your will that assign the data release to a young colleague or perhaps to some institution that will persist (e.g., the head of the department at your university). If the timeline is really long (say 100+ years), I don’t have a great answer, but perhaps a university library would be willing to take on this role. I’d be curious to hear other ideas on how to make sure your “time capsule” gets opened.

Celebrating RJZ’s 50 years of teaching at Harvard

This weekend I attended a celebration of Richard Zeckhauser’s 50 years of teaching at Harvard. The event started with two panels focusing on different aspects of Richard’s scholarship, followed by a dinner that included several speeches and a video tribute with testimonials from friends, students, co-authors, and family. It was a wonderful event, and it gave me the urge to write a little bit about Richard. I’ve had the privilege of knowing Richard for over 10 years now, as my primary advisor in graduate school and as a co-author. I also TA’d his notoriously challenging course at the Kennedy School, API-301.

When I was a graduate student, Richard gave me the freedom to pursue what I was interested in. It worked in part because he’s basically interested in everything. He had the patience for my bad ideas and the instinct to nurture the maybe-not-terrible ideas. What I recall most strongly was just how much he made time for his students. It wasn’t just quality time—it was quantity time. In graduate school, I saw him constantly. We’d get lunch weekly. He’d invite me to a Red Sox or Celtics game. Or we’d just grab a soda and talk in the afternoons.

Our time together was *always* interesting and frequently challenging. I learned at the dinner that I was not the only one subjected to his questions and quizzes. So many of our meetings would start with some version of “suppose you have a coin that comes up heads 45% of the time…” I always felt flattered that he thought I was worth quizzing, even if I probably did not do much better than chance on the quizzes.

When it came time for the job market, I think he spent more time thinking about my career than I did. And his thinking on *anything* is gold. When I face a hard question in life—how to handle some tricky professional or personal decision—I think “What would Richard do?” He’d collect information. Probe his assumptions. Come up with subjective probabilities. Consider the subjective utility from various states of the world. It’s amazing to experience his thinking and his passion for a rational, reasoned approach to decision-making. I think probably his most deeply held belief is that we can all make better decisions by being more analytical about our decisions.   

As one can imagine, the traits described above are part of why he’s such a tremendous scholar. One of the reasons he’s been so prolific is how he operates as a co-author. If you send him a draft, you will, in short order, get 10 pages of typed, detailed, brilliant notes—very likely typed up in the middle of the night. If you can get him a Word document—he’s not a LaTeX devotee—the tracked-changes edits to your writing could be the basis for a composition course on how to be a better writer.

Although we have written several things together, I don’t have the breadth or the quickness of mind to be “Richard” in my research. I’m not sure anyone really can anymore. It’s a cliche, but that mold is probably broken. I think the best I or anyone can hope for is to bring some Zeckhauserian sensibilities to my work.

What are those Zeckhauserian sensibilities? To be clear. To be interesting. To care deeply about writing. To find great examples. To keep working and polishing until your thinking and writing are straight. To take economic theory seriously, but also not ignore what we can see in front of us. To be humble about what we know and don’t know.

I hope to live up to the example he’s set as an academic. There are a lot of joys to academia. But I think one of the greatest is to be a part of a great chain of teacher and student. I cherish the fact that I now have the chance to influence my students and pass on something of him. He’s a model for living a life of the mind—and more generally, for living a life well. 

Why I don’t like going to academic conferences

Academics go to academic conferences. I generally go to a few each year—always the AEA and NBER Summer Institute, usually SOLE (Society of Labor Economists), INFORMS, and ICIS (International Conference on Information Systems). I also typically go to a few one-off, non-annual conferences. I don’t like going to conferences—and have felt that way for a long time—but I never really thought about what it was I didn’t like (beyond the things that everyone dislikes about travel and missing their family).

I think the main reason I don’t like going to conferences is seeing the research in aggregate. Seeing up close the amount of work being done—and knowing that I can learn about only a small fraction of it, and that, most likely, only a small fraction of people will know about my work—always makes me melancholy. I get this at the AEA meeting especially, which is vast. It always gives me this feeling of how limited our time is, and how much we will never know. Given the scarcity of attention and the somewhat artificial scarcity of journal pages, a conference is a salient reminder of the bad odds most research faces in terms of being noticed.

As the old joke goes, somewhat modified—not only are the portions too large, the food is often terrible. Most presentations are bad—and not because the research is bad, but because giving good presentations is challenging. You have to summarize complex things to a diverse room of people, in a linear fashion. There are dangers everywhere: focus too much on details and robustness and your talk is boring; skip over details and you sow doubts about the work. Just getting the visuals on the slides right is an art that I feel like I’m still far away from mastering.

Part of the general badness of presentations is that the medium makes it harder for them to get better. Presentations seem disadvantaged relative to papers in terms of the potential for improvement. You are never in the audience for your own talk, and so you can only reflect ex post on how it went. In contrast, you can read your own paper. With a paper, I can iterate alone: I can write a bit, let it mellow for a bit, then read and revise. And so, all by myself, I can improve my writing. Presentations, not so much.

In principle, the downsides of presentations would be balanced by some upsides. There are some—you can use tone of voice, pointing, etc. to convey more information. You can also make jokes and have some fun that you probably should keep out of your papers. One big upside is that they can be interactive—questions and discussions during seminars can be magic—but at big conferences, this rarely happens.

So what’s the solution? Well, I’m going to keep going to conferences – it’s part of the job. One thing I’d like to try is to make some really high-quality screencast presentations. I bought Camtasia and have played around with it a bit, and it seems like, in principle, I could bring a paper-like style of iteration and improvement to presentations (e.g., multiple takes, editing out bad parts, and so on). The goal might be just to give better live talks, but it would also be interesting to see if screencasts could be halfway between live talks and papers as a means of scientific communication.

 

Extending geofacet to algorithmically generate grids from lat/long data

This morning I learned about the very cool geofacet R package for making geographic facets with ggplot2. Unfortunately, my set of cities isn’t covered by one of the existing grids. There is a neat web-based tool for arranging cities, but after 5 minutes, I was feeling frustrated by my lack of relative geographic knowledge (“Is Chicago really south of Boston? Seems wrong.”).

Anyway, I forked it and wrote a function for generating new grids based on lat/long. The resulting grids are intended to (a) be small and (b) not violate any cardinal ordering (e.g., a city that is south of another city will never be shown as strictly north of it, though they can be on the same row).

To see how it works, suppose we have a collection of 50 cities with lat and long data (here plotted at their actual positions):

If we have n cities, we can always put them in an n × n grid such that all cities are in the correct relative position to each other (by sorting east-west and north-south):

Now we want to start squishing this matrix to make it smaller. What my algorithm does is pick the long side and then find columns/rows on that side that it can combine without causing two cities to occupy the same cell. Every time it finds such a pair, it combines them and removes a row or column, as the case may be. The algorithm keeps doing this until it cannot squish the matrix down any more in either direction. Here’s an example squished version of the plot from above:

As it runs quickly, I just brute-force this some number of times and take the most compact result. There’s definitely more that could be done, say relaxing some constraints for greater compactness. Even this result I’d hand-tune a bit, but it was already much better than what I could generate by hand, starting from scratch.
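For intuition, here is a simplified R sketch of the squishing idea (this is not the code in my fork: it alternates single row and column merges rather than always working on the long side, and it skips the brute-force restarts):

# Build an n x n grid that preserves relative positions:
# column = east-west rank, row = north-south rank (north at the top).
make_full_grid <- function(lat, long) {
  n <- length(lat)
  col <- rank(long, ties.method = "first")
  row <- rank(-lat, ties.method = "first")
  grid <- matrix(0L, n, n)
  grid[cbind(row, col)] <- seq_len(n)   # store city ids; 0 = empty cell
  grid
}

# Try to merge one pair of adjacent columns. A merge is allowed only if the two
# columns never both contain a city in the same row, so no cell ends up with two
# cities and no east-west ordering is reversed (sharing a column is allowed).
merge_one_column <- function(grid) {
  for (j in seq_len(ncol(grid) - 1)) {
    if (!any(grid[, j] > 0 & grid[, j + 1] > 0)) {
      grid[, j + 1] <- grid[, j] + grid[, j + 1]
      return(grid[, -j, drop = FALSE])
    }
  }
  grid   # nothing could be merged
}

# Keep merging columns and rows (rows via transpose) until neither shrinks.
squish_grid <- function(grid) {
  repeat {
    smaller <- merge_one_column(grid)
    smaller <- t(merge_one_column(t(smaller)))
    if (identical(dim(smaller), dim(grid))) return(smaller)
    grid <- smaller
  }
}

# usage (lat and long are numeric vectors for the cities):
# grid <- squish_grid(make_full_grid(lat, long))
# which(grid > 0, arr.ind = TRUE) gives each city's row/col for a geofacet grid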

Here’s my code for this example:

Bartik instrument exploration in R

Paul Goldsmith-Pinkham and co-authors have a nice paper on “shift share” or Bartik instruments that I’m working my way through. Whenever I’m doing some econometric exploration, I like to create a simulation where I “know” the right answer and confirm I can recover it with the method. I was doing this for the Bartik instrument and thought other people might find it useful to see a simple example.
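A minimal sketch of such a simulation in R, assuming a simple simultaneous supply-and-demand setup (the number of industries, parameter values, and variable names are illustrative choices, not the exact code behind the results below):

set.seed(42)
n_loc <- 1000      # locations (matches the 1,000 observations in the results below)
n_ind <- 5         # industries
sigma <- 1         # true elasticity of labor supply (the target)
delta <- 1         # elasticity of labor demand (in absolute value)

# industry employment shares by location (each row sums to one)
shares <- matrix(rexp(n_loc * n_ind), nrow = n_loc)
shares <- shares / rowSums(shares)

# national industry "shifts" and the Bartik (shift-share) instrument
shifts <- rnorm(n_ind)
bartik <- as.vector(shares %*% shifts)

# location-level shocks; the Bartik term moves labor demand only
supply_shock <- rnorm(n_loc)
demand_shock <- bartik + rnorm(n_loc)

# equilibrium of   n_emp = sigma * w + supply_shock    (labor supply)
#            and   n_emp = -delta * w + demand_shock   (labor demand)
w     <- (demand_shock - supply_shock) / (sigma + delta)
n_emp <- sigma * w + supply_shock

ols <- lm(n_emp ~ w)             # biased by simultaneity

w_hat <- fitted(lm(w ~ bartik))  # first stage
iv    <- lm(n_emp ~ w_hat)       # second stage; coefficient is close to 1
# (standard errors from this two-step shortcut are wrong; a package like AER::ivreg gives proper ones)

coef(ols)["w"]
coef(iv)["w_hat"]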

And here we have the results—the biased OLS estimate and the IV estimate. We can see that with our 1,000 observations, we can recover the correct supply elasticity (in the code, I passed an elasticity of labor supply of 1).

 

 

The problem with “it’s just an app”

Middlemen seemingly exist just to get “cut out.” Online marketplaces like Airbnb and Uber are described as middlemen, and I frequently read about how they could be replaced, say, by worker cooperatives or open source apps. After all, “it’s just an app.” Replacement is an appealing idea—imagine Uber/Lyft, but with drivers making X% more money because they don’t have to kick up money to the platform. I’m deeply skeptical, as I think there is not enough appreciation of what these marketplaces do and what it would take to replace them. Setting aside network effects, I think even recreating the basic functionality of most popular marketplace-intermediating apps as they *currently* exist would be challenging.

Anyone who has built software will tell you that the complexity of the software grows much faster than the apparent complexity of what that software does. For example, the original Facebook was a couple thousand lines of code. Two-plus years ago, the web codebase, excluding back-end services, was over 62 million lines of code. Facebook doesn’t “seem” 1,000x more complex, but running an always-on, globally distributed system that can do all the things that modern Facebook does well is enormously complex. Part of this complexity is that it has to handle ever more edge cases and be more robust to different failures. Another driver toward greater complexity is that, with scale, the platform has an incentive to do small things marginally better. It makes sense to nail some user interface interaction if users will be subjected to it billions of times.

For-profit platforms can get talented people to work very hard on boring problems. Right now, there is some Stanford CS PhD, some Harvard MBA and RISD-trained designer figuring out the right sequence of dialog boxes and push/in-app notifications to guide Lyft passengers to the arrivals level at an airport. And a PhD data scientist who is going to analyze GPS data, customer support tickets and server logs to try to figure out if whatever they come up with is working. It’s Einsteins designing refrigerators.

The idea of worker cooperatives or open source apps doing the above reminds me of the quip that the problem with socialism is that it takes too many evenings. You’re not going to get volunteers to work through the kind of tedious problems that make an app seem smooth and intuitive. Most successful open source software (1) solves a technical problem the creator had, (2) is inherently interesting to the creator, or (3) has a strong corporate sponsor/ecosystem à la Android. Where you don’t see open source software succeeding is in solving boring, non-technical end-user problems. And these are precisely the problems for-profit platforms focus on.

We already know what a marketplace looks like that ignores the boring stuff. Consider Craigslist, which, although clearly a success, has seen nearly every one of its “verticals” attacked by for-profit platforms, some with great success. Those entrants realized that they could compete with “free” by solving problems that users had.

It would be foolish to think that incumbents can never be brought down, but the “just an app” or “just a website” perspective doesn’t consider how much hard-to-duplicate work led to that app in its current form, and how much of that work was tedious and unlikely to be done by any entity without a strong profit motive.

Why I think ride-sharing companies will win the self-driving future

There is growing interest in who will dominate the self-driving future: car companies, ride-sharing companies, or someone else? My bet is on ride-sharing companies, and the reason is that I think the self-driving car industry will require expertise that currently resides with ride-sharing companies. Furthermore, the kinds of things car companies have traditionally differentiated themselves on will not matter very much, pushing them into a commodity role.

**Important full disclosure: A close family member of mine works for a ride-sharing company.**

On the consumer side, I suspect that in the future people will think about the manufacturer of the self-driving vehicle they are in about as much as they think about the manufacturer of the bus, train car, or airplane they are in now, which is to say “very little.” They will not care about styling, aesthetics, mileage, maintenance, parts availability, resale value, service costs, what it says about them as a person to own one, and so on. All the things that would-be car buyers care about a great deal will matter very little. There’s a reason there’s no magazine called “Bus & Passenger.”

Some of these vehicle attributes will affect *costs* to platforms—and buyers of fleets of cars will presumably try to buy high quality at low cost, but they aren’t going to pay a premium for design or a brand. The self-driving world is going to be an ugly, commodified world focused on cost and reliability.

Customers *will* care about fleet reputations for cleanliness, timeliness, routing quality, customer service support, safety, and so on—the things they care about with airlines—but these are things that will be determined more by the *operational* excellence of whoever manages the fleet. Unfortunately for car makers, operations is not their forte, whereas the ride-sharing companies are extremely operations- and customer-service-focused.

Ride-sharing companies have expertise in customer service, marketing, analytics, developing maps, developing routing algorithms, creating software interfaces, and fighting regulatory/legal battles. All of these things are still likely to matter a great deal in the self-driving future. The customer service aspect is an area where these companies are actually quite different from the Silicon Valley mold—they deal with paying customers constantly in a high-touch way, unlike, say, Google or Facebook.

Ride-sharing companies don’t have expertise in managing fleets of cars (fueling, cleaning, repair, and so on), but neither do car companies. Even so, this part of the business doesn’t seem that hard relative to the others, and there are lots of companies and people with this expertise (UPS, FedEx, Hertz, Avis, etc.). Car manufacturers have almost no experience with operations. And they have almost no experience dealing with customers.

On the actual self-driving technology side, to the extent that improvements will be driven mainly by more and better data rather than hardware, the ride-sharing companies are also the ones well poised to collect the most data about driving under actual and varied conditions in the long run-up to a fully automated future.

Perhaps the biggest advantage held by ride-sharing companies is that they have a very natural way to transition to the future—start slipping self-driving vehicles into the mix, pacing the introduction as the technology develops. In contrast, the car companies (or Google/Waymo) essentially need to clone Uber or Lyft functionality, but do it with an unproven technology from day one. This will be very hard. Instead, car companies will largely choose to partner with ride-sharing companies, but the ride-sharing companies will have lots of manufacturer options to choose from, and are not going to give away the company to do so, or recklessly form exclusive partnerships.

One might argue that you can’t have a fleet without a car maker willing to sell to you, but there are many, many car makers out there. Just because the car is an essential component of the productive process doesn’t mean that the maker of that input will control everything—McDonald’s isn’t a subsidiary of a stove company.

Anyway, of course this is all highly speculative, and maybe I’ll wistfully read this naive blog post from my Ford-brand self-driving vehicle in a few years, but I think it’s more likely it will be an Uber/Lyft/Didi/Ola/Gett and some nameless, white-label vehicle made by Honda.

AI, Labor, and the Parable of the Horse

Today I attended the 5th anniversary celebration for MSR NYC. There was a great group of speakers and panelists—I’m super impressed by what MSR has accomplished. One topic that came up at several points during the day was the labor market effects of technological developments—particularly that powerful AI might displace many workers.

Economists have traditionally been sanguine about the effects of technological change on the labor market, viewing widespread technological unemployment as unlikely. This perspective is based on the historical experience of substantial technological change not having persistent disemployment effects. However, it has been pointed out that we have one vivid example where this optimism has not been warranted—what I call the parable of the horse.

The story is that the internal combustion engine came along and horses saw their marginal product decline below the cost of their feed and so horses disappeared, at least in the “labor” market. This is undoubtedly true—the figure below shows the number of horses (and mules) in the US (from The Demographics of the U.S. Equine Population). The implication is that today’s “horses”—low skilled labor or maybe even labor in general—will disappear as AI can do more and more tasks.

I find the horse parable interesting, but unpersuasive—at least with respect to how it is likely to affect relatively low-skilled workers—because I think the analogy misses the reason why horses fared so poorly. The problem was not that they were “low skilled”  but rather their extreme specialization. Horses did one job and one job only—exerting physical force, which could be used to pull or push things. That they could be almost entirely displaced by a superior pushing/pulling technology is, in some sense, not surprising. But what I think is important in the human labor market is that being able to do one thing—and one thing only—is typically a characteristic of high skilled labor, not low skilled labor.

Most low-skilled labor is no longer like horse labor, in that the low-skilled jobs that exist now are those that require some mix of physical, intellectual, and even “emotional” skills. This mix makes full automation challenging. But even when some specific job does “fall” to automation, there is still a very large pool of remaining jobs that need to be done and that require relatively little skill or new training, by definition of being low-skilled. In short, one advantage of the low-skilled labor market is that there are lots of jobs you qualify for. The downside, of course, is that precisely because lots of people qualify for those jobs, wages are low. The workers who are vulnerable to technical change—in the sense that they are likely to experience large declines in income—are those workers with highly specialized skills.

Truck driving as a low-skilled example

To give a more concrete example, consider the job of truck driver, which might be on the automation chopping block. First, many people with the job “truck driver” are actually some combination of sales representative, inventory-taker, first-line mechanic, warehouse worker, forklift operator, and so on. As such, it is far from obvious that, even if substantial amounts of driving end up being automated, labor demand for “truck drivers” would fall.

Second, even if the truck driver occupation sees a large negative demand shock, what other jobs could a truck driver do that pay about the same? Well, let’s look at the BLS occupational data. The table below shows the most recent BLS occupational data, with US employment totals and average hourly wages, sorted by average hourly wage. I restricted the list to occupations with more than 500K employees. We can see that being a light truck driver (i.e., not driving a heavy truck transporting freight or heavy equipment) pays about $16.50/hour, which is below the median wage in the US but still substantially higher than the minimum wage.

| Occupation | Employment | Mean hourly wage ($) |
| --- | --- | --- |
| Retail Salespersons | 4,612,510 | 12.67 |
| Nursing Assistants | 1,420,570 | 12.89 |
| Landscaping and Groundskeeping Workers | 895,600 | 13.20 |
| Laborers and Freight, Stock, and Material Movers, Hand | 2,487,680 | 13.39 |
| Receptionists and Information Clerks | 975,890 | 13.67 |
| Security Guards | 1,097,660 | 13.68 |
| Substitute Teachers | 626,750 | 14.25 |
| Bus Drivers, School or Special Client | 505,560 | 14.70 |
| Team Assemblers | 1,115,510 | 15.17 |
| Office Clerks, General | 2,944,420 | 15.33 |
| Medical Assistants | 601,240 | 15.34 |
| Shipping, Receiving, and Traffic Clerks | 674,820 | 15.55 |
| First-Line Supervisors of Food Preparation and Serving Workers | 884,090 | 16.02 |
| Light Truck or Delivery Services Drivers | 826,510 | 16.38 |
| Industrial Truck and Tractor Operators | 539,810 | 16.39 |
| Medical Secretaries | 530,360 | 16.50 |
| Customer Service Representatives | 2,595,990 | 16.62 |
| Secretaries and Administrative Assistants, Except Legal, Medical, and Executive | 2,281,120 | 16.92 |
| Construction Laborers | 887,580 | 17.57 |
| Maintenance and Repair Workers, General | 1,314,560 | 18.73 |
| Bookkeeping, Accounting, and Auditing Clerks | 1,580,220 | 18.74 |
| Inspectors, Testers, Sorters, Samplers, and Weighers | 508,590 | 18.95 |
| Automotive Service Technicians and Mechanics | 638,080 | 19.58 |
| Heavy and Tractor-Trailer Truck Drivers | 1,678,280 | 20.43 |

We can see that both above and below, there are a number of jobs that are plausible substitute occupations for a displaced truck driver. For example, of those below, most require no formal education or certification, except perhaps substitute teachers or medical assistants. If we go higher, we start to see jobs that require more skills or that are more physically taxing or dangerous (e.g., construction laborer), but they are still reasonable substitutes. For example, many truck drivers are also decent mechanics and, perhaps with some more training, could find work as “Maintenance and Repair Workers, General.”

Not only do displaced truck drivers have lots of “nearby” occupations that pay about the same with few additional human capital requirements, but the displaced drivers are also not likely to drive down wages very much in their new occupations. There are, of course, lots of truck drivers, but if they split across a reasonably large set of other occupations, the new entrants would not be much of a supply shock.

A specialized occupation example

Now we’ll look at a more specialized occupation. Let’s consider accountants and auditors (even this occupation still seems far from being remotely automatable). It pays quite nicely and requires substantial specialized skill. If we look at nearby jobs, very few would be open to a displaced accountant without substantial re-training.

| Occupation | Employment | Mean hourly wage ($) |
| --- | --- | --- |
| Lawyers | 609,930 | 65.51 |
| Financial Managers | 531,120 | 64.58 |
| General and Operations Managers | 2,145,140 | 57.44 |
| Software Developers, Applications | 747,730 | 49.12 |
| Management Analysts | 614,110 | 44.12 |
| Computer Systems Analysts | 556,660 | 43.36 |
| Accountants and Auditors | 1,226,910 | 36.19 |
| Business Operations Specialists, All Other | 926,610 | 35.33 |
| Registered Nurses | 2,745,910 | 34.14 |
| Market Research Analysts and Marketing Specialists | 506,420 | 33.67 |
| First-Line Supervisors of Construction Trades and Extraction Workers | 517,560 | 32.13 |
| Sales Representatives, Wholesale and Manufacturing, Except Technical and Scientific Products | 1,409,550 | 32.11 |
| Sales Representatives, Services, All Other | 886,580 | 29.98 |
| Police and Sheriff’s Patrol Officers | 653,740 | 29.45 |
| Secondary School Teachers, Except Special and Career/Technical Education | 962,820 | * |

Accountants are the “horses” here—the ones vulnerable to large drop-offs in earnings because of their specialization. Fortunately, if we care about inequality, this is the “right” group to be affected. Because of their existing financial wealth and their general human capital, they are likely better able to deal with the disequilibria created by technological change.

How would we know if “The Bay Area Should Levy a 5% Equity Tax on Startups”

In a recent article in The Information, Sam Lessin proposed a Bay Area 5% equity tax on startups. It’s an interesting idea; I don’t know whether it’s a “good” idea. This blog post will not answer the “good” question, but I’d like to use the proposal to explore some ideas in public policy and economics and to talk about some of my work that bears on the question.

If a 5% equity tax were imposed, what would happen? Ideally, we’d have a true experiment to settle the question: say we had 300 more or less equivalent Silicon Valleys, half of which got the tax, half of which didn’t, and then we’d check in on them in 5 or 10 years. Yeah, so that’s not going to work.

The problem is clear—we don’t have that many Silicon Valleys, we don’t have that much time, and we certainly don’t have the political power to impose such a tax randomly. Further, it is not clear what we should even look at to assess “good”—we could see how much revenue that tax generated, but what we care about is the revenue generated at what cost to society. If the shadow of a 5% tax causes a huge reduction in the number of startups, then whatever is raised could be very costly indeed. Though even saying something strong here would require some notion of the “quality” of the startups the tax displaced or prevented and whether some other startup would have just filled its place (e.g., kill Uber, get Lyft). We’d also care about who ultimately paid the tax, as the incidence is unclear—is it entrepreneurs? VCs? Workers in the tech sector? Landlords? Consumers of what Silicon Valley makes?

To assess the proposal, we’re going to need to be less empirical and more theoretical. I am highly empirical. I’m a card-carrying member of the credibility revolution. Most of my papers are not just empirical but experimental. That being said, there are important policy questions that we need to answer quickly and that existing empirical work just does not speak to. That leaves economic theory or guessing.


My working paper, “A Model of Entrepreneurial Clusters, with an Application to the Efficiency of Entrepreneurship,” is a theory paper designed to answer this kind of question (among others). The model is not complex, but it has a few too many moving pieces for a blog post; still, I can sketch out the relevant parts and show how to apply them.

In a nutshell, the paper describes a model with three important markets: the market for venture capital, the market for “engineers,” and the product market for what successful startups sell. In the paper, would-be entrepreneurs weigh the expected returns to doing a startup against the “safe” returns to being an engineer/employee. A key feature of the model is the notion that lots of would-be entrepreneurs can pursue the same “idea” but that there is a single winner for each idea. This has some implications for the entrepreneurial system. One less startup does mean one less shot at commercializing some innovation, but if lots of startups were pursuing more or less the same idea, the welfare consequences of “losing” that startup to employment are not so bad. Furthermore, it doesn’t have much of a labor market consequence either—there is no “missing” successful startup that is no longer demanding labor.

Anyway, getting back to the tax question. We can think of the tax as increasing the cost of doing a startup. The effects of such a shock are worked out in Section 3.8 of the paper. This increase in cost shifts some would-be entrepreneurs back into the labor market, which lowers wages. This, to some extent, offsets the effect of the tax from the entrepreneur’s perspective, as it lowers startup labor costs, making startups ex ante more attractive (imagine Google, but getting to pay 3% lower wages—starting Google is more attractive). So some of the tax gets borne by workers. How much? Well, in the model, the effect of a small change in startup costs on wages is

[Equation from the paper: the effect of a small change in startup costs on equilibrium wages]

which, uh, may still leave you with some questions. The “g” is the fraction of the labor force that is entrepreneurs. This part just says that when a large fraction of the labor force is entrepreneurs, a tax on startups has a big spillover effect on wages, and vice versa when that fraction is small.

The term inside the parentheses has an economic interpretation, in that it captures how large a flow of engineers must leave entrepreneurship to re-establish an equilibrium, with larger flows leading to greater reductions in wages. Suppose the startup success probability were completely inelastic, meaning that a reduction in the number of startups doesn’t “help” the startups that remain succeed. The increase in startup costs drives engineers from entrepreneurship, but because the startup success probability does not change, there is no compensating increase in success probability of the sort that would occur if the probability were elastic. As such, a larger flow out of entrepreneurship is needed to re-establish the equilibrium, which means that employees see a larger fall-off in wages. With a highly elastic success probability, a smaller number of exiting entrepreneurs is needed to establish a new equilibrium, and so there is less downward wage pressure and thus less pass-through of startup costs.

The model says that the overall surplus of the system is proportional to engineer wages in equilibrium. As such, what we would hope for, as a social planner, is that the tax does not lower wages much in equilibrium. This happens when the startup success probability is highly elastic. A key feature of the model is that a highly elastic startup success probability is a sign of too much entrepreneurship, in the sense that there are lots of entrepreneurs pursuing more or less the same ideas. In the model, ideas differ in their perceived “quality”; obviously good ideas attract lots of entrants, while only the marginal ideas get the efficient number of entrepreneurs (perhaps the ideas-that-seem-bad-but-are-actually-good). The figure below is the key figure from the paper:

[Figure: the key figure from the paper]

Conclusion

To wrap it up: if you think there is lots of duplicative entrepreneurship right now—too many entrepreneurs pursuing more or less the same idea—the model says that Sam’s tax is very likely to be a good idea, as it will mostly reduce, on the margin, startups pursuing ideas that were already being pursued, and hence the social welfare consequences will be minimal (interestingly, I think this elasticity question probably can be pursued empirically, using booms and busts in startup funding and/or technological shocks). Is my model the right way to model things? I have no idea, but it’s *a* model and we have to make choices. Of course, there are lots of considerations this analysis doesn’t consider, but I think it’s a starting point for thinking about the issue, and also potentially the impetus for newer, better models.

 

2SLS in Mathematica

2SLS data setup. Note that there is a random variable u that appears in both x and in the error term e. There is also an instrument, z, that affects x but is independent of u.

n = 10000;
z = Table[Random[NormalDistribution[0, 1]], {n}];  (* instrument *)
B0 = 1;   (* true intercept *)
B1 = 2;   (* true coefficient on x *)
gamma0 = -2;
gamma1 = 4;
u = Table[Random[NormalDistribution[0, 1]], {n}];  (* unobserved confounder *)
x = u + gamma0 + gamma1*z + Table[Random[NormalDistribution[0, 1]], {n}];  (* endogenous regressor *)
e = 5*u + Table[Random[NormalDistribution[0, 1]], {n}];  (* error term; correlated with x through u *)
y = B0 + B1*x + e;
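
A quick way to see the bias is to run the naive OLS of y on x; a minimal sketch in the same style (the names X and Bols are illustrative, not from the original post):

iota = Table[1, {n}];
X = Transpose[{iota, x}];  (* design matrix with a constant and x *)
Bols = Inverse[Transpose[X].X].Transpose[X].y  (* naive OLS: biased because x is correlated with e through u *)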

[Output: OLS estimates from the regression of y on x]

Note that the real coefficient on x is 2, but the estimated coefficient is biased upwards. Now we can do the first stage:

First stage


iota = Table[1, {n}];  (* constant term *)
Z = Transpose[{iota, z}];
Gammahat = Inverse[Transpose[Z].Z].Transpose[Z].x;  (* first stage: regress x on z *)
xhat = Z.Gammahat;  (* fitted values of x *)
Xhat = Transpose[{iota, xhat}];
Bhat = Inverse[Transpose[Xhat].Xhat].Transpose[Xhat].y  (* second stage: regress y on xhat *)

and now the coefficient estimates are close to the true values:

[Output: 2SLS coefficient estimates, close to the true values]