
Causes of the Silicon Valley real estate crunch (and some potential solutions)

I unexpectedly got into two Twitter discussions recently about Silicon Valley (SV) and its effects on the local real estate market. I felt constrained by the 140-character limit, so I thought I’d write a blog post explaining my thinking (and add supply & demand diagrams!).

To understand what is happening in SV, we need to think about three markets:
(1) the product market for what SV tech companies sell
(2) the SV tech labor market and
(3) the SV housing market.

First, what’s obvious: there’s been a huge increase in demand for what Silicon Valley sells: the world is using way more IT than it used to. Someone has to build & program this stuff, and so there’s been a large increase in demand for certain kinds of high-skilled labor—namely software engineers, designers, product managers and so on. Let’s call them “tech people.”


Most tech people are transplants, coming to SV specifically to work in tech. They need a place to live. As such, a demand shock for tech labor is also a demand shock for housing in SV.

How the labor demand shock plays out

In the figure below, the top diagram is the labor market and the bottom diagram is the housing market. The y-axes are wages and real estate prices, respectively. The x-axes are tech people hired and units of housing consumed, respectively. The connection between these two markets is so tight that I assume that changes in tech people employed must be met one for one with changes in housing units consumed. This is why the two diagrams are stacked on top of each other.

Pre-boom equilibrium:

Here comes the iPhone: Tech Boom!

Let’s consider how a product market demand shock leads to a new equilibrium. First, the demand curve for labor shifts out (in red, top panel). If we ignored the housing market, we would just see higher wages and more tech people hired. However, these new tech hires want a place to live, so they shift out the demand curve in the housing market (in bottom panel, also in red).

But the tech people labor supply curve depends on housing costs

At this new higher price for housing, fewer tech people are willing to work at each wage (i.e., “I’ll stay in Seattle and work for Jeff Bezos, spending more on tissues and psychological counseling, but spending less on rent”). The higher housing prices shift in the tech people labor supply curve. This shift takes some pressure off housing demand, pushing housing prices back down a little. This tâtonnement goes back and forth until a new equilibrium is reached (sketched numerically below the list) with:

(1) more tech employees (but not as many as there would be in the absence of housing effects)
(2) higher wages and
(3) higher real estate prices
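
To make the adjustment story concrete, here is a toy numerical version of the two linked markets. It is not from the original post; all functional forms and numbers are made up for illustration. Workers’ reservation wages rise with housing prices, and housing demand moves one-for-one with tech employment:

```r
# Toy linear model of the linked markets (illustrative numbers only).
# Labor demand:  w = a - b * L
# Labor supply:  w = c0 + d * L + k * p   (reservation wage rises with housing price p)
# Housing:       p = h0 + h1 * L          (housing demand moves one-for-one with employment L)
solve.markets <- function(a, b = 1, c0 = 0, d = 1, k = 1, h0 = 1, h1 = 1) {
  L <- (a - c0 - k * h0) / (b + d + k * h1)  # equilibrium tech employment
  p <- h0 + h1 * L                           # equilibrium housing price
  w <- a - b * L                             # equilibrium wage
  c(employment = L, wage = w, housing.price = p)
}

solve.markets(a = 10)            # pre-boom labor demand
solve.markets(a = 14)            # tech boom: more employment, higher wages and prices
solve.markets(a = 14, h1 = 0)    # perfectly elastic housing supply: prices stay flat
solve.markets(a = 14, h1 = 100)  # nearly inelastic supply: employment barely grows
```

The h1 = 0 case previews the “elastic supply” discussion below: housing prices stay flat and employment expands the most, while a very steep h1 sends almost all of the demand shock into prices.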

Where we are now:

The importance of the housing supply elasticity

As you might expect, how this process works out depends a great deal upon how these curves are shaped and how big these shocks are. One critical piece is the slope of that one curve that didn’t move around—the housing supply curve. From the perspective of tech and non-tech workers and tech firms, we can say “elastic” = “good” and “inelastic” = “bad” (existing homeowners are another story).

Elastic supply = good. Imagine a better world in which the housing supply is completely elastic. The housing supply curve is flat. This means that no matter how large the positive demand shock in the housing market, house prices stay the same. Here, the demand shock in the tech labor market has no effect on non-tech workers through the housing channel (because housing prices do not rise). Also note that there is no pulling in of the tech worker supply curve—the workers get the “full” benefit in higher wages.

Inelastic supply = bad. Now, let us imagine a world where the housing supply is completely inelastic, making the housing supply curve a vertical line. In this case, the housing stock is fixed, so we already “know” that tech companies aren’t going to be able to hire more people. Tech wages are going to rise, but the main beneficiaries will be existing owners of housing because of the price increase. They get enormous rents—literally. Of course, the supply curve is not completely inelastic, because one very controversial “source” of elasticity is displacement: tech people move in, non-tech people move out. This is why people throw yogurt (at best) at tech buses.

How do non-tech people fare?

A more complete analysis might consider the effect of the tech boom on non-tech wages. Presumably non-tech workers get some benefit from increased demand for their services from tech people. And to some extent, non-tech sectors have to increase wages to get people to still live and work in SV. It seems unlikely to me that this is fully offsetting.

The main adjustment is probably housing displacement, meaning longer commutes. It makes more economic sense for them to move farther away (i.e., travel an hour a day and save $20/day on rent). That being said, they are almost certainly worse off with these horrendously long commutes than they were pre-boom.

What are the solutions?

  1. Do nothing. One “solution” is to do nothing, under the belief that things will run their course and the tech boom will fizzle. To the extent that the boom does not subside, other places in the world will become relatively more appealing for tech as the high cost of labor in SV persists (because of housing). However, to date, SV seems to be becoming more important and tech more centralized in SV, not less, so this might be a slow-acting solution. Further, it seems bad for SV as a region: If I were king of SV, I wouldn’t be sanguine about the “Detroit solution” to too much product market demand for what my region specializes in.
  2. Build more housing. Another solution (of course) is to increase the housing stock. This should push prices down. A better solution might be to enact structural changes to make the supply of housing more elastic. Given how much housing prices have risen, it seems that the supply is very inelastic (more on this later).
  3. Let people work remotely. Another solution is radically different: try to sever, or at least attenuate, the connection between the housing and labor markets. This is the “Upwork” solution in a nutshell, which their CEO outlined in a recent Medium post. If a tech company is open to remote hiring, then those remote hires never enter the local rental market and do not drive up prices. It does not even have to be either/or, as letting your employees work remotely some of the time helps: if I only have to be in San Francisco three days a week, living in Half Moon Bay rather than Cole Valley becomes much more attractive (Uber also helps here, and autonomous vehicles would help a lot).

I’m particularly optimistic about (3), the tech-focused solution, as it seems more likely to “work” right away and requires little political change. Also, somewhat ironically, the increasing maturation of technology for remote collaboration means that this approach should become more attractive over time.

Incidentally, why is the supply of housing in SV so inelastic?
Some of it is surely geography, about which little can be done. The peninsula is just not that wide and there aren’t large, nearby tracts of undeveloped land. I imagine that the, uh, interesting geological properties of the area matter for construction. However, the main cause seems not to be so much the quantity of land, but rather the intensity with which the land is used.

Take a Google Street View walk of Palo Alto or Menlo Park. When you consider how large the demand for housing is and then look at the built environment of those cities, there is a wild disconnect. These cities should be Manhattan-dense, or at least Cambridge, MA-dense, but they are not—it is mostly single-family homes, some on quite large lots. They could be nice suburbs more or less anywhere. These are *very* nice places to live, of course, and I can understand the instinct to preserve them as they are. But the unchanging neighborhood character of Palo Alto is part of the reason why tech is having a huge negative externality on non-tech people, through the channel of higher housing costs.

Relevant disclosures: I used to work at Upwork’s predecessor company, oDesk. I still consult with them and I conducted academic research with their data. I also visited Uber as a visiting economist last summer and my wife works for them still. When I worked for Upwork, I lived in Redwood City until my landlord decided to not renew our lease so he could sell the place. We rented somewhere else that was a little cheaper, but my commute got longer. I might go back to SF to work for a bit this summer, if I can find a cheap enough place on Airbnb.

Reputation systems are great for buyers (and good for sellers too)

In a recent NYTimes article about Uber drivers organizing in response to fare cuts, there was a description of the rating system and how it affects drivers:

They [drivers] are also constrained by the all-important rating system — maintain an average of around 4.6 out of 5 stars from customers in many cities or risk being deactivated — to behave a certain way, like not marketing other businesses to passengers.

Using “marketing a side business” as an example of behavior the reputation system curtails is like saying “the police prevent many crimes, like selling counterfeit maple syrup“—technically true, but it gives the wrong impression about what’s typical.

Bad experiences on ride-sharing apps presumably mirror bad experiences in taxis: drivers having a dirty car, talking while driving, being rude, driving dangerously or inefficiently, and so on. I’d wager that “marketing a side business” complaints more or less never happen. If they do happen, it’s probably because the driver was particularly aggressive or annoying about promoting their business (or the passenger was idiosyncratically touchy). It certainly doesn’t seem to be against Uber’s policy—an Uber spokesperson said recently that Uber not only condones it, but encourages it.

Being subject to a reputation system is certainly personally costly to drivers—who likes being accountable?—but it’s not clear to me that even drivers as a whole should dislike them, so long as they apply to every driver. Bad experiences from things like poor driving or unclean vehicles are not just costly to passengers, but are also costly to other drivers, as they reduce the total demand for transportation services (NB: Chris Nosko & Steve Tadelis have a really nice paper quantifying the effects of these negative spillovers on other sellers, in the context of eBay). The problem with quality in the taxi industry historically is that competition doesn’t “work” to fix quality problems.

Competition can’t solve quality problems because a passenger only learns a driver was bad after already having the bad experience. Because of the way taxi hails work, passengers can’t meaningfully harm the driver by taking their business elsewhere in the future, like they could after a bad experience at a restaurant. As such, the bad-apple drivers don’t have incentives up front to be good or to improve. (The same goes for the other problem, bad passengers, which the reputation system also helps deal with.) Reputation systems—while far from perfect—solve this problem.

While reputation systems seem like something only computer-mediated platforms like Uber and Lyft can have, there’s no reason (other than cost) why regulated taxis couldn’t also have reputation systems. Taxis could ask for passenger feedback in the car using the touch screen, and then use some of the advertising real estate outside the car to show average driver feedback scores to would-be passengers. This would probably be more socially useful than the usual NYC advertisements on top of yellow cabs, such as for gentleman’s clubs, e-cigarettes, and yellow cabs.


Disclosure: I worked with Uber’s data science team in the summer of 2015. However, the direction of causality is that I wanted to work with Uber because they are amazing; I don’t think Uber is amazing because I worked for them.


One weird trick for eye-balling a means comparison

Often, I’m in a seminar or reading a paper and I want to quickly see if the difference in two means is likely to be due to chance or not. This comparison requires computing the standard error of the difference in means, which is SE(\Delta) = \sqrt{SE_1^2 + SE_2^2}, where SE_1 is the standard error of the first mean and SE_2 is the standard error of the second mean. (Let’s call the difference in means \Delta.)

Squaring and taking square roots in your head (or on paper, for that matter) is a hassle, but if the two standard errors are about the same, we can approximate this as SE(\Delta) \approx \frac{3}{2} \times SE_1, which is a particularly useful approximation. The reason is that the 95% CI for \Delta is about 4 \times SE(\Delta) \approx 6 SE_1 wide (i.e., 6 of our “original” standard errors). As such, we can construct the 95% CI for the difference Greek-geometer style, by taking the original CI, dividing it into fourths and then adding one more SE to each end.

The figure below illustrates the idea – we’re comparing A & B, and so we construct a confidence interval for the difference between them that is 6 SEs tall. We can then easily see whether that CI includes the top of B.

[Figure: eye-balling the 6-SE confidence interval for the difference between A and B]

What if the SE’s are different?

Often the means we compare don’t have the same standard error, and so the above approximation would be poor. However, so long as the standard errors are not so different, we can compute a better approximation without any squaring or taking square roots.  One approximation for the true standard error that’s fairly easy to remember is:

\sqrt{SE_1^2 + SE_2^2} \approx \frac{3}{2}SE_1 - \frac{2}{3}(SE_1 - SE_2).

This is just the Taylor series approximation of the correct formula about SE_1 - SE_2 \approx 0 (and using \sqrt{2} \approx 3/2 and 1/\sqrt{2} \approx 2/3).
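
As a quick sanity check (my own sketch, not from the original post), here is the exact value next to the approximation for a pair of similar-but-unequal standard errors:

```r
# Compare the exact standard error of the difference to the rule-of-thumb
# approximation for two standard errors that are close but not equal.
se1 <- 0.10
se2 <- 0.08
exact  <- sqrt(se1^2 + se2^2)
approx <- (3 / 2) * se1 - (2 / 3) * (se1 - se2)
c(exact = exact, approx = approx)  # close enough for eye-balling in a seminar
```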

Monte Carlo Clusterjerk

Chris Blattman recently lamented reviewers asking him to cluster standard errors for a true experiment, which he viewed as incorrect, but had no citation to support his claim. It seems intuitive to me that Chris is right (and everyone commenting on his blog post agreed), but no one could point to something definitive.

I asked on Twitter whether a blog post with some simulations might help placate reviewers and he replied “beggars can’t be choosers”—and so here it is. My full code is on github.

To keep things simple, suppose we have a collection of individuals nested in groups, indexed by g. For some outcome of interest y, there’s an individual-specific effect, \epsilon, and a group-specific effect, \eta. This outcome also depends on whether a binary treatment has been applied (status indicated by W), which has an effect size of \beta.

y = \beta \times W + \eta_g + \epsilon

We are interested in estimating \beta and correctly reporting the uncertainty in that estimate.

First, we need to create a data set with a nested structure. The R code below does this, with a few things hard-wired: the \eta and \epsilon are both drawn from a standard normal and the probability of treatment assignment is 1/2. Note that the function takes a boolean parameter randomize.by.group that lets us randomize by group instead of by individual. We can specify the sample size, the number of groups and the size of the treatment effect.
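
The author’s actual code is in the linked GitHub repository; here is a minimal sketch consistent with the description above. The function and argument names other than randomize.by.group are my own:

```r
# Generate N individuals nested in groups, with group effects eta and
# individual effects epsilon drawn from a standard normal, and a binary
# treatment W assigned with probability 1/2 at either the individual or
# the group level.
make.data <- function(N = 1000, num.groups = 20, beta = 1,
                      randomize.by.group = FALSE) {
  g <- sample(1:num.groups, N, replace = TRUE)
  eta <- rnorm(num.groups)
  epsilon <- rnorm(N)
  if (randomize.by.group) {
    W <- rbinom(num.groups, 1, 0.5)[g]  # same treatment for everyone in a group
  } else {
    W <- rbinom(N, 1, 0.5)              # individual-level randomization
  }
  data.frame(g = factor(g), W = W, y = beta * W + eta[g] + epsilon)
}
```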

This function returns a data frame that we can analyze. Looking at the output, note that for two individuals with the same group assignment, the \eta term is the same, but the treatment varies within groups.

Now we need a function that simulates us running an experiment and analyzing the data using a simple linear regression of the outcome on the treatment indicator. The function below returns the estimate, \hat{\beta}, and the standard error, SE(\hat{\beta}), from one “run” of an experiment:
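
Again, a sketch rather than the author’s code: generate one data set, run OLS of y on W, and keep the coefficient and its (unclustered) standard error:

```r
# One experimental "run": simulate the data, regress y on W, and return
# the point estimate and its plain OLS standard error.
run.experiment <- function(N = 1000, num.groups = 20, beta = 1,
                           randomize.by.group = FALSE) {
  df <- make.data(N, num.groups, beta, randomize.by.group)
  coefs <- summary(lm(y ~ W, data = df))$coefficients
  data.frame(beta.hat = coefs["W", "Estimate"],
             se = coefs["W", "Std. Error"])
}
```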

Let’s simulate running the experiment 1,000 times (NB: if the “%>%” notation looks funny to you, I’m using the magrittr package):
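
A sketch of the simulation step, using %>% from magrittr to stack the 1,000 one-row results into a single data frame (the object name sim.results is mine):

```r
library(magrittr)

# Run the experiment 1,000 times with individual-level randomization and
# bind the one-row results into a single data frame.
set.seed(1234)
sim.results <- lapply(1:1000, function(i) run.experiment()) %>%
  do.call(rbind, .)
```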

The standard error also has a sampling distribution but let’s just take the median value from all our simulations:
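
Using the simulated results from the sketch above:

```r
median(sim.results$se)  # median of the OLS standard errors across simulations
```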

If we compare this to the standard deviation of our collection of \hat{\beta} point estimates, we see the two values are nearly identical (which is good news):
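
And the corresponding comparison, again in the sketch’s notation:

```r
sd(sim.results$beta.hat)  # spread of the point estimates across simulations;
                          # should be nearly identical to the median SE above
```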

If we plot the empirical sampling distribution of \hat{\beta} and label the 2.5% and 97.5% percentiles as well as the 95% CI (constructed using that median standard error) around the true \beta, the two intervals are right on top of each other:

[Figure: empirical 2.5%/97.5% percentiles of \hat{\beta} vs. the 95% CI built from the median standard error]

Code for the figure above:
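
The author’s plotting code is embedded in the original post; since it isn’t reproduced here, this is a rough ggplot2 substitute built on the sketch’s sim.results:

```r
library(ggplot2)

# Histogram of the simulated beta-hats, with dashed lines at the empirical
# 2.5% and 97.5% percentiles and solid lines at the 95% CI constructed from
# the median standard error around the true beta.
true.beta <- 1
med.se <- median(sim.results$se)
ggplot(sim.results, aes(x = beta.hat)) +
  geom_histogram(bins = 30) +
  geom_vline(xintercept = quantile(sim.results$beta.hat, c(0.025, 0.975)),
             linetype = "dashed") +
  geom_vline(xintercept = true.beta + c(-1.96, 1.96) * med.se) +
  theme_bw()
```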

Main takeaway: Despite the group structure, the plain vanilla OLS run with data from a true experiment returns the correct standard errors (at least for the parameters I’ve chosen for this particular simulation).

What if we randomize at the group level but don’t account for this group structure?

At the end of his blog post, Chris adds another cluster-related complaint:

Reviewing papers that randomize at the village or higher level and do not account for this through clustering or some other method. This too is wrong, wrong, wrong, and I see it happen all the time, especially political science and public health.

Let’s redo the analysis, but change the level of randomization to the group and see what happens if we ignore this change when analyzing the data. As before, we simulate and then compare the median standard error observed across our simulations to the standard deviation of the sampling distribution of the estimated treatment effect:
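
In the sketch’s notation, the only change is flipping randomize.by.group to TRUE while leaving the (naive) OLS analysis untouched:

```r
# Group-level randomization, still analyzed with plain OLS.
sim.results.grp <- lapply(1:1000, function(i)
    run.experiment(randomize.by.group = TRUE)) %>%
  do.call(rbind, .)

median(sim.results.grp$se)    # what OLS claims the uncertainty is
sd(sim.results.grp$beta.hat)  # how much the estimate actually bounces around
```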

The OLS standard errors are (way) too small—the median value from OLS is still about 0.08 (as expected), but the standard deviation of the sampling distribution of the estimated treatment effect is about 0.45. The resultant CIs look like this:

[Figure: CIs that ignore the group-level randomization are far too narrow]

Eek. Here are two R-specific fixes, both of which seem to work fine. First, we can use a random effects model (from the lme4 package):

[Screenshot: output from the random-effects fit]
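
A sketch of the random-effects version, with the group modeled as a random intercept (the data frame df and its column names come from the earlier sketch):

```r
library(lme4)

# Fit y on W with a random intercept for group; the reported standard error
# on W now reflects the group structure.
df <- make.data(randomize.by.group = TRUE)
re.fit <- lmer(y ~ W + (1 | g), data = df)
summary(re.fit)
```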

or we can cluster standard errors. The package I use for this is lfe, which is really fantastic. Note that you put the factor you want to cluster by in the 3rd position following the formula:

[Screenshot: output from the clustered-standard-errors fit]
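
And a sketch of the clustered version with lfe::felm; the parts after the main formula are fixed effects, instruments, and then the clustering factor, so g goes in the third slot:

```r
library(lfe)

# OLS of y on W with standard errors clustered by group g.
cl.fit <- felm(y ~ W | 0 | 0 | g, data = df)
summary(cl.fit)
```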

One closing thought: a non-econometric argument for why clustering can’t be necessary for a true experiment with randomization at the individual level. For *any* experiment, presumably there is some latent (i.e., unobserved to the researcher) grouping of the data such that the errors within that group are correlated with each other. As such, if taking this latent grouping into account were necessary, we could never use our standard tools for analyzing experiments to get the right standard errors.

Experimenting with targeted features

Suppose you run a website and you have some experience or feature that you think might be good for some subset of your users (but ineffective, at best, for others). You might try to (1) identify who would benefit based on observed characteristics and then (2) alter the experience only for the targeted subset of users expected to benefit.

To make things concrete: in some cities, Uber offers “UberFamily,” which means the Uber comes with a car seat. For us (I have two kids), UberFamily is awesome, but the option takes up valuable screen real estate, and for a user that Uber thinks does not have kids, adding it to the app screen is a waste. So Uber would like to both (a) figure out whether it is likely that I have kids and then (b) adjust the experience based on that model. But they’d also like to know whether it’s worth it in general to offer this service, even among those they think could use it. This isn’t the example that motivated this blog post, but it makes the scenario clear.

If you are testing features of this sort, then you want to both (a) assess your targeting and (b) assess the feature itself. How should you proceed? I’m sure there’s probably some enormous literature on this question (there’s a literature on everything), but I figure by offering my thoughts and potentially being wrong on the Internet, I can be usefully corrected.

I think what you want to do is not test your targeting experimentally, but rather roll out the feature for everyone you reasonably can and then evaluate your targeting algorithms on your experimental data. So you would run the experiment with a design that maximizes power to detect treatment effects (e.g., 50% to treatment, 50% to control). In other words, completely ignore your targeting algorithm’s recommendations.

Then, after the experimental data comes in, look for heterogeneous treatment effects conditioned on the predictive model score, where the score can be thought of as a measure of how much we think a person should have benefited from the treatment. The simplest thing you could do would be to normalize all scores (so the scores have the same mean and variance across algorithms, making model coefficients directly interpretable across algorithms). Then just run the regression:

y = \beta_0 + \beta_1 (score \times trt) + \beta_2 score + \beta_3 trt

Hopefully, if the treatment was better for people the model thought would be helped, then \hat{\beta}_1 should be positive (assuming y is such that bigger is better).
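
A sketch of that regression in R, assuming a data frame d with the outcome y, a 0/1 treatment indicator trt, and one algorithm’s score (all names are mine):

```r
# Normalize the score, then fit the interaction model from the equation above;
# lm's formula expansion gives the main effects plus the score:trt interaction.
d$score.z <- as.numeric(scale(d$score))
het.fit <- lm(y ~ score.z * trt, data = d)
summary(het.fit)  # the score.z:trt coefficient plays the role of beta_1
```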

You’d also want to find the minimum score above which you should target people, i.e., the score at which the expected benefit from targeting first becomes positive. You can then simply select the algorithm with the greatest expected improvement, given its minimum targeting score.

This seems like a reasonable approach (and maybe bordering on obvious but it wasn’t obvious to me at first). Any other suggestions?

Simple distributed text editing with Mechanical Turk

When I write a sentence, there’s about a 10% chance it will have a typo or grammatical error of some kind. It’s often painful to find them later, as, like most people, I tend to “fill in the gaps” or glide over typos when reading my own writing. Fortunately, this kind of editing, unlike, say, reading for structure or consistency, is very parallelizable. In fact, reading each sentence alone, out of order, might even be better than reading the whole document sentence by sentence.

As an experiment, I wrote a little script that splits a document up into sentences, with one sentence per line (the script is here). With the resulting CSV, I can use Mechanical Turk to create HITs, with one HIT per sentence. The instructions ask workers to label each sentence as “OK” or “Not OK”, with an optional field to explain their reasoning. The MTurk interface looks like this:

[Screenshot: the MTurk HIT interface for rating a sentence]

 

After splitting the sentences, I went through the CSV file to remove blank lines and LaTeX commands by hand, though one could easily add this feature to the script.
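
The author’s actual script is the one linked above; here is a rough R sketch of the same idea, including the blank-line and LaTeX filtering. The file names and the “sentence” column name are my assumptions (the column should match whatever variable the HIT template uses), and unlike the real script this version records a simple sentence index rather than the LaTeX line number:

```r
# Split a LaTeX source file into one sentence per CSV row for MTurk,
# dropping blank lines and lines that are only LaTeX commands or comments.
lines <- readLines("paper.tex")
keep <- nzchar(trimws(lines)) & !grepl("^\\s*(\\\\|%)", lines)
text <- paste(lines[keep], collapse = " ")
sentences <- unlist(strsplit(text, "(?<=[.!?])\\s+", perl = TRUE))
write.csv(data.frame(id = seq_along(sentences), sentence = sentences),
          "sentences.csv", row.names = FALSE)
```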

I posted the HITs on MTurk this morning, paying 2 cents per HIT, with 4 HITs per sentence (so each sentence will be checked 4 times by different workers). The text was a paper I’m working on. Results started coming in remarkably quickly—here it is after 30 minutes:

[Screenshot: MTurk results dashboard after 30 minutes]

I’m not thrilled with the hourly rate (I try to shoot for $5/hour) but this average is always very sensitive to workers who take a long time. So far, the comments are very helpful, especially since with multiple ratings, you can find problematic sentences—for example:

[Screenshot: a problematic sentence flagged by multiple workers]

The “86” is the line number from the LaTeX document, which is nice because it makes it easier to go find the appropriate sentence to fix. Here are some more samples of the kinds of responses I’m getting:

[Screenshot: more sample worker responses]

Overall, I think it’s a successful experiment, though it was already well known from Soylent that MTurk workers can do editing tasks well.

 

Documenting your work as you go with GNU Make

I’ve long been a convert to using ‘Make’ to turn LaTeX into a PDF. However, you can also easily use Make to back up your work as you go along and take “snapshots” of what a draft looked like at a moment in time (see the example Makefile below). This complements using GitHub, Dropbox, and some external backup (I might add another block to the Makefile that pushes a snapshot to Amazon S3).

My folder structure for a project:

  • writeup (where I store the LaTeX & BibTeX)
  • code
  • data
  • backups (where I store entire snapshots of the directory, w/o backups included, for obvious reasons)
  • snapshots (where just the PDF draft is stored)
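
Since the example Makefile itself isn’t reproduced above, here is a sketch of the kind of Makefile the post describes, using the folder structure listed (file names and flags are illustrative; recipe lines must be indented with tabs):

```make
DATE := $(shell date +%Y-%m-%d-%H%M)

# Build the PDF from the LaTeX + BibTeX sources in writeup/
writeup/paper.pdf: writeup/paper.tex writeup/paper.bib
	cd writeup && pdflatex paper && bibtex paper && pdflatex paper && pdflatex paper

# Archive the whole project (minus previous backups) into backups/
backup: writeup/paper.pdf
	tar --exclude='./backups' -czf backups/backup-$(DATE).tar.gz .

# Keep a dated copy of just the current PDF draft in snapshots/
snapshot: writeup/paper.pdf
	cp writeup/paper.pdf snapshots/draft-$(DATE).pdf
```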

How much *should* MTurk cost (if you’re Amazon)?

Amazon recently announced that they are going to double their percentage fee, from 10% to 20%. At least among the people I follow on Twitter (lots of academics who use MTurk for research), this has caused much consternation. A price increase is clearly bad for workers and bad for requesters (who gets hurt worse will depend on relative elasticities), but what about Amazon?

When I first got interested in online labor markets, I wrote a short paper (“Online Labor Markets”) in which I tried to figure out what the optimal ad valorem charge was from the platform’s perspective. The relevant section is below, but the main conclusion was that most online platforms were pricing as if demand and/or supply were highly elastic. In other words, as if even a small increase in price would send nearly all customers elsewhere.

[Screenshot: excerpt from “Online Labor Markets” on the platform’s optimal ad valorem fee]

The basic reason is that when the platform doubles its fee from 10% to 20%, it doubles its revenue (if everything stays the same) but only increases the cost of using MTurk for users by about 10%. Of course, there is some decline in usage (demand curves slope down), which reduces profits, but it would have to be a huge reduction to offset the direct increase in revenue to the platform. This is more or less the same argument for why cutting taxes doesn’t increase revenue unless tax rates are incredibly high—the consensus estimate is that the revenue-maximizing tax rate is in the mid-70-percent range.
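
A back-of-the-envelope version of this argument (my own illustration, not the model in the paper): suppose the number of tasks purchased falls with the total price (1 + f)·w with constant elasticity e. Then the fee increase only cuts into platform revenue if e is implausibly large:

```r
# Platform revenue as a function of the ad valorem fee f, with iso-elastic
# demand for tasks: Q = A * ((1 + f) * w)^(-e).
revenue <- function(f, e, A = 1, w = 1) f * w * A * ((1 + f) * w)^(-e)

# Ratio of revenue at a 20% fee to revenue at a 10% fee for several demand
# elasticities; the ratio only drops below 1 for elasticities around 8 or higher.
sapply(c(1, 5, 10), function(e) revenue(0.20, e) / revenue(0.10, e))
```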

My guess is that MTurk has fairly few substitutes and someone at Amazon decided it should be making more money as a service. Fortunately for us as observers, because of the great work of Panos Ipeirotis, we’ll get to see what happens.


The impact & potential of online work – some new references

In the last month, two new reports have come out looking at online work and online labor markets, with a focus on their potential for economic development.

There is:

  • A report from the McKinsey Global Institute, “Connecting talent with opportunity in the digital age.” The McKinsey report covers a lot of the economic rationale for online work, and for digitization a bit more broadly.
  • A report from the World Bank, “Jobs without Borders” (pdf), which focuses more on what these markets could do for workers in less developed countries. It brings together quite a bit of disparate data on the size of online marketplaces, worker composition, and so on.

The McKinsey report relies on some of the work that went into my NBER working paper w/ Ajay Agrawal, Liz Lyons and Nico Lacetera on “Digitization and the Contract Labor Market,” which in turn leans heavily on data from oDesk (now Upwork). This paper—along with all the others from the conference—is now available in a book from the University of Chicago Press.