One sociological critique of economics is that, unlike the physical sciences, economic research can affect the thing it studies. I might not be using the jargon the correct way, but the basic idea is that economics is “performative”: it’s not just a magnifying glass, it’s a magnifying glass that sometimes focuses the light and burns what you’re looking at. I have an example of this from my own work that bugs me more than a little bit but is, ultimately, my own fault. Let me explain.
So back in graduate school, Lydia Chilton and I wrote a paper called “The Labor Economics of Paid Crowdsourcing” (data & code here). In a nutshell, we introduced the labor economics way of thinking about labor supply to the crowdsourcing/computer science crowd. We also ran experiments on MTurk where we varied earnings to see how workers reacted. We thought the really cool “so what” of the paper was that we presented strong evidence for target earning: workers had a preference for earning amounts of money evenly divisible by 5 (here’s the key figure; note the taller black histogram bars):
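To make the target-earning idea concrete, here is a minimal sketch (with simulated data, not the paper’s) of what those taller histogram bars mean: if some workers quit exactly when their total hits a round number, earnings divisible by 5 show up far more often than the ~20% a uniform last digit would predict.

```python
# Simulated illustration of "target earning" -- all numbers here are made up,
# not from the paper. Earnings are in cents.
import random

random.seed(0)

earnings = []
for _ in range(1000):
    if random.random() < 0.3:
        # "Target earners": stop working exactly at a multiple of 5
        earnings.append(random.randrange(5, 205, 5))
    else:
        # Everyone else: stop at an arbitrary amount
        earnings.append(random.randrange(1, 201))

share_round = sum(e % 5 == 0 for e in earnings) / len(earnings)
print(f"share of earnings divisible by 5: {share_round:.2f} (uniform baseline: 0.20)")
```

With even a modest fraction of target earners, the multiples-of-5 share sits well above the 0.20 baseline, which is the spike pattern the paper’s histogram shows.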
Almost as an afterthought, we estimated the distribution of worker reservation wages for our (*very* unusual) task. Using some strong assumptions about functional form, we came up with a median value of $1.38/hour. We put this in the abstract, and we even discussed how it could be used to predict how many people would accept a task, because every paper has to make some claim about how it is useful.
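The prediction logic is simple: if you know the distribution of reservation wages, the share of workers accepting a task paying $w/hour is just the CDF evaluated at w. Here is a hedged sketch assuming a lognormal distribution with median $1.38/hour; the sigma value is invented for illustration, and neither it nor the lognormal choice is necessarily what the paper used.

```python
# Hypothetical sketch: predicting task acceptance from a reservation-wage
# distribution. The lognormal form and SIGMA are assumptions for illustration,
# not the paper's actual estimates.
import math

MEDIAN = 1.38   # $/hour -- the paper's median reservation wage estimate
SIGMA = 1.0     # log-scale spread -- made up for this example

def share_accepting(wage):
    """Predicted fraction of workers whose reservation wage <= wage."""
    z = (math.log(wage) - math.log(MEDIAN)) / SIGMA
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"{share_accepting(1.38):.2f}")  # 0.50 -- by construction, half accept at the median
print(f"{share_accepting(5.00):.2f}")
```

The point is just the mechanism: the headline $1.38 number is one summary statistic of an estimated distribution, under strong assumptions, for one weird task.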
Anyway, every once in a while, I see something on Twitter like this (7 *years* later):
Hmmm, I wonder where that $1.38/hour figure came from. Anyway, mea culpa. If you’re a MTurk worker, my apologies. Feel free to cite this blog post as the authority that $1.38/hour is a silly number that shouldn’t anchor wages.