
One weird trick for eye-balling a means comparison

Often, I’m in a seminar or reading a paper and I want to quickly see whether the difference in two means is likely to be due to chance. This comparison requires computing the standard error of the difference in means, which is SE(\Delta) = \sqrt{SE_1^2 + SE_2^2}, where \Delta is the difference in means, SE_1 is the standard error of the first mean, and SE_2 is the standard error of the second mean.

Squaring and taking square roots in your head (or on paper, for that matter) is a hassle, but if the two standard errors are about the same, we can use the particularly useful approximation SE(\Delta) \approx \frac{3}{2} \times SE_1. The reason it is useful is that the width of the 95% CI for \Delta is then 4 \times SE(\Delta) = 6 SE_1 (i.e., 6 of our “original” standard errors).  As such, we can construct the 95% CI for the difference Greek-geometer style: take the original CI, divide it into fourths, and then add one more SE to each end.
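For example (my numbers, just for illustration): if SE_1 = SE_2 = 0.10, the exact value is SE(\Delta) = \sqrt{0.10^2 + 0.10^2} \approx 0.141, versus the approximation \frac{3}{2} \times 0.10 = 0.15. The 95% CI for \Delta is then about 6 \times 0.10 = 0.60 wide, i.e., roughly \pm 0.30 around the estimated difference.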

The figure below illustrates the idea: we’re comparing A & B, so we construct a confidence interval for the difference between them that is 6 SEs in height, and we can easily see whether that CI includes the top of B.

[Figure: constructing the 6-SE confidence interval for the difference between A and B]

What if the SE’s are different?

Often the means we compare don’t have the same standard error, in which case the approximation above would be poor. However, so long as the standard errors are not too different, we can compute a better approximation without any squaring or square roots.  One approximation for the true standard error that’s fairly easy to remember is:

\sqrt{SE_1^2 + SE_2^2} \approx \frac{3}{2}SE_1 - \frac{2}{3}(SE_1 - SE_2).

This is just the first-order Taylor approximation of the exact formula around SE_1 - SE_2 = 0 (using \sqrt{2} \approx 3/2 and 1/\sqrt{2} \approx 2/3).
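To see where this comes from, write SE_2 = SE_1 + h with h small. Then \sqrt{SE_1^2 + SE_2^2} = \sqrt{2 SE_1^2 + 2 SE_1 h + h^2} \approx \sqrt{2}\,SE_1\left(1 + \frac{h}{2 SE_1}\right) = \sqrt{2}\,SE_1 + \frac{h}{\sqrt{2}} = \sqrt{2}\,SE_1 - \frac{1}{\sqrt{2}}(SE_1 - SE_2).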

Monte Carlo Clusterjerk

Chris Blattman recently lamented that reviewers ask him to cluster standard errors for a true experiment, which he views as incorrect, but he had no citation to support his claim. It seems intuitive to me that Chris is right (and everyone commenting on his blog post agreed), but no one could point to something definitive.

I asked on Twitter whether a blog post with some simulations might help placate reviewers and he replied “beggars can’t be choosers”—and so here it is. My full code is on github.

To keep things simple, suppose we have a collection of individuals that are nested in groups, indexed by g. For some outcome of interest y, there’s an individual-specific effect, \epsilon, and a group-specific effect, \eta. This outcome also depends on whether a binary treatment has been applied (status indicated by W), which has an effect size of \beta.

y = \beta \times W + \eta_g + \epsilon

We are interested in estimating \beta and correctly reporting the uncertainty in that estimate.

First, we need to create a data set with a nested structure. The R code below does this, with a few things hard-wired: the \eta and \epsilon are both drawn from a standard normal and the probability of treatment assignment is 1/2. Note that the function takes a boolean parameter randomize.by.group that lets us randomize by group instead of by individual. We can specify the sample size, the number of groups and the size of the treatment effect.
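Here is a rough sketch of that function (my reconstruction—other than randomize.by.group, the names and defaults are guesses; the real code is in the GitHub repo linked above):

make.data <- function(n, num.groups, beta, randomize.by.group = FALSE) {
  g   <- sample(1:num.groups, n, replace = TRUE)  # group index for each individual
  eta <- rnorm(num.groups)                        # group-specific effects, standard normal
  eps <- rnorm(n)                                 # individual-specific effects, standard normal
  if (randomize.by.group) {
    w <- rbinom(num.groups, 1, 0.5)[g]            # assign treatment at the group level (Pr = 1/2)
  } else {
    w <- rbinom(n, 1, 0.5)                        # assign treatment at the individual level (Pr = 1/2)
  }
  data.frame(y = beta * w + eta[g] + eps,         # outcome
             w = w, g = factor(g), eta = eta[g])
}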

This function returns a data frame that we can analyze. Here’s an example of the output. Note that for two individuals with the same group assignment, the \eta term is the same, but that the treatment varies within groups.
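For example (illustrative parameter values, not necessarily the ones used in the post):

df <- make.data(n = 1000, num.groups = 20, beta = 1)
head(df[order(df$g), ])   # rows from the same group share eta, but w varies within group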

Now we need a function that simulates running the experiment once and analyzing the data with a simple linear regression of the outcome on the treatment indicator. The function below returns the estimate, \hat{\beta}, and the standard error, SE(\hat{\beta}), from one “run” of an experiment:
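A sketch of that function, built on make.data above (again, the names and return format are mine):

run.experiment <- function(n, num.groups, beta, randomize.by.group = FALSE) {
  df <- make.data(n, num.groups, beta, randomize.by.group)
  m  <- lm(y ~ w, data = df)                      # plain OLS of outcome on treatment
  data.frame(beta.hat = coef(m)["w"],
             se = coef(summary(m))["w", "Std. Error"])
}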

Let’s simulate running the experiment 1,000 times. (NB: if the “%>%” notation looks funny to you, I’m using the magrittr package.)
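Something like the following (the parameter values are my assumptions):

library(magrittr)
library(dplyr)

results <- 1:1000 %>%
  lapply(function(i) run.experiment(n = 1000, num.groups = 20, beta = 1)) %>%
  bind_rows()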

The standard error also has a sampling distribution but let’s just take the median value from all our simulations:
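Using the simulated results from above:

median(results$se)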

If we compare this to the standard deviation of our collection of \hat{\beta} point estimates, we see the two values are nearly identical (which is good news):
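That comparison is just:

sd(results$beta.hat)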

If we plot the empirical sampling distribution of \hat{\beta} and label the 2.5th and 97.5th percentiles as well as the 95% CI (constructed using that median standard error) around the true \beta, the two intervals are right on top of each other:

[Figure: empirical sampling distribution of \hat{\beta}, with the 2.5th/97.5th percentiles and the 95% CI built from the median SE]

Code for the figure above:
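(This is my own ggplot2 sketch of the figure, not necessarily the original code:)

library(ggplot2)

true.beta <- 1
emp.q <- quantile(results$beta.hat, c(0.025, 0.975))       # empirical percentiles
ci    <- true.beta + c(-1.96, 1.96) * median(results$se)   # CI from the median SE

ggplot(results, aes(x = beta.hat)) +
  geom_density() +
  geom_vline(xintercept = emp.q, linetype = "dashed", colour = "blue") +
  geom_vline(xintercept = ci, linetype = "dotted", colour = "red") +
  theme_bw()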

Main takeaway: despite the group structure, plain vanilla OLS on data from a true experiment randomized at the individual level returns the correct standard errors (at least for the parameters I’ve chosen for this particular simulation).

What if we randomize at the group level but don’t account for this group structure?

At the end of his blog post, Chris adds another cluster-related complaint:

Reviewing papers that randomize at the village or higher level and do not account for this through clustering or some other method. This too is wrong, wrong, wrong, and I see it happen all the time, especially political science and public health.

Let’s redo the analysis, but change the level of randomization to the group and see what happens if we ignore that change in the analysis. As before, we simulate and then compare the median standard error from our simulations to the standard deviation of the sampling distribution of the estimated treatment effect:
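Using the same machinery, with randomize.by.group = TRUE (parameters again my assumptions):

results.grp <- 1:1000 %>%
  lapply(function(i) run.experiment(n = 1000, num.groups = 20, beta = 1,
                                    randomize.by.group = TRUE)) %>%
  bind_rows()

median(results.grp$se)    # what OLS reports
sd(results.grp$beta.hat)  # actual variability of the estimated treatment effect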

The OLS standard errors are (way) too small—the median value from OLS is still about 0.08 (as expected), but the standard deviation of the sampling distribution of the estimated treatment effect is 0.45. The resultant CIs look like this:

[Figure: the CIs under group-level randomization when the group structure is ignored]

Eek. Here are two R-specific fixes, both of which seem to work fine. First, we can use a random effects model (from the lme4 package):
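Something along these lines (a sketch on one simulated data set):

library(lme4)

df.grp <- make.data(n = 1000, num.groups = 20, beta = 1, randomize.by.group = TRUE)
summary(lmer(y ~ w + (1 | g), data = df.grp))   # random intercept for each group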

[Regression output from the random effects model]

or we can cluster standard errors. The package I use for this is lfe, which is really fantastic. Note that you put the factor you want to cluster by in the 3rd position following the formula:
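A sketch (the zeros are placeholders for the unused fixed-effect and IV parts of the felm formula):

library(lfe)

summary(felm(y ~ w | 0 | 0 | g, data = df.grp))  # cluster standard errors by g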

[Regression output with standard errors clustered by group]

One closing thought—a non-econometric argument for why clustering can’t be necessary in a true experiment randomized at the individual level: for *any* experiment, there is presumably some latent (i.e., unobserved to the researcher) grouping of the data such that the errors within a group are correlated with each other. If taking that latent grouping into account were necessary, we could never use our standard tools for analyzing experiments to get the right standard errors.

Experimenting with targeted features

Suppose you run a website and you have some experience or feature that you think might be good for some subset of your users (but ineffective, at best, for others).  You might try to (1) identify who would benefit based on observed characteristics and then (2) alter the experience only for the targeted subset of users expected to benefit.

To make things concrete, in some cities, Uber offers “UberFamily” which means the Uber comes with a car seat. For us (I have two kids), UberFamily is awesome, but the option takes up valuable screen real estate and for a user that Uber thinks does not have kids, adding it to the app screen is a waste. So Uber would like to both (a) figure out if it is likely that I have kids and then (b) adjust the experience based on that model. But they’d also like to know if it’s worth it in general to offer this service even among those they think could use it. This isn’t the example that motivated this blog post, but it makes the scenario clear.

If you are testing features of this sort, then you want to both (a) assess your targeting and (b) assess the feature itself. How should you proceed? I’m sure there’s some enormous literature on this question (there’s a literature on everything), but I figure that by offering my thoughts and potentially being wrong on the Internet, I can be usefully corrected.

I think what you want to do is not test your targeting experimentally, but rather roll out the feature for everyone you reasonably can and then evaluate your targeting algorithms on the experimental data. That is, you would run the experiment with a design that maximizes power to detect treatment effects (e.g., 50% to treatment, 50% to control). In other words, completely ignore your targeting algorithm’s recommendations.

Then, after the experimental data come in, look for heterogeneous treatment effects conditioned on the predictive model score, where the score can be thought of as a measure of how much we think a person should have benefitted from the treatment. The simplest thing you could do would be to normalize all scores (so the scores have the same mean and variance across algorithms, making model coefficients directly interpretable across algorithms) and then just run the regression:

y = \beta_0 + \beta_1 (score \times trt) + \beta_2 score + \beta_3 trt

Hopefully, if the treatment was better for people the model thought would be helped, then \hat{\beta}_1 should be positive (assuming y is such that bigger is better).
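In R, this might look like the following (a sketch with a hypothetical data frame exp.data containing y, trt, and one algorithm’s score):

exp.data$score.z <- scale(exp.data$score)        # normalize the score
fit <- lm(y ~ trt * score.z, data = exp.data)    # includes trt, score.z, and their interaction
summary(fit)                                     # the trt:score.z coefficient is beta_1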

You’d also want to find the minimum score at which you should be targeting people, i.e., the score at which the expected benefit from targeting first becomes positive. You can then simply select the algorithm with the greatest expected improvement, given its minimum score for targeting.
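Continuing the sketch above, the minimum (normalized) score at which the estimated benefit turns positive is where \beta_3 + \beta_1 \times score = 0:

b <- coef(fit)
min.score <- -b["trt"] / b["trt:score.z"]   # benefit is positive above this score (if the interaction is positive)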

This seems like a reasonable approach (and maybe bordering on obvious but it wasn’t obvious to me at first). Any other suggestions?

Simple distributed text editing with Mechanical Turk

When I write a sentence, there’s about a 10% chance it will have a typo or grammatical error of some kind.  It’s often painful to find them later, as, like most people, I tend to “fill in the gaps” or glide over typos when reading my own writing.  Fortunately, this kind of editing, unlike, say, reading for structure or consistency, is very parallelizable. In fact, reading each sentence alone, out of order, might even be better than reading the whole document sentence by sentence.

As an experiment, I wrote a little script that splits a document up into sentences and writes them to a CSV, with one sentence per line (the script is here). With this CSV, I can use Mechanical Turk to create HITs, with one HIT per sentence. The instructions ask workers to label each sentence as “OK” or “Not OK”, with an optional field to explain their reasoning. The MTurk interface looks like this:

[Screenshot: the MTurk HIT interface]

After splitting the sentences, I went through the CSV file to remove blank lines and LaTeX commands by hand, though one could easily add this feature to the script.
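For what it’s worth, here is a rough R sketch of that kind of script, with the blank-line and LaTeX-command filtering built in (the real script is the one linked above; the file names here are made up):

tex.lines <- readLines("paper.tex")
sentences <- do.call(rbind, lapply(seq_along(tex.lines), function(i) {
  s <- unlist(strsplit(tex.lines[i], "(?<=[.!?])\\s+", perl = TRUE))  # split the line into sentences
  s <- trimws(s)
  s <- s[nchar(s) > 0 & !grepl("^\\\\", s)]    # drop blanks and pure LaTeX commands
  if (length(s) == 0) NULL else data.frame(line = i, sentence = s, stringsAsFactors = FALSE)
}))
write.csv(sentences, "sentences.csv", row.names = FALSE)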

I posted the HITs on MTurk this morning, paying 2 cents per HIT, with 4 HITs per sentence (so each sentence will be checked 4 times by different workers).  The text was a paper I’m working on. Results started coming in remarkably quickly—here it is after 30 minutes:

[Screenshot: MTurk results after 30 minutes]

I’m not thrilled with the hourly rate (I try to shoot for $5/hour) but this average is always very sensitive to workers who take a long time. So far, the comments are very helpful, especially since with multiple ratings, you can find problematic sentences—for example:

[Screenshot: worker responses flagging a problematic sentence]

The “86” is the line number from the LaTeX document, which is nice because it makes it easier to go find the appropriate sentence to fix. Here are some more samples of the kinds of responses I’m getting:

[Screenshot: more sample worker responses]

Overall, I think it’s a successful experiment, though it was already well known from the Soylent project that MTurk workers can do editing tasks well.