
Would a job by any other name pay as much?

I’m working on a project where it would be useful to know what an oDesk job is likely to pay at the time it is posted. Although there are plenty of structured predictors available (e.g., the category, skills, estimated duration, etc.), presumably the job description and the job title contain lots of wage-relevant information. The title in particular is likely to identify the main skill needed, the task to be done, and perhaps the quality of person the would-be employer is looking for (e.g., “beginner” or “senior”).

Unfortunately, I haven’t done any natural language processing before, so I’m a bit out of my element. However, there are good tutorials online, as well as R packages that can guide you through the rough parts. I thought writing up my explorations might be useful to others who want to get started with this approach. The gist of the code I wrote is available here.

What I did:

1) I took 20K recent hourly oDesk jobs where the freelancer worked at least 5 hours, and calculated the log wage over the course of the contract. Incidentally, oDesk log wages, like real-world log wages, are pretty well approximated by a normal distribution.
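
As a rough sketch of this step (the column names total_charge and hours are hypothetical stand-ins for whatever the real data uses):

```r
# Toy stand-in for the real contract data; column names are made up
jobs <- data.frame(
  title        = c("senior php developer", "seo link building expert",
                   "iphone app developer", "data entry assistant"),
  total_charge = c(450, 80, 640, 56),  # dollars billed over the contract
  hours        = c(15, 10, 20, 8),
  stringsAsFactors = FALSE
)

# Hourly wage realized over the course of the contract, then logged
jobs$log_wage <- log(jobs$total_charge / jobs$hours)
```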
 
2) I used the RTextTools package to create a document-term matrix from the job titles. This is just a matrix of 1s and 0s where the rows are jobs and the columns are relatively frequent words that are not common English words; if the job title contains a given word, that cell gets a 1, otherwise a 0.
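
Continuing with the toy `jobs` data frame from the previous sketch, this step looks roughly like the following (the argument choices are illustrative, not necessarily what my actual code uses):

```r
library(RTextTools)  # create_matrix() wraps the tm package

# Term matrix from the titles: removeStopwords drops common English
# words, stemWords collapses variants like "developer"/"developers"
dtm <- create_matrix(jobs$title,
                     language        = "english",
                     removeStopwords = TRUE,
                     stemWords       = TRUE,
                     minWordLength   = 3)

# Collapse counts to the 0/1 indicators described above
X <- as.matrix(dtm)
X[X > 1] <- 1
```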

3) I fit a linear model, using the lasso for regularization (via the glmnet package), and used cross-validation to select the best lambda. A linear model probably isn’t ideal for this, but at least it gives nicely interpretable coefficients.
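
A self-contained sketch of the lasso step, with simulated data standing in for the real title matrix and wages:

```r
library(glmnet)

# Simulated stand-ins: X plays the role of the 0/1 title-word matrix,
# y the log wages; only a handful of "words" truly matter
set.seed(1)
n <- 2000; p <- 300
X <- matrix(rbinom(n * p, 1, 0.05), n, p,
            dimnames = list(NULL, paste0("word", seq_len(p))))
beta <- c(rnorm(20, sd = 0.4), rep(0, p - 20))
y <- drop(X %*% beta) + rnorm(n, sd = 0.5)

# Lasso (alpha = 1), with lambda chosen by cross-validation
train <- sample(n, n / 2)
cvfit <- cv.glmnet(X[train, ], y[train], alpha = 1)

# Coefficients at the selected lambda; most are exactly zero
coefs <- coef(cvfit, s = "lambda.min")

# Out-of-sample R-squared on the held-out half
pred <- predict(cvfit, newx = X[-train, ], s = "lambda.min")
1 - sum((y[-train] - pred)^2) / sum((y[-train] - mean(y[-train]))^2)
```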

So, how does it do? Here is a sample of the coefficients that didn’t get set to zero by the lasso, ordered by magnitude (point sizes are scaled by the log number of times the word appears in the 10K training sample):

The coefficients can be interpreted as % changes from the mean wage in the sample when the corresponding word (or word fragment) is present in the title. Nothing too surprising, I think: at the extremes, SEO is a very low-paying job, whereas developing true applications is high-paying.
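
One detail worth making explicit: with a log outcome, a coefficient b corresponds to a proportional change of exp(b) - 1, so reading b directly as a percent change is only accurate when b is small:

```r
b <- c(0.05, 0.25, 0.75)
round(exp(b) - 1, 3)  # 0.051 0.284 1.117
```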

In terms of out-of-sample prediction, the R-squared was a little over 0.30. I’ll have to see how much of an improvement can be obtained by adding some of the structured data available, but explaining 30% of the variation using the titles alone is higher than I would have expected before fitting the model.

Some light data munging with R, with an application to ranking NFL Teams

I recently submitted this blog to R-bloggers, which aggregates R-related blog posts. It’s a fantastic site and has been invaluable to me as I’ve learned R. One of my favorite kinds of articles is the hands-on, “hello world”-style weekend project that dips into a topic/technology, so here’s my first attempt at one in this style.

First, some background: I’ve been working with Greg on a project that analyzes the results of two-person contests. An important part of the problem is comparing different ranking systems that can adjust for the strength of the opponent (e.g., the Elo rating system, TrueSkill, Glicko, etc.). As I understand it, all of these systems work around the intractability of a fully Bayesian treatment of the problem, and try to deal with things like trends in ability, the distribution of the unobserved component, etc.

We’re still collecting data from a pilot, but in the interim, I wanted to start getting my feet wet with some real competition data. Sports statistics are a readily available source, so my plan was:

  1. Pull some data on NFL games from the 2011 season to date.
  2. Fit a simple model that produces a rank ordering of teams.
  3. Pull data on ESPN’s Power Rankings of NFL teams (based on votes by their columnists), using the XML package (see the sketch after this list).
  4. Make a comparison plot, showing how the two ranks compare, using ggplot2.
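
For step 3, the XML package’s readHTMLTable() does most of the heavy lifting. Here’s a minimal sketch; the URL and the position of the rankings table within the page are assumptions, not something I’ve verified:

```r
library(XML)  # readHTMLTable() parses <table> elements into data frames

# Assumed URL for ESPN's NFL Power Rankings page (week 5, 2011)
url    <- "http://espn.go.com/nfl/powerrankings"
tables <- readHTMLTable(url, stringsAsFactors = FALSE)

# One data frame per <table> on the page; picking the first is an
# assumption, and the right index has to be found by inspection
espn <- tables[[1]]
head(espn)
```
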
For the model, I wanted something really simple (hoping no one from Football Outsiders is reading this). In my model, the difference in scores between the two teams is simply the difference in their “abilities,” plus an error term:

\Delta S_{ij} = \alpha^H_i - \alpha^A_j + \epsilon_{ij}

where the alphas are team-and-venue (i.e., home or away) specific random effects. For our actual rating, we can order teams by the sum of their estimated home and away effects, i.e.:

\hat{\alpha}_i^H + \hat{\alpha}_i^A

Estimating the 32 x 2 parameters separately, given how little data we actually have, would probably lead to poor results. Instead, I used the excellent lme4 package, which approximates a Bayesian approach in which we start with a prior that the alpha parameters are normally distributed.
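
Here’s a minimal sketch of that model in lme4, with a made-up schedule and made-up abilities standing in for the real 2011 game data. Note that the away effect enters the formula with a plus sign, so its estimate is the negative of the away “ability”:

```r
library(lme4)

# Made-up schedule and abilities standing in for the real game data
set.seed(1)
teams   <- paste0("team", sprintf("%02d", 1:32))
ability <- setNames(rnorm(32, sd = 4), teams)

games <- expand.grid(home = teams, away = teams, stringsAsFactors = FALSE)
games <- subset(games, home != away)
games$score_diff <- ability[games$home] - ability[games$away] +
                    rnorm(nrow(games), sd = 10)

# Random intercepts for the home and the away team; the implied normal
# prior shrinks noisily estimated teams toward zero
m <- lmer(score_diff ~ 1 + (1 | home) + (1 | away), data = games)

# Combined rating: home effect minus the (sign-flipped) away effect,
# i.e. the estimated alpha^H_i + alpha^A_i
re     <- ranef(m)
rating <- setNames(re$home[teams, "(Intercept)"] -
                   re$away[teams, "(Intercept)"], teams)
head(sort(rating, decreasing = TRUE))
```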

Putting the last thing first, here’s the result of 4), comparing my “homebrew” ranking to the ESPN ranking, as of Week 5 (before the October 9th games):

No real comment on my model, other than that it (a) thinks ESPN vastly overrates the Chargers and (b) thinks more highly of the Ravens.
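
As for the plotting in step 4, the comparison is just a scatter of one ranking against the other. Here’s a sketch with made-up ranks (the real plot uses the fitted ratings and the scraped ESPN table):

```r
library(ggplot2)

# Made-up ranks purely for illustration
ranks <- data.frame(
  team     = c("Packers", "Lions", "Ravens", "Patriots", "Chargers"),
  espn     = c(1, 3, 5, 2, 4),
  homebrew = c(1, 4, 2, 3, 12)
)

# Teams off the 45-degree line are where the two systems disagree
ggplot(ranks, aes(x = espn, y = homebrew, label = team)) +
  geom_abline(intercept = 0, slope = 1, colour = "grey60") +
  geom_text(size = 3) +
  scale_x_reverse() +  # put rank 1 in the top-right corner
  scale_y_reverse() +
  labs(x = "ESPN Power Ranking", y = "Homebrew ranking")
```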

The code for all the steps is posted below, with explanatory comments: