Using covariates to increase the precision of randomized experiments
A simple difference-in-means estimator of the average treatment effect (ATE) from a randomized experiment is unbiased, which makes it a good start, but it often leaves a lot of precision on the table. Even if you haven't used covariates (pre-treatment variables observed for your units) in the design of the experiment (this is often difficult to do with streaming random assignment in Internet experiments; see our paper), you can use them to increase the precision of your estimates in the analysis phase. Here are some simple ways to do that. I'm not including a whole range of more sophisticated/complicated approaches. And, of course, if you don't have any covariates for the units in your experiments, or they aren't very predictive of your outcome, none of this will help you much.
Post-stratification
Prior to the experiment you could do stratified randomization (i.e., blocking) according to some categorical covariate, making sure that there are the same number of, e.g., each gender, country, or paid/free account type in each treatment. But you can also do something similar after the fact: compute an ATE within each stratum and then combine the strata-level estimates, weighting by the total number of observations in each stratum. For details, including proofs showing this often won't be much worse than blocking, consult Miratrix, Sekhon & Yu (2013).
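Here is a minimal sketch of a post-stratified estimator in Python with pandas; the column names (`y`, `treated`, `stratum`) are hypothetical, and it assumes every stratum contains units from both arms.

```python
import pandas as pd

def post_stratified_ate(df: pd.DataFrame) -> float:
    """Weight each stratum's difference in means by its share of all units."""
    n = len(df)
    ate = 0.0
    for _, g in df.groupby("stratum"):
        # Difference in means within this stratum (assumes both arms appear).
        diff = (g.loc[g["treated"] == 1, "y"].mean()
                - g.loc[g["treated"] == 0, "y"].mean())
        ate += (len(g) / n) * diff
    return ate
```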
Regression adjustment with a single covariate
Often what you most want to adjust for is a single numeric covariate,1 such as a lagged version of your outcome (i.e., your outcome from some convenient period before treatment). You can simply use ordinary least squares regression to adjust for this covariate by regressing your outcome on both a treatment indicator and the covariate. Even better (particularly if treatment and control are different sizes by design), you should regress your outcome on: a treatment indicator, the covariate centered so that it has mean zero, and the product of the two.2 Asymptotically (and usually in practice with a reasonably sized experiment), this cannot hurt precision and will typically increase it, and it is pretty easy to do. For more on this, see Lin (2013).
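Here is a sketch of that interacted adjustment using statsmodels, with an HC2 heteroskedasticity-robust standard error; the column names `y`, `d` (the 0/1 treatment indicator), and `x` are hypothetical.

```python
import statsmodels.formula.api as smf

def lin_adjusted_ate(df):
    # Center the covariate so that, with the interaction included, the
    # coefficient on the treatment indicator d estimates the ATE.
    df = df.assign(x_c=df["x"] - df["x"].mean())
    fit = smf.ols("y ~ d + x_c + d:x_c", data=df).fit(cov_type="HC2")
    return fit.params["d"], fit.bse["d"]
```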
Higher-dimensional adjustment
If you have a lot more covariates to adjust for, you may want to use some kind of penalized regression. For example, you could use the Lasso (L1-penalized regression); see Bloniarz et al. (2016).
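As a rough illustration, here is one form of a Lasso-adjusted estimator in the spirit of Bloniarz et al. (2016): fit a separate Lasso in each arm, then combine the predictions with within-arm mean residuals (with OLS fits this reduces to the interacted estimator above). Bloniarz et al. give specific guidance on the penalty; this sketch just uses cross-validation, and `X`, `y`, `d` are hypothetical NumPy arrays.

```python
from sklearn.linear_model import LassoCV

def lasso_adjusted_ate(X, y, d):
    # Fit a separate cross-validated Lasso in each arm.
    m1 = LassoCV(cv=5).fit(X[d == 1], y[d == 1])
    m0 = LassoCV(cv=5).fit(X[d == 0], y[d == 0])
    # Average predicted treated-minus-control difference over all units,
    # plus within-arm mean residuals.
    return (m1.predict(X).mean() - m0.predict(X).mean()
            + (y[d == 1] - m1.predict(X[d == 1])).mean()
            - (y[d == 0] - m0.predict(X[d == 0])).mean())
```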
Use out-of-sample predictions from any model
Maybe you instead want to use neural nets, trees, or an ensemble of a bunch of models? That's fine, but if you want to be able to do valid statistical inference (i.e., get 95% confidence intervals that actually cover 95% of the time), you have to be careful. The easiest way to be careful in many Internet industry settings is to use historical data to train the model and then get out-of-sample predictions Yhat from that model for your present experiment. You then just subtract Yhat from Y and use the simple difference-in-means estimator on the residuals. Aronow and Middleton (2013) provide some technical details and extensions. A simple extension that makes this more robust to changes over time is to use this out-of-sample Yhat as a covariate, as described above.3
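A minimal sketch of that residualized difference in means, assuming `y`, `yhat`, and `d` are NumPy arrays:

```python
def adjusted_difference_in_means(y, yhat, d):
    # yhat must come from a model trained only on pre-experiment data,
    # so it is independent of treatment assignment.
    resid = y - yhat
    return resid[d == 1].mean() - resid[d == 0].mean()
```

For the more robust variant, you could instead pass Yhat as the covariate `x` in the interacted OLS sketch above.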
- As Winston Lin notes in the comments, and as is implicit in my comparison with post-stratification, as long as the number of covariates is small and not growing with sample size, the same asymptotic results apply.
- Note that if the covariate is binary or, more generally, categorical, then this exactly coincides with the post-stratified estimator considered above.
- I added this sentence in response to Winston Lin's comment.
Comments
Hi Dean,
Nice post! A couple other things might be of interest:
The ideas under “Regression adjustment with a single covariate” carry over to adjustment with multiple covariates if the no. of covariates is small relative to the treatment and control group sample sizes (e.g., my paper assumes the no. of covariates is fixed as the sample size goes to infinity).
Wager et al. have interesting results on high-dimensional adjustment and inference using the Lasso, random forests, etc.:
https://arxiv.org/abs/1607.06801
Cheers,
winston
Thanks, Winston. Added a note about that.
Yeah, that paper provides justification for some useful flexibility beyond the Bloniarz et al. (2016) results. In practice in the Internet industry, I would recommend that folks use out-of-sample predictions from a model trained on recent data, and perhaps use that Yhat as a covariate — thus combining the ability to use many covariates without dealing with the statistical inferential challenges or problems that could arise from sudden changes to the true model…