Stochastics and Statistics Seminar

Regression-Robust Designs of Controlled Experiments

May 30, 2014 @ 11:00 am

Nathan Kallus (MIT)


Achieving balance between experimental groups is a cornerstone of causal inference. Without balance, any observed difference may be attributed to a difference other than the treatment alone. In controlled/clinical trials, where the experimenter controls the administration of treatment, complete randomization of subjects has been the gold standard for achieving this balance because it allows for unbiased and consistent estimation and inference in the absence of any a priori knowledge or measurements. However, since estimator variance under complete randomization may be slow to converge, experimental designs that balance pre-treatment measurements (baseline covariates) are in pervasive use, including randomized block designs, pairwise-matched designs, and re-randomization.

We formally argue that absolutely no balance better than complete randomization's can be achieved without partial structural knowledge about the treatment effects. The popularity of balancing designs and their advocacy therefore imply that partial knowledge is in fact available to the researcher, just as one would expect from a researcher conducting an experiment in her own domain of expertise. We propose a novel framework for formulating such knowledge using functional analysis that subsumes all of the aforementioned designs in that it recovers them as optimal under certain modeling choices, thus theoretically characterizing their comparative power under different assumptions and suggesting extensions of these to multi-arm trials. Furthermore, it suggests new optimal designs that are based on more robust nonparametric modeling and that offer extensive gains in precision and power. In certain cases we are able to argue linear convergence O(1/2^n) to the sample average treatment effect (as compared to the usual logarithmic convergence O(1/n)).

We characterize generally the unbiasedness, variance, and consistency of resulting estimators; solve the design problem; and develop appropriate inferential algorithms for Fisherian and Neymanian hypotheses. We uncover connections to Bayesian experimental design and make extensions to dealing with non-compliance.
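To make the contrast between complete randomization and a balancing design concrete, here is a minimal sketch (not from the talk; the covariate, sample size, and seed are hypothetical) comparing baseline-covariate imbalance under complete randomization against a pairwise-matched design, in which subjects are sorted by a covariate, paired with their nearest neighbor, and one member of each pair is randomly assigned to treatment:

```python
import random
import statistics

random.seed(0)

# Hypothetical baseline covariate (e.g., age) for 100 subjects.
covariate = [random.gauss(50, 10) for _ in range(100)]

def complete_randomization(x):
    """Randomly split subject indices into two equal-size groups."""
    idx = list(range(len(x)))
    random.shuffle(idx)
    half = len(x) // 2
    return idx[:half], idx[half:]

def pairwise_matched(x):
    """Sort by covariate, pair adjacent subjects, randomize within each pair."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    treat, control = [], []
    for a, b in zip(order[::2], order[1::2]):
        if random.random() < 0.5:  # coin flip decides which pair member is treated
            a, b = b, a
        treat.append(a)
        control.append(b)
    return treat, control

def imbalance(x, groups):
    """Absolute difference in group means of the baseline covariate."""
    t, c = groups
    return abs(statistics.mean(x[i] for i in t) - statistics.mean(x[i] for i in c))

imb_cr = imbalance(covariate, complete_randomization(covariate))
imb_pm = imbalance(covariate, pairwise_matched(covariate))
print(f"complete randomization imbalance: {imb_cr:.3f}")
print(f"pairwise matching imbalance:      {imb_pm:.3f}")
```

Because matched pairs are close in the covariate by construction, the pairwise-matched design typically yields a much smaller mean difference at baseline; the talk's point is that such a design implicitly encodes the assumption that this covariate is relevant to the treatment effect.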

© MIT Statistics + Data Science Center | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764