On Shape Constrained Estimation

Shape constraints such as monotonicity, convexity, and log-concavity are naturally motivated in many applications, and can offer attractive alternatives to more traditional smoothness constraints in nonparametric estimation. In this talk we present some recent results on shape constrained estimation in high and low dimensions. First, we show how shape constrained additive models can be used to select variables in a sparse convex regression function. In contrast, additive models generally fail for variable selection under smoothness constraints. Next, we introduce graph-structured…
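
As a toy illustration of what a shape constraint provides (not the high-dimensional estimators discussed in the talk), the sketch below fits a monotone regression function with scikit-learn's isotonic regression; the data-generating function, noise level, and sample size are arbitrary choices for the example.

```python
# A toy shape-constrained fit: isotonic (monotone) least squares, with no
# smoothness penalty. This only illustrates the idea of a shape constraint;
# it is not the convex/additive estimators from the talk, and the
# data-generating process below is invented for the example.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
f_true = np.log1p(5.0 * x)                      # monotone target function
y = f_true + 0.2 * rng.standard_normal(n)

iso = IsotonicRegression(increasing=True)
f_hat = iso.fit_transform(x, y)                 # least squares under monotonicity
print("mean squared error of the monotone fit:", np.mean((f_hat - f_true) ** 2))
```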

On Complex Supervised Learning Problems, and On Ranking and Choice Models

Shivani Agarwal (Indian Institute of Science/Radcliffe)
32-123

While simple supervised learning problems like binary classification and regression are fairly well understood, increasingly, many applications involve more complex learning problems: more complex label and prediction spaces, more complex loss structures, or both. The first part of the talk will discuss recent advances in our understanding of such problems, including the notion of convex calibration dimension of a loss function, unified approaches for designing convex calibrated surrogates for arbitrary losses, and connections between supervised learning and property elicitation. The…
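
As a small, standard example of the kind of object mentioned here (not the general constructions from the talk), the sketch below checks numerically that the logistic loss, a convex surrogate, is calibrated for the binary 0-1 loss: the score minimizing the conditional surrogate risk has the same sign as the Bayes classifier.

```python
# Numerical check of a textbook special case of calibration: the logistic
# loss is a convex surrogate for the binary 0-1 loss, and the score that
# minimizes the conditional surrogate risk has the same sign as the Bayes
# classifier sign(2*eta - 1). This is only a familiar example, not the
# general constructions from the talk.
import numpy as np
from scipy.optimize import minimize_scalar

def logistic_loss(margin):
    return np.log1p(np.exp(-margin))

for eta in [0.1, 0.3, 0.7, 0.9]:                       # P(Y = +1 | x)
    cond_risk = lambda f: eta * logistic_loss(f) + (1 - eta) * logistic_loss(-f)
    f_star = minimize_scalar(cond_risk, bounds=(-10, 10), method="bounded").x
    print(f"eta={eta}: surrogate minimizer sign {np.sign(f_star):+.0f}, "
          f"Bayes sign {np.sign(2 * eta - 1):+.0f}")
```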

Randomized Controlled Trials and Policy Making in Developing Countries

Twenty years ago, randomized controlled trials testing social policies were essentially unheard of in developing countries, although there were prominent examples in developed economies. Today their number, scale, and scope are far greater than could probably have been imagined. This talk will take stock of the role that randomized controlled trials have played to date, and can play in the future, in guiding policy. We will try to assess both successes and tribulations, challenges and promises.

Universal Laws and Architectures: Theory and Lessons from Brains, Nets, Hearts, Bugs, Grids, Flows, and Zombies

This talk will aim to give an accessible description of progress on a theory of network architecture relevant to neuroscience, biology, medicine, and technology, particularly SDN/NFV and cyberphysical systems. Key ideas are motivated by familiar examples from neuroscience, including live demos using audience brains, and compared with examples from technology and biology. Background material and additional details are available in online videos (accessible from the website cds.caltech.edu/~doyle), for which this talk can be viewed as a short trailer. More specifically, my research is aimed at…

Pairwise Comparison Models for High-Dimensional Ranking

Martin Wainwright (UC Berkeley)
32-123

Data in the form of pairwise comparisons between a collection of n items arises in many settings, including voting schemes, tournament play, and online search rankings. We study a flexible non-parametric model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity (SST). The SST class includes a large family of classical parametric models as special cases, among them the Bradley-Terry-Luce and Thurstone models, but is substantially richer. We provide…
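
As a quick numerical complement (the latent scores below are arbitrary, and this is not the estimators from the talk), the sketch checks that a Bradley-Terry-Luce model, one of the parametric special cases named above, satisfies strong stochastic transitivity.

```python
# Numerical sanity check that a Bradley-Terry-Luce (BTL) model, one of the
# parametric special cases named above, satisfies strong stochastic
# transitivity (SST): if P[i,j] >= 1/2 and P[j,k] >= 1/2 then
# P[i,k] >= max(P[i,j], P[j,k]). The latent scores are arbitrary.
import itertools
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(6)                                   # latent BTL scores
P = np.exp(w)[:, None] / (np.exp(w)[:, None] + np.exp(w)[None, :])

violations = 0
for i, j, k in itertools.permutations(range(len(w)), 3):
    if P[i, j] >= 0.5 and P[j, k] >= 0.5:
        violations += P[i, k] < max(P[i, j], P[j, k]) - 1e-12
print("SST violations for the BTL model:", violations)      # always 0
```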

Sub-Gaussian Mean Estimators

Roberto Oliveira (IMPA)
32-123

We discuss, from a non-asymptotic point of view, the possibilities and limitations of estimating the mean of a real-valued random variable from independent and identically distributed observations. In particular, we define estimators with a sub-Gaussian behavior even for certain heavy-tailed distributions. We also prove various impossibility results for mean estimators. These results are available at http://arxiv.org/abs/1509.05845 and will appear in the Annals of Statistics. (Joint work with L. Devroye, M. Lerasle, and G. Lugosi.)
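
For reference, the sketch below implements the median-of-means estimator, a classical construction whose deviations behave in a sub-Gaussian-like way for heavy-tailed data when the number of blocks is matched to the target confidence level; it is shown only as a familiar example of the kind of estimator discussed here, not as the specific constructions from the paper.

```python
# Median-of-means: a classical mean estimator with sub-Gaussian-like
# deviations for heavy-tailed data (finite variance) when the block count is
# tuned to the confidence level. A familiar example only, not the paper's
# constructions; the distribution and block count are arbitrary choices.
import numpy as np

def median_of_means(sample, n_blocks, seed=0):
    """Randomly split the sample into blocks, average each block, take the median."""
    x = np.random.default_rng(seed).permutation(np.asarray(sample, dtype=float))
    return float(np.median([block.mean() for block in np.array_split(x, n_blocks)]))

rng = np.random.default_rng(42)
heavy_tailed = rng.standard_t(df=2.5, size=10_000)   # true mean 0, finite variance
print("empirical mean :", heavy_tailed.mean())
print("median of means:", median_of_means(heavy_tailed, n_blocks=30))
```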

Double Machine Learning: Improved Point and Interval Estimation of Treatment and Causal Parameters

Most supervised machine learning (ML) methods are explicitly designed to solve prediction problems very well. Achieving this goal does not imply that these methods automatically deliver good estimators of causal parameters. Examples of such parameters include individual regression coefficients, average treatment effects, average lifts, and demand or supply elasticities. In fact, estimates of such causal parameters obtained by naively plugging ML estimators into the corresponding estimating equations can behave very poorly, for example, by formally having inferior rates of…
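
To make the contrast concrete, here is a minimal sketch (under an assumed partially linear model with simulated data, not the talk's exact procedure) of the orthogonalization-plus-cross-fitting recipe usually called double machine learning: nuisance regressions E[y|X] and E[d|X] are fit out-of-fold, and the treatment effect is estimated from the residuals.

```python
# Minimal sketch of the orthogonalization + cross-fitting idea behind
# "double machine learning" in an assumed partially linear model with
# simulated data: residualize the outcome and the treatment on the controls,
# then regress residual on residual. Not the exact estimators from the talk.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 2000, 10, 0.5                        # true treatment effect = 0.5
X = rng.standard_normal((n, p))
g = np.sin(X[:, 0]) + X[:, 1] ** 2                 # nonlinear confounding
d = g + rng.standard_normal(n)                     # treatment depends on X
y = theta * d + g + rng.standard_normal(n)         # outcome

res_y, res_d = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Cross-fitting: nuisance regressions E[y|X] and E[d|X] are fit on one
    # fold and used to residualize the held-out fold.
    m_y = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], y[train])
    m_d = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], d[train])
    res_y[test] = y[test] - m_y.predict(X[test])
    res_d[test] = d[test] - m_d.predict(X[test])

theta_hat = res_d @ res_y / (res_d @ res_d)        # OLS of residual on residual
print("estimated treatment effect:", round(theta_hat, 3))
```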

Distributed Learning Dynamics Convergence in Routing Games

With the emergence of smartphone-based mobility sensing as the dominant sensing paradigm over the last decade, radically new information sets have become available to the driving public. This information enables commuters to make repeated daily decisions based on the anticipated state of the network. This repeated decision-making process creates interesting patterns in the transportation network, in which users might (or might not) reach an equilibrium, depending on the information at their disposal (for example knowing…
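
As a toy version of this repeated decision process (the network, latency functions, and learning rule below are made up for illustration, not the talk's model), the sketch simulates a population splitting between two routes and updating its split with multiplicative weights on observed travel times; the split approaches the Wardrop equilibrium of this small congestion game.

```python
# Toy repeated route-choice dynamics: a population splits between two routes
# and updates its split with multiplicative weights (Hedge) on observed
# travel times. The network, latency functions, and learning rate are
# invented for illustration; the split approaches the Wardrop equilibrium of
# this small congestion game, which puts 1/3 of the flow on route 0.
import numpy as np

eta, rounds = 0.1, 500
w = np.ones(2)                            # route weights (route 0, route 1)
for _ in range(rounds):
    x = w / w.sum()                       # current flow split
    costs = np.array([1.0 + x[0],         # route 0 latency: 1 + its flow
                      2.0 * x[1]])        # route 1 latency: 2 * its flow
    w = w * np.exp(-eta * costs)          # Hedge update on observed costs

print("final split:", np.round(w / w.sum(), 3), "; equilibrium is [1/3, 2/3]")
```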

Confidence Intervals for High-Dimensional Linear Regression: Minimax Rates and Adaptivity

Tony Cai (U Penn)
32-123

Confidence sets play a fundamental role in statistical inference. In this paper, we consider confidence intervals for high-dimensional linear regression with random design. We first establish the convergence rates of the minimax expected length for confidence intervals in the oracle setting where the sparsity parameter is given. The focus is then on the problem of adaptation to sparsity for the construction of confidence intervals. Ideally, an adaptive confidence interval should have its length automatically adjusted to the sparsity of…
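
For a concrete baseline, the sketch below builds a confidence interval for a single coefficient using the debiased (de-sparsified) lasso, one standard construction in this setting; it is an illustration only, not the minimax-adaptive procedures analyzed in the paper, and the simulated design and tuning parameters are arbitrary choices.

```python
# Sketch of one standard confidence interval in this setting: the debiased
# (de-sparsified) lasso for a single coefficient. Illustration only; not the
# minimax-adaptive procedures analyzed in the paper, and the simulated
# design, sparsity, and tuning parameters are arbitrary.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                                     # sparse truth, beta_0 = 1
y = X @ beta + rng.standard_normal(n)

# Step 1: lasso fit of y on X, plus a crude noise-level estimate.
fit = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)
resid = y - fit.predict(X)
sigma_hat = np.sqrt(np.mean(resid ** 2))

# Step 2: nodewise lasso of the first column on the remaining columns.
node = Lasso(alpha=0.1, max_iter=10_000).fit(X[:, 1:], X[:, 0])
z = X[:, 0] - node.predict(X[:, 1:])

# Step 3: debias the first coordinate and form a 95% interval.
b = fit.coef_[0] + z @ resid / (z @ X[:, 0])
half_width = 1.96 * sigma_hat * np.linalg.norm(z) / abs(z @ X[:, 0])
print(f"debiased estimate {b:.3f}, 95% CI ({b - half_width:.3f}, {b + half_width:.3f})")
```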
