Linear Regression with Many Included Covariates

Whitney Newey (MIT Economics)
E62-587

We consider asymptotic inference for linear regression coefficients when the number of included covariates grows as fast as the sample size. We find a limiting normal distribution with an asymptotic variance that is larger than the usual one. We also find that all of the usual versions of heteroskedasticity-consistent standard error estimators are inconsistent under these asymptotics. The problem with these standard errors is that they do not make a correct "degrees of freedom" adjustment. We propose a new heteroskedasticity…
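
As a rough illustration of the degrees-of-freedom issue (not the estimator proposed in the talk), the NumPy sketch below compares the usual Eicker–White (HC0) sandwich standard error with a leverage-adjusted (HC2-style) variant in a regression where the number of covariates is a sizeable fraction of the sample size; the data-generating process is invented for the example.

```python
# Sketch (not the estimator proposed in the talk): compare the usual HC0
# sandwich standard error with a leverage-adjusted (HC2-style) one when the
# number of covariates is a sizeable fraction of the sample size.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 80                                     # many covariates relative to n
X = rng.standard_normal((n, k))
beta = np.zeros(k); beta[0] = 1.0
errors = rng.standard_normal(n) * (1.0 + 0.5 * np.abs(X[:, 0]))   # heteroskedastic
y = X @ beta + errors

XtX_inv = np.linalg.inv(X.T @ X)
bhat = XtX_inv @ X.T @ y
resid = y - X @ bhat
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)        # leverages (diag of hat matrix)

def sandwich_se(weights):
    meat = (X * (weights * resid**2)[:, None]).T @ X
    return np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(f"HC0 s.e. of beta_1: {sandwich_se(np.ones(n))[0]:.3f}")
print(f"HC2 s.e. of beta_1: {sandwich_se(1.0 / (1.0 - h))[0]:.3f}")
```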


Central Limit Theorems and Bootstrap in High Dimensions

Denis Chetverikov (UCLA)
E62-450

We derive central limit and bootstrap theorems for probabilities that centered high-dimensional vector sums hit rectangles and sparsely convex sets. Specifically, we derive Gaussian and bootstrap approximations for the probabilities Pr(n^{−1/2} ∑_{i=1}^{n} X_i ∈ A), where X_1, …, X_n are independent random vectors in ℝ^p and A is a rectangle, or, more generally, a sparsely convex set, and show that the approximation error converges to zero even if p = p_n → ∞ and p ≫ n; in particular, p can be as large as O(e^{C n^c}) for some constants c, C > 0. The result…
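
For intuition, here is a small NumPy sketch of the Gaussian multiplier bootstrap for the maximum coordinate of a normalized sum, the max-type statistic that corresponds to rectangular sets A; the dimensions, data distribution, and number of bootstrap draws are arbitrary illustration choices, not the paper's exact construction.

```python
# Sketch: Gaussian multiplier bootstrap for the max coordinate of a
# normalized high-dimensional sum (rectangles correspond to max statistics).
# Dimensions, data distribution, and B are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 1000                              # note p > n
X = rng.standard_exponential((n, p)) - 1.0    # centered, non-Gaussian rows

T = np.max(X.sum(axis=0) / np.sqrt(n))        # max_j of n^{-1/2} sum_i X_ij

B = 2000
Xc = X - X.mean(axis=0)                       # recenter within the sample
boot = np.empty(B)
for b in range(B):
    xi = rng.standard_normal(n)               # Gaussian multipliers
    boot[b] = np.max(xi @ Xc / np.sqrt(n))

print(f"T = {T:.3f},  bootstrap 95% critical value = {np.quantile(boot, 0.95):.3f}")
```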


Random polytopes and estimation of convex bodies

Victor-Emmanuel Brunel (Yale)
E17-133

In this talk we discuss properties of random polytopes. In particular, we study the convex hull of i.i.d. random points whose law is supported on a convex body. We propose deviation and moment inequalities for this random polytope, and then discuss its optimality when it is viewed as an estimator of the support of the probability measure, which may be unknown. We also define a notion of multidimensional quantile sets for probability measures in a Euclidean space. These are convex…
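
A minimal SciPy sketch of the object under study: the convex hull of i.i.d. points drawn uniformly from the unit disk, viewed as an estimator of the (here known) support, with the missed area reported as the sample size grows. The setup is purely illustrative.

```python
# Sketch: convex hull of i.i.d. points uniform on the unit disk, viewed as
# an estimator of the (here known) support; report the area it misses.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
for n in (50, 500, 5000):
    radius = np.sqrt(rng.uniform(size=n))           # uniform on the unit disk
    angle = rng.uniform(0.0, 2.0 * np.pi, size=n)
    points = np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])
    hull = ConvexHull(points)                       # in 2-D, .volume is the area
    print(f"n = {n:5d}   hull area = {hull.volume:.4f}   missed area = {np.pi - hull.volume:.4f}")
```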


The exact k-SAT threshold for large k

Nike Sun (MSR New England and MIT Mathematics)
E62-450

We establish the random k-SAT threshold conjecture for all k exceeding an absolute constant k_0. That is, there is a single critical value α∗(k) such that a random k-SAT formula at clause-to-variable ratio α is with high probability satisfiable for α < α∗(k) and unsatisfiable for α > α∗(k). The threshold α∗(k) matches the explicit prediction derived by statistical physicists on the basis of the one-step replica symmetry breaking (1RSB) heuristic. In the talk I will describe the main obstacles in computing the threshold, and explain how they…
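
To see the threshold phenomenon empirically at toy scale (nothing like the 1RSB machinery of the talk), one can brute-force small random k-SAT instances at several clause-to-variable ratios; the parameters below are arbitrary and far from the large-n, large-k regime of the result.

```python
# Sketch: brute-force check of random k-SAT satisfiability at toy size,
# just to see the empirical transition (the theorem concerns large n and k).
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
k, n_vars, trials = 3, 10, 20

def random_formula(alpha):
    clauses = []
    for _ in range(int(alpha * n_vars)):
        variables = rng.choice(n_vars, size=k, replace=False)
        signs = rng.integers(0, 2, size=k).astype(bool)
        clauses.append(list(zip(variables, signs)))
    return clauses

def satisfiable(clauses):
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[v] == s for v, s in clause) for clause in clauses):
            return True
    return False

for alpha in (3.0, 4.0, 4.27, 5.0, 6.0):   # ~4.27 is the physics prediction for k=3
    frac = np.mean([satisfiable(random_formula(alpha)) for _ in range(trials)])
    print(f"alpha = {alpha:4.2f}   empirical P(satisfiable) = {frac:.2f}")
```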


How good is your model? Guilt-free interactive data analysis.

Moritz Hardt (IBM Almaden)
E62-450

Reliable tools for model selection and validation are indispensable in almost all applications of machine learning and statistics. Decades of theory support a widely used set of techniques, such as holdout sets, bootstrapping, and cross-validation methods. Yet, much of the theory breaks down in the now common situation where the data analyst works interactively with the data, iteratively choosing which methods to use by probing the same data many times. A good example is data science competitions in which…
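
Below is a small simulation of the failure mode motivating the talk: selecting features by repeatedly consulting the same holdout inflates holdout accuracy even when the labels are pure noise, while a fresh test set reveals chance-level performance. It illustrates the problem, not the reusable-holdout mechanism itself, and all quantities are invented for the example.

```python
# Sketch: adaptive reuse of a holdout set on pure-noise data.  Features are
# kept when their label correlation has the same sign on the training set and
# on the (reused) holdout, which overfits the holdout but not fresh data.
import numpy as np

rng = np.random.default_rng(4)
n, p = 1000, 2000
def dataset():
    X = rng.standard_normal((n, p))
    y = rng.choice([-1.0, 1.0], size=n)        # labels independent of the features
    return X, y

Xtr, ytr = dataset(); Xho, yho = dataset(); Xte, yte = dataset()

ctr = Xtr.T @ ytr / n                          # training correlations
cho = Xho.T @ yho / n                          # holdout correlations (reused!)
keep = np.sign(ctr) == np.sign(cho)            # adaptive feature selection
w = np.where(keep, np.sign(ctr), 0.0)          # naive linear "classifier"

def acc(X, y):
    return np.mean(np.sign(X @ w) == y)

print(f"kept {keep.sum()} of {p} pure-noise features")
print(f"accuracy on the reused holdout: {acc(Xho, yho):.2f}")
print(f"accuracy on a fresh test set:   {acc(Xte, yte):.2f}")
```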


From Bandits to Ethical Clinical Trials. Optimal Sample Size for Multi-Phase Problems.

Vianney Perchet (Université Paris Diderot)
E62-450

In the first part of this talk, I will present recent results on the sequential allocation problem known as the “multi-armed bandit”. Given several i.i.d. processes, the objective is to sample them sequentially (and thus receive a sequence of random rewards) in order to maximize the expected cumulative reward. This framework simultaneously encompasses issues of estimation and optimization (the so-called “exploration vs. exploitation” dilemma). A recent example of an application is ad placement on websites. In the second part, I…
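
For reference, a standard UCB-style index policy on Bernoulli arms illustrates the exploration-vs-exploitation trade-off mentioned above; this is a textbook sketch, not the batched or clinical-trial variants discussed in the talk, and the arm means are invented for the example.

```python
# Sketch: a standard UCB-style index policy on Bernoulli arms, illustrating
# the exploration-vs-exploitation trade-off (not the batched / clinical-trial
# variants of the talk).  Arm means are invented for the example.
import numpy as np

rng = np.random.default_rng(5)
means = np.array([0.3, 0.5, 0.7])          # unknown to the learner
K, T = len(means), 10_000

counts = np.zeros(K)
sums = np.zeros(K)
regret = 0.0
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                         # play each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < means[arm])
    counts[arm] += 1
    sums[arm] += reward
    regret += means.max() - means[arm]

print(f"pulls per arm: {counts.astype(int)},  cumulative pseudo-regret: {regret:.1f}")
```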


Tensor Prediction, Rademacher Complexity and Random 3-XOR

Ankur Moitra (MIT CSAIL)
E62-450

Here we study the tensor prediction problem, where the goal is to accurately predict the entries of a low rank, third-order tensor (with noise) given as few observations as possible. We give algorithms based on the sixth level of the sum-of-squares hierarchy that work with roughly m = n^{3/2} observations, and we complement our result by showing that any attempt to solve tensor prediction with fewer observations through the sum-of-squares hierarchy would run in moderately exponential time. In contrast, information theoretically roughly…
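
The observation model itself can be written down in a few lines (the sum-of-squares algorithm is far beyond a snippet): a noisy rank-r third-order tensor with roughly m = n^{3/2} revealed entries. Sizes and noise level below are arbitrary.

```python
# Sketch of the observation model only (the sum-of-squares algorithm is far
# beyond a snippet): a noisy rank-r third-order tensor with roughly
# m = n**1.5 revealed entries.  Sizes and noise level are arbitrary.
import numpy as np

rng = np.random.default_rng(6)
n, r, sigma = 50, 3, 0.1
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)           # rank-r tensor, n x n x n

m = int(n ** 1.5)                                  # number of observed entries
idx = rng.integers(0, n, size=(m, 3))              # observed positions (with repeats)
obs = T[idx[:, 0], idx[:, 1], idx[:, 2]] + sigma * rng.standard_normal(m)
print(f"total entries: {n**3},  observed (noisy): {m}")
```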


Nonparametric Graph Estimation

Han Liu (Princeton)
E62-450

Undirected graphical models have proven to be a useful abstraction in statistics and machine learning. The starting point is the graph of a distribution. While the graph is often assumed to be given, we are interested in estimating the graph from data. In this talk we present new nonparametric and semiparametric methods for graph estimation. The performance of these methods is illustrated and compared on several real and simulated examples.
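
One simple semiparametric recipe in this spirit, sketched below with scikit-learn, maps each margin to normal scores via ranks and then fits a graphical lasso, reading the graph off the sparsity pattern of the estimated precision matrix. It illustrates the general idea only and is not necessarily one of the estimators presented in the talk.

```python
# Sketch: a rank-based (nonparanormal-style) transform followed by a
# graphical lasso; the true graph here is a chain, encoded in a tridiagonal
# precision matrix.  Illustration only.
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)
n, d = 400, 6
Omega = np.eye(d) + np.diag(np.full(d - 1, 0.4), 1) + np.diag(np.full(d - 1, 0.4), -1)
Z = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Omega), size=n)
X = np.exp(Z)                                       # non-Gaussian margins

U = np.apply_along_axis(rankdata, 0, X) / (n + 1)   # ranks mapped into (0, 1)
Xn = norm.ppf(U)                                    # normal scores

model = GraphicalLasso(alpha=0.05).fit(Xn)
edges = (np.abs(model.precision_) > 1e-4) & ~np.eye(d, dtype=bool)
print("estimated adjacency matrix:\n", edges.astype(int))
```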


Measuring Sample Quality with Stein’s Method

Lester Mackey (Stanford)
E62-450

To carry out posterior inference on datasets of unprecedented sizes, practitioners are turning to biased MCMC procedures that trade off asymptotic exactness for computational efficiency. The reasoning is sound: a reduction in variance due to more rapid sampling can outweigh the bias introduced. However, the inexactness creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic bias. To address these challenges, we introduce a new computable quality…
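
As a flavor of Stein's-method-based quality measures, the sketch below computes a kernel Stein discrepancy for a N(0,1) target with a Gaussian kernel and compares an exact sample with a systematically shifted one. This particular kernelized form is chosen for brevity and is not necessarily the discrepancy constructed in the talk.

```python
# Sketch: a kernel Stein discrepancy for a N(0,1) target with a Gaussian
# kernel, comparing an exact sample with a systematically shifted one.
# Illustration only; not necessarily the discrepancy constructed in the talk.
import numpy as np

def ksd_squared(x, h=1.0):
    """V-statistic estimate of KSD^2 for the N(0,1) target, RBF kernel."""
    d = x[:, None] - x[None, :]                    # pairwise differences x_i - x_j
    kern = np.exp(-d**2 / (2.0 * h**2))
    score = -x                                     # d/dx log density of N(0,1)
    k0 = (score[:, None] * score[None, :] * kern
          + score[:, None] * (d / h**2) * kern     # s(x_i) * d/dy k
          + score[None, :] * (-d / h**2) * kern    # s(x_j) * d/dx k
          + (1.0 / h**2 - d**2 / h**4) * kern)     # d^2/dxdy k
    return k0.mean()

rng = np.random.default_rng(8)
m = 1000
exact = rng.standard_normal(m)
biased = rng.standard_normal(m) + 0.3              # sampler with systematic bias
print(f"KSD^2, exact sample:  {ksd_squared(exact):.4f}")
print(f"KSD^2, biased sample: {ksd_squared(biased):.4f}")
```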


An Extended Frank-Wolfe Method with Application to Low-Rank Matrix Completion

Rob Freund (MIT Sloan)
32-124

We present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region. We present computational guarantees for the method that trade off efficiency in computing near-optimal solutions with upper bounds on the dimension of minimal faces of iterates. We apply our method to the low-rank matrix completion problem, where low-dimensional faces correspond to low-rank solutions. We present computational results for large-scale low-rank matrix completion problems that demonstrate significant speed-ups in…
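
For context, the classical (unextended) Frank-Wolfe method for least-squares matrix completion over a nuclear-norm ball looks as follows: each linear-minimization step adds a rank-one matrix built from the top singular pair of the gradient. The sketch below is that baseline, with sizes chosen arbitrarily and the norm radius taken from the known ground truth purely for illustration; the extension described in the talk adds further machinery not shown here.

```python
# Sketch: classical Frank-Wolfe (not the extended method of the talk) for
# least-squares matrix completion over a nuclear-norm ball.  Each step adds
# a rank-one matrix built from the top singular pair of the gradient.
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(9)
n1, n2, r = 60, 50, 3
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # low-rank truth
mask = rng.random((n1, n2)) < 0.3                                  # observed entries
delta = np.linalg.norm(M, 'nuc')    # ball radius; uses the known truth for illustration

Z = np.zeros((n1, n2))
for t in range(300):
    G = mask * (Z - M)              # gradient of 0.5 * ||P_Omega(Z - M)||_F^2
    u, s, vt = svds(G, k=1)         # top singular pair of the gradient
    S = -delta * np.outer(u[:, 0], vt[0])   # linear minimization over the nuclear ball
    gamma = 2.0 / (t + 2.0)         # standard Frank-Wolfe step size
    Z = (1 - gamma) * Z + gamma * S

rel_err = np.linalg.norm(mask * (Z - M)) / np.linalg.norm(mask * M)
print(f"relative error on observed entries: {rel_err:.3f}")
```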


