
High Dimensional Covariance Matrix Estimation and Factor Models

Yuan Liao (University of Maryland)
E62-587

Large covariance matrix estimation is crucial for high-dimensional statistical inference, and has also played a central role in factor analysis. Applications include financial risk analysis, climate data, genomic data, and PCA. Commonly used approaches to estimating large covariance matrices include shrinkage and sparse modeling. This talk will present new theoretical results on estimating large (inverse) covariance matrices under large-N, large-T asymptotics, with a focus on the role they play in statistical inference for large panel data…
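A minimal sketch of the two regularization ideas mentioned above (shrinkage toward a simple target, and sparsity via thresholding), not the estimators analyzed in the talk; the data, dimensions, and tuning constants below are illustrative assumptions.

```python
# Minimal sketch (not the speaker's estimators): two common ways to regularize
# a sample covariance matrix when the dimension N is comparable to T.
import numpy as np

def sample_cov(X):
    """Sample covariance of a T x N data matrix (observations in rows)."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / X.shape[0]

def shrink_to_identity(S, alpha):
    """Linear shrinkage toward a scaled identity target."""
    N = S.shape[0]
    target = np.trace(S) / N * np.eye(N)
    return (1 - alpha) * S + alpha * target

def soft_threshold_offdiag(S, tau):
    """Sparse estimate: soft-threshold the off-diagonal entries of S."""
    out = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
    np.fill_diagonal(out, np.diag(S))
    return out

rng = np.random.default_rng(0)
T, N = 200, 100                      # T observations, N variables (N/T not small)
X = rng.standard_normal((T, N))      # true covariance is the identity
S = sample_cov(X)
print(np.linalg.cond(S))                          # ill-conditioned raw estimate
print(np.linalg.cond(shrink_to_identity(S, 0.5))) # much better conditioned
print(np.mean(soft_threshold_offdiag(S, 0.1) != 0))  # fraction of nonzero entries
```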


Beyond Berry Esseen: Structure and Learning of Sums of Random Variables

Constantinos Daskalakis (MIT EECS)
E62-587

The celebrated Berry-Esseen theorem, and its variants, provide a useful Gaussian approximation to sums of independent random variables. In this talk, I will restrict attention to the important case of sums of integer-valued random variables, arguing that Berry-Esseen theorems fall short of characterizing their general structure. I will offer stronger finitary central limit theorems, tightly characterizing the structure of these distributions, and show their implications for learning. In particular, I will present algorithms that can learn sums…
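For context, a small numerical sketch of the setting, not the learning algorithm from the talk: the exact law of a sum of independent Bernoulli variables (a Poisson binomial) compared in total variation with a discretized Gaussian of matching mean and variance; the parameters are made up for illustration.

```python
# Illustrative sketch of the setting (not the speaker's algorithm): compare the
# exact law of a sum of independent Bernoullis with a discretized Gaussian
# matching its mean and variance, in total variation over the support.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=30)     # Bernoulli success probabilities

# Exact Poisson-binomial pmf by iterated convolution.
pmf = np.array([1.0])
for pi in p:
    pmf = np.convolve(pmf, [1 - pi, pi])

mu, var = p.sum(), (p * (1 - p)).sum()
k = np.arange(len(pmf))
# Discretized Gaussian: mass of N(mu, var) on each interval (k - 1/2, k + 1/2].
gauss = norm.cdf(k + 0.5, mu, np.sqrt(var)) - norm.cdf(k - 0.5, mu, np.sqrt(var))

tv = 0.5 * np.abs(pmf - gauss).sum()
print(f"total variation distance: {tv:.4f}")
```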


Optimal stochastic transport

Alfred Galichon (Sciences Po, Paris)
E62-587

We explore the link between the Monge-Kantorovich problem and the Skorohod embedding problem. This question arises in particular in mathematical finance, when seeking model-free bounds on option prices given that the marginal distributions of the underlying at various maturities are implied by European option prices. We provide a stochastic control approach, which we connect to several important constructions. Finally, we revisit in this light the celebrated Azéma-Yor solution of the Skorohod embedding problem. This talk is based on joint works…


Sparse Canonical Correlation Analysis: Minimaxity and Adaptivity

Harrison Huibin Zhou (Yale University)
E62-587

Canonical correlation analysis is a widely used multivariate statistical technique for exploring the relation between two sets of variables. In this talk we consider the problem of estimating the leading canonical correlation directions in high-dimensional settings. Recently, under the assumption that the leading canonical correlation directions are sparse, various procedures have been proposed for many high-dimensional applications involving massive data sets. However, little theoretical justification has been available in the literature. In this talk, we establish rate-optimal…
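A rough sketch of the idea under assumptions of my own, not the rate-optimal procedure discussed in the talk: classical CCA computed from a whitened cross-covariance SVD, followed by naive hard thresholding of the leading directions to encourage sparsity.

```python
# Crude sketch (not the speakers' estimator): classical CCA via an SVD of the
# whitened cross-covariance, then hard thresholding of the leading directions.
import numpy as np

def cca_leading_directions(X, Y, ridge=1e-3):
    """Leading canonical directions of X (n x p) and Y (n x q)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + ridge * np.eye(X.shape[1])  # ridge keeps factors stable
    Syy = Yc.T @ Yc / n + ridge * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx)).T     # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Syy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
    return Wx @ U[:, 0], Wy @ Vt[0], s[0]             # directions + top correlation

def hard_threshold(v, frac=0.1):
    """Keep only the largest (in magnitude) fraction of coordinates."""
    k = max(1, int(frac * len(v)))
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((500, 50)), rng.standard_normal((500, 40))
a, b, rho = cca_leading_directions(X, Y)
a_sparse, b_sparse = hard_threshold(a), hard_threshold(b)
print(rho, np.count_nonzero(a_sparse), np.count_nonzero(b_sparse))
```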


Linear Regression with Many Included Covariates

Whitney Newey (MIT Economics)
E62-587

We consider asymptotic inference for linear regression coefficients when the number of included covariates grows as fast as the sample size. We find a limiting normal distribution with an asymptotic variance that is larger than the usual one. We also find that all of the usual versions of heteroskedasticity-consistent standard error estimators are inconsistent under these asymptotics. The problem with these standard errors is that they do not make a correct "degrees of freedom" adjustment. We propose a new heteroskedasticity…
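A hedged simulation of the phenomenon described: with p proportional to n and heteroskedastic errors, a nominal 95% interval built from the usual HC0 sandwich estimator tends to undercover. All design choices below (dimensions, error law, number of replications) are illustrative assumptions, not the paper's setup.

```python
# Sketch: empirical coverage of the standard HC0-based t-interval for one
# coefficient when the number of covariates is a sizable fraction of n.
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 400, 160, 500            # p/n = 0.4, far from the classical regime
beta = np.zeros(p)                    # true coefficient of interest is 0
cover = 0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    e = rng.standard_normal(n) * (1 + 0.5 * np.abs(X[:, 0]))   # heteroskedastic
    y = X @ beta + e
    XtX_inv = np.linalg.inv(X.T @ X)
    bhat = XtX_inv @ (X.T @ y)
    resid = y - X @ bhat
    meat = X.T @ (X * (resid ** 2)[:, None])
    V_hc0 = XtX_inv @ meat @ XtX_inv          # HC0 sandwich variance estimator
    se = np.sqrt(V_hc0[0, 0])
    cover += abs(bhat[0]) <= 1.96 * se        # does the nominal 95% interval cover 0?
print(f"empirical coverage of nominal 95% interval: {cover / reps:.3f}")
```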


Central Limit Theorems and Bootstrap in High Dimensions

Denis Chetverikov (UCLA)
E62-450

We derive central limit and bootstrap theorems for probabilities that centered high-dimensional vector sums hit rectangles and sparsely convex sets. Specifically, we derive Gaussian and bootstrap approximations for the probabilities Pr(n^{-1/2} ∑_{i=1}^{n} X_i ∈ A), where X_1, …, X_n are independent random vectors in ℝ^p and A is a rectangle or, more generally, a sparsely convex set, and show that the approximation error converges to zero even if p = p_n → ∞ and p ≫ n; in particular, p can be as large as O(e^{C n^c}) for some constants c, C > 0. The result…
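A Monte Carlo sketch of the statement, not the proof technique: the probability that the maximum coordinate of a normalized sum stays below a threshold (a rectangle event), compared with its Gaussian analogue and with a Gaussian multiplier bootstrap, in an example with p larger than n. The distributions, sizes, and threshold are illustrative assumptions.

```python
# Sketch: Gaussian and multiplier-bootstrap approximations to a rectangle
# probability for a high-dimensional normalized sum.
import numpy as np

rng = np.random.default_rng(4)
n, p, t, reps = 100, 500, 3.0, 1000

def max_of_sum(Z):
    return np.abs(Z.sum(axis=0) / np.sqrt(Z.shape[0])).max()

# "True" probability under non-Gaussian (centered exponential) coordinates.
mc = np.mean([max_of_sum(rng.exponential(1.0, (n, p)) - 1.0) <= t
              for _ in range(reps)])

# Gaussian analogue with the same (identity) covariance structure.
ga = np.mean([max_of_sum(rng.standard_normal((n, p))) <= t for _ in range(reps)])

# Gaussian multiplier bootstrap computed from a single data set X.
X = rng.exponential(1.0, (n, p)) - 1.0
Xc = X - X.mean(axis=0)
boot = np.mean([max_of_sum(rng.standard_normal(n)[:, None] * Xc) <= t
                for _ in range(reps)])

print(mc, ga, boot)
```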


Random polytopes and estimation of convex bodies

Victor-Emmanuel Brunel (Yale)
E17-133

In this talk we discuss properties of random polytopes. In particular, we study the convex hull of i.i.d. random points whose law is supported on a convex body. We give deviation and moment inequalities for this random polytope, and then discuss its optimality when it is seen as an estimator of the support of the probability measure, which may be unknown. We also define a notion of multidimensional quantile sets for probability measures in a Euclidean space. These are convex…
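A minimal sketch of the object under study, assuming a concrete example of my own (the uniform law on the unit disk, with the missed area as the loss): the convex hull of i.i.d. points used as an estimator of the support of their distribution.

```python
# Sketch: how well the convex hull of n i.i.d. uniform points on the unit disk
# recovers the disk, measured by the area it fails to cover.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)

def uniform_disk(n):
    """n i.i.d. points uniform on the unit disk (the support to be estimated)."""
    r = np.sqrt(rng.uniform(0, 1, n))
    theta = rng.uniform(0, 2 * np.pi, n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

for n in (50, 500, 5000):
    hull = ConvexHull(uniform_disk(n))
    missed = np.pi - hull.volume        # in 2D, hull.volume is the enclosed area
    print(f"n = {n:5d}   area not covered by the hull: {missed:.4f}")
```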


The exact k-SAT threshold for large k

Nike Sun (MSR New England and MIT Mathematics)
E62-450

We establish the random k-SAT threshold conjecture for all k exceeding an absolute constant k0. That is, there is a single critical value α∗(k) such that a random k-SAT formula at clause-to-variable ratio α is with high probability satisfiable for α < α∗(k) and unsatisfiable for α > α∗(k). The threshold α∗(k) matches the explicit prediction derived by statistical physicists on the basis of the one-step replica symmetry breaking (1RSB) heuristic. In the talk I will describe the main obstacles in computing the threshold, and explain how they…
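A toy illustration of the empirical phase transition, not the paper's analysis: brute-force satisfiability checks on very small random 3-SAT instances at a few clause-to-variable ratios. All sizes, ratios, and replication counts below are illustrative assumptions.

```python
# Toy sketch of the satisfiability phase transition for random 3-SAT
# (exhaustive search; only feasible for tiny numbers of variables).
import itertools
import numpy as np

rng = np.random.default_rng(6)

def random_ksat(n_vars, n_clauses, k=3):
    """Random k-SAT formula: each clause has k distinct variables with random signs."""
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.choice(n_vars, size=k, replace=False)
        signs = rng.integers(0, 2, size=k).astype(bool)
        clauses.append(list(zip(vars_, signs)))
    return clauses

def satisfiable(clauses, n_vars):
    """Exhaustive check over all 2^n_vars assignments."""
    for assignment in itertools.product([False, True], repeat=n_vars):
        if all(any(assignment[v] == s for v, s in clause) for clause in clauses):
            return True
    return False

n_vars = 12
for alpha in (3.0, 4.0, 4.5, 5.0):
    sat = np.mean([satisfiable(random_ksat(n_vars, int(alpha * n_vars)), n_vars)
                   for _ in range(20)])
    print(f"alpha = {alpha:.1f}   fraction satisfiable: {sat:.2f}")
```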


How good is your model? Guilt-free interactive data analysis.

Moritz Hardt (IBM Almaden)
E62-450

Reliable tools for model selection and validation are indispensable in almost all applications of machine learning and statistics. Decades of theory support a widely used set of techniques, such as holdout sets, bootstrapping, and cross-validation methods. Yet much of the theory breaks down in the now-common situation where the data analyst works interactively with the data, iteratively choosing which methods to use by probing the same data many times. A good example is data science competitions, in which…
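A small simulation of the danger described, under an artificial setup of my own (this is not the mechanism proposed in the talk): an analyst who queries the same holdout set once per candidate feature and keeps the best-looking ones ends up with inflated holdout accuracy even though the features are pure noise.

```python
# Sketch: adaptive reuse of a holdout set overfits it, even with noise features.
import numpy as np

rng = np.random.default_rng(7)
n, d, keep = 200, 500, 20
X_hold = rng.standard_normal((n, d))
y_hold = rng.integers(0, 2, n) * 2 - 1          # labels independent of X: pure noise

# "Interactive" analyst: one holdout query per feature, keep those whose
# agreement with the labels looks best, then combine them into a classifier.
scores = X_hold.T @ y_hold / n                  # d holdout queries
best = np.argsort(np.abs(scores))[-keep:]
clf = np.sign(X_hold[:, best] @ np.sign(scores[best]))
print("apparent holdout accuracy:", np.mean(clf == y_hold))

# Honest check on fresh data: the same rule is no better than chance.
X_new = rng.standard_normal((n, d))
y_new = rng.integers(0, 2, n) * 2 - 1
clf_new = np.sign(X_new[:, best] @ np.sign(scores[best]))
print("accuracy on fresh data:   ", np.mean(clf_new == y_new))
```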


From Bandits to Ethical Clinical Trials. Optimal Sample Size for Multi-Phases Problems.

Vianney Perchet (Université Paris Diderot)
E62-450

In the first part of this talk, I will present recent results on the sequential allocation problem known as the “multi-armed bandit”. Given several i.i.d. processes, the objective is to sample them sequentially (and thus receive a sequence of random rewards) in order to maximize the expected cumulative reward. This framework simultaneously encompasses issues of estimation and optimization (the so-called “exploration vs. exploitation” dilemma). A recent application is ad placement on websites. In the second part, I…
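For the first part, a textbook-style sketch of the exploration-vs-exploitation trade-off using a standard UCB index policy on Bernoulli arms; the arm means and horizon are illustrative, and this is not presented as the speaker's algorithm.

```python
# Sketch: UCB-style index policy on a small Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(8)
means = np.array([0.3, 0.5, 0.6, 0.45])        # unknown arm means (Bernoulli rewards)
K, T = len(means), 10_000
counts, sums = np.zeros(K), np.zeros(K)

for t in range(T):
    if t < K:
        arm = t                                 # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))               # optimism in the face of uncertainty
    reward = rng.random() < means[arm]
    counts[arm] += 1
    sums[arm] += reward

regret = T * means.max() - sums.sum()           # realized cumulative regret
print("pulls per arm:", counts.astype(int), " realized regret:", round(regret, 1))
```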


