Sampling through optimization of divergences on the space of measures

Anna Korba, ENSAE/CREST
E18-304

Abstract: Sampling from a target measure when only partial information is available (e.g. an unnormalized density, as in Bayesian inference, or true samples, as in generative modeling) is a fundamental problem in computational statistics and machine learning. The sampling problem can be cast as an optimization problem over the space of probability distributions, minimizing a well-chosen discrepancy, e.g. a divergence or distance to the target. In this talk, I will discuss several properties of sampling algorithms for some choices of discrepancies (standard ones,…
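
To make the optimization view concrete, here is a small sketch (an editorial illustration, not material from the talk) of one standard instance: the unadjusted Langevin algorithm, which can be read as a time-discretization of the Wasserstein gradient flow of the KL divergence to the target and needs only the score of an unnormalized density. The Gaussian target below is a toy assumption.

import numpy as np

def ula_sample(grad_log_p, n_steps=2000, step=0.01, n_chains=5000, dim=1, seed=0):
    # Unadjusted Langevin algorithm:
    #   x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2 * step) * N(0, I).
    # Each iteration is a forward Euler step on the Wasserstein gradient
    # flow of KL(. || p), so sampling is literally run as optimization.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_chains, dim))
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy target N(3, 1): only the score grad log p(x) = -(x - 3) is needed,
# i.e. the density may be unnormalized, as in Bayesian posteriors.
samples = ula_sample(lambda x: -(x - 3.0))
print(samples.mean(), samples.std())  # roughly 3.0 and 1.0, up to discretization bias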

A Flexible Defense Against the Winner’s Curse

Tijana Zrnic, Stanford University
E18-304

Abstract: Across science and policy, decision-makers often need to draw conclusions about the best candidate among competing alternatives. For instance, researchers may seek to infer the effectiveness of the most successful treatment or determine which demographic group benefits most from a specific treatment. Similarly, in machine learning, practitioners are often interested in the population performance of the model that empirically performs best. However, cherry-picking the best candidate leads to the winner’s curse: the observed performance for the winner is biased…
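
To see the curse numerically, here is a small simulation (an editorial illustration, not the defense proposed in the talk): pick the best of m candidates by noisy empirical scores, then compare the winner's observed score to its true mean.

import numpy as np

rng = np.random.default_rng(0)
m, n, reps = 50, 100, 2000                 # candidates, samples per candidate, trials
true_means = rng.normal(0.0, 0.1, size=m)  # candidates are nearly equally good

optimism = []
for _ in range(reps):
    # Empirical mean score for each candidate, with noise of scale 1/sqrt(n).
    scores = true_means + rng.normal(0.0, 1.0 / np.sqrt(n), size=m)
    w = np.argmax(scores)                  # cherry-pick the empirical winner
    optimism.append(scores[w] - true_means[w])

# The winner's observed score systematically overstates its true mean.
print(f"average optimism of the winner: {np.mean(optimism):.3f}")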

The Conflict Graph Design: Estimating Causal Effects Under Interference

Christopher Harshaw, Columbia University
E18-304

Abstract: From clinical trials to corporate strategy, randomized experiments are a reliable methodological tool for estimating causal effects. In recent years, there has been a growing interest in causal inference under interference, where treatment given to one unit can affect outcomes of other units. While the literature on interference has focused primarily on unbiased and consistent estimation, designing randomized network experiments to ensure tight rates of convergence is relatively under-explored. Not only are the optimal rates of estimation for different…
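
As a toy illustration of why interference matters (added here for concreteness; it is unrelated to the conflict graph design itself), suppose outcomes depend on a unit's own treatment and on the treated fraction of its neighbors. The naive difference-in-means then misses the spillover part of the total effect:

import numpy as np

rng = np.random.default_rng(0)
n = 2000
neighbors = rng.integers(0, n, size=(n, 5))   # each unit gets 5 random neighbors

direct, spill = 1.0, 0.5                      # direct effect and spillover strength
z = rng.integers(0, 2, size=n)                # Bernoulli(1/2) treatment assignment
exposure = z[neighbors].mean(axis=1)          # treated fraction among neighbors
y = direct * z + spill * exposure + rng.normal(0.0, 1.0, size=n)

# Under interference, the naive difference-in-means estimates only the direct
# effect, not the total effect of treating everyone versus treating no one.
naive = y[z == 1].mean() - y[z == 0].mean()
print(f"naive diff-in-means: {naive:.2f}   true total effect: {direct + spill:.2f}")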

Scaling Limits of Neural Networks

Boris Hanin, Princeton University
E18-304

Abstract: Neural networks are often studied analytically through scaling limits: regimes in which structural network parameters such as depth, width, and the number of training datapoints are taken to infinity, yielding simplified models of learning. I will survey several such approaches with the goal of illustrating the rich and still not fully understood space of possible behaviors when some or all of the network's structural parameters are large.

Bio: Boris Hanin is an Assistant Professor at Princeton Operations Research and Financial…
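
One concrete example of such a limit (a sketch added here, not taken from the talk): with standard 1/sqrt(fan-in) initialization, the output of a random one-hidden-layer network at a fixed input converges in distribution to a Gaussian as the width grows.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)   # one fixed input in R^10

def random_net_output(width):
    # One-hidden-layer ReLU net with 1/sqrt(fan-in) initialization scaling.
    W = rng.standard_normal((width, x.size)) / np.sqrt(x.size)
    v = rng.standard_normal(width) / np.sqrt(width)
    return v @ np.maximum(W @ x, 0.0)

# As width grows, the output distribution at a fixed input stabilizes,
# approaching the Gaussian ("neural network Gaussian process") limit.
for width in (10, 100, 1000, 10000):
    outs = np.array([random_net_output(width) for _ in range(1000)])
    print(f"width={width:6d}   mean={outs.mean():+.3f}   std={outs.std():.3f}")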

Evaluating a black-box algorithm: stability, risk, and model comparisons

Rina Foygel Barber, University of Chicago
E18-304

Abstract: When we run a complex algorithm on real data, it is standard to use a holdout set, or a cross-validation strategy, to evaluate its behavior and performance. When we do so, are we learning information about the algorithm itself, or only about the particular fitted model(s) that this particular data set produced? In this talk, we will establish fundamental hardness results on the problem of empirically evaluating properties of a black-box algorithm, such as its stability and its average…
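
The setup in question, in its simplest form (a minimal sketch added here, with ordinary least squares standing in for the black-box algorithm): cross-validation averages test error over several refits, so the estimate mixes properties of the algorithm with properties of the particular fitted models this data set produced.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ rng.standard_normal(5) + rng.normal(0.0, 0.5, size=200)

def fit(X_tr, y_tr):
    # Stand-in "black-box algorithm"; here, ordinary least squares.
    return np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]

def cv_risk(X, y, k=5):
    # K-fold cross-validation: the risk estimate averages over k different
    # fitted models, each trained on a different subset of the data.
    folds = np.array_split(rng.permutation(len(y)), k)
    errors = []
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        beta = fit(X[train], y[train])
        errors.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errors))

print(f"cross-validated risk estimate: {cv_risk(X, y):.3f}")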

Statistical Inference with Limited Memory

Ofer Shayevitz, Tel Aviv University
E18-304

Abstract: In statistical inference problems, we are typically given a limited number of samples from some underlying distribution, and we wish to estimate some property of that distribution under a given measure of risk. We are usually interested in characterizing and achieving the best possible risk as a function of the number of available samples. In doing so, it is often implicitly assumed that samples are co-located, and that communication bandwidth as well as computational power are not a bottleneck, essentially making the number…
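
A toy version of the memory constraint (an editorial illustration, not a scheme from the talk): each machine sees n coin flips but may remember only one of s states, via a saturating counter; a center then estimates the bias from the coarse final states.

import numpy as np

rng = np.random.default_rng(0)
p, n, s = 0.3, 1000, 8   # coin bias, flips per machine, memory states per machine

def finite_memory_state(p, n, s, rng):
    # Saturating counter on {0, ..., s-1}: up on heads, down on tails.
    # All n flips are compressed into a single s-valued state.
    state = s // 2
    for heads in rng.random(n) < p:
        state = min(state + 1, s - 1) if heads else max(state - 1, 0)
    return state

# The counter chain's stationary law is geometric with ratio p / (1 - p),
# so the (coarse) final state still carries information about p.
def expected_state(q):
    pi = (q / (1 - q)) ** np.arange(s)
    return pi @ np.arange(s) / pi.sum()

states = [finite_memory_state(p, n, s, rng) for _ in range(2000)]
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmin([(expected_state(q) - np.mean(states)) ** 2 for q in grid])]
print(f"true p = {p}   estimate from {s}-state machines = {p_hat:.2f}")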

Winners with Confidence: Discrete Argmin Inference with an Application to Model Selection

Jing Lei, Carnegie Mellon University
E18-304

Abstract: We study the problem of finding the index of the minimum value of a vector from noisy observations. This problem is relevant in population/policy comparison, discrete maximum likelihood, and model selection. By integrating concepts and tools from cross-validation and differential privacy, we develop a test statistic that is asymptotically normal even in high-dimensional settings, and allows for arbitrarily many ties in the population mean vector. The key technical ingredient is a central limit theorem for globally dependent data characterized…
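
To see why naive inference fails at an argmin (a simulation added here for illustration; it is not the test statistic from the talk), take all population means tied and check the coverage of an ordinary confidence interval at the empirical winner:

import numpy as np

rng = np.random.default_rng(0)
m, n, reps = 20, 100, 2000
mu = np.zeros(m)   # all population means tied: every index is a valid argmin

covered = 0
for _ in range(reps):
    X = mu + rng.standard_normal((n, m))   # n noisy observations of the m-vector
    xbar = X.mean(axis=0)
    se = X.std(axis=0, ddof=1) / np.sqrt(n)
    j = np.argmin(xbar)                    # empirical winner
    # Naive 95% interval for the winner's mean ignores the selection step.
    covered += (xbar[j] - 1.96 * se[j] <= mu[j] <= xbar[j] + 1.96 * se[j])

print(f"naive CI coverage at the argmin: {covered / reps:.2f}   (nominal 0.95)")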
