Detection Thresholds for Distribution-Free Non-Parametric Tests: The Curious Case of Dimension 8

Bhaswar B. Bhattacharya - UPenn Wharton
online

Abstract: Two of the fundamental problems in non-parametric statistical inference are goodness-of-fit and two-sample testing. These two problems have been extensively studied and several multivariate tests have been proposed over the last thirty years, many of which are based on geometric graphs. These include, among several others, the celebrated Friedman-Rafsky two-sample test based on the minimal spanning tree and the K-nearest neighbor graphs, and the Bickel-Breiman spacings tests for goodness-of-fit. These tests are asymptotically distribution-free, universally consistent, and computationally efficient…
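As a rough illustration of the graph-based idea (my own sketch, not the speaker's code): build the minimal spanning tree on the pooled sample and count edges joining the two samples. Under the null the cross-edge count is large; very few cross-edges suggest the distributions differ. The data and parameters below are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (50, 2))      # sample 1
Y = rng.normal(100.0, 1.0, (50, 2))    # sample 2, far away so the effect is obvious
Z = np.vstack([X, Y])
labels = np.array([0] * 50 + [1] * 50)

# MST on the pooled sample; the Friedman-Rafsky statistic counts
# MST edges whose endpoints come from different samples
mst = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
cross_edges = int(np.sum(labels[mst.row] != labels[mst.col]))
```

Here the two samples are so well separated that the tree crosses between them exactly once; for two samples from the same distribution the cross-edge count concentrates around its much larger null mean.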


Self-regularizing Property of Nonparametric Maximum Likelihood Estimator in Mixture Models

Yury Polyanskiy, MIT
online

Abstract: Introduced by Kiefer and Wolfowitz (1956), the nonparametric maximum likelihood estimator (NPMLE) is a widely used methodology for learning mixture models and empirical Bayes estimation. Sidestepping the non-convexity of the mixture likelihood, the NPMLE estimates the mixing distribution by maximizing the total likelihood over the space of probability measures, which can be viewed as an extreme form of over-parameterization. In this work we discover a surprising property of the NPMLE solution. Consider, for example, a Gaussian mixture model on…
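A minimal sketch of the optimization the abstract describes, under my own simplifications: a unit-variance Gaussian mixture, with the mixing measure restricted to a fixed grid of atoms. The EM fixed-point update on the mixing weights then monotonically increases the total likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])

atoms = np.linspace(-4, 4, 81)              # grid approximating the space of mixing measures
w = np.full(atoms.size, 1.0 / atoms.size)   # uniform initial mixing weights

# L[i, j] = N(x_i; atom_j, 1): likelihood of data point i under atom j
L = np.exp(-0.5 * (x[:, None] - atoms[None, :]) ** 2) / np.sqrt(2 * np.pi)

ll_start = np.log(L @ w).sum()
for _ in range(200):                        # EM fixed-point updates for the weights
    w *= (L / (L @ w)[:, None]).mean(axis=0)
ll_end = np.log(L @ w).sum()
```

In line with the talk's theme, the fitted weights tend to concentrate on a small number of atoms even though the grid offers 81.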


Faster and Simpler Algorithms for List Learning

Jerry Li, Microsoft Research
online

Abstract: The goal of list learning is to understand how to learn basic statistics of a dataset when it has been corrupted by an overwhelming fraction of outliers. More formally, one is given a set of points $S$, of which an $\alpha$-fraction $T$ are promised to be well-behaved. The goal is then to output an $O(1 / \alpha)$ sized list of candidate means, so that one of these candidates is close to the true mean of the points in $T$.…
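To make the input/output contract concrete, here is a toy baseline (my illustration, not the speaker's algorithm): with $\alpha = 0.2$, a farthest-first traversal returning an $O(1/\alpha)$-sized list already guarantees that some candidate lands near the inliers when the outliers happen to form far-away clusters.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, d = 0.2, 2
good = rng.normal(0.0, 1.0, (200, d))            # the alpha-fraction T, true mean ~ 0
centers = [np.array([100.0, 0.0]), np.array([0.0, 100.0]),
           np.array([-100.0, 0.0]), np.array([0.0, -100.0])]
outliers = np.vstack([c + rng.normal(0.0, 1.0, (200, d)) for c in centers])
S = np.vstack([good, outliers])                  # 1000 points, 80% corrupted

k = int(np.ceil(2 / alpha))                      # O(1/alpha)-sized candidate list
cands = [S[0]]                                   # farthest-first traversal
for _ in range(k - 1):
    dists = np.min([np.linalg.norm(S - c, axis=1) for c in cands], axis=0)
    cands.append(S[np.argmax(dists)])
cands = np.array(cands)
```

A real list-learning algorithm must of course handle adversarial outliers with no such cluster structure; this only illustrates what "output a short list, one member of which is close to the true mean" looks like.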


MIT Sports Summit 2021

online

The MIT Sports Lab invites you to the MIT Sports Summit 2021, a virtual event hosted on Thursday, Feb. 4th and Friday, Feb. 5th! It is an opportunity for the MIT community to interface with the Sports Lab’s affiliates and partners, sharing advances, challenges, and passions at the intersection of engineering and sports. We are featuring talks from leaders in industry and academia, as well as interactive sessions showcasing student research posters and sports tech startups. This is an invitation-only event for current MIT community…


A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Net

Rong Ge - Duke University
online

Abstract: The training of neural networks optimizes complex non-convex objective functions, yet in practice simple algorithms achieve great performance. Recent works suggest that over-parameterization could be a key ingredient in explaining this discrepancy. However, current theories cannot fully explain the role of over-parameterization. In particular, they either work in a regime where neurons don't move much, or require a large number of neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural nets. We show…
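For reference, the generic two-layer setup such results are usually stated for (my rendering; the talk's precise assumptions may differ): $m$ hidden neurons with activation $\sigma$, trained on squared loss,

```latex
f_\theta(x) = \sum_{i=1}^{m} a_i \,\sigma(\langle w_i, x \rangle),
\qquad
\min_{\theta}\; \frac{1}{n} \sum_{j=1}^{n} \big( f_\theta(x_j) - y_j \big)^2 .
```

"Mild" over-parameterization roughly means $m$ is only modestly larger than needed to represent the target, rather than the very large widths required by analyses in which neurons barely move.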


Perfect Simulation for Feynman-Kac Models using Ensemble Rejection Sampling

Arnaud Doucet - University of Oxford
online

Abstract: I will introduce Ensemble Rejection Sampling, a scheme for perfect simulation of a class of Feynman-Kac models. In particular, this scheme allows us to sample exactly from the posterior distribution of the latent states of a class of non-linear non-Gaussian state-space models and from the distribution of a class of conditioned random walks. Ensemble Rejection Sampling relies on a high-dimensional proposal distribution built using ensembles of state samples and dynamic programming. Although this algorithm can be interpreted as a…
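As background (a toy example of my choosing, not the ensemble scheme itself): ordinary rejection sampling yields exact draws whenever the target density $p$ is dominated by $M$ times a tractable proposal $q$; Ensemble Rejection Sampling lifts this primitive to whole state trajectories via ensembles and dynamic programming.

```python
import numpy as np

rng = np.random.default_rng(0)
# exact draws from a Beta(2, 2) target, p(x) = 6 x (1 - x), via a Uniform(0, 1) proposal
M = 1.5                                  # bound on p(x)/q(x), attained at x = 0.5
x = rng.uniform(0.0, 1.0, 10_000)
u = rng.uniform(0.0, 1.0, 10_000)
accept = u < 6 * x * (1 - x) / M         # accept with probability p(x) / (M q(x))
samples = x[accept]
```

About $1/M = 2/3$ of proposals are accepted here; naive bounds $M$ degrade quickly in high dimension, which is what motivates building better proposals.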


Sharp Thresholds for Random Subspaces, and Applications

Mary Wootters - Stanford University
online

Abstract: What combinatorial properties are likely to be satisfied by a random subspace over a finite field? For example, is it likely that not too many points lie in any Hamming ball? What about any cube?  We show that there is a sharp threshold on the dimension of the subspace at which the answers to these questions change from "extremely likely" to "extremely unlikely," and moreover we give a simple characterization of this threshold for different properties. Our motivation comes…
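A tiny experiment in the spirit of the question (illustrative parameters of my choosing): sample a random subspace of $\mathbb{F}_2^n$ as the row space of a random matrix and count how many of its points fall in a Hamming ball around the origin.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, k, r = 12, 5, 3
G = rng.integers(0, 2, (k, n))                     # random generator matrix over F_2
# the subspace = all F_2-linear combinations of the rows of G
words = {tuple(np.mod(np.array(c) @ G, 2)) for c in product([0, 1], repeat=k)}
in_ball = sum(1 for w in words if sum(w) <= r)     # points in the radius-r Hamming ball at 0
```

Sweeping the dimension k in such an experiment is how the sharp-threshold phenomenon shows up empirically: the property "few subspace points in the ball" flips abruptly at a critical dimension.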


Valid hypothesis testing after hierarchical clustering

Daniela Witten - University of Washington
online

Abstract: As datasets continue to grow in size, in many settings the focus of data collection has shifted away from testing pre-specified hypotheses, and towards hypothesis generation. Researchers are often interested in performing an exploratory data analysis in order to generate hypotheses, and then testing those hypotheses on the same data; I will refer to this as 'double dipping'. Unfortunately, double dipping can lead to highly inflated Type I errors. In this talk, I will consider the special case of hierarchical…
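The inflation is easy to reproduce (a deliberately naive demo of my own): cluster a single homogeneous Gaussian sample with hierarchical clustering, then t-test the difference between the two "discovered" groups. The p-value is tiny even though there is no real structure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100)              # one homogeneous sample: no true groups
cl = fcluster(linkage(x[:, None], method="ward"), t=2, criterion="maxclust")
_, p = ttest_ind(x[cl == 1], x[cl == 2])   # naive t-test on the clusters just discovered
```

The t-test's null is violated by construction, since the clustering already used the data to find the most separated split; selective-inference corrections of the kind the talk describes account for exactly this.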


Statistical Aspects of Wasserstein Distributionally Robust Optimization Estimators

Jose Blanchet - Stanford University
online

Abstract: Wasserstein-based distributional robust optimization problems are formulated as min-max games in which a statistician chooses a parameter to minimize an expected loss against an adversary (say nature) which wishes to maximize the loss by choosing an appropriate probability model within a certain non-parametric class. Recently, these formulations have been studied in the context in which the non-parametric class chosen by nature is defined as a Wasserstein-distance neighborhood around the empirical measure. It turns out that by appropriately choosing the…
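In symbols, the min-max game described above has the generic form (my notation: $\ell$ a loss, $W$ the Wasserstein distance, $\widehat{P}_n$ the empirical measure, $\delta$ the radius of nature's neighborhood):

```latex
\min_{\theta} \; \sup_{P \,:\, W(P,\, \widehat{P}_n) \le \delta} \; \mathbb{E}_{P}\!\left[ \ell(\theta; X) \right]
```

The statistical questions are then how to choose $\delta$ and how the resulting estimator behaves as $n$ grows.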



MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764