The Optimality of Coarse Menus
Please join us on Monday, December 6, 2021 at 4:00pm for the Distinguished Speaker Seminar with Dirk Bergemann (Yale University).
Abstract: The existence of a transport map from the standard Gaussian leads to succinct representations for potentially complicated measures. Inspired by results from optimal transport, we introduce the Brownian transport map that pushes forward the Wiener measure to a target measure in a finite-dimensional Euclidean space. Using tools from Itô and Malliavin calculus, we show that the map is Lipschitz in several cases of interest. Specifically, our results apply when the target measure satisfies one of the following: - More log-concave than the Gaussian, recovering…
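Background (standard definitions, not part of the talk abstract): a map T transports a measure μ to ν when ν is the pushforward T#μ, and Lipschitz transport maps matter precisely because they transfer functional inequalities from source to target. A minimal sketch in LaTeX:

% T transports \mu to \nu when \nu = T_{\#}\mu, i.e.
\[
  \nu(A) \;=\; \mu\bigl(T^{-1}(A)\bigr) \qquad \text{for every Borel set } A .
\]
% Lipschitz maps transfer functional inequalities: if T is L-Lipschitz and
% the standard Gaussian \gamma_d satisfies the Poincare inequality with
% constant 1, then \nu = T_{\#}\gamma_d satisfies
\[
  \operatorname{Var}_{\nu}(f) \;\le\; L^{2}\, \mathbb{E}_{\nu}\bigl[\lVert \nabla f \rVert^{2}\bigr],
\]
% since \operatorname{Var}_{\nu}(f) = \operatorname{Var}_{\gamma_d}(f \circ T)
% and \lVert \nabla (f \circ T) \rVert \le L \, \lVert (\nabla f) \circ T \rVert.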
Abstract: In this talk, we discuss a new polynomial-time algorithmic framework for inference problems, based on the celebrated Lenstra-Lenstra-Lovász lattice basis reduction algorithm. Perhaps surprisingly, this framework successfully bypasses multiple suggested notions of "computational hardness for inference" in various noiseless settings. Such settings include 1) sparse regression, where the Overlap Gap Property holds and low-degree methods fail, 2) phase retrieval, where Approximate Message Passing fails, and 3) Gaussian clustering, where the SoS…
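For readers unfamiliar with the lattice basis reduction step named above, the following is a minimal textbook LLL sketch in Python (it recomputes Gram-Schmidt after every change for clarity, not efficiency); it illustrates basis reduction only and is not the inference framework from the talk.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization; returns B* (floats) and mu coefficients."""
    n = len(B)
    Bs, mu = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = [float(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [x - mu[i][j] * y for x, y in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=0.75):
    """Textbook LLL reduction; B is a list of integer basis vectors (rows)."""
    B = [list(b) for b in B]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size reduction: make |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        # Lovasz condition: advance if satisfied, otherwise swap and back up.
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Example: the reduced basis consists of short, nearly orthogonal vectors.
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))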
Abstract: The prediction accuracy of machine learning methods is steadily increasing, but the calibration of their uncertainty predictions poses a significant challenge. Numerous works focus on obtaining well-calibrated predictive models, but less is known about reliably assessing model calibration. This limits our ability to know when algorithms for improving calibration have a real effect, and when their improvements are merely artifacts due to random noise in finite datasets. In this work, we consider the problem of detecting mis-calibration of…
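As a concrete example of the kind of calibration assessment discussed above, here is a minimal sketch of the widely used binned expected calibration error (an illustration, not the talk's method); its finite-sample fluctuation on perfectly calibrated data is exactly the noise that makes miscalibration hard to detect.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| over bins.

    probs  : predicted probabilities for the positive class, shape (n,)
    labels : binary outcomes in {0, 1}, shape (n,)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs <= hi) if lo == 0 else (probs > lo) & (probs <= hi)
        if mask.sum() == 0:
            continue
        confidence = probs[mask].mean()
        accuracy = labels[mask].mean()
        ece += mask.mean() * abs(accuracy - confidence)
    return ece

# Even a perfectly calibrated model yields a nonzero ECE in finite samples,
# which is why detecting miscalibration needs more than a point estimate.
rng = np.random.default_rng(0)
p = rng.uniform(size=5000)
y = rng.binomial(1, p)  # outcomes drawn with exactly the stated probabilities
print(expected_calibration_error(p, y))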
Please join us on Friday, March 4 at 2:30pm for the IDSS Distinguished Speaker Seminar with Guido Imbens (Stanford University).
Abstract: Many empirical questions concern target parameters selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner's curse, where conventional estimates are biased and conventional confidence intervals are unreliable. This paper develops optimal confidence intervals and median-unbiased estimators that are valid conditional on the target selected and so overcome this winner's curse. If…
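The winner's curse itself is easy to simulate; the following is a small illustrative sketch with assumed toy numbers, not the paper's procedure. Even when every policy has zero true effect, the conventional estimate for the best-performing one is biased upward.

import numpy as np

rng = np.random.default_rng(1)
true_means = np.zeros(10)          # ten policies, all equally ineffective
n_per_arm, n_trials = 50, 20000

bias = 0.0
for _ in range(n_trials):
    # Sample mean of each arm's outcomes, with unit outcome variance.
    sample_means = rng.normal(true_means, 1.0 / np.sqrt(n_per_arm))
    winner = np.argmax(sample_means)
    # Naive estimate of the winner's effect vs. its true effect (zero here).
    bias += sample_means[winner] - true_means[winner]

print(bias / n_trials)  # clearly positive: the conventional estimate overshoots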
Abstract: Variational approximations provide an attractive computational alternative to MCMC-based strategies for approximating the posterior distribution in Bayesian inference. Despite their popularity in applications, supporting theoretical guarantees are limited, particularly in high-dimensional settings. In the first part of the talk, we will study Bayesian inference in the context of a linear model with product priors, and derive sufficient conditions for the correctness (to leading order) of the naive mean-field approximation. To this end, we will utilize recent advances in the…
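As background on the naive mean-field approximation mentioned above, here is a minimal coordinate-ascent sketch (the standard construction for a Gaussian linear model with independent Gaussian priors, not the talk's analysis); the posterior over beta is approximated by a product of Gaussians q(beta_i) = N(m_i, s2_i).

import numpy as np

def mean_field_cavi(X, y, sigma2=1.0, tau2=1.0, n_iters=100):
    """Naive mean-field VB for y ~ N(X beta, sigma2 I), beta_i ~ N(0, tau2).

    Cycles through the exact coordinate-ascent updates for the product
    approximation q(beta) = prod_i N(m_i, s2_i).
    """
    n, p = X.shape
    m = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    s2 = 1.0 / (col_norm2 / sigma2 + 1.0 / tau2)   # fixed under this model
    for _ in range(n_iters):
        for i in range(p):
            residual = y - X @ m + X[:, i] * m[i]   # leave coordinate i out
            m[i] = s2[i] * (X[:, i] @ residual) / sigma2
    return m, s2

# Toy usage with simulated data: the variational means track the truth.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
beta = rng.normal(size=5)
y = X @ beta + rng.normal(size=200)
m, s2 = mean_field_cavi(X, y)
print(np.round(m, 2), np.round(beta, 2))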
SDSCon 2022 is the fourth celebration of the statistics and data science community at MIT and beyond, organized by MIT’s Statistics and Data Science Center (SDSC).
Please join us on Monday, April 4 at 4:00pm for the IDSS Distinguished Speaker Seminar with Mikhail Belkin (UC San Diego).
Abstract: We study the problem of certification: given queries to an n-variable Boolean function f with certificate complexity k and an input x, output a size-k certificate for f's value on x. This abstractly models a problem of interest in explainable machine learning, where we think of f as a black-box model whose predictions we seek to explain. For monotone functions, classic algorithms of Valiant and Angluin accomplish this task with n queries to f. Our main result is…
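For context on the n-query baseline cited above, here is a minimal sketch of the classic idea for monotone functions (textbook material, not the talk's main result): tentatively flip each coordinate of x that agrees with f's value (1-coordinates when f(x) = 1, 0-coordinates when f(x) = 0), keep any flip that leaves the value unchanged, and return the surviving coordinates as a certificate.

def monotone_certificate(f, x):
    """Find a certificate for a monotone Boolean function's value on x.

    f : callable on tuples of 0/1 bits; assumed monotone (flipping a 0 to 1
        never decreases f). Uses one query per coordinate plus one initial
        query. Returns a dict {index: bit} that fixes f's value.
    """
    x = list(x)
    value = f(tuple(x))
    keep_bit = 1 if value == 1 else 0
    flip_bit = 1 - keep_bit
    for i in range(len(x)):
        if x[i] == keep_bit:
            x[i] = flip_bit
            if f(tuple(x)) != value:
                x[i] = keep_bit        # this coordinate is needed; restore it
    # Surviving keep_bit coordinates certify the value: by monotonicity, any
    # input agreeing with x on these indices evaluates to the same value.
    return {i: keep_bit for i, b in enumerate(x) if b == keep_bit}

# Example: f is an OR of AND pairs; the certificate is one satisfied pair.
f = lambda z: int((z[0] and z[1]) or (z[2] and z[3]))
print(monotone_certificate(f, (1, 1, 1, 1)))  # e.g. {2: 1, 3: 1}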