Statistical Inference Under Information Constraints: User-Level Approaches

Jayadev Acharya, Cornell University
E18-304

Abstract: In this talk, we will present highlights from some of the work we have been doing in distributed inference under information constraints, such as privacy and communication. We consider basic tasks such as learning and testing of discrete as well as high dimensional distributions, when the samples are distributed across users who can then only send an information-constrained message about their sample. Of key interest to us has been the role of the various types of communication protocols (e.g., non-interactive protocols…
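
A toy instance of inference under local information constraints (an illustrative sketch, not from the talk): each user holds one Bernoulli sample and may transmit only a single locally private bit via classical randomized response, and the analyst debiases the aggregated bits. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Each of n users holds one Bernoulli(p) sample and may only send a single
# eps-locally-private bit (randomized response): report truthfully with
# probability e^eps / (1 + e^eps), otherwise flip.
p, eps, n = 0.3, 1.0, 200_000
q = np.exp(eps) / (1 + np.exp(eps))  # P(report truthfully)

x = rng.random(n) < p          # private samples
keep = rng.random(n) < q
y = np.where(keep, x, ~x)      # the only message each user transmits

# Debias the aggregated reports: E[y] = (1 - q) + (2q - 1) * p
p_hat = (y.mean() - (1 - q)) / (2 * q - 1)
```

The debiasing step inverts the known response channel; the resulting variance inflation by roughly 1/(2q-1)^2 is the statistical price of the communication/privacy constraint.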

Fine-Grained Extensions of the Low-Degree Testing Framework

Alex Wein (University of California, Davis)
E18-304

Abstract: The low-degree polynomial framework has emerged as a versatile tool for probing the computational complexity of statistical problems by studying the power and limitations of a restricted class of algorithms: low-degree polynomials. Focusing on the setting of hypothesis testing, I will discuss some extensions of this method that allow us to tackle finer-grained questions than the standard approach. First, for the task of detecting a planted clique in a random graph, we ask not merely when this can be…
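
For intuition (an illustrative sketch, not the speaker's construction), a classic degree-3 statistic in this framework is the signed-triangle count for planted clique, which already separates the planted and null models when the clique is large enough:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 30

def signed_triangle_stat(A):
    # Degree-3 polynomial of the edge variables: sum over vertex triples
    # of the product of the centered adjacency entries.
    B = A - 0.5
    np.fill_diagonal(B, 0.0)
    return np.trace(B @ B @ B) / 6.0

# Null model: Erdos-Renyi G(n, 1/2)
U = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(U, 1)
A = A + A.T

# Planted model: same graph with a k-clique forced on the first k vertices
A_planted = A.copy()
A_planted[:k, :k] = 1.0
np.fill_diagonal(A_planted, 0.0)

t_null, t_planted = signed_triangle_stat(A), signed_triangle_stat(A_planted)
```

Under the null the statistic has mean 0 and standard deviation on the order of sqrt(C(n,3))/8, while the clique shifts it by about C(k,3)/8, recovering the familiar k on the order of sqrt(n) threshold for constant-degree polynomials.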

Source Condition Double Robust Inference on Functionals of Inverse Problems

Vasilis Syrgkanis (Stanford University)
E18-304

Abstract: We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems. Any such parameter admits a doubly robust representation that depends on the solution to a dual linear inverse problem, where the dual solution can be thought of as a generalization of the inverse propensity function. We provide the first source condition double robust inference method that ensures asymptotic normality around the parameter of interest as long as either the primal or the dual inverse problem…
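
The "generalized inverse propensity" viewpoint can be grounded in the simplest special case: the mean of an outcome missing at random, where the doubly robust (AIPW) estimator stays consistent if either the outcome model or the propensity is correct. The sketch below (an illustration, not the speaker's method) deliberately misspecifies the outcome model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Missing-at-random setup: covariate X, outcome Y, observation indicator T
# with propensity e(X) = P(T = 1 | X). Target parameter: E[Y] = 1.
X = rng.normal(size=n)
Y = 1.0 + 2.0 * X + rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-X))       # true propensity
T = rng.random(n) < e

# Doubly robust (AIPW) estimate with a deliberately wrong outcome model
# mu(X) = 0; consistency survives because the propensity is correct.
mu = np.zeros(n)
dr_estimate = np.mean(mu + T * (Y - mu) / e)
```

Swapping the roles (correct outcome model, wrong propensity) gives the same conclusion, which is the "double" in doubly robust.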

Estimation and inference for the error-in-operator model

Vladimir Spokoiny (Humboldt University of Berlin)
E18-304

Abstract: We consider the Error-in-Operator (EiO) problem of recovering the source signal x from the noisy observation Y given by the equation Y = A x + ε when the operator A is not precisely known and only a pilot estimate \hat{A} is available. The study is motivated by Hoffmann & Reiss (2008), Trabs (2018), and by recent results on high-dimensional regression with random design; see, e.g., Tsigler and Bartlett (2020) (Benign overfitting in ridge regression; arXiv:2009.14286) and Cheng and Montanari (2022) (Dimension-free ridge regression; arXiv:2210.08571), among many others. Examples of EiO include regression with error-in-variables and instrumental regression, stochastic diffusion, Markov time series, interacting particle…
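
A minimal simulation of the observation model (an illustrative sketch; the naive plug-in ridge estimator below is a baseline, not the estimator from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 40

# Error-in-Operator model: Y = A x + eps, with A known only through a
# noisy pilot estimate A_hat.
A = rng.normal(size=(n, p)) / np.sqrt(n)
x = rng.normal(size=p)
Y = A @ x + 0.1 * rng.normal(size=n)
A_hat = A + 0.01 * rng.normal(size=(n, p))  # pilot estimate of the operator

# Naive plug-in ridge estimate that treats A_hat as if it were exact
lam = 0.01
x_hat = np.linalg.solve(A_hat.T @ A_hat + lam * np.eye(p), A_hat.T @ Y)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Plugging in \hat{A} as if it were exact leaves a residual error driven by the operator noise in addition to the observation noise, which is the regime the EiO analysis quantifies.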

Sharper Risk Bounds for Statistical Aggregation

Nikita Zhivotovskiy (University of California, Berkeley)
E18-304

Abstract: In this talk, we revisit classical results in the theory of statistical aggregation, focusing on the transition from global complexity to a more manageable local one. The goal of aggregation is to combine several base predictors to achieve a prediction nearly as accurate as the best one, without assumptions on the class structure or target. Though aggregation has been studied in both the sequential and the statistical settings, both lines of work traditionally use the same "global" complexity measure. We highlight the lesser-known PAC-Bayes localization enabling us…
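
The model-selection flavor of aggregation can be made concrete with the classical exponential-weights aggregate (an illustrative sketch, not the localization argument from the talk); all data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Base predictors: noisy versions of the target y, with increasing noise
y = np.sin(np.linspace(0.0, 3.0, n))
preds = [y + rng.normal(scale=s, size=n) for s in (0.1, 0.3, 0.5, 0.8, 1.2)]

# Exponential weights: weight each predictor by exp(-beta * empirical risk)
beta = 10.0
risks = np.array([np.mean((f - y) ** 2) for f in preds])
w = np.exp(-beta * (risks - risks.min()))
w /= w.sum()
f_agg = sum(wi * fi for wi, fi in zip(w, preds))

agg_risk = np.mean((f_agg - y) ** 2)
```

The aggregate concentrates its weight on the best base predictor and attains risk close to the best one, which is the guarantee that localized analyses sharpen.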

A proof of the RM code capacity conjecture

Emmanuel Abbé (EPFL)
E18-304

Abstract: In 1948, Shannon used a probabilistic argument to prove the existence of codes achieving channel capacity. In 1954, Muller and Reed introduced a simple deterministic code construction, conjectured shortly after to achieve channel capacity. Major progress was made towards establishing this conjecture over the last decades, with various branches of discrete mathematics involved. In particular, the special case of the erasure channel was settled in 2015 by Kudekar et al., relying on the Bourgain-Kalai sharp threshold theorem for symmetric monotone…
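
For the erasure channel specifically, the mechanics can be seen on a toy example (an illustration, not from the talk): the first-order Reed-Muller code RM(1,3) is an [8, 4, 4] binary code, so any 3 erasures determine the codeword uniquely.

```python
import itertools
import numpy as np

# Generator of RM(1,3): evaluations of the functions 1, x1, x2, x3 at all
# 8 points of F_2^3 (an [8, 4, 4] binary code with 16 codewords).
pts = list(itertools.product([0, 1], repeat=3))
G = np.array([[1] * 8] + [[pt[i] for pt in pts] for i in range(3)])

codewords = [tuple(np.mod(np.dot(m, G), 2))
             for m in itertools.product([0, 1], repeat=4)]

def decode_erasures(received):
    """List all codewords consistent with the unerased symbols (None = erased)."""
    return [c for c in codewords
            if all(r is None or r == ci for r, ci in zip(received, c))]

# Erase 3 of the 8 symbols; minimum distance 4 guarantees unique recovery.
c = codewords[5]
rx = tuple(None if i in (0, 3, 6) else ci for i, ci in enumerate(c))
```

The capacity question is about how many erasures such codes tolerate as the block length grows, far beyond the worst-case distance bound used in this toy decoder.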

The Full Landscape of Robust Mean Testing: Sharp Separations between Oblivious and Adaptive Contamination

Sam Hopkins (MIT)
E18-304

Abstract: We consider the question of Gaussian mean testing, a fundamental task in high-dimensional distribution testing and signal processing, subject to adversarial corruptions of the samples. We focus on the relative power of different adversaries, and show that, in contrast to the common wisdom in robust statistics, there exists a strict separation between adaptive adversaries (strong contamination) and oblivious ones (weak contamination) for this task. We design both new testing algorithms and new lower bounds to show that robust testing…

Hypothesis testing with information asymmetry

Stephen Bates (MIT)
E18-304

Abstract: Contemporary scientific research is a distributed, collaborative endeavor, carried out by teams of researchers, regulatory institutions, funding agencies, commercial partners, and scientific bodies, all interacting with each other and facing different incentives. To maintain scientific rigor, statistical methods should acknowledge this state of affairs. To this end, we study hypothesis testing when there is an agent (e.g., a researcher or a pharmaceutical company) with a private prior about an unknown parameter and a principal (e.g., a policymaker or regulator)…

Project and Forget: Solving Large-Scale Metric Constrained Problems

Anna Gilbert (Yale University)
E18-304

Abstract: Many important machine learning problems can be formulated as highly constrained convex optimization problems. One important example is metric constrained problems. In this work, we show that standard optimization techniques cannot be used to solve metric constrained problems. To solve such problems, we provide a general active set framework, called Project and Forget, and several variants thereof that use Bregman projections. Project and Forget is a general purpose method that can be used to solve highly constrained convex…
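
Project and Forget maintains active constraint sets and Bregman projections; as a point of contrast (a toy baseline, not the paper's algorithm), even a naive cyclic Euclidean correction over the O(n^3) triangle-inequality half-spaces illustrates what "metric constrained" means and why the constraint count is the bottleneck:

```python
import numpy as np
from itertools import permutations

def repair_metric(D, n_sweeps=50):
    """Cyclically correct triangle-inequality violations
    D[i,k] <= D[i,j] + D[j,k] in a symmetric dissimilarity matrix."""
    D = D.copy()
    n = D.shape[0]
    for _ in range(n_sweeps):
        for i, j, k in permutations(range(n), 3):
            v = D[i, k] - D[i, j] - D[j, k]
            if v > 0:
                # Move to the boundary of the violated half-space,
                # spreading the correction over the three entries.
                d = v / 3.0
                D[i, k] -= d; D[k, i] -= d
                D[i, j] += d; D[j, i] += d
                D[j, k] += d; D[k, j] += d
    return D

D0 = np.array([[0., 1., 1., 5.],
               [1., 0., 1., 1.],
               [1., 1., 0., 1.],
               [5., 1., 1., 0.]])
D1 = repair_metric(D0)
max_violation = max(D1[i, k] - D1[i, j] - D1[j, k]
                    for i, j, k in permutations(range(4), 3))
```

Enumerating every triple each sweep is exactly what becomes infeasible at scale; the "forget" step in the paper's method exists to avoid revisiting constraints that stay inactive.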

Analysis of Flow-based Generative Models

Jianfeng Lu (Duke University)
E18-304

Abstract: In this talk, we will discuss recent progress on mathematical analysis of flow-based generative models, a highly successful approach for learning a probability distribution from data and generating further samples. We will talk about some recent results in convergence analysis of diffusion models and related flow-based methods. In particular, we established convergence of score-based diffusion models applied to any distribution with bounded second moment, relying only on an $L^2$-accurate score estimate, with polynomial dependence on all…
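
The role of an accurate score can be illustrated in the simplest possible setting (a sketch, not the talk's analysis): unadjusted Langevin dynamics, a close relative of the reverse-time samplers analyzed, driven by the exact score of a Gaussian target, so the score-accuracy assumption holds trivially.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target: N(mu, 1); its score d/dx log density is -(x - mu), known exactly.
mu = 2.0

def score(x):
    return mu - x

# Unadjusted Langevin dynamics driven by the score
eta, n_steps, n_chains = 0.01, 2000, 4000
x = rng.normal(size=n_chains) - 5.0   # start far from the target
for _ in range(n_steps):
    x = x + eta * score(x) + np.sqrt(2.0 * eta) * rng.normal(size=n_chains)

emp_mean, emp_std = x.mean(), x.std()
```

With an inexact score, the sampler converges to a biased law; the results described in the talk bound that bias in terms of the $L^2$ score error, with polynomial dependence on the problem parameters.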


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764