Saddle-to-saddle dynamics in diagonal linear networks

Nicolas Flammarion (EPFL)
E18-304

Abstract: When training neural networks with gradient methods and small weight initialisation, peculiar learning curves are observed: the training initially shows minimal progress, which is then followed by a sudden transition where a new "feature" is rapidly learned. This pattern is commonly known as incremental learning. In this talk, I will demonstrate that we can comprehensively understand this phenomenon within the context of a simplified network architecture. In this setting, we can establish that the gradient flow trajectory transitions from one saddle point of the training…


Advances in Distribution Compression

Lester Mackey (Microsoft Research)
E18-304

Abstract: This talk will introduce three new tools for summarizing a probability distribution more effectively than independent sampling or standard Markov chain Monte Carlo thinning: Given an initial n-point summary (for example, from independent sampling or a Markov chain), kernel thinning finds a subset of only square-root-n points with comparable worst-case integration error across a reproducing kernel Hilbert space. If the initial summary suffers from biases due to off-target sampling, tempering, or burn-in, Stein thinning simultaneously compresses the…
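To illustrate the compression goal, the sketch below greedily selects a square-root-n coreset whose kernel mean embedding tracks the full sample's. This greedy rule is kernel *herding*, a simpler relative of the kernel thinning algorithm from the talk; the point count, kernel, and bandwidth are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, bw=1.0):
    # Gaussian kernel matrix between two sets of points (rows).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

# Compress n points to a coreset of sqrt(n) points via greedy kernel herding.
n = 400
pts = rng.standard_normal((n, 2))
K = rbf(pts, pts)
embed = K.mean(axis=1)             # full-sample embedding at each candidate

m = int(np.sqrt(n))                # coreset size sqrt(n) = 20
chosen, running = [], np.zeros(n)
for t in range(m):
    idx = int(np.argmax(embed - running / (t + 1)))
    chosen.append(idx)
    running += K[:, idx]

# Squared maximum mean discrepancy between the coreset and the full sample.
sel = np.array(chosen)
mmd2 = float(K.mean() - 2 * K[:, sel].mean() + K[np.ix_(sel, sel)].mean())
```

A small `mmd2` means worst-case integration error over the unit ball of the RKHS is small, which is the quality measure the abstract refers to.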


Analysis of Flow-based Generative Models

Jianfeng Lu (Duke University)
E18-304

Abstract: In this talk, we will discuss recent progress on the mathematical analysis of flow-based generative models, a highly successful approach for learning a probability distribution from data and generating further samples. We will talk about some recent results in the convergence analysis of diffusion models and related flow-based methods. In particular, we established convergence of score-based diffusion models applied to any distribution with bounded second moment, relying only on an $L^2$-accurate score estimate, with polynomial dependence on all…
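The simplest instance of score-driven sampling, sketched below for orientation: unadjusted Langevin dynamics with the *exact* score of a standard normal target, $s(x) = \frac{d}{dx}\log p(x) = -x$. In a trained diffusion model the score is replaced by an $L^2$-accurate estimate, and the talk's analysis quantifies the effect of that substitution; the step size and iteration count here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: standard normal, whose score s(x) = -x is known in closed form.
def score(x):
    return -x

step = 0.01
x = rng.uniform(-5.0, 5.0, size=5000)      # start far from the target law
for _ in range(2000):
    # Langevin step: drift along the score, plus Gaussian noise.
    x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)
```

After mixing, the particle cloud is approximately N(0, 1), despite the badly mismatched uniform initialisation.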


Project and Forget: Solving Large-Scale Metric Constrained Problems

Anna Gilbert (Yale University)
E18-304

Abstract: Many important machine learning problems can be formulated as highly constrained convex optimization problems. One important example is metric constrained problems. In this paper, we show that standard optimization techniques cannot be used to solve metric constrained problems. To solve such problems, we provide a general active set framework, called Project and Forget, and several variants thereof that use Bregman projections. Project and Forget is a general-purpose method that can be used to solve highly constrained convex…
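A toy version of the "Project" half only: repair a symmetric dissimilarity matrix by cyclically projecting onto the triangle-inequality halfspaces {D : D[i,j] − D[i,k] − D[k,j] ≤ 0}, here with plain Euclidean projections. The actual method uses Bregman projections plus an active-set "forget" rule so it scales to enormous constraint sets; none of that machinery is reproduced in this sketch.

```python
import numpy as np

def enforce_triangles(D, sweeps=50):
    # Cyclic halfspace projections onto all triangle inequalities.
    D = D.astype(float).copy()
    n = len(D)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if i == j or j == k or i == k:
                        continue
                    v = D[i, j] - D[i, k] - D[k, j]
                    if v > 0:  # inequality violated: project onto its halfspace
                        D[i, j] -= v / 3; D[j, i] = D[i, j]
                        D[i, k] += v / 3; D[k, i] = D[i, k]
                        D[k, j] += v / 3; D[j, k] = D[k, j]
    return D

D0 = np.array([[0.0, 1.0, 10.0],
               [1.0, 0.0, 1.0],
               [10.0, 1.0, 0.0]])   # 10 > 1 + 1: triangle inequality fails
D = enforce_triangles(D0)
```

Each projection nudges the three offending entries by equal amounts (the Euclidean projection onto the halfspace with normal (1, −1, −1)), and cycling through the constraints drives the worst violation to zero.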


Hypothesis testing with information asymmetry

Stephen Bates (MIT)
E18-304

Abstract: Contemporary scientific research is a distributed, collaborative endeavor, carried out by teams of researchers, regulatory institutions, funding agencies, commercial partners, and scientific bodies, all interacting with each other and facing different incentives. To maintain scientific rigor, statistical methods should acknowledge this state of affairs. To this end, we study hypothesis testing when there is an agent (e.g., a researcher or a pharmaceutical company) with a private prior about an unknown parameter and a principal (e.g., a policymaker or regulator)…


The Full Landscape of Robust Mean Testing: Sharp Separations between Oblivious and Adaptive Contamination

Sam Hopkins (MIT)
E18-304

Abstract: We consider the question of Gaussian mean testing, a fundamental task in high-dimensional distribution testing and signal processing, subject to adversarial corruptions of the samples. We focus on the relative power of different adversaries, and show that, in contrast to the common wisdom in robust statistics, there exists a strict separation between adaptive adversaries (strong contamination) and oblivious ones (weak contamination) for this task. We design both new testing algorithms and new lower bounds to show that robust testing…
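For reference, the uncorrupted baseline test the robust variants are measured against: with samples from N(μ, I_d), the statistic T = n·‖sample mean‖² is chi-squared with d degrees of freedom under H0: μ = 0, so one rejects when T exceeds roughly d + O(√d). Nothing robust is implemented below, and the dimensions and signal strength are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_test_stat(samples):
    # T = n * ||sample mean||^2, chi-squared_d under the null mu = 0.
    n = samples.shape[0]
    return n * float(np.sum(samples.mean(axis=0) ** 2))

d, n = 20, 500
threshold = d + 3 * np.sqrt(2 * d)          # mean + 3 sd of chi-squared_d

mu = np.zeros(d); mu[0] = 0.5               # alternative with ||mu|| = 0.5
T_null = mean_test_stat(rng.standard_normal((n, d)))
T_alt = mean_test_stat(rng.standard_normal((n, d)) + mu)
```

The talk's question is how much this guarantee degrades when an adversary corrupts a fraction of the samples, obliviously versus adaptively.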


A proof of the RM code capacity conjecture

Emmanuel Abbé (EPFL)
E18-304

Abstract: In 1948, Shannon used a probabilistic argument to prove the existence of codes achieving channel capacity. In 1954, Muller and Reed introduced a simple deterministic code construction, conjectured shortly after to achieve channel capacity. Major progress was made towards establishing this conjecture over the last decades, with various branches of discrete mathematics involved. In particular, the special case of the erasure channel was settled in 2015 by Kudekar et al., relying on Bourgain-Kalai's sharp threshold theorem for symmetric monotone…
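The construction itself is short enough to write down: the codewords of RM(r, m) are the evaluation tables of all multilinear Boolean polynomials of degree ≤ r on {0,1}^m, so a generator matrix has one row per monomial ∏_{i∈S} x_i with |S| ≤ r. The sketch below builds RM(1, 3) (the [8, 4, 4] extended Hamming code) and checks its minimum distance 2^(m−r) by enumeration.

```python
import itertools
import numpy as np

def rm_generator(r, m):
    # One row per monomial prod_{i in S} x_i with |S| <= r, evaluated on {0,1}^m.
    points = list(itertools.product([0, 1], repeat=m))
    rows = []
    for deg in range(r + 1):
        for S in itertools.combinations(range(m), deg):
            rows.append([int(all(p[i] for i in S)) for p in points])
    return np.array(rows)

G = rm_generator(1, 3)        # RM(1,3): length 8, dimension 4
k, n = G.shape
codewords = {tuple((np.array(msg) @ G) % 2)
             for msg in itertools.product([0, 1], repeat=k)}
min_dist = min(sum(c) for c in codewords if any(c))
```

The capacity conjecture the talk resolves concerns the behaviour of this deterministic family as m grows, matching the performance Shannon obtained with random codes.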


Sharper Risk Bounds for Statistical Aggregation

Nikita Zhivotovskiy (University of California, Berkeley)
E18-304

Abstract: In this talk, we revisit classical results in the theory of statistical aggregation, focusing on the transition from global complexity to a more manageable local one. The goal of aggregation is to combine several base predictors to achieve a prediction nearly as accurate as the best one, without assumptions on the class structure or target. Though aggregation has been studied in both sequential and statistical settings, both settings traditionally use the same "global" complexity measure. We highlight the lesser-known PAC-Bayes localization, enabling us…
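A minimal sketch of the aggregation goal using exponential weights on squared loss: combine M fixed base predictors into one whose risk is close to the best base predictor's, paying only an O(log(M)/n) excess. The data-generating process, predictor family, and temperature `eta` are illustrative choices, not the talk's setting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data: y = sin(3x) + noise; base predictors f_j(x) = sin(jx), j = 1..M,
# so j = 3 is the best predictor in the dictionary.
n, M = 2000, 10
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)

preds = np.array([np.sin(j * x) for j in range(1, M + 1)])   # (M, n)
cum_loss = ((preds - y) ** 2).sum(axis=1)                    # total loss per predictor

eta = 1.0
w = np.exp(-eta * (cum_loss - cum_loss.min()))   # shift before exp for stability
w /= w.sum()
agg = w @ preds                                  # aggregated predictions
agg_risk = float(np.mean((agg - y) ** 2))
```

Because the loss gap between the best and second-best predictor is large here, the weights concentrate almost entirely on j = 3 and the aggregate's risk essentially matches the oracle's.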


Estimation and inference for error-in-operator model

Vladimir Spokoinyi (Humboldt University of Berlin)
E18-304

Abstract: We consider the Error-in-Operator (EiO) problem of recovering the source signal x from the noisy observation Y given by the equation Y = A x + ε in the situation when the operator A is not precisely known. Instead, a pilot estimate \hat{A} is available. The study is motivated by Hoffmann & Reiss (2008), Trabs (2018), and by recent results on high-dimensional regression with random design; see, e.g., Tsigler and Bartlett (2020) (Benign overfitting in ridge regression; arXiv:2009.14286) and Cheng and Montanari (2022) (Dimension free ridge regression; arXiv:2210.08571), among many others. Examples of EiO include regression with error-in-variables and instrumental regression, stochastic diffusion, Markov time series, interacting particle…
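The natural baseline in this setting is the plug-in ridge estimator that simply substitutes the pilot \hat{A} for A, \hat{x} = (\hat{A}^T \hat{A} + λI)^{-1} \hat{A}^T Y, whose risk the talk's theory refines. A minimal simulation under illustrative dimensions and noise levels:

```python
import numpy as np

rng = np.random.default_rng(5)

# Y = A x + eps, but only a noisy pilot estimate A_hat of A is available.
n, d = 200, 30
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = rng.standard_normal(d)
Y = A @ x_true + 0.01 * rng.standard_normal(n)
A_hat = A + 0.001 * rng.standard_normal((n, d))    # pilot estimate of A

# Plug-in ridge: treat A_hat as if it were the true operator.
lam = 0.01
x_hat = np.linalg.solve(A_hat.T @ A_hat + lam * np.eye(d), A_hat.T @ Y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With a well-conditioned operator and a small pilot error, the plug-in estimate recovers x to within a few percent; quantifying how the pilot error propagates, and how to do inference around \hat{x}, is the subject of the talk.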


Source Condition Double Robust Inference on Functionals of Inverse Problems

Vasilis Syrgkanis (Stanford University)
E18-304

Abstract: We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems. Any such parameter admits a doubly robust representation that depends on the solution to a dual linear inverse problem, where the dual solution can be thought of as a generalization of the inverse propensity function. We provide the first source condition double robust inference method that ensures asymptotic normality around the parameter of interest as long as either the primal or the dual inverse problem…
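The classic special case behind the abstract is doubly robust (AIPW) estimation of E[Y] under missing-at-random observation, where the "dual solution" is exactly the inverse propensity weight: the estimator is consistent if either the outcome model or the propensity is correct. In the illustrative simulation below the propensity is exact while the outcome model is deliberately a crude constant.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 200_000
x = rng.uniform(0, 1, n)
y = 2 * x + 0.1 * rng.standard_normal(n)        # true E[Y] = 1
p = 0.2 + 0.6 * x                               # true observation propensity
obs = rng.uniform(0, 1, n) < p                  # high-x units observed more often

naive = float(y[obs].mean())                    # biased toward large y
mu_hat = np.full(n, y[obs].mean())              # misspecified outcome model
# AIPW: outcome-model prediction plus inverse-propensity-weighted correction.
aipw = float(np.mean(mu_hat + obs * (y - mu_hat) / p))
```

The naive mean of the observed outcomes overshoots E[Y] = 1, while the AIPW estimate corrects the bias despite the constant outcome model, because the propensity is right; the talk's method extends this either/or robustness to general linear inverse problems.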



MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764