A proof of the RM code capacity conjecture

Emmanuel Abbé (EPFL)
E18-304

Abstract: In 1948, Shannon used a probabilistic argument to prove the existence of codes achieving channel capacity. In 1954, Muller and Reed introduced a simple deterministic code construction, conjectured shortly after to achieve channel capacity. Major progress was made towards establishing this conjecture over the last decades, with various branches of discrete mathematics involved. In particular, the special case of the erasure channel was settled in 2015 by Kudekar et al., relying on Bourgain-Kalai's sharp threshold theorem for symmetric monotone…
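
For concreteness, a minimal sketch (illustrative, not from the talk) of the Reed-Muller construction: RM(r, m) is the F2-span of the evaluation vectors, over all points of {0,1}^m, of the multilinear monomials of degree at most r.

```python
from itertools import combinations, product

def monomial_eval(point, subset):
    """Evaluate the multilinear monomial prod_{i in subset} x_i at a 0/1 point."""
    out = 1
    for i in subset:
        out &= point[i]
    return out

def rm_generator_matrix(r, m):
    """Generator matrix of RM(r, m): one row per multilinear monomial of
    degree <= r, evaluated at all 2^m points of {0,1}^m."""
    points = list(product([0, 1], repeat=m))
    return [[monomial_eval(p, subset) for p in points]
            for deg in range(r + 1)
            for subset in combinations(range(m), deg)]

# RM(1, 3) is the extended Hamming code: length 8, dimension 4, distance 4.
G = rm_generator_matrix(1, 3)
```

RM(r, m) has length 2^m, dimension sum_{i<=r} C(m, i), and minimum distance 2^{m-r}.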

The Full Landscape of Robust Mean Testing: Sharp Separations between Oblivious and Adaptive Contamination

Sam Hopkins (MIT)
E18-304

Abstract: We consider the question of Gaussian mean testing, a fundamental task in high-dimensional distribution testing and signal processing, subject to adversarial corruptions of the samples. We focus on the relative power of different adversaries, and show that, in contrast to the common wisdom in robust statistics, there exists a strict separation between adaptive adversaries (strong contamination) and oblivious ones (weak contamination) for this task. We design both new testing algorithms and new lower bounds to show that robust testing…
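
As background (a sketch of the standard non-robust tester, not the talk's algorithm): Gaussian mean testing classically thresholds the statistic n * ||sample mean||^2 - d, which has mean roughly 0 under H0: N(0, I_d) and mean roughly n * ||mu||^2 under mean mu.

```python
import random

def mean_test_statistic(samples, d):
    """n * ||sample mean||^2 - d: mean ~0 under N(0, I_d),
    mean ~n * ||mu||^2 under N(mu, I_d)."""
    n = len(samples)
    xbar = [sum(x[j] for x in samples) / n for j in range(d)]
    return n * sum(c * c for c in xbar) - d

random.seed(0)
n, d = 2000, 5
null_stat = mean_test_statistic(
    [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)], d)
alt_stat = mean_test_statistic(
    [[random.gauss(1 if j == 0 else 0, 1) for j in range(d)] for _ in range(n)], d)
```

The separations in the talk concern how much an adversary corrupting the samples can degrade such tests, depending on whether the corruptions are chosen obliviously or adaptively.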

Hypothesis testing with information asymmetry

Stephen Bates (MIT)
E18-304

Abstract: Contemporary scientific research is a distributed, collaborative endeavor, carried out by teams of researchers, regulatory institutions, funding agencies, commercial partners, and scientific bodies, all interacting with each other and facing different incentives. To maintain scientific rigor, statistical methods should acknowledge this state of affairs. To this end, we study hypothesis testing when there is an agent (e.g., a researcher or a pharmaceutical company) with a private prior about an unknown parameter and a principal (e.g., a policymaker or regulator)…

Project and Forget: Solving Large-Scale Metric Constrained Problems

Anna Gilbert (Yale University)
E18-304

Abstract: Many important machine learning problems can be formulated as highly constrained convex optimization problems. One important example is metric constrained problems. In this paper, we show that standard optimization techniques cannot be used to solve metric constrained problems. To solve such problems, we provide a general active set framework, called Project and Forget, and several variants thereof that use Bregman projections. Project and Forget is a general purpose method that can be used to solve highly constrained convex…
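
As a toy illustration of the projection primitive involved (a naive cyclic-projection sketch, not the Project and Forget algorithm itself, which adds Bregman projections and the "forgetting" of inactive constraints): repair a dissimilarity matrix by repeatedly projecting onto each violated triangle-inequality halfspace.

```python
def project_triangle(d, i, j, k):
    """Project (in the three active coordinates) onto the halfspace
    d[i][j] <= d[i][k] + d[k][j], keeping the matrix symmetric."""
    v = d[i][j] - d[i][k] - d[k][j]
    if v > 0:
        d[i][j] -= 2 * v / 3
        d[i][k] += v / 3
        d[k][j] += v / 3
        d[j][i] = d[i][j]
        d[k][i] = d[i][k]
        d[j][k] = d[k][j]

def metric_nearness(d, sweeps=200):
    """Cyclically project onto every triangle-inequality constraint."""
    n = len(d)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if len({i, j, k}) == 3:
                        project_triangle(d, i, j, k)
    return d

# Repair a 3-point dissimilarity that violates the triangle inequality.
d = metric_nearness([[0.0, 1.0, 10.0], [1.0, 0.0, 1.0], [10.0, 1.0, 0.0]])
```

The cubic number of constraints per sweep is exactly the scaling bottleneck that an active-set method like Project and Forget is designed to avoid.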

Analysis of Flow-based Generative Models

Jianfeng Lu (Duke University)
E18-304

Abstract: In this talk, we will discuss recent progress on the mathematical analysis of flow-based generative models, a highly successful approach for learning a probability distribution from data and generating further samples. We will talk about some recent results in the convergence analysis of diffusion models and related flow-based methods. In particular, we established convergence of score-based diffusion models applied to any distribution with bounded second moment, relying only on $L^2$-accurate score estimates, with polynomial dependence on all…
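
To make the objects concrete (a 1-D toy sketch under strong assumptions, not the talk's analysis): for an Ornstein-Uhlenbeck forward process dX = -X dt + sqrt(2) dW with data distributed as N(2, 1), the marginal at time t is N(2 e^{-t}, 1), so the score is known in closed form, and running the probability-flow ODE dx/dt = -x - score(x, t) backward from t = T recovers the data distribution.

```python
import math
import random

def true_score(x, t):
    """Exact score for 1-D data ~ N(2, 1) under the OU forward process:
    the marginal at time t is N(2 e^{-t}, 1)."""
    return -(x - 2.0 * math.exp(-t))

def probability_flow_sample(T=5.0, dt=0.01, score=true_score):
    """Euler integration of the probability-flow ODE dx/dt = -x - score(x, t)
    backward from t = T (x approx N(0, 1)) to t = 0 (x approx data)."""
    x = random.gauss(0.0, 1.0)
    t = T
    while t > 0:
        x -= dt * (-x - score(x, t))  # step from t to t - dt
        t -= dt
    return x

random.seed(1)
samples = [probability_flow_sample() for _ in range(2000)]
```

The convergence results discussed in the talk quantify what happens when the exact score above is replaced by an estimate that is only accurate in $L^2$.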

Advances in Distribution Compression

Lester Mackey (Microsoft Research)
E18-304

Abstract: This talk will introduce three new tools for summarizing a probability distribution more effectively than independent sampling or standard Markov chain Monte Carlo thinning: Given an initial n-point summary (for example, from independent sampling or a Markov chain), kernel thinning finds a subset of only square-root n points with comparable worst-case integration error across a reproducing kernel Hilbert space. If the initial summary suffers from biases due to off-target sampling, tempering, or burn-in, Stein thinning simultaneously compresses the…
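
For intuition (a greedy, kernel-herding-style sketch, not the kernel thinning algorithm itself, which uses a recursive halving scheme with stronger guarantees): select m of the n points so as to greedily minimize the kernel maximum mean discrepancy (MMD) to the full empirical distribution.

```python
import math

def gauss_k(x, y, bw=1.0):
    """Gaussian kernel on the real line."""
    return math.exp(-((x - y) ** 2) / (2.0 * bw * bw))

def greedy_thin(points, m):
    """Greedily pick m points to approximately minimize the kernel MMD
    between the selected set and the full empirical distribution."""
    n = len(points)
    mean_embed = [sum(gauss_k(x, y) for y in points) / n for x in points]
    ksum = [0.0] * n  # running sum of k(x_i, s) over selected s
    chosen = []
    for t in range(m):
        # Marginal-gain criterion for adding a point to a size-t summary
        # (k(z, z) = 1 for the Gaussian kernel, hence the 0.5 term).
        best = max(range(n),
                   key=lambda i: mean_embed[i] - (ksum[i] + 0.5) / (t + 1))
        chosen.append(points[best])
        for i in range(n):
            ksum[i] += gauss_k(points[i], points[best])
    return chosen

points = [i / 10 for i in range(100)]  # grid stand-in for n sample points
chosen = greedy_thin(points, 10)
```

Such greedy selection achieves O(1/m) MMD decay at best; the point of kernel thinning is that a square-root-n-sized summary can match the integration error of the full sample.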

Saddle-to-saddle dynamics in diagonal linear networks

Nicolas Flammarion (EPFL)
E18-304

Abstract: When training neural networks with gradient methods and small weight initialisation, peculiar learning curves are observed: the training initially shows minimal progress, which is then followed by a sudden transition where a new "feature" is rapidly learned. This pattern is commonly known as incremental learning. In this talk, I will demonstrate that we can comprehensively understand this phenomenon within the context of a simplified network architecture. In this setting, we can establish that the gradient flow trajectory transitions from one saddle point of the training…
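A minimal sketch of the phenomenon (toy code, not from the talk): gradient descent on the depth-2 diagonal parameterisation w_i = u_i v_i with a small initialisation learns the larger target coordinate first, plateauing near a saddle before each transition.

```python
def train_diag_net(y, alpha=1e-3, lr=0.05, steps=20000):
    """Gradient descent on the depth-2 diagonal linear net w_i = u_i * v_i
    with loss sum_i (u_i v_i - y_i)^2 / 2, from a small init alpha."""
    d = len(y)
    u = [alpha] * d
    v = [alpha] * d
    half_time = [None] * d  # first step at which w_i exceeds y_i / 2
    for t in range(steps):
        for i in range(d):
            r = u[i] * v[i] - y[i]
            u[i], v[i] = u[i] - lr * r * v[i], v[i] - lr * r * u[i]
            if half_time[i] is None and y[i] > 0 and u[i] * v[i] > y[i] / 2:
                half_time[i] = t
    return [a * b for a, b in zip(u, v)], half_time

# The larger coordinate escapes its saddle first (incremental learning).
w, half_time = train_diag_net([1.0, 0.25])
```

The escape time from each plateau scales like log(1/alpha) divided by the signal strength, which is why smaller initialisations make the saddle-to-saddle structure sharper.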

The discrete Schrödinger bridge, and the ensuing chaos

Zaid Harchaoui (University of Washington)
E18-304

Abstract: Schrödinger studied in the 1930s a thought experiment about hot gas in which a cloud of particles evolves in time from an initial distribution to another one, possibly quite different from the initial one. He posed the problem of determining the most likely evolution among the many possible ones, a problem now known as the Schrödinger bridge problem. Later, in the 1980s, H. Föllmer framed the problem as an entropy-regularized variational problem. The Schrödinger problem underlies a number…
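
In the discrete, static case, the entropy-regularized problem can be solved by Sinkhorn's matrix-scaling iterations (a standard sketch, not the talk's contribution): scale a Gibbs kernel until its row and column sums match the two prescribed marginals.

```python
import math

def sinkhorn(mu, nu, cost, eps=0.1, iters=500):
    """Sinkhorn matrix scaling for the entropy-regularized transport
    (static Schrödinger bridge) between discrete marginals mu and nu."""
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    n, m = len(mu), len(nu)
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Couple two uniform two-point marginals under a 0/1 cost.
P = sinkhorn([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
```

As the regularization eps shrinks, the coupling concentrates on the optimal-transport plan; as eps grows, it spreads toward the independent coupling.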

Empirical methods for macroeconomic policy analysis

Christian Wolf, MIT
E18-304

Abstract: We show that, in a general family of linearized structural macroeconomic models, the counterfactual evolution of the economy under alternative policy rules is fully pinned down by two empirically estimable objects: (i) reduced-form projections with respect to a large information set; and (ii) the causal effects of policy shocks on macroeconomic aggregates. Under our assumptions, the derived counterfactuals are fully robust to the Lucas critique. Building on these insights, we discuss how to leverage the classical "VAR" approach to policy…

Efficient Algorithms for Semirandom Planted CSPs at the Refutation Threshold

Pravesh Kothari, Princeton University
E18-304

Abstract: We present an efficient algorithm to solve semi-random planted instances of any Boolean constraint satisfaction problem (CSP). The semi-random model is a hybrid between worst-case and average-case input models, where the input is generated by (1) choosing an arbitrary planted assignment x∗, (2) choosing an arbitrary clause structure, and (3) choosing literal negations for each clause from an arbitrary distribution "shifted by x∗" so that x∗ satisfies each constraint. For an n variable semi-random planted instance of a k-arity…
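
To fix the model (a toy instance generator following the three-step description above; the uniform base distribution for negations is one illustrative choice among the arbitrary distributions the model allows):

```python
import random

def planted_3sat(n, m, xstar, seed=0):
    """Planted 3-SAT: arbitrary clause scopes, then negation patterns drawn
    uniformly among those the planted assignment xstar satisfies (the
    "shift by x*" of a uniform base distribution)."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        scope = rng.sample(range(n), 3)
        while True:  # rejection-sample a negation pattern satisfied by xstar
            negs = [rng.random() < 0.5 for _ in range(3)]
            if any(xstar[i] != neg for i, neg in zip(scope, negs)):
                break
        clauses.append(list(zip(scope, negs)))
    return clauses

def satisfies(x, clauses):
    """Literal (i, neg) is true under assignment x iff x[i] XOR neg."""
    return all(any(x[i] != neg for i, neg in cl) for cl in clauses)

xstar = [bool(i % 2) for i in range(20)]
clauses = planted_3sat(20, 50, xstar)
```

Because only the negation patterns are random while the scopes are adversarial, such instances sit strictly between the worst case and the fully random planted model.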


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764