Project and Forget: Solving Large-Scale Metric Constrained Problems

Anna Gilbert (Yale University)
E18-304

Abstract: Many important machine learning problems can be formulated as highly constrained convex optimization problems. One important example is metric constrained problems. In this paper, we show that standard optimization techniques cannot be used to solve metric constrained problems. To solve such problems, we provide a general active set framework, called Project and Forget, and several variants thereof that use Bregman projections. Project and Forget is a general-purpose method that can be used to solve highly constrained convex…
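
The projection idea behind the method can be illustrated with a minimal metric-repair sketch: cyclically project a dissimilarity matrix onto the violated triangle-inequality halfspaces. This is only an illustrative simplification, using plain Euclidean projections and omitting the dual-variable ("forget") bookkeeping of the actual Project and Forget algorithm; all names and parameters below are our own.

```python
import itertools

import numpy as np

def metric_repair(D, n_sweeps=50):
    """Cyclically project a symmetric dissimilarity matrix onto the
    triangle-inequality halfspaces d_ij <= d_ik + d_kj.

    Illustrative sketch only: Euclidean projections, no active-set
    pruning or dual corrections as in the real method."""
    D = D.astype(float).copy()
    n = D.shape[0]
    for _ in range(n_sweeps):
        for i, j, k in itertools.permutations(range(n), 3):
            v = D[i, j] - D[i, k] - D[k, j]
            if v > 0:  # constraint violated: project onto its halfspace
                # halfspace normal is (+1, -1, -1), squared norm 3
                D[i, j] -= v / 3; D[j, i] = D[i, j]
                D[i, k] += v / 3; D[k, i] = D[i, k]
                D[k, j] += v / 3; D[j, k] = D[k, j]
    return D
```

With n points there are Θ(n³) triangle constraints, which is exactly why a naive sweep like this does not scale and an active-set strategy is needed.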

Analysis of Flow-based Generative Models

Jianfeng Lu (Duke University)
E18-304

Abstract: In this talk, we will discuss recent progress on the mathematical analysis of flow-based generative models, a highly successful approach for learning a probability distribution from data and generating further samples. We will talk about some recent results in the convergence analysis of diffusion models and related flow-based methods. In particular, we established convergence of score-based diffusion models applied to any distribution with bounded second moment, relying only on an $L^2$-accurate score estimate, with polynomial dependence on all…
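
As a toy stand-in for the score-driven sampling analyzed in the talk, the sketch below runs unadjusted Langevin dynamics using the *exact* score of a standard Gaussian (in a diffusion model the score would instead be a learned, $L^2$-accurate estimate). Step size, particle count, and initialization are our arbitrary choices.

```python
import numpy as np

def score(x):
    return -x  # d/dx log density of N(0, 1), known in closed form here

rng = np.random.default_rng(1)
h = 0.01                              # Langevin step size (assumed)
x = rng.standard_normal(2000) + 4.0   # particles started far from the target
for _ in range(2000):
    # Euler-Maruyama step of the Langevin SDE dX = (1/2) score(X) dt + dW
    x = x + 0.5 * h * score(x) + np.sqrt(h) * rng.standard_normal(x.shape)
```

After enough steps the particle cloud approximates N(0, 1), up to the O(h) discretization bias that convergence analyses of such schemes quantify.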

Advances in Distribution Compression

Lester Mackey (Microsoft Research)
E18-304

Abstract: This talk will introduce three new tools for summarizing a probability distribution more effectively than independent sampling or standard Markov chain Monte Carlo thinning: Given an initial n-point summary (for example, from independent sampling or a Markov chain), kernel thinning finds a subset of only square-root-n points with comparable worst-case integration error across a reproducing kernel Hilbert space. If the initial summary suffers from biases due to off-target sampling, tempering, or burn-in, Stein thinning simultaneously compresses the…
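
Kernel thinning itself is more subtle, but the compression goal can be illustrated with a simple greedy kernel-herding baseline (not the algorithm of the talk): pick points one at a time so that the summary's empirical kernel mean tracks that of the full sample. Kernel choice, bandwidth, and the herding rule are all our assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bw=1.0):
    """Gaussian kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def herd(X, m, bw=1.0):
    """Greedy kernel herding: choose m of the n rows of X so the chosen
    points' kernel mean stays close to the full sample's kernel mean.
    A crude stand-in for kernel thinning, for illustration only."""
    K = gaussian_kernel(X, X, bw)
    target = K.mean(axis=1)          # kernel mean embedding of full sample
    chosen = []
    running = np.zeros(len(X))       # sum of kernel columns picked so far
    for t in range(m):
        scores = target - running / max(t, 1)
        if chosen:                   # never re-pick a point
            scores[chosen] = -np.inf
        i = int(np.argmax(scores))
        chosen.append(i)
        running += K[:, i]
    return X[chosen], chosen
```

On a symmetric 1-D grid the first herded point is the central one, after which the procedure spreads points to cover the distribution.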

Saddle-to-saddle dynamics in diagonal linear networks

Nicolas Flammarion (EPFL)
E18-304

Abstract: When training neural networks with gradient methods and small weight initialisation, peculiar learning curves are observed: the training initially shows minimal progress, which is then followed by a sudden transition where a new "feature" is rapidly learned. This pattern is commonly known as incremental learning. In this talk, I will demonstrate that we can comprehensively understand this phenomenon within the context of a simplified network architecture. In this setting, we can establish that the gradient flow trajectory transitions from one saddle point of the training…
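
The plateau-then-jump learning curves described above can be reproduced in a few lines: a diagonal linear network with effective weights beta = u ⊙ v, trained by gradient descent from a tiny initialisation on a sparse regression problem. All sizes and constants are our illustrative choices, not the talk's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:2] = [3.0, 1.0]          # sparse ground-truth regressor
y = X @ beta_star

alpha, lr = 1e-4, 0.05              # tiny init -> saddle-to-saddle regime
u = np.full(d, alpha)
v = np.full(d, alpha)
losses = []
for _ in range(20000):
    r = X @ (u * v) - y             # residual of current predictor
    losses.append(float(r @ r / n))
    g = X.T @ r / n                 # gradient wrt effective weights u * v
    u, v = u - lr * g * v, v - lr * g * u
beta = u * v
```

Plotting `losses` shows the signature staircase: an initial plateau near the zero saddle, then sharp drops as each nonzero coordinate of `beta_star` is learned in turn.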

The discrete Schrödinger bridge, and the ensuing chaos

Zaid Harchaoui (University of Washington)
E18-304

Abstract: In the 1930s, Schrödinger studied a thought experiment about hot gas in which a cloud of particles evolves in time from an initial distribution to another, possibly quite different, one. He posed the problem of determining the most likely evolution among the many possible ones, a problem now known as the Schrödinger bridge problem. In the 1980s, H. Föllmer framed the problem as an entropy-regularized variational problem. The Schrödinger problem underlies a number…

Empirical methods for macroeconomic policy analysis

Christian Wolf (MIT)
E18-304

Abstract: We show that, in a general family of linearized structural macroeconomic models, the counterfactual evolution of the economy under alternative policy rules is fully pinned down by two empirically estimable objects: (i) reduced-form projections with respect to a large information set; and (ii) the causal effects of policy shocks on macroeconomic aggregates. Under our assumptions, the derived counterfactuals are fully robust to the Lucas critique. Building on these insights, we discuss how to leverage the classical "VAR" approach to policy…

Efficient Algorithms for Semirandom Planted CSPs at the Refutation Threshold

Pravesh Kothari (Princeton University)
E18-304

Abstract: We present an efficient algorithm to solve semi-random planted instances of any Boolean constraint satisfaction problem (CSP). The semi-random model is a hybrid between worst-case and average-case input models, where the input is generated by (1) choosing an arbitrary planted assignment x∗, (2) choosing an arbitrary clause structure, and (3) choosing literal negations for each clause from an arbitrary distribution "shifted by x∗" so that x∗ satisfies each constraint. For an n-variable semi-random planted instance of a k-arity…
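
The three-step generative recipe (1)-(3) can be made concrete. The sketch below instantiates it for 3-SAT, using a uniformly random clause structure and uniform negations conditioned on satisfying the plant; the semi-random model allows both of these to be arbitrary, so this is just one special case.

```python
import numpy as np

def semirandom_planted_3sat(n, m, rng):
    """Generate a semi-random planted 3-SAT instance:
    (1) a planted assignment x_star,
    (2) a clause structure (here: uniformly random triples),
    (3) negation patterns chosen so x_star satisfies every clause.
    A clause is a pair (vars, negs); literal j is x[vars[j]] XOR negs[j]."""
    x_star = rng.integers(0, 2, n).astype(bool)       # step (1)
    clauses = []
    for _ in range(m):
        vars_ = rng.choice(n, size=3, replace=False)  # step (2)
        while True:                                   # step (3)
            negs = rng.integers(0, 2, 3).astype(bool)
            if (x_star[vars_] ^ negs).any():          # plant satisfies clause
                break
        clauses.append((vars_, negs))
    return x_star, clauses

def satisfies(x, clauses):
    """Check whether assignment x satisfies every clause."""
    return all((x[v] ^ negs).any() for v, negs in clauses)
```

By construction the planted assignment satisfies the instance; the algorithmic question in the talk is recovering such an assignment efficiently near the refutation threshold.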

Entropic optimal transport: limit theorems and algorithms

Kengo Kato (Cornell University)
E18-304

Abstract: In this talk, I will discuss my recent work on entropic optimal transport (EOT). In the first part, I will discuss limit theorems for EOT maps, dual potentials, and the Sinkhorn divergence. The key technical tool we use is a first and second-order Hadamard differentiability analysis of EOT potentials with respect to the marginals, from which the limit theorems, bootstrap consistency, and asymptotic efficiency of the empirical estimators follow. The second part concerns the entropic Gromov-Wasserstein (EGW) distance, which…
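
Discrete EOT problems of the kind studied here are typically solved with Sinkhorn's algorithm, whose optimal value also yields the Sinkhorn divergence mentioned above. Below is a minimal implementation of the standard iteration; the regularisation strength and iteration count are our choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Sinkhorn iterations for discrete entropic optimal transport.

    a, b : marginal weight vectors (sum to 1)
    C    : cost matrix, eps : entropic regularisation strength
    Returns the primal coupling P with marginals a and b (approximately)."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # match column marginal b
        u = a / (K @ v)              # match row marginal a
    return u[:, None] * K * v[None, :]
```

As eps decreases, the coupling approaches an unregularized optimal transport plan but the iteration converges more slowly, which is one reason the limit theorems in the talk fix eps > 0.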

On Provably Learning Sparse High-Dimensional Functions

Joan Bruna (New York University)
E18-304

Abstract: Neural Networks are hailed for their ability to discover useful low-dimensional 'features' out of complex high-dimensional data, yet our understanding of this ability remains mostly hand-wavy. Over recent years, the class of sparse (or 'multi-index') functions has emerged as a model with both practical motivations and a rich mathematical structure, enabling a quantitative theory of 'feature learning'. In this talk, I will present recent progress on this front, by describing (i) the ability of gradient-descent algorithms to efficiently learn the multi-index class over Gaussian data, and (ii) the tight Statistical-Query…


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764