Lattices and the Hardness of Statistical Problems
Abstract: I will describe recent results that (a) show nearly optimal hardness of learning Gaussian mixtures, and (b) give evidence of average-case hardness of sparse linear regression w.r.t. all efficient algorithms, assuming the worst-case hardness of lattice problems. The talk is based on the following papers with Aparna Gupte and Neekon Vafa: https://arxiv.org/pdf/2204.02550.pdf and https://arxiv.org/pdf/2402.14645.pdf Bio: Vinod Vaikuntanathan is a professor of computer science at MIT and the chief cryptographer at Duality Technologies. His research is in the foundations of cryptography…
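For orientation, "sparse linear regression" here refers to the standard average-case estimation task; the formulation below is my paraphrase of the usual setup, not a statement taken from the abstract or the linked papers.

```latex
% Standard sparse linear regression setup (paraphrase, for orientation only):
% given a design matrix X and noisy linear observations
\[
  y \;=\; X\beta^{\ast} + w, \qquad X \in \mathbb{R}^{n \times d}, \quad
  \|\beta^{\ast}\|_{0} \le k, \quad w \sim \mathcal{N}(0, \sigma^{2} I_{n}),
\]
% the goal is to estimate the unknown k-sparse vector \beta^{\ast}; the hardness
% evidence concerns all efficient algorithms for this estimation task.
```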
Emergent outlier subspaces in high-dimensional stochastic gradient descent
Abstract: It has been empirically observed that the spectra of neural network Hessians after training have a bulk concentrated near zero and a few outlier eigenvalues. Moreover, the eigenspaces associated with these outliers have been linked to a low-dimensional subspace in which most of the training occurs, and this implicit low-dimensional structure has been used as a heuristic explanation for the success of high-dimensional classification. We will describe recent rigorous results in this direction for the Hessian spectrum over the course…
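This bulk-plus-outliers picture is easy to reproduce at toy scale. The PyTorch sketch below is my own illustration, not code from the talk or the underlying papers: it trains a tiny classifier on synthetic 3-class data, forms the full Hessian of the training loss, and prints its extreme eigenvalues; on such examples one typically sees a handful of outliers (roughly one per class) on top of a bulk near zero. The architecture and all hyperparameters are arbitrary choices.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic 3-class Gaussian-mixture data in 10 dimensions (illustrative).
n, d, k, h = 300, 10, 3, 8
means = 3.0 * torch.randn(k, d)
y = torch.randint(0, k, (n,))
X = means[y] + torch.randn(n, d)

# One-hidden-layer network with parameters stored as a single flat vector,
# so the full dense Hessian can be formed directly.
shapes = [(h, d), (h,), (k, h), (k,)]
num_params = sum(math.prod(s) for s in shapes)

def loss(flat):
    chunks, i = [], 0
    for s in shapes:
        m = math.prod(s)
        chunks.append(flat[i:i + m].reshape(s))
        i += m
    W1, b1, W2, b2 = chunks
    logits = torch.tanh(X @ W1.T + b1) @ W2.T + b2
    return F.cross_entropy(logits, y)

# Train with plain full-batch gradient descent.
theta = 0.1 * torch.randn(num_params, requires_grad=True)
opt = torch.optim.SGD([theta], lr=0.5)
for _ in range(500):
    opt.zero_grad()
    loss(theta).backward()
    opt.step()

# Full Hessian of the training loss at the trained parameters and its spectrum.
H = torch.autograd.functional.hessian(loss, theta.detach())
eigs = torch.linalg.eigvalsh(H)                          # sorted ascending
print("top eigenvalues :", eigs[-5:].tolist())           # a few large outliers
print("median |eig|    :", float(eigs.abs().median()))   # bulk concentrated near zero
```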
Consensus-based optimization and sampling
Abstract: Particle methods provide a powerful paradigm for solving complex global optimization problems and lead to highly parallelizable algorithms. Despite their widespread and growing adoption, the theory underpinning their behavior has been based mainly on meta-heuristics. In application settings involving black-box procedures, or where gradients are too costly to obtain, one relies on derivative-free approaches instead. This talk will focus on two recent techniques, consensus-based optimization and consensus-based sampling. We explain how these methods can be used for the following two goals: (i)…
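As a rough illustration of the optimization side, the numpy sketch below implements the basic consensus-based optimization dynamics: each particle drifts toward a Gibbs-weighted consensus point and is perturbed by noise proportional to its componentwise distance from it, using only objective evaluations and no gradients. The test function, parameter values, and the anisotropic noise variant are illustrative choices on my part, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    # Classic multimodal benchmark; global minimizer at the origin.
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def cbo(f, dim=4, n_particles=200, steps=3000,
        dt=0.01, lam=1.0, sigma=1.0, beta=30.0):
    X = rng.uniform(-3, 3, size=(n_particles, dim))      # initial particle cloud
    for _ in range(steps):
        fx = f(X)
        # Gibbs weights concentrate mass on the currently best particles.
        w = np.exp(-beta * (fx - fx.min()))               # shift for numerical stability
        m = (w[:, None] * X).sum(axis=0) / w.sum()        # weighted consensus point
        diff = X - m
        # Drift toward the consensus point, plus diffusion scaled by the
        # componentwise distance to it (fully derivative-free update).
        X = X - lam * dt * diff \
            + sigma * np.sqrt(dt) * np.abs(diff) * rng.standard_normal(X.shape)
    return m

x_star = cbo(rastrigin)
print("consensus point:", np.round(x_star, 3), "objective:", float(rastrigin(x_star)))
```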
Matrix displacement convexity and intrinsic dimensionality
Abstract: The space of probability measures endowed with the optimal transport metric has a rich structure with applications in probability, analysis, and geometry. The notion of (displacement) convexity in this space was discovered by McCann and forms the backbone of this theory. I will introduce a new and stronger notion of displacement convexity that operates at the matrix level. The motivation behind this definition is to capture the intrinsic dimensionality of probability measures, which could have very different behaviors along…
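For reference, McCann's scalar notion that the abstract strengthens can be stated as follows; this is standard background, and the matrix-level generalization announced in the talk is not reproduced here.

```latex
% Along the Wasserstein geodesic between \mu_0 and \mu_1, given by the optimal
% transport (Brenier) map T pushing \mu_0 forward to \mu_1,
\[
  \mu_t \;=\; \bigl((1-t)\,\mathrm{Id} + t\,T\bigr)_{\#}\,\mu_0,
  \qquad t \in [0,1],
\]
% a functional F on probability measures is displacement convex if
\[
  t \;\longmapsto\; F(\mu_t) \quad \text{is convex on } [0,1]
\]
% for every such geodesic.
```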
Adversarial combinatorial bandits for imperfect-information sequential games
Abstract: This talk will focus on learning policies for tree-form decision problems (extensive-form games) from adversarial feedback. In principle, one could convert learning in any extensive-form game (EFG) into learning in an equivalent normal-form game (NFG), that is, a multi-armed bandit problem with one arm per tree-form policy. However, doing so comes at the cost of an exponential blowup of the strategy space. So, progress on NFGs and EFGs has historically followed separate tracks, with the EFG community often having…
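To make the blowup concrete, here is a back-of-the-envelope count of my own, not a claim taken from the talk.

```latex
% If the player acts at n information sets, each offering k actions, the number
% of deterministic tree-form policies, i.e. the number of arms in the
% equivalent normal-form bandit, can be as large as
\[
  |\Pi| \;=\; k^{\,n},
\]
% exponential in the size of the game tree, whereas the tree-form
% (sequence-form) representation needs only on the order of k\,n parameters.
```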