Stochastics and Statistics Seminar Series

Naive Feature Selection: Sparsity in Naive Bayes

Alexandre d'Aspremont (ENS, CNRS)
online

Abstract: Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, or a bound in the multinomial case. We prove that our bound becomes tight as the marginal contribution of additional features decreases. Both binary and…
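
The binary case admits a quick illustration: with Bernoulli class-conditional features, one can score each feature by the log-likelihood gain from fitting class-specific rather than pooled parameters and keep the top k. The sketch below is only an assumed illustration of this likelihood-gain ranking, not the paper's exact procedure.

    # Hypothetical sketch: rank binary features by the naive Bayes
    # log-likelihood gain of class-specific over pooled Bernoulli fits.
    import numpy as np

    def bernoulli_ll(x, p):
        # log-likelihood of a binary column x under Bernoulli(p)
        return x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1 - p)

    def selection_scores(X, y, eps=1e-9):
        # X: (n, d) binary matrix; y: (n,) class labels
        scores = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            x = X[:, j]
            p_all = np.clip(x.mean(), eps, 1 - eps)    # pooled fit
            gain = -bernoulli_ll(x, p_all)
            for c in np.unique(y):
                xc = x[y == c]
                pc = np.clip(xc.mean(), eps, 1 - eps)  # class-specific fit
                gain += bernoulli_ll(xc, pc)
            scores[j] = gain                           # always >= 0
        return scores

    def select_top_k(X, y, k):
        return np.argsort(selection_scores(X, y))[-k:]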

Sampler for the Wasserstein barycenter

Thibaut Le Gouic, MIT
online

Abstract: Wasserstein barycenters have become a central object in applied optimal transport as a tool to summarize complex objects that can be represented as distributions. Such objects include posterior distributions in Bayesian statistics, functions in functional data analysis and images in graphics. In a nutshell, a Wasserstein barycenter is a probability distribution that provides a compelling summary of a finite set of input distributions. While the question of computing Wasserstein barycenters has received significant attention, this talk focuses on a…
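
In one dimension the barycenter has a closed form through quantile functions, which makes for a quick illustration: for equally weighted empirical measures with equal sample sizes (an assumption made purely for simplicity), averaging the sorted samples gives the 2-Wasserstein barycenter.

    # Illustration: the 1-D 2-Wasserstein barycenter averages quantile
    # functions, i.e. the sorted samples, when sample sizes are equal.
    import numpy as np

    def barycenter_1d(samples):
        # samples: list of 1-D arrays, all of the same length n
        return np.stack([np.sort(s) for s in samples]).mean(axis=0)

    rng = np.random.default_rng(0)
    inputs = [rng.normal(loc=m, scale=1.0, size=1000) for m in (-2.0, 0.0, 3.0)]
    print(barycenter_1d(inputs).mean())  # close to the average input mean, 1/3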

Instance Dependent PAC Bounds for Bandits and Reinforcement Learning

Kevin Jamieson (University of Washington)
E18-304

Abstract: The sample complexity of an interactive learning problem, such as multi-armed bandits or reinforcement learning, is the number of interactions with nature required to output an answer (e.g., a recommended arm or policy) that is close to optimal with high probability. While minimax guarantees can be useful rules of thumb to gauge the difficulty of a problem class, algorithms optimized for this worst-case metric often fail to adapt to “easy” instances where fewer samples suffice. In this talk, I…
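
As background, a classical algorithm with an instance-dependent sample complexity is successive elimination: it identifies the best arm using roughly sum_i Delta_i^{-2} log(1/delta) pulls, where Delta_i is the gap of arm i. The sketch below is a textbook version, not the speaker's method.

    # Textbook successive elimination for best-arm identification.
    import numpy as np

    def successive_elimination(pull, n_arms, delta=0.05, max_rounds=100_000):
        # pull(i) returns a stochastic reward in [0, 1] for arm i
        active = list(range(n_arms))
        sums = np.zeros(n_arms)
        for t in range(1, max_rounds + 1):
            for i in active:
                sums[i] += pull(i)
            means = sums[active] / t
            # Hoeffding-style radius valid for all active arms simultaneously
            rad = np.sqrt(np.log(4 * n_arms * t * t / delta) / (2 * t))
            best = means.max()
            active = [a for a, m in zip(active, means) if m + 2 * rad >= best]
            if len(active) == 1:
                return active[0]
        return max(active, key=lambda a: sums[a])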

Regularized modified log-Sobolev inequalities, and comparison of Markov chains

Konstantin Tikhomirov, Georgia Institute of Technology
E18-304

Abstract: In this work, we develop a comparison procedure for the Modified log-Sobolev Inequality (MLSI) constants of two reversible Markov chains on a finite state space. As an application, we provide a sharp estimate of the MLSI constant of the switch chain on the set of simple bipartite regular graphs of size n with a fixed degree d. Our estimate implies that the total variation mixing time of the switch chain is of order n log(n). The result is optimal up to a multiple…
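
A single step of the switch chain is easy to state: pick two edges uniformly at random and swap their endpoints whenever this creates no parallel edge, which preserves all vertex degrees. A minimal sketch, with an edge-set representation assumed for illustration:

    # One proposed move of the switch chain on a bipartite graph.
    import random

    def switch_step(edges):
        # edges: set of (left, right) pairs; degrees are invariant under a switch
        (u1, v1), (u2, v2) = random.sample(list(edges), 2)
        if u1 != u2 and v1 != v2 and (u1, v2) not in edges and (u2, v1) not in edges:
            edges -= {(u1, v1), (u2, v2)}
            edges |= {(u1, v2), (u2, v1)}
        return edges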

Efficient derivative-free Bayesian inference for large-scale inverse problems

Jiaoyang Huang, University of Pennsylvania
E18-304

Abstract: We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model, which is often given as a black box or is impractical to differentiate. In this talk I will propose a new derivative-free algorithm, Unscented Kalman Inversion, which utilizes ideas from the Kalman filter to efficiently solve these inverse problems. First, I will explain some basics about Variational Inference under general metric tensors. In particular, under the…
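
To make "derivative-free" concrete, below is a minimal ensemble Kalman inversion step, a close relative of the Unscented Kalman Inversion discussed in the talk (the unscented variant uses deterministic sigma points rather than a random ensemble). The forward model G is only ever evaluated, never differentiated; names and shapes here are illustrative assumptions.

    # One ensemble Kalman inversion update toward data y.
    import numpy as np

    def eki_step(thetas, G, y, Gamma):
        # thetas: (J, d) parameter ensemble; y: (m,) data; Gamma: (m, m) noise cov
        Gs = np.array([G(t) for t in thetas])   # black-box forward evaluations
        dth = thetas - thetas.mean(axis=0)
        dG = Gs - Gs.mean(axis=0)
        C_tg = dth.T @ dG / len(thetas)         # parameter-output covariance
        C_gg = dG.T @ dG / len(thetas)          # output covariance
        K = C_tg @ np.linalg.inv(C_gg + Gamma)  # Kalman gain, shape (d, m)
        return thetas + (y - Gs) @ K.T          # shift ensemble toward the data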

Inference in High Dimensions for (Mixed) Generalized Linear Models: the Linear, the Spectral and the Approximate

Marco Mondelli, Institute of Science and Technology Austria
E18-304

Abstract: In a generalized linear model (GLM), the goal is to estimate a d-dimensional signal x from an n-dimensional observation of the form f(Ax, w), where A is a design matrix and w is a noise vector. Well-known examples of GLMs include linear regression, phase retrieval, 1-bit compressed sensing, and logistic regression. We focus on the high-dimensional setting in which both the number of measurements n and the signal dimension d diverge, with their ratio tending to a fixed constant.…
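
For intuition on the "spectral" part of the title: a common spectral estimator for GLMs takes the top eigenvector of a preprocessed second-moment matrix of the data. The sketch below uses identity preprocessing, suited for instance to phase retrieval (y_i = <a_i, x>^2); that choice is an assumption made for illustration.

    # Generic spectral estimator for a GLM observation y = f(Ax, w).
    import numpy as np

    def spectral_estimate(A, y, T=lambda v: v):
        # A: (n, d) design; y: (n,) observations; T: scalar preprocessing
        n, d = A.shape
        D = (A * T(y)[:, None]).T @ A / n   # (1/n) sum_i T(y_i) a_i a_i^T
        eigvals, eigvecs = np.linalg.eigh(D)
        return eigvecs[:, -1]               # unit-norm estimate of x's direction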

Variational methods in reinforcement learning

Martin Wainwright, MIT
E18-304

Abstract: Reinforcement learning is the study of models and procedures for optimal sequential decision-making under uncertainty.  At its heart lies the Bellman optimality operator, whose unique fixed point specifies an optimal policy and value function.  In this talk, we discuss two classes of variational methods that can be used to obtain approximate solutions with accompanying error guarantees.  For policy evaluation problems based on on-line data, we present Krylov-Bellman boosting, which combines ideas from Krylov methods with non-parametric boosting.  For policy optimization problems based on…
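
As background for the Bellman fixed-point view, tabular value iteration simply applies the Bellman optimality operator until convergence; the talk's variational methods are approximate schemes with error guarantees and are not reproduced here.

    # Tabular value iteration: iterate the Bellman optimality operator,
    # a gamma-contraction whose unique fixed point is the optimal value V*.
    import numpy as np

    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        # P: (A, S, S) transition kernels; R: (S, A) expected rewards
        V = np.zeros(P.shape[1])
        while True:
            Q = R + gamma * np.einsum('ast,t->sa', P, V)  # one Bellman backup
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new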

Adaptivity in Domain Adaptation and Friends

Samory Kpotufe, Columbia University
E18-304

Abstract: Domain adaptation, transfer, multitask, meta, few-shot, representation, or lifelong learning … these are all important recent directions in ML that touch at the core of what we might mean by ‘AI’. As these directions all concern learning in heterogeneous and ever-changing environments, they share a central question: what information one data distribution may have about another, critically, in the context of a given estimation problem, e.g., classification, regression, bandits, etc. Our understanding of these problems is still…

Statistical Inference Under Information Constraints: User level approaches

Jayadev Acharya, Cornell University
E18-304

Abstract: In this talk, we will present highlights from some of the work we have been doing in distributed inference under information constraints, such as privacy and communication. We consider basic tasks such as learning and testing of discrete as well as high dimensional distributions, when the samples are distributed across users who can then only send an information-constrained message about their sample. Of key interest to us has been the role of the various types of communication protocols (e.g., non-interactive protocols…
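
A canonical example of a non-interactive, information-constrained protocol is k-ary randomized response for locally private discrete distribution estimation: each user sends a single symbol, and the aggregator de-biases the empirical histogram. It is shown only as a representative of the protocol families the talk compares.

    # k-ary randomized response: each user privatizes one sample in {0,...,k-1}.
    import numpy as np

    def randomize(x, k, eps, rng):
        # keep the true symbol w.p. e^eps / (e^eps + k - 1), else flip uniformly
        p_true = np.exp(eps) / (np.exp(eps) + k - 1)
        if rng.random() < p_true:
            return x
        return int(rng.choice([s for s in range(k) if s != x]))

    def estimate_frequencies(reports, k, eps):
        p_true = np.exp(eps) / (np.exp(eps) + k - 1)
        p_other = (1 - p_true) / (k - 1)
        hist = np.bincount(reports, minlength=k) / len(reports)
        return (hist - p_other) / (p_true - p_other)  # unbiased de-biasing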


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764