Inference for Longitudinal Data After Adaptive Sampling

Susan Murphy, Harvard University
E18-304

Abstract: Adaptive sampling methods, such as reinforcement learning (RL) and bandit algorithms, are increasingly used for the real-time personalization of interventions in digital applications like mobile health and education. As a result, there is a need to be able to use the resulting adaptively collected user data to address a variety of inferential questions, including questions about time-varying causal effects. However, current methods for statistical inference on such data (a) make strong assumptions regarding the environment dynamics, e.g., assume the…
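
As a rough illustration of why adaptive sampling complicates inference (a sketch of the setting, not of the methods in the talk), consider an epsilon-greedy bandit in Python: each action depends on previously observed rewards, so the collected data are not i.i.d., and naive sample means and standard errors can be unreliable. The arm means and all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.3, 0.5]        # hypothetical reward means for two arms
counts = np.zeros(2)
sums = np.zeros(2)

for t in range(1000):
    if counts.min() == 0 or rng.random() < 0.1:   # explore
        a = int(rng.integers(2))
    else:                                          # exploit current estimates
        a = int(np.argmax(sums / counts))
    r = rng.normal(true_means[a], 1.0)             # observe a noisy reward
    counts[a] += 1
    sums[a] += r

# Because each arm choice depended on past rewards, these naive sample
# means can be biased and their usual standard errors unreliable.
print("naive arm estimates:", sums / counts)
```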


Generative Models, Normalizing Flows, and Monte Carlo Samplers

Eric Vanden-Eijnden, New York University
E18-304

Abstract: Contemporary generative models used in the context of unsupervised learning have primarily been designed around the construction of a map between two probability distributions that transforms samples from the first into samples from the second. Advances in this domain have been governed by the introduction of algorithms or inductive biases that make learning this map, and the Jacobian of the associated change of variables, more tractable. The challenge is to choose what structure to impose on the transport to…
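
The "map between two probability distributions" can be made concrete with a one-dimensional affine map. This is a minimal sketch of the change-of-variables formula the abstract alludes to, not code from the talk, and the constants are arbitrary.

```python
import numpy as np

a, b = 2.0, 1.0                        # arbitrary affine map T(z) = a*z + b

def log_base(z):                       # log density of the N(0, 1) base
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def log_pushforward(x):
    z = (x - b) / a                    # invert the map
    # change of variables: log p_X(x) = log p_Z(T^{-1}(x)) - log|dT/dz|
    return log_base(z) - np.log(abs(a))

# Sampling from the pushforward is just pushing base samples through T:
samples = a * np.random.default_rng(0).standard_normal(5) + b
print(samples, log_pushforward(samples))
```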


On the statistical cost of score matching

Andrej Risteski, Carnegie Mellon University
E18-304

Abstract: Energy-based models are a recent class of probabilistic generative models wherein the distribution being learned is parametrized up to a constant of proportionality (i.e. a partition function). Fitting such models using maximum likelihood (i.e. finding the parameters which maximize the probability of the observed data) is computationally challenging, as evaluating the partition function involves a high-dimensional integral. Thus, newer incarnations of this paradigm instead minimize other losses that obviate the need to evaluate partition functions. Prominent examples include score matching (in which we fit…
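
As a hedged sketch of the score-matching idea (standard background, not the talk's results): the score d/dx log p_theta(x) is free of the partition function, and Hyvarinen's objective can be estimated from samples alone. The Gaussian model and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)    # hypothetical data, true mean 2

def sm_loss(mu, x):
    # Hyvarinen objective E[ s'(x) + 0.5 * s(x)**2 ] for the model N(mu, 1),
    # whose score s(x) = -(x - mu) involves no partition function.
    score = -(x - mu)
    dscore = -1.0                            # derivative of the score in x
    return np.mean(dscore + 0.5 * score ** 2)

mus = np.linspace(0.0, 4.0, 81)
best = mus[np.argmin([sm_loss(m, data) for m in mus])]
print("score-matching estimate of mu:", best)
```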


Spectral pseudorandomness and the clique number of the Paley graph

Dmitriy (Tim) Kunisky, Yale University
E18-304

Abstract: The Paley graph is a classical number-theoretic construction of a graph that is believed to behave "pseudorandomly" in many regards. Accurately bounding the clique number of the Paley graph is a long-standing open problem in number theory, with applications to several other questions about the statistics of finite fields. I will present recent results studying the application of convex optimization and spectral graph theory to this problem, which involve understanding the extent to which the Paley graph is "spectrally…
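
For readers unfamiliar with the construction, here is a small Python sketch of the standard definition of the Paley graph together with a brute-force clique search; the toy size p = 13 is chosen only so that exhaustive search terminates, and none of this reflects the talk's techniques.

```python
from itertools import combinations

p = 13                                         # toy prime with p = 1 (mod 4)
residues = {(x * x) % p for x in range(1, p)}  # nonzero quadratic residues

# x ~ y iff x - y is a nonzero quadratic residue; since p = 1 (mod 4),
# -1 is itself a residue, so the adjacency relation is symmetric.
adj = {v: {w for w in range(p) if w != v and (v - w) % p in residues}
       for v in range(p)}

def clique_number():
    # Exhaustive search, feasible only at toy sizes; the open problem is
    # bounding this quantity as p grows.
    for k in range(p, 0, -1):
        for c in combinations(range(p), k):
            if all(b in adj[a] for a, b in combinations(c, 2)):
                return k

print("clique number of Paley(13):", clique_number())
```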


WiDS Cambridge 2023

Microsoft NERD Center

WiDS Cambridge is a hybrid one-day technical conference that will feature an all-female lineup of speakers from academia and industry talking about the latest data science-related research in a number of domains.


Spectral Independence: A New Tool to Analyze Markov Chains

Kuikui Liu, University of Washington
E18-304

Abstract: Sampling from high-dimensional probability distributions is a fundamental and challenging problem encountered throughout science and engineering. One of the most popular approaches to tackle such problems is the Markov chain Monte Carlo (MCMC) paradigm. While MCMC algorithms are often simple to implement and widely used in practice, analyzing the rate of convergence to stationarity, i.e. the "mixing time", remains a challenging problem in many settings. I will describe a new technique based on pairwise correlations called "spectral independence", which has been…
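
As an illustration of the kind of chain such techniques analyze (a minimal sketch, not the talk's method), the following Python code runs Glauber dynamics, a single-site MCMC scheme, for the Ising model on a cycle; the size and inverse temperature are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50, 0.3                     # arbitrary size and inverse temperature
spins = rng.choice([-1, 1], size=n)   # Ising configuration on a cycle

for step in range(20_000):
    i = int(rng.integers(n))
    # Resample spin i from its conditional law given its two neighbors:
    field = beta * (spins[(i - 1) % n] + spins[(i + 1) % n])
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
    spins[i] = 1 if rng.random() < p_plus else -1

# The mixing time asks how many such updates are needed to get close to the
# stationary Gibbs distribution; spectral independence bounds it through the
# decay of pairwise correlations between sites.
print("mean magnetization:", spins.mean())
```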


Geometric EDA for Random Objects

Paromita Dubey, University of Southern California
E18-304

Abstract: In this talk I will propose new tools for the exploratory data analysis of data objects taking values in a general separable metric space. First, I will introduce depth profiles, where the depth profile of a point ω in the metric space refers to the distribution of the distances between ω and the data objects. I will describe how depth profiles can be harnessed to define transport ranks, which capture the centrality of each element in the metric space with respect to the…
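
A minimal sketch of the depth-profile definition, assuming the metric space is simply (R^2, Euclidean distance): for each point w, the profile is the empirical distribution of distances from w to the data objects. The profile mean below is a crude centrality proxy of my own, not the transport ranks defined in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))       # hypothetical data objects in R^2

def depth_profile(w, data):
    """Empirical distribution of distances from w to the data objects."""
    return np.linalg.norm(data - w, axis=1)

# A central point has a profile concentrated on small distances:
center, outlier = np.zeros(2), np.array([4.0, 4.0])
print("mean distance from center :", depth_profile(center, data).mean())
print("mean distance from outlier:", depth_profile(outlier, data).mean())
```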


Variational methods in reinforcement learning

Martin Wainwright, MIT
E18-304

Abstract: Reinforcement learning is the study of models and procedures for optimal sequential decision-making under uncertainty.  At its heart lies the Bellman optimality operator, whose unique fixed point specifies an optimal policy and value function.  In this talk, we discuss two classes of variational methods that can be used to obtain approximate solutions with accompanying error guarantees.  For policy evaluation problems based on on-line data, we present Krylov-Bellman boosting, which combines ideas from Krylov methods with non-parametric boosting.  For policy optimization problems based on…
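
For context, a minimal sketch of the Bellman optimality operator and its fixed point on a hypothetical two-state MDP follows; plain value iteration like this comes with no error guarantees, which is what the variational methods in the talk are designed to provide.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: P[s, a] is the next-state distribution
# and R[s, a] the expected reward; gamma is the discount factor.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

def bellman(V):
    # (T V)(s) = max_a [ R(s, a) + gamma * sum_t P(t | s, a) V(t) ]
    return np.max(R + gamma * np.einsum('sat,t->sa', P, V), axis=1)

V = np.zeros(2)
for _ in range(500):        # T is a gamma-contraction, so iterating it
    V = bellman(V)          # converges to its unique fixed point V*
print("optimal state values:", V)
```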


