Democracy and the Pursuit of Randomness

Ariel Procaccia, Harvard University
E18-304

Abstract: Sortition is a storied paradigm of democracy built on the idea of choosing representatives through lotteries instead of elections. In recent years this idea has found renewed popularity in the form of citizens’ assemblies, which bring together randomly selected people from all walks of life to discuss key questions and deliver policy recommendations. A principled approach to sortition, however, must resolve the tension between two competing requirements: that the demographic composition of citizens’ assemblies reflect the general population and…
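
As a purely illustrative aside, the tension between lottery-based selection and demographic representativeness can be seen in the naive sketch below, which draws panels by rejection sampling against quotas. The pool format, feature encoding, and quota representation are assumptions made here for illustration; this is not the fair selection algorithm developed in the speaker's work.

```python
import random

def sample_panel(pool, k, quotas, max_tries=100_000, rng=random):
    """Naive quota-constrained lottery: repeatedly draw a uniformly random
    panel of size k from `pool` (a list of dicts of demographic features)
    and accept the first one whose composition satisfies `quotas`, a dict
    mapping (feature, value) -> (min_count, max_count).  Purely illustrative;
    it says nothing about fairness to individual volunteers."""
    for _ in range(max_tries):
        panel = rng.sample(pool, k)
        counts = {}
        for person in panel:
            for feature, value in person.items():
                counts[(feature, value)] = counts.get((feature, value), 0) + 1
        if all(low <= counts.get(key, 0) <= high
               for key, (low, high) in quotas.items()):
            return panel
    raise RuntimeError("no quota-compatible panel found")
```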


Regularized modified log-Sobolev inequalities, and comparison of Markov chains

Konstantin Tikhomirov, Georgia Institute of Technology
E18-304

Abstract: In this work, we develop a comparison procedure for the Modified log-Sobolev Inequality (MLSI) constants of two reversible Markov chains on a finite state space. As an application, we provide a sharp estimate of the MLSI constant of the switch chain on the set of simple bipartite regular graphs of size n with a fixed degree d. Our estimate implies that the total variation mixing time of the switch chain is of order O(n log(n)). The result is optimal up to a multiple…
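
As background for the switch chain mentioned above, here is a minimal sketch of a single switch move on a simple bipartite graph with fixed degrees. The edge-set representation and the rejection rule are illustrative assumptions; the exact proposal distribution and any lazification used in the talk may differ.

```python
import random

def switch_step(edges):
    """One step of the switch (edge-swap) chain on a simple bipartite graph.
    `edges` is a set of (left_vertex, right_vertex) pairs.  The move replaces
    edges (u1, v1), (u2, v2) by (u1, v2), (u2, v1), and is rejected whenever
    it would create a multi-edge, so degrees and simplicity are preserved."""
    (u1, v1), (u2, v2) = random.sample(list(edges), 2)
    if u1 == u2 or v1 == v2:
        return edges                      # degenerate swap; stay put
    if (u1, v2) in edges or (u2, v1) in edges:
        return edges                      # swap would duplicate an edge; stay put
    new_edges = set(edges)
    new_edges.difference_update({(u1, v1), (u2, v2)})
    new_edges.update({(u1, v2), (u2, v1)})
    return new_edges
```

Iterating this step from any simple bipartite graph with the prescribed degree sequence gives a chain of the kind whose mixing time the abstract bounds.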


Efficient derivative-free Bayesian inference for large-scale inverse problems

Jiaoyang Huang, University of Pennsylvania
E18-304

Abstract: We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model, which is often given as a black box or is impractical to differentiate. In this talk I will propose a new derivative-free algorithm, Unscented Kalman Inversion, which uses ideas from the Kalman filter to efficiently solve these inverse problems. First, I will explain some basics about Variational Inference under general metric tensors. In particular, under the…
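
To give a flavor of a derivative-free Kalman-type update, the sketch below implements one iteration of plain ensemble Kalman inversion with perturbed observations, a relative of the unscented variant discussed in the talk rather than the talk's algorithm itself. The names `forward` (the black-box forward model), `y` (the data), and `Gamma` (the observation noise covariance) are placeholders assumed here.

```python
import numpy as np

def eki_step(ensemble, forward, y, Gamma, rng):
    """One perturbed-observation ensemble Kalman inversion update.
    `ensemble` is a (J, n_param) array of particles; `forward` maps a
    parameter vector to an n_obs-dimensional prediction.  No derivatives
    of `forward` are needed: the gain is built from ensemble covariances."""
    U = np.asarray(ensemble, dtype=float)             # (J, n_param)
    G = np.array([forward(u) for u in U])             # (J, n_obs)
    du = U - U.mean(axis=0)
    dg = G - G.mean(axis=0)
    C_ug = du.T @ dg / len(U)                         # parameter-observation covariance
    C_gg = dg.T @ dg / len(U)                         # observation covariance
    K = C_ug @ np.linalg.inv(C_gg + Gamma)            # Kalman-type gain
    noise = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=len(U))
    return U + (y + noise - G) @ K.T                  # updated ensemble
```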


Maximum likelihood for high-noise group orbit estimation and cryo-EM

Zhou Fan, Yale University
E18-304

Abstract: Motivated by applications to single-particle cryo-electron microscopy, we study a problem of group orbit estimation where samples of an unknown signal are observed under uniform random rotations from a rotational group. In high-noise settings, we show that geometric properties of the log-likelihood function are closely related to algebraic properties of the invariant algebra of the group action. Eigenvalues of the Fisher information matrix are stratified according to a sequence of transcendence degrees in this invariant algebra, and critical points…
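
In symbols, the observation model described above can be written as follows; the notation is illustrative and the talk's normalization may differ:

\[
  Y_i \;=\; g_i \cdot \theta \;+\; \sigma\,\varepsilon_i,
  \qquad g_i \overset{\mathrm{iid}}{\sim} \mathrm{Unif}(G),
  \qquad \varepsilon_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, I),
\]

where \(\theta\) is the unknown signal, \(G\) is the rotational group acting on it, and the high-noise regime corresponds to large \(\sigma\).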


Sampling from the SK measure via algorithmic stochastic localization

Ahmed El Alaoui, Cornell University
E18-304

Abstract: I will present an algorithm which efficiently samples from the Sherrington-Kirkpatrick (SK) measure with no external field at high temperature. The approach is based on the stochastic localization process of Eldan, together with a subroutine for computing the mean vectors of a family of SK measures tilted by an appropriate external field. This approach is general and can potentially be applied to other discrete or continuous non-log-concave problems. We show that the algorithm outputs a sample within vanishing rescaled Wasserstein…
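
To make the localization idea concrete, here is a generic Euler-Maruyama sketch of the stochastic localization SDE, with the mean computation left as an abstract `mean_oracle` placeholder. In the algorithm described in the talk this oracle is implemented with an AMP subroutine and the discretization and rounding are chosen carefully; none of that is reproduced here.

```python
import numpy as np

def localization_sampler(mean_oracle, n, T=10.0, n_steps=1000, rng=None):
    """Euler-Maruyama discretization of the stochastic localization process
        dy_t = m(y_t, t) dt + dB_t,
    where mean_oracle(y, t) approximates the mean of the target measure
    tilted by the external field y at time t.  The final mean is rounded
    to a point of the hypercube {-1, +1}^n."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    y = np.zeros(n)
    for k in range(n_steps):
        m = mean_oracle(y, k * dt)
        y = y + m * dt + np.sqrt(dt) * rng.standard_normal(n)
    return np.sign(mean_oracle(y, T))
```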


Inference in High Dimensions for (Mixed) Generalized Linear Models: the Linear, the Spectral and the Approximate

Marco Mondelli, Institute of Science and Technology Austria
E18-304

Abstract: In a generalized linear model (GLM), the goal is to estimate a d-dimensional signal x from an n-dimensional observation of the form f(Ax, w), where A is a design matrix and w is a noise vector. Well-known examples of GLMs include linear regression, phase retrieval, 1-bit compressed sensing, and logistic regression. We focus on the high-dimensional setting in which both the number of measurements n and the signal dimension d diverge, with their ratio tending to a fixed constant.…
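
For context, the "spectral" estimator in this line of work is typically the top eigenvector of a preprocessed second-moment matrix. The sketch below shows that generic construction; the preprocessing function is left unspecified, and the optimal choices analyzed in the talk are not reproduced here.

```python
import numpy as np

def spectral_estimate(A, y, preprocess):
    """Generic spectral estimator for a GLM with design A (n x d) and
    observations y (n,): form D = (1/n) * sum_i T(y_i) a_i a_i^T for a
    scalar preprocessing function T and return the top eigenvector of D
    as an estimate of the signal direction."""
    n, _ = A.shape
    T = preprocess(y)                         # (n,) preprocessed observations
    D = (A * T[:, None]).T @ A / n            # (d, d) weighted second-moment matrix
    _, eigvecs = np.linalg.eigh(D)            # eigenvalues in ascending order
    return eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
```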


Structural Deep Learning in Financial Asset Pricing

Jianqing Fan, Princeton University
E18-304

Abstract: We develop new structural nonparametric methods, guided by financial economics theory, for estimating conditional asset pricing models using deep neural networks, employing time-varying conditional information on alphas and betas carried by firm-specific characteristics. Contrary to many applications of neural networks in economics, we can open the “black box” of machine learning predictions by incorporating financial economics theory into the learning, and provide an economic interpretation of the successful predictions obtained from neural networks, by decomposing the neural predictors as…
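
One illustrative way to write such a conditional asset pricing model, with generic notation that may differ from the talk's exact specification, is

\[
  r_{i,t+1} \;=\; \alpha(z_{i,t}) \;+\; \beta(z_{i,t})^{\top} f_{t+1} \;+\; \varepsilon_{i,t+1},
\]

where \(z_{i,t}\) are firm-specific characteristics, \(f_{t+1}\) are factors, and the functions \(\alpha(\cdot)\) and \(\beta(\cdot)\) are estimated with deep neural networks.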


Distance-based summaries and modeling of evolutionary trees

Julia Palacios, Stanford University
E18-304

Abstract: Phylogenetic trees are mathematical objects of great importance used to model hierarchical data and evolutionary relationships, with applications in many fields including evolutionary biology and genetic epidemiology. Bayesian phylogenetic inference usually explores the posterior distribution of trees via Markov chain Monte Carlo methods; however, assessing uncertainty and summarizing distributions remain challenging for these types of structures. In this talk I will first introduce a distance metric on the space of unlabeled ranked tree shapes and genealogies. I will then…


Coding convex bodies under Gaussian noise, and the Wills functional

Jaouad Mourtada, ENSAE Paris
E18-304

Abstract: We consider the problem of sequential probability assignment in the Gaussian setting, where one aims to predict (or equivalently compress) a sequence of real-valued observations almost as well as the best Gaussian distribution with mean constrained to a general domain. First, in the case of a convex constraint set K, we express the hardness of the prediction problem (the minimax regret) in terms of the intrinsic volumes of K. We then establish a comparison inequality for the minimax regret…
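
For reference, the Wills functional named in the title admits the classical representation

\[
  W(K) \;=\; \int_{\mathbb{R}^d} e^{-\pi\,\mathrm{dist}(x,K)^2}\, dx
        \;=\; \sum_{j=0}^{d} V_j(K),
\]

where \(V_j(K)\) is the \(j\)-th intrinsic volume of the convex body \(K \subset \mathbb{R}^d\); the normalization used in the talk may differ.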


Inference for Longitudinal Data After Adaptive Sampling

Susan Murphy, Harvard University
E18-304

Abstract: Adaptive sampling methods, such as reinforcement learning (RL) and bandit algorithms, are increasingly used for the real-time personalization of interventions in digital applications like mobile health and education. As a result, there is a need to be able to use the resulting adaptively collected user data to address a variety of inferential questions, including questions about time-varying causal effects. However, current methods for statistical inference on such data (a) make strong assumptions regarding the environment dynamics, e.g., assume the…


