
Walking within growing domains: recurrence versus transience

Amir Dembo (Stanford University)
4-163

Abstract: When is a simple random walk on d-dimensional domains that grow in time recurrent? For domain growth that is independent of the walk, we review recent progress and related universality conjectures about a sharp recurrence-versus-transience criterion in terms of the growth rate. We compare this with the question of recurrence/transience for time-varying conductance models, where Gaussian heat kernel estimates and evolving sets play an important role. We also briefly contrast such expected universality with examples of the rich…
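
A toy simulation can make the dichotomy concrete (a sketch of our own, not the talk's criterion: the rejection rule and the growth exponent alpha are arbitrary illustrative choices). It runs a 3-dimensional simple random walk confined to a ball of radius t^alpha and counts returns to the origin; faster domain growth should tilt the walk toward transience.

    import random

    def returns_to_origin(alpha=0.3, steps=200_000, d=3, seed=0):
        """Count returns to the origin of a d-dimensional simple random walk
        when steps leaving the ball of radius t**alpha are rejected."""
        rng = random.Random(seed)
        pos = [0] * d
        returns = 0
        for t in range(1, steps + 1):
            axis = rng.randrange(d)
            delta = rng.choice((-1, 1))
            pos[axis] += delta
            if sum(x * x for x in pos) > t ** (2 * alpha):
                pos[axis] -= delta  # move would leave the domain: stay put
            if all(x == 0 for x in pos):
                returns += 1
        return returns

    # Slow growth should keep the walk recurrent-looking; fast growth frees it.
    print(returns_to_origin(alpha=0.2), returns_to_origin(alpha=0.9))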

Find out more »

Optimal lower bounds for universal relation, and for samplers and finding duplicates in streams

Jelani Nelson (Harvard University)
E18-304

Abstract: Consider the following problem: we monitor a sequence of edge insertions and deletions in a graph on n vertices, so there are N = (n choose 2) possible edges (e.g. monitoring a stream of friend accepts/removals on Facebook). At any point someone may say "query()", at which point we must output a random edge that exists in the graph at that time from a distribution that is statistically close to uniform. More specifically, with probability p our edge should come from a distribution close to uniform,…
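
For orientation, the exact linear-space baseline against which such lower bounds are measured can be sketched as follows (our own strawman; space-efficient samplers instead use randomized linear sketches that survive deletions):

    import random
    from collections import Counter

    class NaiveEdgeSampler:
        """Track every edge's multiplicity under insertions/deletions and
        answer query() uniformly among edges currently present. Exact, but
        uses space linear in the number of live edges."""

        def __init__(self):
            self.counts = Counter()

        def update(self, edge, delta):  # delta = +1 insertion, -1 deletion
            self.counts[edge] += delta
            if self.counts[edge] == 0:
                del self.counts[edge]

        def query(self):  # assumes at least one edge is present
            return random.choice(list(self.counts))

    s = NaiveEdgeSampler()
    s.update((1, 2), +1); s.update((2, 3), +1); s.update((1, 2), -1)
    print(s.query())  # only (2, 3) remains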

Find out more »

2017 Charles River Lectures on Probability and Related Topics

The Charles River Lectures on Probability and Related Topics will be hosted by Harvard University on Monday, October 2, 2017 in Cambridge, MA. The lectures are jointly organized by Harvard University, Massachusetts Institute of Technology and Microsoft Research New England for the benefit of the greater Boston area mathematics community. The event features five lectures by distinguished researchers in the areas of probability and related topics. This year's lectures will be delivered by: For questions regarding the event, please email…

Find out more »

Transport maps for Bayesian computation

Youssef Marzouk (MIT)
E18-304

Abstract: Integration against an intractable probability measure is among the fundamental challenges of Bayesian inference. A useful approach to this problem seeks a deterministic coupling of the measure of interest with a tractable "reference" measure (e.g., a standard Gaussian). This coupling is induced by a transport map, and enables direct simulation from the desired measure simply by evaluating the transport map at samples from the reference. Approximate transports can also be used to "precondition" standard Monte Carlo schemes. Yet characterizing a…
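
A one-dimensional toy example shows the mechanics (ours, not the talk's; for a Gaussian target the exact transport map is affine):

    import numpy as np

    # If T pushes the standard-Gaussian reference forward to the target,
    # sampling the target reduces to evaluating T at reference draws.
    mu, sigma = 2.0, 0.5
    T = lambda z: mu + sigma * z  # exact transport onto N(mu, sigma^2)

    z = np.random.default_rng(0).standard_normal(100_000)  # reference samples
    x = T(z)                                               # target samples
    print(x.mean(), x.std())  # close to 2.0 and 0.5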

Find out more »

Additivity of Information in Deep Generative Networks: The I-MMSE Transform Method

Galen Reeves (Duke University)
E18-304

Abstract: Deep generative networks are powerful probabilistic models that consist of multiple stages of linear transformations (described by matrices) and non-linear, possibly random, functions (described generally by information channels). These models have gained great popularity due to their ability to characterize complex probabilistic relationships arising in a wide variety of inference problems. In this talk, we introduce a new method for analyzing the fundamental limits of statistical inference in settings where the model is known. The validity of our method can…
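
For background, the transform in the title refers to the I-MMSE relation of Guo, Shamai, and Verdú for the Gaussian channel, which ties mutual information to estimation error (stated here in our own notation, with information measured in nats):

    \frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\,
      I\bigl(X;\ \sqrt{\mathrm{snr}}\,X + N\bigr)
      = \tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
    \qquad N \sim \mathcal{N}(0,1)\ \text{independent of}\ X,

    \text{where}\quad
    \mathrm{mmse}(\mathrm{snr})
      = \mathbb{E}\Bigl[\bigl(X - \mathbb{E}[X \mid \sqrt{\mathrm{snr}}\,X + N]\bigr)^{2}\Bigr].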

Find out more »

Structure in multi-index tensor data: a trivial byproduct of simpler phenomena?

John Cunningham (Columbia)
E18-304

Abstract: As large tensor-variate data become increasingly common across applied machine learning and statistics, complex analysis methods for these data similarly increase in prevalence. Such a trend offers the opportunity to understand subtler and more meaningful features of the data that, ostensibly, could not be studied with simpler datasets or simpler methodologies. While promising, these advances are also perilous: novel analysis techniques do not always consider the possibility that their results are in fact an expected consequence of some simpler, already-known…
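
The simplest version of this caution is a surrogate test (a hypothetical sketch of our own; the null models alluded to in the talk are richer): compare a structure statistic on the observed tensor against its distribution when all entries are randomly rearranged, so that only the entry histogram, and no multi-way arrangement, is preserved.

    import numpy as np

    def entry_shuffle_test(stat, X, n_perm=500, seed=0):
        """Crude null model: permute all tensor entries, then ask how often
        the shuffled statistic matches or beats the observed one."""
        rng = np.random.default_rng(seed)
        observed = stat(X)
        null = [stat(rng.permutation(X.ravel()).reshape(X.shape))
                for _ in range(n_perm)]
        return observed, float(np.mean(np.array(null) >= observed))

    X = np.random.default_rng(1).normal(size=(15, 15, 15))
    top_sv = lambda A: np.linalg.svd(A.reshape(15, -1), compute_uv=False)[0]
    print(entry_shuffle_test(top_sv, X))  # pure noise: p-value far from 0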

Find out more »

Inference in dynamical systems and the geometry of learning group actions

Sayan Mukherjee (Duke)
E18-304

Abstract: We examine consistency of the Gibbs posterior for dynamical systems using the thermodynamic formalism, a classical idea from dynamical systems. We state a variational formulation under which there is a unique posterior distribution over parameters as well as hidden states, using classic ideas from dynamical systems such as pressure and joinings. We use consistency for hidden Markov models with infinite lags as an application of our theory. We develop a geometric framework that characterizes…
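
For orientation, the Gibbs posterior replaces the likelihood in Bayes' rule with an exponentiated empirical risk (generic notation, ours rather than the talk's):

    \pi_n(\theta) \;\propto\; \exp\bigl\{-\lambda\, n\, R_n(\theta)\bigr\}\, \pi_0(\theta),

where \(R_n\) is an empirical risk computed from the observed trajectory, \(\lambda > 0\) is an inverse temperature, and \(\pi_0\) is the prior; consistency asks whether \(\pi_n\) concentrates on the risk minimizers as \(n \to \infty\).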

Find out more »

On Learning Theory and Neural Networks

Amit Daniely (Google)
E18-304

Abstract: Can learning theory, as we know it today, form a theoretical basis for neural networks? I will try to discuss this question in light of two new results -- one positive and one negative. Based on joint work with Roy Frostig, Vineet Gupta and Yoram Singer, and with Vitaly Feldman. Biography: Amit Daniely is an Assistant Professor at the Hebrew University of Jerusalem, and a research scientist at Google Research, Tel Aviv. Prior to that, he was a research scientist at Google Research, Mountain View. Even…

Find out more »

Unbiased Markov chain Monte Carlo with couplings

Pierre Jacob (Harvard)
E18-304

Abstract: Markov chain Monte Carlo methods provide consistent approximations of integrals as the number of iterations goes to infinity. However, these estimators are generally biased after any fixed number of iterations, which complicates parallel computation. In this talk I will explain how to remove this burn-in bias by using couplings of Markov chains and a telescopic sum argument, inspired by Glynn & Rhee (2014). The resulting unbiased estimators can be computed independently in parallel, and averaged. I will present…
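
A minimal runnable sketch of the debiasing idea (the chain and its coupling below are our own toy choices, not the talk's algorithms): run two copies of the chain offset by one step, coupled so that they meet at a finite time tau, and correct the time-k estimate by a telescoping sum whose terms vanish once the chains have met.

    import random

    def unbiased_estimate(h, eps=0.2, m=10, k=20, seed=0):
        """Returns H = h(X_k) + sum_{t=k+1}^{tau-1} [h(X_t) - h(Y_{t-1})],
        unbiased for the stationary expectation of h under the usual
        conditions (geometric meeting time, faithful coupling)."""
        rng = random.Random(seed)

        def step(x, u1, u2):
            # With prob eps jump to a SHARED uniform point (a Doeblin
            # coupling, which forces the two chains to meet); otherwise
            # take a lazy reflected +/-1 move on {0, ..., m-1}.
            if u1 < eps:
                return int(u2 * m)
            return min(max(x + (-1, 0, 1)[int(u2 * 3)], 0), m - 1)

        X = step(0, rng.random(), rng.random())  # X_1, one step ahead of Y
        Y = 0                                    # Y_0
        total, t = (h(X) if k == 1 else 0.0), 1
        while t < k or X != Y:
            u1, u2 = rng.random(), rng.random()  # shared randomness
            X, Y = step(X, u1, u2), step(Y, u1, u2)
            t += 1
            if t == k:
                total += h(X)
            elif t > k:
                total += h(X) - h(Y)  # zero once the chains have met
        return total

    # Independent replicates can be averaged, embarrassingly in parallel.
    ests = [unbiased_estimate(lambda x: x, seed=s) for s in range(2000)]
    print(sum(ests) / len(ests))  # estimates the chain's stationary mean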

Find out more »

Statistics, Computation and Learning with Graph Neural Networks

Joan Bruna Estrach (NYU)
E18-304

Abstract: Deep Learning, thanks mostly to Convolutional architectures, has recently transformed computer vision and speech recognition. Their ability to encode geometric stability priors, while offering enough expressive power, is at the core of their success. In such settings, geometric stability is expressed in terms of local deformations, and it is enforced thanks to localized convolutional operators that separate the estimation into scales. Many problems across applied sciences, from particle physics to recommender systems, are formulated in terms of signals defined over…
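
As one concrete instance of such a localized operator, here is a minimal graph-convolution layer in the style of Kipf and Welling (a common construction, offered as our own illustration rather than the talk's architecture):

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph-convolution layer: H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W).
        Features mix only along edges, so the operator is localized and
        equivariant to relabelings of the nodes."""
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d = A_hat.sum(axis=1)
        A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization
        return np.maximum(A_norm @ H @ W, 0.0)

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path on 3 nodes
    H = np.random.default_rng(0).normal(size=(3, 4))        # node features
    W = np.random.default_rng(1).normal(size=(4, 2))        # layer weights
    print(gcn_layer(A, H, W).shape)  # (3, 2)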

Find out more »

