Past Events

October 2017

On Learning Theory and Neural Networks

Amit Daniely (Google)

October 27, 2017 @ 11:00 am - 12:00 pm
E18-304

Abstract: Can learning theory, as we know it today, form a theoretical basis for neural networks? I will try to discuss this question in light of two new results -- one positive and one negative. Based on joint work with Roy Frostig, Vineet Gupta and Yoram Singer, and with Vitaly Feldman.

Biography: Amit Daniely is an Assistant Professor at the Hebrew University in Jerusalem, and a research scientist at Google Research, Tel-Aviv. Prior to that, he was a research scientist at Google Research, Mountain View. Even…

November 2017

Unbiased Markov chain Monte Carlo with couplings

Pierre Jacob (Harvard)

November 1, 2017 @ 11:00 am - 12:00 pm
E18-304

Abstract: Markov chain Monte Carlo methods provide consistent approximations of integrals as the number of iterations goes to infinity. However, these estimators are generally biased after any fixed number of iterations, which complicates parallel computation. In this talk I will explain how to remove this burn-in bias by using couplings of Markov chains and a telescopic sum argument, inspired by Glynn & Rhee (2014). The resulting unbiased estimators can be computed independently in parallel, and averaged. I will present…
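
To make the telescopic-sum idea concrete, here is one standard way to write the coupling-based debiasing construction (a sketch in my own notation, in the Glynn & Rhee style; the estimator presented in the talk may differ in its details). Take two chains (X_t) and (Y_t) with the same transition kernel and stationary distribution pi, coupled so that they meet at an almost surely finite time tau and stay equal afterwards, i.e. X_t = Y_{t-1} for all t >= tau. Assuming E[h(X_t)] converges to the stationary expectation and the exchange of sum and expectation is justified,

\[
  \mathbb{E}_\pi[h]
  \;=\; \mathbb{E}[h(X_k)] + \sum_{t=k+1}^{\infty} \mathbb{E}\bigl[h(X_t) - h(X_{t-1})\bigr]
  \;=\; \mathbb{E}\Bigl[\, h(X_k) + \sum_{t=k+1}^{\tau-1} \bigl(h(X_t) - h(Y_{t-1})\bigr) \Bigr],
\]

because Y_{t-1} has the same distribution as X_{t-1} and the summands vanish once the chains have met. The bracketed quantity is therefore an unbiased estimator that can be computed in finite time; independent replicates can be run in parallel and averaged.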


Statistics, Computation and Learning with Graph Neural Networks

Joan Bruna Estrach (NYU)

November 3, 2017 @ 11:00 am - 12:00 pm
E18-304

Abstract: Deep Learning, thanks mostly to Convolutional architectures, has recently transformed computer vision and speech recognition. The ability of these architectures to encode geometric stability priors, while offering enough expressive power, is at the core of their success. In such settings, geometric stability is expressed in terms of local deformations, and it is enforced thanks to localized convolutional operators that separate the estimation into scales. Many problems across applied sciences, from particle physics to recommender systems, are formulated in terms of signals defined over…
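
As a rough illustration of what a convolution-like operator on graph-structured signals can look like, here is a minimal NumPy sketch of one common formulation (symmetrically normalized adjacency propagation followed by a learned linear map). This is a generic example, not necessarily the architecture discussed in the talk.

import numpy as np

def graph_conv(A, X, W):
    """One simplified graph-convolution layer: propagate node features X over
    the graph A using a symmetrically normalized adjacency (with self-loops),
    then apply a learned linear map W and a ReLU nonlinearity."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt      # normalized propagation matrix
    return np.maximum(S @ X @ W, 0.0)        # aggregate, transform, ReLU

# toy usage: a 4-node cycle graph, 3-dimensional node features, 2 output channels
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
H = graph_conv(A, X, W)   # shape (4, 2)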


Generative Models and Compressed Sensing

Alex Dimakis (University of Texas at Austin)

November 17, 2017 @ 11:00 am - 12:00 pm
E18-304

Abstract: The goal of compressed sensing is to estimate a vector from an under-determined system of noisy linear measurements, by making use of prior knowledge in the relevant domain. For most results in the literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we assume that the unknown vectors lie near the range of a generative model, e.g. a GAN…
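
A hedged sketch of the basic recovery idea (minimize the measurement misfit over the latent code of a generator): the toy generator, dimensions and step size below are made up for illustration, and the algorithms and guarantees in the talk are more refined.

import numpy as np

rng = np.random.default_rng(0)

# toy two-layer "generator" G(z) = W2 @ relu(W1 @ z), mapping a k-dim latent code to an n-dim signal
k, hidden, n, m = 5, 20, 100, 30
W1 = rng.standard_normal((hidden, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, hidden)) / np.sqrt(hidden)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

# under-determined noisy measurements y = A x* + noise, with x* in the range of G
A = rng.standard_normal((m, n)) / np.sqrt(m)
z_true = rng.standard_normal(k)
y = A @ G(z_true) + 0.01 * rng.standard_normal(m)

# recover by plain gradient descent on z, minimizing ||A G(z) - y||^2
z = rng.standard_normal(k)
step = 0.01
for _ in range(2000):
    h = W1 @ z
    a = np.maximum(h, 0.0)
    r = A @ (W2 @ a) - y                  # residual of the current reconstruction
    grad_a = W2.T @ (A.T @ (2.0 * r))     # gradient through the linear output layer
    grad_z = W1.T @ (grad_a * (h > 0))    # ... and through the ReLU
    z = z - step * grad_z

# relative measurement misfit; typically well below 1 when the descent succeeds
print(np.linalg.norm(A @ G(z) - y) / np.linalg.norm(y))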

December 2017

Challenges in Developing Learning Algorithms to Personalize Treatment in Real Time

Susan Murphy (Harvard)

December 1, 2017 @ 11:00 am - 12:00 pm
E18-304

Abstract: A formidable challenge in designing sequential treatments is to determine when and in which context it is best to deliver treatments. Consider treatment for individuals struggling with chronic health conditions. Operationally, designing the sequential treatments involves the construction of decision rules that input the current context of an individual and output a recommended treatment. That is, the treatment is adapted to the individual's context; the context may include current health status, current level of social support and current level of adherence…
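
To fix ideas, the following is a deliberately simple and entirely hypothetical decision rule of the kind described above: it takes an individual's current context as input and outputs a recommended treatment option. In practice such rules are learned from data and randomized rather than hand-coded.

from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical context features for one individual at one decision point."""
    stress_level: float      # e.g., self-reported on a 0-10 scale
    recent_adherence: float  # fraction of prompts followed in the past week
    at_work: bool

def decision_rule(c: Context) -> str:
    """A toy deterministic decision rule: context in, recommended treatment out."""
    if c.at_work:
        return "no_message"                 # avoid interrupting at work
    if c.stress_level >= 7:
        return "stress_management_prompt"
    if c.recent_adherence < 0.5:
        return "motivational_message"
    return "no_message"

print(decision_rule(Context(stress_level=8.0, recent_adherence=0.9, at_work=False)))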


Stochastics and Statistics Seminar

Alex Bloemendal (Broad Institute)

December 8, 2017 @ 11:00 am - 12:00 pm
E18-304

Biography: Alex Bloemendal is a computational scientist at the Broad Institute of MIT and Harvard and at the Analytic and Translational Genetics Unit of Massachusetts General Hospital. As a member of Broad Institute member Ben Neale’s lab, Bloemendal leads a group in developing new methods to analyze genetic data, harnessing its unprecedented scope and scale to discover the genetic causes of disease. He also co-founded and directs the Models, Inference & Algorithms initiative at the Broad, bridging computational biology, mathematical…

February 2018

Connections between structured estimation and weak submodularity

Sahand Negahban (Yale University)

February 2 @ 11:00 am - 12:00 pm
E18-304

Abstract: Many modern statistical estimation problems rely on imposing additional structure in order to reduce the statistical complexity and provide interpretability. Unfortunately, these structures are often combinatorial in nature and result in computationally challenging problems. In parallel, the combinatorial optimization community has put significant effort into developing algorithms that can approximately solve such optimization problems in a computationally efficient manner. The focus of this talk is to expand upon ideas that arise in combinatorial optimization and connect those algorithms and…
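
One concrete bridge of this kind is greedy forward selection for sparse regression, whose approximation guarantees can be analyzed through (weak) submodularity of the goodness-of-fit gain. The sketch below is a generic textbook version of the greedy algorithm, not the specific methods from the talk.

import numpy as np

def greedy_forward_selection(X, y, k):
    """Greedily add, k times, the feature that most reduces the least-squares
    residual sum of squares of the refitted model."""
    selected = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
    return selected

# toy usage: 5 truly relevant features out of 50
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
coef = np.zeros(50)
coef[:5] = 3.0
y = X @ coef + rng.standard_normal(200)
print(sorted(greedy_forward_selection(X, y, 5)))   # typically [0, 1, 2, 3, 4]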


Data Science and Big Data Analytics: Making Data-Driven Decisions

February 5 @ 8:00 am
online

The seven-week course launches February 5, 2018. This course was developed by over ten MIT faculty members at IDSS. It is specially designed for data scientists, business analysts, engineers, and technical managers looking to learn the latest theories and strategies to harness data.


Variable selection using presence-only data with applications to biochemistry

Garvesh Raskutti (University of Wisconsin)

February 9 @ 11:00 am - 12:00 pm
E18-304

Abstract: In a number of problems, we are presented with positive and unlabelled data, referred to as presence-only responses. The application I present today involves studying the relationship between protein sequence and function; presence-only data arises because, for many experiments, it is impossible to obtain a large set of negative (non-functional) sequences. Furthermore, if the number of variables is large and the goal is variable selection (as in this case), a number of statistical and computational challenges arise due…
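
Purely for intuition (a naive baseline, not the methodology of the talk): with presence-only responses one observes some positives and a pool of unlabeled examples, and the crudest starting point is to treat the unlabeled pool as negative and run an l1-penalized logistic regression for variable selection. This illustrates the setup while ignoring the label bias that proper presence-only methods correct for.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# toy data: 1000 "sequences", 30 features, only the first 3 drive functionality
X = rng.standard_normal((1000, 30))
functional = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.standard_normal(1000)) > 0

# presence-only labelling: only a random 30% of functional sequences are ever observed as positive
observed_positive = functional & (rng.random(1000) < 0.3)

# naive baseline: treat unlabeled as negative and select variables via an l1 penalty
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
model.fit(X, observed_positive.astype(int))
print(np.flatnonzero(model.coef_[0]))   # the informative features 0, 1, 2 typically survive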


User-friendly guarantees for the Langevin Monte Carlo

Arnak Dalalyan (ENSAE-CREST)

February 16 @ 11:00 am - 12:00 pm
E18-304

Abstract: In this talk, I will revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. I will discuss the existing results when the accuracy of sampling is measured in the Wasserstein distance and provide further insights on relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. I will also present non-asymptotic guarantees for the accuracy…
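
For reference, a minimal sketch of the unadjusted Langevin iteration these guarantees concern, theta_{k+1} = theta_k - h * grad f(theta_k) + sqrt(2h) * xi_k with xi_k standard Gaussian, targeting a density proportional to exp(-f). The toy Gaussian target and step size below are illustrative only.

import numpy as np

def langevin_monte_carlo(grad_f, theta0, step, n_iter, rng):
    """Unadjusted Langevin algorithm targeting pi(x) proportional to exp(-f(x))."""
    theta = np.array(theta0, dtype=float)
    samples = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        noise = rng.standard_normal(theta.size)
        theta = theta - step * grad_f(theta) + np.sqrt(2.0 * step) * noise
        samples[t] = theta
    return samples

# toy target: standard 2-d Gaussian, f(x) = ||x||^2 / 2, so grad f(x) = x
rng = np.random.default_rng(3)
samples = langevin_monte_carlo(lambda x: x, theta0=[5.0, -5.0], step=0.05,
                               n_iter=20000, rng=rng)
# after discarding a burn-in, the mean and standard deviation should be roughly (0, 0) and (1, 1)
print(samples[2000:].mean(axis=0), samples[2000:].std(axis=0))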
