
Topics in Information and Inference Seminar

Guy Bresler (MIT)
32-D677

This seminar consists of a series of lectures each followed by a period of informal discussion and social. The topics are at the nexus of information theory, inference, causality, estimation, and non-convex optimization. The lectures are intended to be tutorial in nature with the goal of learning about interesting and exciting topics rather than merely hearing about the most recent results. The topics are driven by the interests of the speakers, and with the exception of the two lectures on…

Algorithmic thresholds for tensor principal component analysis

Aukosh Jagannath (Harvard University)
E18-304

Abstract: Consider the problem of recovering a rank-1 tensor of order k that has been subject to Gaussian noise. The log-likelihood for this problem is highly non-convex. It is information-theoretically possible to recover the tensor with a finite number of samples via maximum likelihood estimation; however, it is expected that one needs a polynomially diverging number of samples to efficiently recover it. What is the cause of this large statistical-to-algorithmic gap? To study this question, we investigate the…
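The spiked-tensor observation model in the abstract is easy to simulate. The sketch below (dimensions and signal strength are illustrative choices, not from the talk) plants a rank-1 spike in Gaussian noise and runs tensor power iteration, a standard heuristic for the non-convex maximum-likelihood problem, from a warm start near the planted vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 30, 5.0  # dimension and signal strength (illustrative choices)

# Planted unit vector and spiked order-3 tensor Y = lam * v⊗v⊗v + noise
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
Y = lam * np.einsum("i,j,k->ijk", v, v, v) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)

# Tensor power iteration from a warm start near v; from a random start it
# typically fails at this signal level, illustrating the algorithmic gap
# the abstract asks about.
x = v + 0.3 * rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(50):
    x = np.einsum("ijk,j,k->i", Y, x, x)  # gradient-like map Y(·, x, x)
    x /= np.linalg.norm(x)

print("overlap |<x, v>|:", abs(x @ v))
```

With a warm start the iteration locks onto the planted direction; rerunning with a uniformly random initialization shows the overlap staying near zero at the same signal strength.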

Topics in Information and Inference Seminar

Abbas El Gamal (Stanford University)


On the cover time of two classes of graphs

Alan Frieze (Carnegie Mellon University)
E18-304

Abstract: Dense Graphs: We consider arbitrary graphs G with n vertices and minimum degree at least δn, where δ > 0 is a constant. If the conductance of G is sufficiently large, then we obtain an asymptotic expression for the cover time C_G of G as the solution to an explicit transcendental equation. Failing this, if the mixing time of a random walk on G is of a lesser magnitude than the cover time, then we can obtain an asymptotic deterministic…
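Cover times are straightforward to estimate by simulation. The sketch below (not from the talk; the graph and sizes are illustrative) measures the empirical cover time of a simple random walk on the densest example, the complete graph K_n, where the classical coupon-collector argument gives a cover time asymptotic to n log n:

```python
import math
import random

def cover_time(adj, start=0, rng=None):
    """Number of steps for a simple random walk on adjacency lists `adj`
    to visit every vertex at least once."""
    rng = rng or random.Random()
    seen, v, steps = {start}, start, 0
    while len(seen) < len(adj):
        v = rng.choice(adj[v])  # move to a uniformly random neighbor
        seen.add(v)
        steps += 1
    return steps

# Dense example: the complete graph K_n, with cover time ~ n log n
n = 200
complete = [[u for u in range(n) if u != v] for v in range(n)]
trials = [cover_time(complete, rng=random.Random(seed)) for seed in range(20)]
mean = sum(trials) / len(trials)
print(f"mean cover time: {mean:.0f}   n log n: {n * math.log(n):.0f}")
```

The empirical mean lands close to n log n, matching the kind of asymptotic expression the abstract describes for sufficiently well-connected dense graphs.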

Topics in Information and Inference Seminar

Abbas El Gamal (Stanford University)

This lecture is the second of two; the first was given Thursday, October 25th. Title: Randomness and Information I and II. Abstract: Exact or approximate generation of random variables with prescribed statistics from a given randomness source has many important applications, including random number generation from physical sources, Monte Carlo simulations, and randomized algorithms, e.g., for cryptography, optimization, and machine learning. It is also closely related to several fundamental questions in information theory, CS theory, and quantum information. The…

Joint estimation of parameters in the Ising model

Sumit Mukherjee (Columbia University)
E18-304

Abstract: Inference in the framework of Ising models has received significant attention in Statistics and Machine Learning in recent years. In this talk we study joint estimation of the inverse temperature parameter β and the magnetization parameter B, given one realization from the Ising model, under the assumption that the underlying graph of the Ising model is completely specified. We show that if the graph is either irregular or sparse, then both parameters can be estimated at rate n^{-1/2}…
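A toy version of this estimation problem (not the setting of the talk; the graph, sample mechanism, and parameter values are all illustrative) can be run in a few lines: draw one sample from an Ising model on a known sparse graph, here the n-cycle, via Gibbs sampling, then jointly recover β and B by maximizing the pseudolikelihood over a grid:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, B = 1000, 0.3, 0.2   # n-cycle; true parameters (illustrative values)

# Draw one sample from the Ising model on the cycle via Gibbs sampling;
# P(sigma_i = +1 | rest) = 1 / (1 + exp(-2 (beta * m_i + B)))
sigma = rng.choice([-1, 1], size=n)
for _ in range(200):                                  # burn-in sweeps
    for i in range(n):
        m = sigma[(i - 1) % n] + sigma[(i + 1) % n]   # neighbor spin sum
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * (beta * m + B)))
        sigma[i] = 1 if rng.random() < p_plus else -1

# Joint maximum-pseudolikelihood estimate over a grid, using
# log P(sigma_i | rest) = -log(1 + exp(-2 sigma_i (b m_i + h)))
m_all = np.roll(sigma, 1) + np.roll(sigma, -1)

def pseudo_ll(b, h):
    return -np.sum(np.log1p(np.exp(-2.0 * sigma * (b * m_all + h))))

grid = np.round(np.arange(0.0, 0.61, 0.02), 2)
beta_hat, B_hat = max(((b, h) for b in grid for h in grid),
                      key=lambda pair: pseudo_ll(*pair))
print(f"beta_hat = {beta_hat:.2f}, B_hat = {B_hat:.2f}")
```

On this sparse graph both estimates land near the truth from a single realization, consistent with the n^{-1/2} rate the abstract states for sparse or irregular graphs.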

Topics in Information and Inference Seminar

Suvrit Sra (MIT)
32-D677


Optimal hypothesis testing for stochastic block models with growing degrees

Zongming Ma (University of Pennsylvania)
E18-304

Abstract: In this talk, we discuss optimal hypothesis testing for distinguishing a stochastic block model from an Erdős-Rényi random graph when the average degree grows to infinity with the graph size. We show that linear spectral statistics based on Chebyshev polynomials of the adjacency matrix can approximate signed cycles of growing lengths when the graph is sufficiently dense. The signed cycles have been shown by Banerjee (2018) to determine the likelihood ratio statistic asymptotically. In this way one achieves sharp…
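The testing problem can be illustrated numerically. The sketch below uses a plain second-eigenvalue contrast rather than the Chebyshev-polynomial statistics of the talk, with illustrative parameters: it samples a two-community stochastic block model and an Erdős-Rényi graph with the same average degree, then compares their adjacency spectra:

```python
import numpy as np

rng = np.random.default_rng(3)
n, a, b = 600, 50.0, 10.0   # illustrative size and degree parameters
p, q = a / n, b / n         # within- / between-community edge probabilities

def sample_adjacency(P):
    """Symmetric 0/1 adjacency matrix with independent edges, probs in P."""
    U = np.triu(rng.random((n, n)), 1)
    A = (U < np.triu(P, 1)) * 1.0
    return A + A.T

labels = np.repeat([1, -1], n // 2)
P_sbm = np.where(np.outer(labels, labels) > 0, p, q)
P_er = np.full((n, n), (p + q) / 2)   # Erdős-Rényi, matched average degree

second = {}
for name, P in [("SBM", P_sbm), ("ER", P_er)]:
    eigs = np.linalg.eigvalsh(sample_adjacency(P))  # ascending order
    second[name] = eigs[-2]
print("second-largest eigenvalues:",
      {k: round(v, 1) for k, v in second.items()})
```

At these (dense, well-separated) parameters the SBM's second eigenvalue detaches from the bulk while the Erdős-Rényi one stays near the semicircle edge; near the detection threshold this naive statistic fails, which is where the likelihood-ratio-approximating statistics in the talk matter.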

Topics in Information and Inference Seminar

Devavrat Shah (MIT)
32-D677


Model-X knockoffs for controlled variable selection in high dimensional nonlinear regression

Lucas Janson (Harvard University)
E18-304

Abstract: Many contemporary large-scale applications, from genomics to advertising, involve linking a response of interest to a large set of potential explanatory variables in a nonlinear fashion, such as when the response is binary. Although this modeling problem has been extensively studied, it remains unclear how to effectively select important variables while controlling the fraction of false discoveries, even in high-dimensional logistic regression, not to mention general high-dimensional nonlinear models. To address such a practical problem, we propose a new…
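For independent Gaussian features, the model-X knockoff construction is especially simple: an independent fresh copy of the design matrix is a valid knockoff matrix. The sketch below is a linear-response toy with marginal-correlation statistics, not the logistic setting or the statistics of the paper, and all dimensions are illustrative; it runs the knockoff+ filter at a target false discovery rate of 20%:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, k, q = 500, 100, 10, 0.2   # samples, features, true signals, target FDR

# Design with independent N(0,1) features; for this distribution an
# independent copy has the exchangeability property required of knockoffs.
X = rng.standard_normal((n, p))
X_knock = rng.standard_normal((n, p))

beta = np.zeros(p)
beta[:k] = 1.0                   # first k features carry signal
y = X @ beta + rng.standard_normal(n)

# Feature statistic: importance of X_j minus importance of its knockoff
W = np.abs(X.T @ y) - np.abs(X_knock.T @ y)

# Knockoff+ threshold: smallest t among |W_j| with
# (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q
tau = np.inf
for t in np.sort(np.abs(W)):
    if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
        tau = t
        break
selected = np.flatnonzero(W >= tau)
print("selected features:", selected)
```

The sign-symmetry of the null W_j is what makes the negative tail a stand-in for false positives, so the selected set controls the FDR without any model for y given X beyond the feature distribution.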

Find out more »


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764