Calendar of Events

SDSC Special Events: Vladimir Koltchinskii, Georgia Institute of Technology
Stochastics and Statistics Seminar: Edward Kennedy, Carnegie Mellon University
Stochastics and Statistics Seminar: Vinod Vaikuntanathan, MIT
Stochastics and Statistics Seminar: Reza Gheissari, Northwestern University

Estimation of Functionals of High-Dimensional and Infinite-Dimensional Parameters of Statistical Models

Vladimir Koltchinskii, Georgia Institute of Technology
2-449

The mini-course will meet on Monday, April 1 and Wednesday, April 3, from 1:30-3:00pm. It deals with a circle of problems related to the estimation of real-valued functionals of high-dimensional and infinite-dimensional parameters of statistical models. In such problems, it is of interest to estimate one-dimensional features of a high-dimensional parameter represented by nonlinear…

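As a toy illustration of the kind of issue the mini-course addresses (this example is not course material), the Python sketch below estimates the quadratic functional ||theta||^2 of a d-dimensional Gaussian mean from n observations: the naive plug-in estimator carries a bias of d/n, which dominates when d is comparable to or larger than n, and a simple correction removes it. All parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
d, n, reps = 500, 200, 500                    # high dimension relative to sample size
theta = rng.normal(size=d) / np.sqrt(d)       # true parameter, ||theta||^2 is about 1
target = theta @ theta                        # functional of interest: f(theta) = ||theta||^2

plugin_err, corrected_err = [], []
for _ in range(reps):
    X = theta + rng.normal(size=(n, d))       # observations X_i ~ N(theta, I_d)
    xbar = X.mean(axis=0)                     # sample mean, distributed N(theta, I_d / n)
    plugin = xbar @ xbar                      # naive plug-in estimate of ||theta||^2
    corrected = plugin - d / n                # removes the E||xbar - theta||^2 = d/n bias
    plugin_err.append(plugin - target)
    corrected_err.append(corrected - target)

print(f"plug-in   estimator: bias {np.mean(plugin_err):+.3f}, RMSE {np.sqrt(np.mean(np.square(plugin_err))):.3f}")
print(f"corrected estimator: bias {np.mean(corrected_err):+.3f}, RMSE {np.sqrt(np.mean(np.square(corrected_err))):.3f}")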

Optimal nonparametric capture-recapture methods for estimating population size

Edward Kennedy, Carnegie Mellon University
E18-304

Abstract: Estimation of population size using incomplete lists has a long history across many biological and social sciences. For example, human rights groups often construct partial lists of victims of armed conflicts to estimate the total number of victims. Earlier statistical methods for this setup often use parametric assumptions or rely on suboptimal plug-in-type nonparametric estimators;…

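For context on the capture-recapture setup (this is a textbook baseline, not the nonparametric methodology of the talk), the sketch below simulates two independent, incomplete lists of a population and applies the classical Lincoln-Petersen estimator in Chapman's bias-adjusted form. The population size and capture probabilities are made-up values.

import numpy as np

rng = np.random.default_rng(0)
N_true = 5000                        # unknown population size to be recovered
p1, p2 = 0.30, 0.20                  # probability of appearing on each list

# Simulate independent inclusion of each individual on the two lists.
on_list1 = rng.random(N_true) < p1
on_list2 = rng.random(N_true) < p2

n1 = on_list1.sum()                  # size of list 1
n2 = on_list2.sum()                  # size of list 2
m = (on_list1 & on_list2).sum()      # individuals appearing on both lists

# Chapman's bias-adjusted version of the Lincoln-Petersen estimator N ~ n1 * n2 / m.
N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(f"observed on at least one list: {(on_list1 | on_list2).sum()}")
print(f"estimated population size:     {N_hat:.0f}   (true value {N_true})")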

Lattices and the Hardness of Statistical Problems

Vinod Vaikuntanathan (MIT)
E18-304

Abstract: I will describe recent results that (a) show nearly optimal hardness of learning Gaussian mixtures, and (b) give evidence of average-case hardness of sparse linear regression with respect to all efficient algorithms, assuming the worst-case hardness of lattice problems. The talk is based on the following papers with Aparna Gupte and Neekon Vafa: https://arxiv.org/pdf/2204.02550.pdf and https://arxiv.org/pdf/2402.14645.pdf Bio:…


Emergent outlier subspaces in high-dimensional stochastic gradient descent

Reza Gheissari, Northwestern University
E18-304

Abstract: It has been empirically observed that the spectra of neural network Hessians after training have a bulk concentrated near zero and a few outlier eigenvalues. Moreover, the eigenspaces associated with these outliers have been linked to a low-dimensional subspace in which most of the training occurs, and this implicit low-dimensional structure has been used…

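As a small numerical illustration of the bulk-plus-outliers picture described above (a sketch, not code from the talk), the example below trains a multinomial logistic regression with minibatch SGD on synthetic data whose classes have separated means, then computes the exact Hessian of the average cross-entropy loss and inspects its spectrum; on data with this kind of class-mean structure, a few comparatively large eigenvalues typically stand out over a much smaller bulk. The data-generating choices and hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 3000, 20, 3

# Synthetic k-class Gaussian mixture with separated class means.
means = rng.normal(scale=1.5, size=(k, d))
y = rng.integers(0, k, size=n)
X = means[y] + rng.normal(size=(n, d))

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

# Train multinomial logistic regression with plain minibatch SGD.
W = np.zeros((d, k))
lr, batch = 0.05, 32
for _ in range(500):
    idx = rng.integers(0, n, size=batch)
    P = softmax(X[idx] @ W)
    P[np.arange(batch), y[idx]] -= 1.0      # gradient of cross-entropy w.r.t. the logits
    W -= lr * X[idx].T @ P / batch

# Exact Hessian of the average cross-entropy loss w.r.t. vec(W):
# H = (1/n) * sum_i kron(diag(p_i) - p_i p_i^T, x_i x_i^T).
P = softmax(X @ W)
H = np.zeros((d * k, d * k))
for xi, pi in zip(X, P):
    A = np.diag(pi) - np.outer(pi, pi)      # k x k curvature in logit space
    H += np.kron(A, np.outer(xi, xi))
H /= n

eig = np.linalg.eigvalsh(H)[::-1]           # eigenvalues, largest first
print("largest eigenvalues:", eig[:6])
print("median eigenvalue (bulk):", np.median(eig))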


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764