Calendar of Events


SDSC Special Events

Estimation of Functionals of High-Dimensional and Infinite-Dimensional Parameters of Statistical Models

Vladimir Koltchinskii, Georgia Institute of Technology
2-449

The mini-course will meet on Monday, April 1 and Wednesday, April 3 from 1:30-3:00pm. This mini-course deals with a circle of problems related to the estimation of real-valued functionals of high-dimensional and infinite-dimensional parameters of statistical models. In such problems, it is of interest to estimate one-dimensional features of a high-dimensional parameter, represented by nonlinear functionals of a certain degree of smoothness defined on the parameter space. The functionals of interest can often be estimated at faster convergence rates than the…


Stochastics and Statistics Seminar

Optimal nonparametric capture-recapture methods for estimating population size

Edward Kennedy, Carnegie Mellon University
E18-304

Abstract: Estimation of population size using incomplete lists has a long history across many biological and social sciences. For example, human rights groups often construct partial lists of victims of armed conflicts in order to estimate the total number of victims. Earlier statistical methods for this setup often use parametric assumptions or rely on suboptimal plug-in-type nonparametric estimators, but both approaches can lead to substantial bias, the former via model misspecification and the latter via smoothing. Under an identifying assumption that two lists…


Stochastics and Statistics Seminar

Lattices and the Hardness of Statistical Problems

Vinod Vaikuntanathan (MIT)
E18-304

Abstract: I will describe recent results that (a) show nearly optimal hardness of learning Gaussian mixtures, and (b) give evidence of average-case hardness of sparse linear regression w.r.t. all efficient algorithms, assuming the worst-case hardness of lattice problems. The talk is based on the following papers with Aparna Gupte and Neekon Vafa: https://arxiv.org/pdf/2204.02550.pdf and https://arxiv.org/pdf/2402.14645.pdf

Bio: Vinod Vaikuntanathan is a professor of computer science at MIT and the chief cryptographer at Duality Technologies. His research is in the foundations of cryptography…


Stochastics and Statistics Seminar

Emergent outlier subspaces in high-dimensional stochastic gradient descent

Reza Gheissari, Northwestern University
E18-304

Abstract: It has been empirically observed that the spectrum of neural network Hessians after training has a bulk concentrated near zero and a few outlier eigenvalues. Moreover, the eigenspaces associated with these outliers have been linked to a low-dimensional subspace in which most of the training occurs, and this implicit low-dimensional structure has been used as a heuristic for the success of high-dimensional classification. We will describe recent rigorous results in this direction for the Hessian spectrum over the course…


Stochastics and Statistics Seminar

Consensus-based optimization and sampling

Franca Hoffmann, California Institute of Technology
E18-304

Abstract: Particle methods provide a powerful paradigm for solving complex global optimization problems, leading to highly parallelizable algorithms. Despite their widespread and growing adoption, the theory underpinning their behavior has been based mainly on meta-heuristics. In application settings involving black-box procedures, or where gradients are too costly to obtain, one relies on derivative-free approaches instead. This talk will focus on two recent techniques: consensus-based optimization and consensus-based sampling. We explain how these methods can be used for the following two goals: (i)…



MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764