Calendar of Events


Inference for ATE & GLM’s when p/n→δ∈(0,∞)

Rajarshi Mukherjee, Harvard University
E18-304

Abstract: In this talk we will discuss statistical inference for the average treatment effect in measured-confounder settings, along with the parallel questions of inferring linear and quadratic functionals in generalized linear models, under high-dimensional proportional asymptotics, i.e., when p/n→δ∈(0,∞), where p and n denote the dimension of the covariates and the sample size, respectively. The results rely on knowledge of the variance-covariance matrix Σ of the covariates under study, and we show that whereas √n-consistent asymptotically…
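A rough sketch, in generic notation, of the proportional-asymptotics regime the abstract refers to; the model, coefficient vector β, and the particular functionals shown here are illustrative assumptions rather than the talk's exact formulation:

```latex
% Generic sketch of the high-dimensional proportional-asymptotics GLM setting
% (notation assumed for illustration; not the talk's exact formulation).
\[
  Y_i \mid X_i \sim \mathrm{GLM}\!\left(X_i^\top \beta\right),
  \qquad X_i \in \mathbb{R}^p, \quad i = 1, \dots, n,
\]
\[
  \frac{p}{n} \;\longrightarrow\; \delta \in (0, \infty)
  \quad \text{as } n \to \infty.
\]
% Typical targets of inference: a linear and a quadratic functional of the
% coefficient vector, with the covariate covariance matrix Sigma assumed known.
\[
  \theta_{\mathrm{lin}} = a^\top \beta,
  \qquad
  \theta_{\mathrm{quad}} = \beta^\top \Sigma \beta,
  \qquad \Sigma = \operatorname{Cov}(X_i).
\]
```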


Towards a ‘Chemistry of AI’: Unveiling the Structure of Training Data for more Scalable and Robust Machine Learning

David Alvarez-Melis, Harvard University
E18-304

Abstract: Recent advances in AI have underscored that data, rather than model size, is now the primary bottleneck in large-scale machine learning performance. Yet, despite this shift, systematic methods for dataset curation, augmentation, and optimization remain underdeveloped. In this talk, I will argue for the need for a "Chemistry of AI"—a paradigm that, like the emerging "Physics of AI," embraces a principles-first, rigorous, empiricist approach but shifts the focus from models to data. This perspective treats datasets as structured, dynamic…


Two Approaches Towards Adaptive Optimization

Ashia Wilson, MIT
E18-304

Abstract: This talk will address two recent projects I am excited about. The first describes efficient methodologies for hyper-parameter estimation in optimization algorithms; I will describe two approaches for adaptively estimating these parameters that often lead to significant improvements in convergence. The second describes a new method, called the Metropolis-Adjusted Preconditioned Langevin Algorithm, for sampling from a convex body. Taking an optimization perspective, I focus on the mixing time guarantees of these algorithms — an essential theoretical property for…
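As a hedged illustration of the algorithm family behind the second project, here is a minimal NumPy sketch of a single Metropolis-adjusted Langevin step with a fixed preconditioner M applied to a smooth log-density. The function name, the fixed preconditioner, and the Gaussian example target are assumptions made for illustration; the convex-body sampler discussed in the talk uses more specialized preconditioning and targets.

```python
import numpy as np

def preconditioned_mala_step(x, log_density, grad_log_density, M, step_size, rng):
    """One Metropolis-adjusted Langevin step with a fixed preconditioner M.

    Generic sketch of the algorithm family; not the specific convex-body
    sampler from the talk, which uses position-dependent preconditioning.
    """
    # Langevin proposal: drift preconditioned by M, Gaussian noise with covariance 2*h*M.
    mean_x = x + step_size * M @ grad_log_density(x)
    cov = 2.0 * step_size * M
    y = rng.multivariate_normal(mean_x, cov)

    # Metropolis-Hastings correction so the target density is left invariant.
    mean_y = y + step_size * M @ grad_log_density(y)
    cov_inv = np.linalg.inv(cov)

    def log_q(a, mean):
        # Log proposal density q(a | mean), up to a constant that cancels in the ratio.
        d = a - mean
        return -0.5 * d @ cov_inv @ d

    log_alpha = (log_density(y) + log_q(x, mean_y)
                 - log_density(x) - log_q(y, mean_x))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False


# Illustrative usage: sample from a 2-D Gaussian target (assumed example only).
rng = np.random.default_rng(0)
Sigma_inv = np.array([[2.0, 0.3], [0.3, 1.0]])
log_p = lambda x: -0.5 * x @ Sigma_inv @ x
grad_log_p = lambda x: -Sigma_inv @ x
M = np.linalg.inv(Sigma_inv)  # preconditioner; here simply the target covariance

x = np.zeros(2)
for _ in range(1000):
    x, _accepted = preconditioned_mala_step(x, log_p, grad_log_p, M, 0.2, rng)
```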



MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764