
Geometric EDA for Random Objects

Paromita Dubey, University of Southern California
E18-304

Abstract: In this talk I will propose new tools for the exploratory data analysis of data objects taking values in a general separable metric space. First, I will introduce depth profiles, where the depth profile of a point ω in the metric space refers to the distribution of the distances between ω and the data objects. I will describe how depth profiles can be harnessed to define transport ranks, which capture the centrality of each element in the metric space with respect to the…
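To make the definition above concrete: the empirical depth profile of a point ω is simply the distribution of distances from ω to the observed data objects. The sketch below is an illustrative toy example in Euclidean space (the function names are hypothetical, not from the talk); the same construction applies verbatim with any metric on a separable metric space.

```python
import numpy as np

def depth_profile(omega, data, metric):
    """Empirical depth profile of omega: the (sorted) distances
    from omega to each observed data object."""
    return np.sort([metric(omega, x) for x in data])

# Toy example: data objects are points in the plane, Euclidean metric.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
euclid = lambda a, b: np.linalg.norm(a - b)

center_profile = depth_profile(np.zeros(2), data, euclid)
outlier_profile = depth_profile(np.array([5.0, 5.0]), data, euclid)

# A central point's distance distribution is stochastically smaller
# than an outlier's; centrality notions such as transport ranks
# build on exactly this comparison.
print(center_profile.mean() < outlier_profile.mean())
```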


Variational methods in reinforcement learning

Martin Wainwright, MIT
E18-304

Abstract: Reinforcement learning is the study of models and procedures for optimal sequential decision-making under uncertainty.  At its heart lies the Bellman optimality operator, whose unique fixed point specifies an optimal policy and value function.  In this talk, we discuss two classes of variational methods that can be used to obtain approximate solutions with accompanying error guarantees.  For policy evaluation problems based on on-line data, we present Krylov-Bellman boosting, which combines ideas from Krylov methods with non-parametric boosting.  For policy optimization problems based on…
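As background for the Bellman optimality operator mentioned above, here is a minimal sketch of its fixed-point property via value iteration on a toy two-state, two-action MDP (the MDP, discount factor, and names are illustrative, not from the talk):

```python
import numpy as np

# Toy MDP: P[a, s, s'] = transition probabilities, R[s, a] = rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # transitions under action 1
R = np.array([[1.0, 0.0],   # rewards in state 0 for actions 0, 1
              [0.0, 2.0]])  # rewards in state 1 for actions 0, 1
gamma = 0.9  # discount factor

def bellman_opt(V):
    """(T V)(s) = max_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ]."""
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    return Q.max(axis=1)

# T is a gamma-contraction, so iterating it converges to the unique
# fixed point V*, the optimal value function.
V = np.zeros(2)
for _ in range(500):
    V = bellman_opt(V)

print(np.allclose(bellman_opt(V), V, atol=1e-6))  # fixed point: T V* = V*
```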


James-Stein for eigenvectors: reducing the optimization bias in Markowitz portfolios

Lisa Goldberg, UC Berkeley

Abstract: We identify and reduce bias in the leading sample eigenvector of a high-dimensional covariance matrix of correlated variables. Our analysis illuminates how error in an estimated covariance matrix corrupts optimization. It may be applicable in finance, machine learning, and genomics.

Biography: Lisa Goldberg is Head of Research at Aperio and Managing Director at BlackRock. She is Professor of the Practice of Economics at the University of California, Berkeley, where she co-directs the Center for Data Analysis in Risk, an industry…
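For intuition, the sketch below illustrates generic shrinkage of the leading sample eigenvector toward the constant (equal-weight) direction, a common target in one-factor equity models. This is not the estimator from the talk: the shrinkage intensity `alpha` is a hypothetical fixed parameter, whereas the James-Stein construction chooses it from the data.

```python
import numpy as np

def shrink_leading_eigvec(sample_cov, alpha):
    """Illustrative shrinkage of the leading sample eigenvector toward
    the constant direction. `alpha` is a hypothetical fixed intensity,
    not the data-driven choice in the James-Stein construction."""
    p = sample_cov.shape[0]
    _, eigvecs = np.linalg.eigh(sample_cov)
    h = eigvecs[:, -1]                    # leading sample eigenvector
    target = np.ones(p) / np.sqrt(p)      # constant (equal-weight) direction
    if h @ target < 0:                    # resolve eigenvector sign ambiguity
        h = -h
    shrunk = (1.0 - alpha) * h + alpha * target
    return shrunk / np.linalg.norm(shrunk)

# One-factor returns: every variable loads equally on a common factor,
# so the population leading eigenvector is the constant direction.
rng = np.random.default_rng(1)
factor = rng.normal(size=500)
X = np.outer(factor, np.ones(20)) + 0.5 * rng.normal(size=(500, 20))
S = np.cov(X, rowvar=False)

target = np.ones(20) / np.sqrt(20)
h = np.linalg.eigh(S)[1][:, -1]
h = h if h @ target > 0 else -h
v = shrink_leading_eigvec(S, alpha=0.3)

# Shrinkage moves the estimate closer to the constant direction.
print(v @ target >= h @ target - 1e-12)
```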


Free Discontinuity Design (joint w/ David van Dijcke)

Florian Gunsilius, University of Michigan
E18-304

Abstract: Regression discontinuity design (RDD) is a quasi-experimental impact evaluation method ubiquitous in the social and applied health sciences. It aims to estimate average treatment effects of policy interventions by exploiting jumps in outcomes induced by cut-off assignment rules. Here, we establish a correspondence between the RDD setting and free discontinuity problems, in particular the celebrated Mumford-Shah model in image segmentation. The Mumford-Shah model is non-convex and hence admits local solutions in general. We circumvent this issue by relying on…


IDSS Celebration

This celebratory event reflects on the impact in research and education that the Institute for Data, Systems, and Society has had since its launch in 2015, and explores future opportunities with thought leaders and policy experts. In panels and plenary talks, we will discuss research that combines massive available data, in-depth understanding of underlying social and engineering systems, and the investigation of social and institutional behavior to address critical and complex challenges. For more information,…


Adaptive Decision Tree Methods

Matias Cattaneo, Princeton University
E18-304

Abstract: This talk is based on two recent papers:

1. “On the Pointwise Behavior of Recursive Partitioning and Its Implications for Heterogeneous Causal Effect Estimation”
2. “Convergence Rates of Oblique Regression Trees for Flexible Function Libraries”

1. Decision tree learning is increasingly being used for pointwise inference. Important applications include causal heterogeneous treatment effects and dynamic policy decisions, as well as conditional quantile regression and design of experiments, where tree estimation and inference is conducted at specific values of…


Adaptivity in Domain Adaptation and Friends

Samory Kpotufe, Columbia University
E18-304

Abstract: Domain adaptation, transfer, multitask, meta, few-shots, representation, or lifelong learning … these are all important recent directions in ML that touch at the core of what we might mean by ‘AI’. As these directions all concern learning in heterogeneous and ever-changing environments, they share a central question: what information one data distribution may have about another, critically in the context of a given estimation problem (e.g., classification, regression, bandits). Our understanding of these problems is still…


Learning learning-augmented algorithms: the example of stochastic scheduling

Vianney Perchet, ENSAE Paris
E18-304

Abstract: In this talk, I will argue that it is sometimes possible to learn, with techniques originating from bandits, the "hints" on which learning-augmented algorithms rely to improve worst-case performance. We will describe this phenomenon, the combination of online learning with competitive analysis, using the example of stochastic online scheduling. We shall quantify the merits of this approach by computing and comparing non-asymptotic expected competitive ratios (the standard performance measure of algorithms).

Bio: Vianney Perchet is a professor at the…


Statistical Inference Under Information Constraints: User-Level Approaches

Jayadev Acharya, Cornell University
E18-304

Abstract: In this talk, we will present highlights from some of the work we have been doing in distributed inference under information constraints, such as privacy and communication. We consider basic tasks such as learning and testing of discrete as well as high-dimensional distributions, when the samples are distributed across users who can then only send an information-constrained message about their sample. Of key interest to us has been the role of the various types of communication protocols (e.g., non-interactive protocols…


Fine-Grained Extensions of the Low-Degree Testing Framework

Alex Wein, University of California, Davis
E18-304

Abstract: The low-degree polynomial framework has emerged as a versatile tool for probing the computational complexity of statistical problems by studying the power and limitations of a restricted class of algorithms: low-degree polynomials. Focusing on the setting of hypothesis testing, I will discuss some extensions of this method that allow us to tackle finer-grained questions than the standard approach. First, for the task of detecting a planted clique in a random graph, we ask not merely when this can be…
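As a toy illustration of the framework (not a construction from the talk): the signed triangle count is a degree-3 polynomial of the adjacency entries whose mean is zero under G(n, 1/2) but strictly positive when a large clique is planted, so thresholding it gives a simple low-degree test.

```python
import numpy as np
from itertools import combinations

def signed_triangles(A):
    """Degree-3 polynomial of adjacency entries: sum over vertex triples
    {i, j, k} of (A_ij - 1/2)(A_jk - 1/2)(A_ik - 1/2). Each edge indicator
    is centered, so the statistic has mean 0 under G(n, 1/2); a planted
    clique on m vertices adds roughly C(m, 3) / 8 in expectation."""
    B = A - 0.5
    total = 0.0
    for i, j, k in combinations(range(A.shape[0]), 3):
        total += B[i, j] * B[j, k] * B[i, k]
    return total

rng = np.random.default_rng(0)
n, m = 60, 25

# Null model: Erdos-Renyi G(n, 1/2).
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T
null_stat = signed_triangles(A)

# Planted model: overwrite a clique on the first m vertices.
Pl = A.copy()
Pl[:m, :m] = 1
np.fill_diagonal(Pl, 0)
planted_stat = signed_triangles(Pl)

# The clique's C(25, 3)/8 = 287.5 deterministic boost dwarfs the
# null fluctuations (std about 23 here), so the test separates.
print(planted_stat > null_stat)
```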



MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764