Estimation and inference for the error-in-operator model

Vladimir Spokoiny (Humboldt University of Berlin)
E18-304

Abstract: We consider the Error-in-Operator (EiO) problem of recovering the source signal x from the noisy observation Y given by the equation Y = A x + ε when the operator A is not precisely known. Instead, a pilot estimate \hat{A} is available. The study is motivated by Hoffmann & Reiss (2008), Trabs (2018), and by recent results on high-dimensional regression with random design; see, e.g., Tsigler and Bartlett (2020) (Benign overfitting in ridge regression; arXiv:2009.14286) and Cheng and Montanari (2022) (Dimension-free ridge regression; arXiv:2210.08571), among many others. Examples of EiO include regression with errors-in-variables and instrumental regression, stochastic diffusion, Markov time series, interacting particle…
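A minimal sketch of the EiO setup may help fix ideas: we observe Y = A x + ε but must invert using a pilot estimate \hat{A} in place of A. The plug-in ridge estimator below is a standard illustrative choice, not the estimator studied in the talk, and all dimensions, noise levels, and the regularization parameter are made-up assumptions.

```python
import numpy as np

# Illustrative sketch of the Error-in-Operator model Y = A x + eps,
# where only a pilot estimate A_hat of the operator A is available.
# The ridge plug-in below is a generic baseline, not the talk's method.
rng = np.random.default_rng(0)
n, p = 200, 20
A = rng.standard_normal((n, p))          # true (unknown) operator
x_true = rng.standard_normal(p)          # source signal
Y = A @ x_true + 0.1 * rng.standard_normal(n)        # noisy observation
A_hat = A + 0.01 * rng.standard_normal((n, p))       # pilot estimate of A

# plug-in ridge estimator using A_hat instead of the unknown A
lam = 0.1
x_hat = np.linalg.solve(A_hat.T @ A_hat + lam * np.eye(p), A_hat.T @ Y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With a well-conditioned operator and a small operator error, the relative recovery error stays small; the interesting regime in the talk is precisely when the error in \hat{A} is not negligible.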

Source Condition Double Robust Inference on Functionals of Inverse Problems

Vasilis Syrgkanis (Stanford University)
E18-304

Abstract: We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems. Any such parameter admits a doubly robust representation that depends on the solution to a dual linear inverse problem, where the dual solution can be thought of as a generalization of the inverse propensity function. We provide the first source condition double robust inference method that ensures asymptotic normality around the parameter of interest as long as either the primal or the dual inverse problem…
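The classical special case of such a doubly robust representation is the augmented inverse-propensity-weighted (AIPW) estimator of a mean under data missing at random, where the "dual solution" is exactly the inverse propensity weight. The sketch below shows only this textbook special case, with made-up data; the talk's estimators for general linear inverse problems are substantially more involved.

```python
import numpy as np

# Classical doubly robust (AIPW) estimate of E[Y] when Y is observed
# only for units with D = 1. The outcome regression is the "primal"
# object and the inverse propensity weight plays the "dual" role.
# Simulated data and oracle nuisances, purely for illustration.
rng = np.random.default_rng(3)
n = 20000
X = rng.uniform(0, 1, n)
pi = 0.3 + 0.4 * X                        # true propensity of being observed
D = rng.binomial(1, pi)                   # observation indicator
Y = 2 * X + 0.5 * rng.standard_normal(n)  # outcome (usable only when D = 1)

mu_hat = 2 * X                            # outcome regression E[Y | X] (primal)
w = 1.0 / pi                              # inverse propensity weight (dual)
# doubly robust moment: correct if EITHER mu_hat OR w is correct
psi = np.mean(mu_hat + D * w * (Y - mu_hat))
```

Here the true target is E[Y] = 1, and psi recovers it even though Y is only partially observed; the double robustness means misspecifying one of the two nuisances still yields a consistent estimate.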

Fine-Grained Extensions of the Low-Degree Testing Framework

Alex Wein (University of California, Davis)
E18-304

Abstract: The low-degree polynomial framework has emerged as a versatile tool for probing the computational complexity of statistical problems by studying the power and limitations of a restricted class of algorithms: low-degree polynomials. Focusing on the setting of hypothesis testing, I will discuss some extensions of this method that allow us to tackle finer-grained questions than the standard approach. First, for the task of detecting a planted clique in a random graph, we ask not merely when this can be…

Statistical Inference Under Information Constraints: User-Level Approaches

Jayadev Acharya, Cornell University
E18-304

Abstract: In this talk, we will present highlights from some of the work we have been doing in distributed inference under information constraints, such as privacy and communication. We consider basic tasks such as learning and testing of discrete as well as high-dimensional distributions, when the samples are distributed across users who can only send an information-constrained message about their sample. Of key interest to us has been the role of the various types of communication protocols (e.g., non-interactive protocols…

Learning learning-augmented algorithms: the example of stochastic scheduling

Vianney Perchet, ENSAE Paris
E18-304

Abstract: In this talk, I will argue that it is sometimes possible to learn, with techniques originating from bandit theory, the "hints" on which learning-augmented algorithms rely to improve worst-case performance. We will describe this phenomenon, the combination of online learning with competitive analysis, using the example of stochastic online scheduling. We shall quantify the merits of this approach by computing and comparing non-asymptotic expected competitive ratios (the standard performance measure of such algorithms).

Bio: Vianney Perchet is a professor at the…

Adaptivity in Domain Adaptation and Friends

Samory Kpotufe, Columbia University
E18-304

Abstract: Domain adaptation, transfer, multitask, meta, few-shots, representation, or lifelong learning … these are all important recent directions in ML that touch at the core of what we might mean by ‘AI’. As these directions all concern learning in heterogeneous and ever-changing environments, they share a central question: what information one data distribution may carry about another, critically in the context of a given estimation problem, e.g., classification, regression, bandits, etc. Our understanding of these problems is still…

Adaptive Decision Tree Methods

Matias Cattaneo, Princeton University
E18-304

Abstract: This talk is based on two recent papers: 1. “On the Pointwise Behavior of Recursive Partitioning and Its Implications for Heterogeneous Causal Effect Estimation” and 2. “Convergence Rates of Oblique Regression Trees for Flexible Function Libraries”. 1. Decision tree learning is increasingly being used for pointwise inference. Important applications include heterogeneous causal treatment effects and dynamic policy decisions, as well as conditional quantile regression and design of experiments, where tree estimation and inference are conducted at specific values of…
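To make "pointwise inference with recursive partitioning" concrete, here is a from-scratch, one-covariate CART-style tree: greedily split where the within-node sum of squared errors drops most, then predict at a point by reading off its leaf mean. This toy is only an illustration of the recursive-partitioning mechanism, not the estimators analyzed in the two papers.

```python
import numpy as np

def fit_tree(x, y, depth, min_leaf=10):
    """Greedy recursive partitioning on one covariate, minimizing
    within-node sum of squared errors (a toy CART)."""
    if depth == 0 or len(y) < 2 * min_leaf:
        return ("leaf", y.mean())
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_t = np.inf, None
    for i in range(min_leaf, len(ys) - min_leaf):
        l, r = ys[:i], ys[i:]
        sse = ((l - l.mean()) ** 2).sum() + ((r - r.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_t = sse, xs[i]
    left, right = x < best_t, x >= best_t
    return ("node", best_t,
            fit_tree(x[left], y[left], depth - 1, min_leaf),
            fit_tree(x[right], y[right], depth - 1, min_leaf))

def predict(tree, x0):
    """Pointwise prediction: follow splits down to a leaf mean."""
    while tree[0] == "node":
        tree = tree[2] if x0 < tree[1] else tree[3]
    return tree[1]

# step-function regression: the tree should localize the jump at 0
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = (x > 0).astype(float) + 0.05 * rng.standard_normal(500)
tree = fit_tree(x, y, depth=3)
pred_neg, pred_pos = predict(tree, -0.5), predict(tree, 0.5)
```

Pointwise behavior is exactly what the first paper interrogates: the prediction at a given x0 is the average over whichever data-dependent cell x0 lands in, which is what makes pointwise inference delicate.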

Free Discontinuity Design (joint work with David van Dijcke)

Florian Gunsilius, University of Michigan
E18-304

Abstract: Regression discontinuity design (RDD) is a quasi-experimental impact evaluation method ubiquitous in the social and applied health sciences. It aims to estimate average treatment effects of policy interventions by exploiting jumps in outcomes induced by cut-off assignment rules. Here, we establish a correspondence between the RDD setting and free discontinuity problems, in particular the celebrated Mumford-Shah model in image segmentation. The Mumford-Shah model is non-convex and hence generally admits local solutions. We circumvent this issue by relying on…

Variational methods in reinforcement learning

Martin Wainwright, MIT
E18-304

Abstract: Reinforcement learning is the study of models and procedures for optimal sequential decision-making under uncertainty.  At its heart lies the Bellman optimality operator, whose unique fixed point specifies an optimal policy and value function.  In this talk, we discuss two classes of variational methods that can be used to obtain approximate solutions with accompanying error guarantees.  For policy evaluation problems based on on-line data, we present Krylov-Bellman boosting, which combines ideas from Krylov methods with non-parametric boosting.  For policy optimization problems based on…
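The fixed-point role of the Bellman optimality operator can be seen in a few lines of plain value iteration on a toy MDP. The transitions and rewards below are made up, and this is the standard contraction iteration, not the Krylov-Bellman boosting procedure discussed in the talk.

```python
import numpy as np

# Toy 3-state, 2-action MDP: the Bellman optimality operator T is a
# gamma-contraction, so iterating it converges to the unique fixed
# point V*, which encodes the optimal value function.
rng = np.random.default_rng(1)
S, A_n, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A_n))  # P[s, a] = dist over next states
R = rng.uniform(0, 1, size=(S, A_n))          # rewards R(s, a)

def bellman(V):
    # (T V)(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
    return (R + gamma * P @ V).max(axis=1)

V = np.zeros(S)
for _ in range(500):
    V = bellman(V)
residual = np.max(np.abs(bellman(V) - V))     # ~0 at the fixed point
```

Variational methods like those in the talk replace this exact iteration with approximate solutions in a restricted function class, which is what creates the need for accompanying error guarantees.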

Geometric EDA for Random Objects

Paromita Dubey, University of Southern California
E18-304

Abstract: In this talk I will propose new tools for the exploratory data analysis of data objects taking values in a general separable metric space. First, I will introduce depth profiles, where the depth profile of a point ω in the metric space refers to the distribution of the distances between ω and the data objects. I will describe how depth profiles can be harnessed to define transport ranks, which capture the centrality of each element in the metric space with respect to the…
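Since a depth profile is just the distribution of distances from a point to the data objects, it is easy to sketch numerically. Below the metric space is simply (R^2, Euclidean), and the transport ranks of the talk are replaced by a cruder centrality score (the mean of the depth profile); both simplifications are illustrative assumptions.

```python
import numpy as np

# Depth profile of a point w: the empirical distribution of distances
# d(w, X_i) to the observed data objects. Central points have profiles
# concentrated on small distances; outliers on large ones.
rng = np.random.default_rng(2)
data = rng.standard_normal((300, 2))          # data objects in (R^2, ||.||)

def depth_profile(w, data):
    # sorted distances = empirical CDF of d(w, X_i)
    return np.sort(np.linalg.norm(data - w, axis=1))

center, outlier = np.zeros(2), np.array([5.0, 5.0])
mean_center = depth_profile(center, data).mean()    # small: central point
mean_outlier = depth_profile(outlier, data).mean()  # large: peripheral point
```

The point of working with the whole profile (rather than a single summary like the mean used here) is that it is defined in any separable metric space and supports finer comparisons, such as the transport ranks of the talk.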


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764