
Breaking the Sample Size Barrier in Reinforcement Learning

Yuting Wei, Wharton School at UPenn
E18-304

Abstract: Reinforcement learning (RL), frequently modeled as sequential learning and decision making in the face of uncertainty, has garnered growing interest in recent years due to its remarkable success in practice. In contemporary RL applications, it is increasingly common to encounter environments with prohibitively large state and action spaces, imposing stringent requirements on the sample efficiency of the RL algorithms in use. Despite this empirical success, however, the theoretical underpinnings for many popular RL algorithms remain…

Instance Dependent PAC Bounds for Bandits and Reinforcement Learning

Kevin Jamieson (University of Washington)
E18-304

Abstract: The sample complexity of an interactive learning problem, such as multi-armed bandits or reinforcement learning, is the number of interactions with nature required to output an answer (e.g., a recommended arm or policy) that is approximately optimal with high probability. While minimax guarantees can be useful rules of thumb to gauge the difficulty of a problem class, algorithms optimized for this worst-case metric often fail to adapt to “easy” instances where fewer samples suffice. In this talk, I…

Revealing the simplicity of high-dimensional objects via pathwise analysis

Ronen Eldan (Weizmann Inst. of Science and Princeton)
E18-304

Abstract: One of the main reasons behind the success of high-dimensional statistics and modern machine learning in taming the curse of dimensionality is that many classes of high-dimensional distributions are surprisingly well-behaved and, when viewed correctly, exhibit a simple structure. This emergent simplicity is at the center of the theory of "high-dimensional phenomena", and is manifested in principles such as "Gaussian-like behavior" (objects of interest often inherit the properties of the Gaussian measure) and "dimension-free behavior" (expressed in inequalities which do…

Asymptotics of learning on dependent and structured random objects

Morgane Austern (Harvard University)
E18-304

Abstract: Classical statistical inference relies on numerous tools from probability theory to study the properties of estimators. However, these same tools are often inadequate for studying modern machine learning problems that frequently involve structured data (e.g., networks) or complicated dependence structures (e.g., dependent random matrices). In this talk, we extend universal limit theorems beyond the classical setting. Firstly, we consider distributionally "structured" and dependent random objects, i.e., random objects whose distributions are invariant under the action of an amenable group. We…

Characterizing the Type 1-Type 2 Error Trade-off for SLOPE

Cynthia Rush (Columbia University)
E18-304

Abstract: Sorted L1 regularization has been incorporated into many methods for solving high-dimensional statistical estimation problems, including the SLOPE estimator in linear regression. In this talk, we study how this relatively new regularization technique improves variable selection by characterizing the optimal SLOPE trade-off between the false discovery proportion (FDP) and true positive proportion (TPP) or, equivalently, between measures of type I and type II error. Additionally, we show that on any problem instance, SLOPE with a certain regularization sequence outperforms…
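The sorted L1 penalty behind SLOPE has a simple closed form: given a nonincreasing, nonnegative weight sequence, it charges the largest coefficient magnitude the largest weight. A minimal sketch of that penalty (the function name and example values are illustrative, not from the talk):

```python
def sorted_l1_norm(beta, lam):
    """Sorted-L1 (SLOPE) penalty: sum_i lam[i] * |beta|_(i), where
    |beta|_(1) >= |beta|_(2) >= ... are the coefficient magnitudes in
    decreasing order and lam is assumed nonincreasing and nonnegative."""
    mags = sorted((abs(b) for b in beta), reverse=True)
    return sum(l * m for l, m in zip(lam, mags))

# The largest weight multiplies the largest magnitude:
# beta = (3, -1, 2), lam = (3, 2, 1) gives 3*3 + 2*2 + 1*1 = 14.
print(sorted_l1_norm([3, -1, 2], [3, 2, 1]))  # 14
```

When all weights are equal this reduces to the ordinary lasso penalty; letting the weights decay is what gives SLOPE its adaptive control of false discoveries.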

Precise high-dimensional asymptotics for AdaBoost via max-margins & min-norm interpolants

Pragya Sur (Harvard University)
E18-304

Abstract: This talk will introduce a precise high-dimensional asymptotic theory for AdaBoost on separable data, taking both statistical and computational perspectives. We will consider the common modern setting where the number of features p and the sample size n are both large and comparable, and in particular, look at scenarios where the data is asymptotically separable. Under a class of statistical models, we will provide an (asymptotically) exact analysis of the max-min-L1-margin and the min-L1-norm interpolant. In turn, this will…

The Geometry of Particle Collisions: Hidden in Plain Sight

Jesse Thaler (MIT)
E18-304

Abstract: Since the 1960s, particle physicists have developed a variety of data analysis strategies for the goal of comparing experimental measurements to theoretical predictions.  Despite their numerous successes, these techniques can seem esoteric and ad hoc, even to practitioners in the field.  In this talk, I explain how many particle physics analysis tools have a natural geometric interpretation in an emergent "space" of collider events induced by the Wasserstein metric.  This in turn suggests new analysis strategies to interpret generic…
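For intuition about the Wasserstein metric underlying this "space" of events: between two equal-size one-dimensional samples with uniform weights, the Wasserstein-1 distance reduces to matching order statistics — sort both samples and average the absolute differences. A minimal sketch (function name my own; actual collider events live in a much richer space than 1-D samples):

```python
def w1_empirical(u, v):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions with uniform weights: the optimal transport plan
    pairs the i-th smallest value of u with the i-th smallest of v."""
    if len(u) != len(v):
        raise ValueError("samples must have equal size")
    u, v = sorted(u), sorted(v)
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

# Shifting every point by 1 costs exactly 1 unit of transport on average:
print(w1_empirical([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

Unlike bin-by-bin comparisons, this distance stays small when two distributions are merely shifted slightly, which is part of its appeal for comparing collider events.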


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764