Interpolation and learning with scale dependent kernels

Lorenzo Rosasco (MIT/Università di Genova)
E18-304

Abstract: We study the learning properties of nonparametric ridgeless least squares. In particular, we consider the common case of estimators defined by scale-dependent (Matérn) kernels, and focus on the role of scale and smoothness. These estimators interpolate the data, and the scale can be shown to control their stability to noise and sampling. Larger scales, corresponding to smoother functions, improve stability with respect to sampling. However, smaller scales, corresponding to more complex functions,…
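
As a concrete illustration of the estimator under discussion, here is a minimal numpy sketch of ridgeless kernel least squares with a Matérn kernel of smoothness 1/2 (the Laplace kernel), whose width sigma plays the role of the scale parameter. The data, the particular kernel, and the jitter constant are illustrative assumptions, not details from the talk.

    import numpy as np

    def matern_half(X, Z, sigma):
        # Matérn kernel with smoothness nu = 1/2 (the Laplace kernel);
        # sigma is the scale parameter.
        d = np.abs(X[:, None, :] - Z[None, :, :]).sum(-1)
        return np.exp(-d / sigma)

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)

    for sigma in (0.05, 0.5):
        K = matern_half(X, X, sigma)
        c = np.linalg.solve(K + 1e-12 * np.eye(30), y)  # tiny jitter, numerics only
        # Ridgeless least squares interpolates: training residuals are ~0 at every scale.
        print(sigma, np.max(np.abs(K @ c - y)))

Both scales interpolate the same data; what changes with sigma is how smooth the fitted function is between the training points, which is exactly the stability trade-off the abstract describes.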

Representation and generalization

Boaz Barak (Harvard University)
E18-304

Abstract: Self-supervised learning is an increasingly popular approach for learning representations of data that can be used for downstream tasks. A practical advantage of self-supervised learning is that it can be used on unlabeled data. However, even when labels are available, self-supervised learning can be competitive with the more "traditional" approach of supervised learning. In this talk we consider "self-supervised + simple classifier (SSS)" algorithms, which are obtained by first learning a self-supervised representation of the data, and then…
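
To make the two-stage SSS pipeline concrete, here is a toy numpy sketch in the same shape: a representation learned from unlabeled data alone (PCA, used purely as a stand-in for a genuine self-supervised method), followed by a simple linear classifier fit on a small labeled set. The synthetic data and every parameter below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    # Unlabeled data: two Gaussian blobs in 20 dimensions.
    X_unlab = np.vstack([rng.normal(m, 1.0, size=(200, 20)) for m in (-2.0, 2.0)])

    # Stage 1: representation learned from unlabeled data only.
    # PCA is a toy stand-in for a real self-supervised objective.
    mu = X_unlab.mean(0)
    _, _, Vt = np.linalg.svd(X_unlab - mu, full_matrices=False)
    def represent(X, k=2):
        return (X - mu) @ Vt[:k].T

    # Stage 2: simple (linear least-squares) classifier on a few labels.
    X_lab = np.vstack([rng.normal(m, 1.0, size=(10, 20)) for m in (-2.0, 2.0)])
    y_lab = np.repeat([-1.0, 1.0], 10)
    R = np.c_[represent(X_lab), np.ones(20)]
    w, *_ = np.linalg.lstsq(R, y_lab, rcond=None)

    # Evaluate on fresh points.
    X_test = np.vstack([rng.normal(m, 1.0, size=(50, 20)) for m in (-2.0, 2.0)])
    pred = np.sign(np.c_[represent(X_test), np.ones(100)] @ w)
    print("test accuracy:", (pred == np.repeat([-1.0, 1.0], 50)).mean())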

Causal Matrix Completion

Devavrat Shah (MIT)
E18-304

Abstract: Matrix completion is the study of recovering an underlying matrix from a sparse subset of noisy observations. Traditionally, it is assumed that the entries of the matrix are “missing completely at random” (MCAR), i.e., each entry is revealed at random, independently of everything else, with uniform probability. This is likely unrealistic due to the presence of “latent confounders”, i.e., unobserved factors that determine both the entries of the underlying matrix and the missingness pattern in the observed matrix. In general,…
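
For contrast with the causal setting of the talk, here is a minimal numpy sketch of a standard baseline whose guarantees rest on the MCAR assumption: reveal entries uniformly at random, rescale by the inverse reveal probability, and truncate the SVD at the known rank. The sizes, rank, and reveal probability are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n, r, p = 100, 2, 0.3
    M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-2 ground truth
    mask = rng.random((n, n)) < p                          # MCAR: uniform probability p

    # Zero-fill the unobserved entries, rescale by 1/p, truncate to rank r.
    Y = np.where(mask, M, 0.0) / p
    U, s, Vt = np.linalg.svd(Y)
    M_hat = (U[:, :r] * s[:r]) @ Vt[:r]
    print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))

Under a confounded (non-MCAR) missingness pattern the 1/p rescaling is no longer valid, which is the gap the causal formulation addresses.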

Recent results in planted assignment problems

Yihong Wu (Yale University)
E18-304

Abstract: Motivated by applications such as particle tracking, network de-anonymization, and computer vision, a recent thread of research is devoted to statistical models of assignment problems, in which the data are randomly weighted graphs correlated through a latent permutation. In contrast to problems such as planted clique or the stochastic block model, the major difference here is the lack of low-rank structure, which brings forth new challenges in both statistical analysis and algorithm design. In the first half of the talk,…
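
A small simulation may help fix ideas: Gaussian weights with an elevated mean on the edges of a hidden perfect matching, recovered by maximum-weight assignment (the Hungarian algorithm, via scipy). The signal strength mu and the problem size are illustrative choices, not parameters from the talk.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(3)
    n, mu = 200, 4.0
    perm = rng.permutation(n)               # latent permutation
    W = rng.normal(size=(n, n))             # null weights: N(0, 1)
    W[np.arange(n), perm] += mu             # planted edges get mean mu

    # Maximum-weight assignment as the estimator of the hidden matching
    # (scipy minimizes cost, hence the sign flip).
    rows, cols = linear_sum_assignment(-W)
    print("fraction of the matching recovered:", (cols == perm).mean())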

Breaking the Sample Size Barrier in Reinforcement Learning

Yuting Wei (Wharton School, UPenn)
E18-304

Abstract: Reinforcement learning (RL), which is frequently modeled as sequential learning and decision making in the face of uncertainty, has attracted growing interest in recent years due to its remarkable success in practice. In contemporary RL applications, it is increasingly common to encounter environments with prohibitively large state and action spaces, imposing stringent requirements on the sample efficiency of the RL algorithms in use. Despite this empirical success, however, the theoretical underpinnings for many popular RL algorithms remain…
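
To make "sample" concrete in this setting, here is a bare-bones tabular Q-learning loop on a toy two-state MDP, in which every environment transition consumes exactly one sample. The MDP, step size, and exploration rate are invented for illustration and are unrelated to the specific algorithms analyzed in the talk.

    import numpy as np

    rng = np.random.default_rng(4)
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # P[s, a, s']: transition kernel
                  [[0.8, 0.2], [0.3, 0.7]]])
    R = np.array([[1.0, 0.0], [0.0, 2.0]])    # R[s, a]: expected reward
    gamma, alpha, eps = 0.9, 0.1, 0.2

    Q = np.zeros((2, 2))
    s = 0
    for t in range(20000):                    # sample complexity = number of steps
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = rng.choice(2, p=P[s, a])
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    print(Q)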

Instance Dependent PAC Bounds for Bandits and Reinforcement Learning

Kevin Jamieson (University of Washington)
E18-304

Abstract: The sample complexity of an interactive learning problem, such as multi-armed bandits or reinforcement learning, is the number of interactions with nature required to output an answer (e.g., a recommended arm or policy) that is approximately optimal with high probability. While minimax guarantees can be useful rules of thumb to gauge the difficulty of a problem class, algorithms optimized for this worst-case metric often fail to adapt to “easy” instances where fewer samples suffice. In this talk, I…
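
The instance dependence is easy to see in code. Below is a successive-elimination sketch for best-arm identification: the number of pulls an arm receives before being eliminated is governed by its gap to the best arm, not by a worst-case bound. The arm means, confidence radius, and failure probability delta are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(5)
    means = np.array([0.5, 0.45, 0.2])        # the instance: gaps drive the cost
    delta = 0.05

    active = list(range(len(means)))
    counts = np.zeros(len(means))
    sums = np.zeros(len(means))
    t = 0
    while len(active) > 1:
        t += 1
        for i in active:
            sums[i] += rng.binomial(1, means[i])  # one Bernoulli pull per active arm
            counts[i] += 1
        # Anytime confidence radius (a standard, slightly conservative choice).
        rad = np.sqrt(np.log(4 * len(means) * t**2 / delta) / (2 * t))
        best = max(active, key=lambda i: sums[i] / counts[i])
        active = [i for i in active
                  if sums[best] / counts[best] - sums[i] / counts[i] <= 2 * rad]
    print("selected arm:", active[0], "total samples:", int(counts.sum()))

The clearly bad arm (mean 0.2) is dropped after a handful of pulls, while separating the top two arms (gap 0.05) consumes the vast majority of the samples.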

Revealing the simplicity of high-dimensional objects via pathwise analysis

Ronen Eldan (Weizmann Institute of Science and Princeton)
E18-304

Abstract: One of the main reasons behind the success of high-dimensional statistics and modern machine learning in taming the curse of dimensionality is that many classes of high-dimensional distributions are surprisingly well-behaved and, when viewed correctly, exhibit a simple structure. This emergent simplicity is at the center of the theory of "high-dimensional phenomena", and is manifested in principles such as "Gaussian-like behavior" (objects of interest often inherit the properties of the Gaussian measure), "dimension-free behavior" (expressed in inequalities which do…
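
One of the simplest instances of such dimension-free behavior can be checked numerically: the Euclidean norm is a 1-Lipschitz function of a standard Gaussian vector, so its fluctuations stay O(1) no matter the ambient dimension. A quick sketch (sample counts and dimensions are arbitrary):

    import numpy as np

    rng = np.random.default_rng(6)
    for n in (10, 100, 10000):
        X = rng.standard_normal((1000, n))
        norms = np.linalg.norm(X, axis=1)
        # The mean grows like sqrt(n), but the standard deviation does not grow
        # with n at all: dimension-free concentration.
        print(n, round(norms.mean(), 2), round(norms.std(), 3))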
