Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization
March 19, 2021 @ 11:00 am - 12:00 pm
Daniel Roy, University of Toronto
Online
Abstract: We consider sequential prediction with expert advice when data are generated from distributions varying arbitrarily within an unknown constraint set. We quantify relaxations of the classical i.i.d. assumption in terms of these constraint sets, with i.i.d. sequences at one extreme and adversarial mechanisms at the other. The Hedge algorithm, long known to be minimax optimal in the adversarial regime, was recently shown to be minimax optimal for i.i.d. data. We show that Hedge with deterministic learning rates is suboptimal between these extremes, and present a new algorithm that adaptively achieves the minimax optimal rate of regret with respect to our relaxations of the i.i.d. assumption, and does so without knowledge of the underlying constraint set. We analyze our algorithm using the follow-the-regularized-leader framework, and prove it corresponds to Hedge with an adaptive learning rate that implicitly scales as the square root of the entropy of the current predictive distribution, rather than the entropy of the initial predictive distribution.
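The mechanism named in the abstract, Hedge with a learning rate that scales as the square root of the entropy of the current predictive distribution, can be sketched in a few lines. The Python below is a minimal illustration under assumed details: the constant c, the 1/sqrt(t) decay, and the exact order of the updates are hypothetical choices for exposition, not the algorithm analyzed in the talk.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def root_entropic_hedge(losses, c=1.0):
    """Illustrative Hedge variant whose learning rate tracks the
    root-entropy of the current predictive distribution.

    losses: (T, K) array of per-round expert losses in [0, 1].
    c: tuning constant (hypothetical, not from the talk).
    Returns the regret against the best expert in hindsight.
    """
    T, K = losses.shape
    cum_loss = np.zeros(K)          # cumulative loss of each expert
    learner_loss = 0.0              # cumulative expected loss of the learner
    p = np.full(K, 1.0 / K)         # uniform initial predictive distribution
    for t in range(1, T + 1):
        # Learning rate scales with sqrt of the entropy of the
        # current predictive distribution (illustrative schedule).
        eta = c * np.sqrt(entropy(p) / t)
        # Exponential-weights update, stabilized by subtracting the min.
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        p = w / w.sum()
        learner_loss += p @ losses[t - 1]
        cum_loss += losses[t - 1]
    return learner_loss - cum_loss.min()
```

On i.i.d.-like loss sequences the weights concentrate on a leading expert, so the entropy of p, and with it the effective learning rate, shrinks; on adversarial sequences the distribution stays diffuse and the rate stays closer to the classical sqrt(log K / t) schedule.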
Bio: Daniel Roy is an Associate Professor in the Department of Statistical Sciences at the University of Toronto, with cross appointments in Computer Science and Electrical and Computer Engineering. He is also a Canada CIFAR AI Chair and a founding member of the Vector Institute. Prior to joining Toronto, he was a Research Fellow of Emmanuel College and a Newton International Fellow of the Royal Academy of Engineering, hosted by the University of Cambridge. Roy completed his doctorate in Computer Science at the Massachusetts Institute of Technology, where his dissertation was awarded an MIT EECS Sprowls Award.