BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIT Statistics and Data Science Center - ECPv5.14.2.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIT Statistics and Data Science Center
X-ORIGINAL-URL:https://stat.mit.edu
X-WR-CALDESC:Events for MIT Statistics and Data Science Center
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210319T110000
DTEND;TZID=America/New_York:20210319T120000
DTSTAMP:20220528T121257Z
CREATED:20210112T211828Z
LAST-MODIFIED:20210305T190820Z
UID:4483-1616151600-1616155200@stat.mit.edu
SUMMARY:Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization
DESCRIPTION:Abstract: We consider sequential prediction with expert advice when data are generated from distributions varying arbitrarily within an unknown constraint set. We quantify relaxations of the classical i.i.d. assumption in terms of these constraint sets\, with i.i.d. sequences at one extreme and adversarial mechanisms at the other. The Hedge algorithm\, long known to be minimax optimal in the adversarial regime\, was recently shown to be minimax optimal for i.i.d. data. We show that Hedge with deterministic learning rates is suboptimal between these extremes\, and present a new algorithm that adaptively achieves the minimax optimal rate of regret with respect to our relaxations of the i.i.d. assumption\, and does so without knowledge of the underlying constraint set. We analyze our algorithm using the follow-the-regularized-leader framework\, and prove it corresponds to Hedge with an adaptive learning rate that implicitly scales as the square root of the entropy of the current predictive distribution\, rather than the entropy of the initial predictive distribution. \nBio: Daniel Roy is an Associate Professor in the Department of Statistical Sciences at the University of Toronto\, with cross appointments in Computer Science and Electrical and Computer Engineering. He is also a CIFAR Canada AI Chair and founding member of the Vector Institute. Prior to joining Toronto\, he was a Research Fellow of Emmanuel College and Newton International Fellow of the Royal Academy of Engineering\, hosted by the University of Cambridge. Roy completed his doctorate in Computer Science at the Massachusetts Institute of Technology\, where his dissertation was awarded an MIT EECS Sprowls Award. \nBio: Blair Bilodeau is a third-year PhD candidate in the Department of Statistical Sciences at the University of Toronto\, supported by an NSERC Doctoral Canada Graduate Scholarship and the Vector Institute. His research focuses on combining techniques from statistics and computer science to obtain theoretical performance guarantees for decision making. His work emphasizes guarantees that are sensitive to model structure\, data assumptions\, and model uncertainty.
URL:https://stat.mit.edu/calendar/roy/
LOCATION:online
CATEGORIES:Stochastics and Statistics Seminar
END:VEVENT
END:VCALENDAR