BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIT Statistics and Data Science Center - ECPv5.14.2.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIT Statistics and Data Science Center
X-ORIGINAL-URL:https://stat.mit.edu
X-WR-CALDESC:Events for MIT Statistics and Data Science Center
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180223T110000
DTEND;TZID=America/New_York:20180223T120000
DTSTAMP:20220528T122015Z
CREATED:20171206T225928Z
LAST-MODIFIED:20180216T200017Z
UID:2233-1519383600-1519387200@stat.mit.edu
SUMMARY:Optimization's Implicit Gift to Learning: Understanding Optimization Bias as a Key to Generalization
DESCRIPTION:Abstract: It is becoming increasingly clear that the implicit regularization afforded by optimization algorithms plays a central role in machine learning\, especially when using large\, deep neural networks. We have a good understanding of the implicit regularization afforded by stochastic approximation algorithms\, such as SGD\, and as I will review\, we understand and can characterize the implicit bias of different algorithms\, and can design algorithms with specific biases. But in this talk I will focus on the implicit biases of deterministic algorithms on underdetermined problems. In an effort to uncover the implicit biases of gradient-based optimization of neural networks\, which hold the key to their empirical success\, I will discuss recent work on implicit regularization for matrix factorization and for linearly separable problems with monotone decreasing loss functions. \nBiography: Professor Nati Srebro obtained his PhD at the Massachusetts Institute of Technology (MIT) in 2004\, held a post-doctoral fellowship with the Machine Learning Group at the University of Toronto\, and was a Visiting Scientist at IBM Haifa Research Labs. Since January 2006\, he has been on the faculty of the Toyota Technological Institute at Chicago (TTIC) and the University of Chicago\, and has also served as the first Director of Graduate Studies at TTIC. From 2013 to 2014 he was an associate professor at the Technion-Israel Institute of Technology. Prof. Srebro’s research encompasses methodological\, statistical\, and computational aspects of machine learning\, as well as related problems in optimization. Some of Prof. Srebro’s significant contributions include work on learning “wider” Markov networks\, introducing the use of the nuclear norm for machine learning and matrix reconstruction\, work on fast optimization techniques for machine learning\, and work on the relationship between learning and optimization.
URL:https://stat.mit.edu/calendar/stochastics-statistics-seminar-nathan-srebro-university-chicago/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar
GEO:42.3620185;-71.0878444
END:VEVENT
END:VCALENDAR