BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIT Statistics and Data Science Center - ECPv5.14.2.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIT Statistics and Data Science Center
X-ORIGINAL-URL:https://stat.mit.edu
X-WR-CALDESC:Events for MIT Statistics and Data Science Center
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20180311T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20181104T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20180323T110000
DTEND;TZID=America/New_York:20180323T120000
DTSTAMP:20220528T132053Z
CREATED:20171214T203007Z
LAST-MODIFIED:20180205T144449Z
UID:2279-1521802800-1521806400@stat.mit.edu
SUMMARY:Statistical theory for deep neural networks with ReLU activation function
DESCRIPTION:Abstract: The universal approximation theorem states that neural networks are capable of approximating any continuous function up to a small error that depends on the size of the network. The expressive power of a network does not\, however\, guarantee that deep networks perform well on data. For that\, control of the statistical estimation risk is needed. In this talk\, we derive statistical theory for fitting deep neural networks to data generated from the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and a properly chosen network architecture achieve the minimax rates of convergence (up to logarithmic factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is considerable flexibility in the network architecture\, the tuning parameter is the sparsity of the network. Specifically\, we consider large networks in which the number of potential parameters is much larger than the sample size. Interestingly\, the depth (number of layers) of the neural network architecture plays an important role\, and our theory suggests that scaling the network depth with the logarithm of the sample size is natural.
URL:https://stat.mit.edu/calendar/schmidt-hieber/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar
GEO:42.3620185;-71.0878444
END:VEVENT
END:VCALENDAR