BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIT Statistics and Data Science Center - ECPv5.14.2.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIT Statistics and Data Science Center
X-ORIGINAL-URL:https://stat.mit.edu
X-WR-CALDESC:Events for MIT Statistics and Data Science Center
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20170312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20171105T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20171013T110000
DTEND;TZID=America/New_York:20171013T120000
DTSTAMP:20220528T132559Z
CREATED:20170801T015204Z
LAST-MODIFIED:20171129T204524Z
UID:1681-1507892400-1507896000@stat.mit.edu
SUMMARY:Additivity of Information in Deep Generative Networks: The I-MMSE Transform Method
DESCRIPTION:Abstract: Deep generative networks are powerful probabilistic models that consist of multiple stages of linear transformations (described by matrices) and non-linear\, possibly random\, functions (described generally by information channels). These models have gained great popularity due to their ability to characterize complex probabilistic relationships arising in a wide variety of inference problems. In this talk\, we introduce a new method for analyzing the fundamental limits of statistical inference in settings where the model is known. The validity of our method can be established in a number of settings and is conjectured to hold more generally. A key assumption made throughout is that the linear transforms are drawn randomly from rotationally invariant distributions over matrices. \nOur method yields explicit formulas for 1) the mutual information; 2) the minimum mean-squared error (MMSE); 3) the existence and locations of certain phase transitions with respect to the problem parameters; and 4) the stationary points for the state evolution of approximate inference algorithms based on approximate message passing. When applied to the special case of models with multivariate Gaussian channels\, our method is rigorous and has close connections to spherical integrals and free probability theory for random matrices. When applied to the general case of non-Gaussian channels\, our method provides a simple alternative to the replica method from statistical physics. \nA key observation is that the combined effects of the individual components in the model (namely the matrices and the channels) are additive when viewed in a certain transformed domain. We provide an explicit characterization of the transformation that achieves this property\, which we refer to as the I-MMSE transform following from its connection to the integral-derivative relationship between mutual information and MMSE under additive Gaussian noise.
 \n \nBiography: Galen Reeves joined the faculty at Duke University in Fall 2013\, and is currently an Assistant Professor with a joint appointment in the Department of Electrical & Computer Engineering and the Department of Statistical Science. He completed his PhD in Electrical Engineering and Computer Sciences at the University of California\, Berkeley in 2011\, and he was a postdoctoral associate in the Department of Statistics at Stanford University from 2011 to 2013.
URL:https://stat.mit.edu/calendar/stochastics-and-statistics-seminar-galen-reeves/
LOCATION:E18-304\, United States
CATEGORIES:Stochastics and Statistics Seminar
GEO:42.3620185;-71.0878444
END:VEVENT
END:VCALENDAR