How Well Generative Adversarial Networks Learn Distributions (and Beyond)?

September 6 @ 11:00 am - 12:00 pm

Tengyuan Liang (University of Chicago)

E18-304

Abstract:
In this paper we study the rate of convergence for learning distributions within the adversarial framework of Generative Adversarial Networks (GANs), which subsumes Wasserstein, Sobolev, and MMD GANs as special cases. We consider a wide range of parametric and nonparametric target distributions under a collection of objective evaluation metrics. On the nonparametric end, we investigate the minimax optimal rates and the fundamental difficulty of density estimation under the adversarial framework. On the parametric end, we establish a theory for general neural network classes (including deep leaky ReLU networks as a special case) that characterizes the interplay between the choice of generator and discriminator. We investigate how to obtain good statistical guarantees for GANs through the lens of regularization, and we discover and isolate a new notion of regularization, called generator/discriminator pair regularization, that sheds light on the advantage of GANs over classical parametric and nonparametric approaches to density estimation. We develop novel oracle inequalities as the main tools for analyzing GANs, which are of independent theoretical interest.
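For concreteness, the adversarial framework can be formalized as an integral probability metric; this is a standard formulation in the GAN literature, and the notation below is illustrative rather than quoted from the talk. Given a target distribution \mu, a generated distribution \nu, and a discriminator class \mathcal{F}, the adversarial discrepancy is

\[
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \Big| \, \mathbb{E}_{X \sim \mu} f(X) \;-\; \mathbb{E}_{Y \sim \nu} f(Y) \, \Big|.
\]

Taking \mathcal{F} to be the 1-Lipschitz functions yields the Wasserstein-1 distance, a Sobolev ball yields the Sobolev GAN metric, and the unit ball of an RKHS yields the maximum mean discrepancy (MMD), which is how the three GAN variants named in the abstract arise as special cases.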

Biography:
Dr. Liang is an assistant professor at Chicago Booth, where he is also the George C. Tiao Faculty Fellow in Data Science Research. His current research interests include computational and algorithmic aspects of statistical inference, machine learning and statistical learning theory, and stochastic methods in non-convex optimization.

The MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.

Details

Date: September 6
Time: 11:00 am - 12:00 pm

Venue

E18-304
50 Ames Street
Cambridge, MA 02139
