Stochastics and Statistics Seminar
Frontiers of Efficient Neural-Network Learnability
September 27, 2019 @ 11:00 am - 12:00 pm
Adam Klivans (UT Austin)
E18-304
Abstract:
What are the most expressive classes of neural networks that can be learned, provably, in polynomial time in a distribution-free setting? In this talk we give the first efficient algorithm for learning neural networks with two nonlinear layers using tools for solving isotonic regression, a nonconvex (but tractable) optimization problem. If we further assume the distribution is symmetric, we obtain the first efficient algorithm for recovering the parameters of a one-layer convolutional network. These results implicitly make use of a convex surrogate loss for generalized linear models and go beyond the kernel-method/overparameterization regime used in recent works.
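For background on the isotonic-regression subroutine the abstract mentions, the sketch below shows the classic pool-adjacent-violators (PAV) algorithm for the one-dimensional, squared-loss case. This is standard textbook material, not the talk's algorithm, and the function name is illustrative.

```python
# Illustrative sketch: pool-adjacent-violators (PAV) for 1-D isotonic
# regression with squared loss. Standard background, not the talk's method.

def isotonic_regression(y):
    """Return the nondecreasing sequence minimizing sum of (y_i - f_i)^2."""
    # Each block stores [pooled mean, number of points pooled into it].
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        # Merge adjacent blocks while they violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    # Expand pooled blocks back into a full-length fitted sequence.
    fit = []
    for mean, count in blocks:
        fit.extend([mean] * count)
    return fit

print(isotonic_regression([3.0, 1.0, 2.0, 4.0]))  # -> [2.0, 2.0, 2.0, 4.0]
```

Each pass pools neighboring averages until the fit is monotone, which is what makes the problem tractable despite its combinatorial appearance.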
Biography:
Adam Klivans is a professor of computer science at the University of Texas at Austin who works in theoretical computer science and machine learning. He completed his doctorate in mathematics from MIT, where he was awarded the Charles W. and Jennifer C. Johnson Prize.
–
The MIT Statistics and Data Science Center hosts guest lecturers from around the world in this weekly seminar.