Stochastics and Statistics Seminar
Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe
Speaker Name: Vianney Perchet (École Normale Supérieure Paris-Saclay)
Date: May 19, 2017
We consider the problem of bandit optimization, inspired by stochastic optimization and online learning with bandit feedback. In this problem, the objective is to minimize a global, not necessarily cumulative, convex loss function. This framework allows us to study a very general class of problems, with applications in statistics, machine learning, and other fields. To solve this problem, we analyze the Upper-Confidence Frank-Wolfe algorithm, inspired by techniques ranging from bandits to convex optimization. We identify slow and fast rates of convergence of the optimization error over various classes of functions, and discuss the optimality of these results.
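To give a feel for the kind of algorithm discussed, below is a minimal toy sketch of an upper-confidence Frank-Wolfe-style update. It assumes a simple quadratic global loss L(p) = <p, mu> + (lam/2)||p||^2 over the simplex, where the linear term mu is unknown and only observed through noisy feedback on the pulled arm; the loss, noise level, and confidence bonus are illustrative choices, not the exact setting of the talk.

```python
import numpy as np

# Toy illustration: Frank-Wolfe over the simplex where the gradient is only
# partially known and is replaced by an optimistic (lower-confidence) estimate.
rng = np.random.default_rng(0)
K, T, lam, noise = 5, 5000, 0.5, 0.1
mu = rng.uniform(0.0, 1.0, size=K)     # unknown per-arm mean losses (toy choice)

counts = np.zeros(K)                   # number of pulls per arm
mu_hat = np.zeros(K)                   # empirical mean feedback per arm

# Pull each arm once to initialize the estimates.
for i in range(K):
    counts[i] += 1
    mu_hat[i] = mu[i] + noise * rng.standard_normal()
p = counts / counts.sum()              # empirical proportion of pulls

for t in range(K, T):
    # True gradient of L at p is mu + lam * p; replace mu by a lower
    # confidence bound (optimism for a minimization problem).
    bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
    grad_lcb = (mu_hat - bonus) + lam * p

    # Frank-Wolfe linear minimization over the simplex picks a single vertex.
    i = int(np.argmin(grad_lcb))

    # Pull arm i, observe noisy feedback, update the running mean estimate.
    counts[i] += 1
    mu_hat[i] += (mu[i] + noise * rng.standard_normal() - mu_hat[i]) / counts[i]

    # Frank-Wolfe step with step size 1/(t+1): p remains the empirical
    # frequency of pulls, the quantity whose global loss we care about.
    gamma = 1.0 / (t + 1)
    e_i = np.zeros(K)
    e_i[i] = 1.0
    p = (1.0 - gamma) * p + gamma * e_i

def L(q):
    return q @ mu + 0.5 * lam * q @ q

print("final global loss L(p_T):", L(p))
```

With the step size 1/(t+1), the Frank-Wolfe iterate coincides with the empirical distribution of pulled arms, which is what makes the bandit feedback and the optimization iterate line up in this kind of scheme.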
Vianney Perchet is currently a professor at the École Normale Supérieure Paris-Saclay. Prior to that, he held positions as an assistant professor at Université Paris Diderot and as a professor at ENSAE.
Perchet works at the intersection of machine learning, optimization, and game theory. He focuses primarily on identifying large subclasses of problems for which it is possible to construct and analyze algorithms with "fast rates", beating the slow rates dictated by general lower bounds.