
# Faster and Simpler Algorithms for List Learning

## February 19 @ 11:00 am - 12:00 pm

Jerry Li, Microsoft Research

online

Abstract: The goal of list learning is to understand how to learn basic statistics of a dataset when it has been corrupted by an overwhelming fraction of outliers. More formally, one is given a set of points $S$, of which an $\alpha$-fraction $T$ is promised to be well-behaved. The goal is then to output an $O(1/\alpha)$-sized list of candidate means, so that one of these candidates is close to the true mean of the points in $T$. In many ways, list learning can be thought of as the natural robust generalization of clustering mixture models. This formulation of the problem was first proposed in Charikar-Steinhardt-Valiant STOC'17, which gave the first polynomial-time algorithm achieving nearly-optimal error guarantees. More recently, exciting work of Cherapanamjeri-Mohanty-Yau FOCS'20 gave an algorithm running in time $\widetilde{O}(nd \cdot \mathrm{poly}(1/\alpha))$. In particular, this runtime is nearly linear in the input size when $1/\alpha = O(1)$; however, it quickly becomes impractical even for reasonably small $1/\alpha$. Moreover, both of these algorithms are quite complicated.
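To make the setting concrete, here is a toy sketch of the list-learning problem in NumPy. It is purely illustrative and is **not** the algorithm from the talk: it builds a corrupted dataset with an $\alpha$-fraction of inliers, then uses a naive baseline (randomly sampling $O((1/\alpha)\log(1/\delta))$ points as the candidate list, so that with high probability at least one sample is an inlier) and measures the success criterion, i.e. the distance from the best candidate to the true mean. All parameter values are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: n points in R^d, of which an alpha-fraction are "good"
# inliers drawn around an unknown true mean; the rest are arbitrary outliers.
n, d, alpha = 10_000, 20, 0.1
true_mean = rng.normal(size=d)

n_good = int(alpha * n)
good = true_mean + rng.normal(size=(n_good, d))        # inliers ~ N(mu, I)
outliers = rng.uniform(-50, 50, size=(n - n_good, d))  # arbitrary corruption
S = np.vstack([good, outliers])
rng.shuffle(S)  # hide which points are inliers

# Naive baseline (NOT the talk's algorithm): sample k = O((1/alpha) log(1/delta))
# points uniformly; with probability >= 1 - delta at least one sample is an
# inlier, and an inlier is typically within ~sqrt(d) of the true mean.
delta = 0.01
k = int(np.ceil((1 / alpha) * np.log(1 / delta)))
candidates = S[rng.choice(len(S), size=k, replace=False)]

# Success criterion of list learning: SOME candidate in the list is close
# to the true mean of the good points.
best_err = min(np.linalg.norm(c - true_mean) for c in candidates)
print(f"list size = {k}, best candidate error = {best_err:.2f}")
```

Note that this baseline outputs single sample points, so its error scales with the spread of the inliers rather than achieving the optimal error guarantees discussed in the abstract; closing that gap is exactly what the algorithms in the talk are about.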

In our work, we have two main contributions. First, we give a polynomial-time algorithm for this problem which achieves optimal error and is considerably simpler than the previously known algorithms. Second, we build off of these insights to develop a more sophisticated algorithm based on lazy mirror descent which runs in time $\widetilde{O}(nd/\alpha + 1/\alpha^6)$ and also achieves optimal error. Our algorithm improves upon the runtime of previous work for all $1/\alpha = O(\sqrt{d})$. The goal of this talk is to give a more or less self-contained proof of the first result, and then to explain at a high level how these ideas lead to our faster algorithm.

Joint work with Ilias Diakonikolas, Daniel Kane, Daniel Kongsgaard, and Kevin Tian.

Bio: Jerry is a senior researcher at Microsoft Research in Redmond. Prior to MSR, he obtained his PhD in computer science under the supervision of Ankur Moitra, after which he did a postdoc at the Simons Institute. His research interests are primarily in machine learning theory, broadly construed, with a major focus on robustness against training-time and test-time perturbations; he has also worked in a variety of learning-theoretic settings, including quantum learning, distributed learning, and computational-versus-statistical tradeoffs in estimation tasks.
