Gaussian Differential Privacy, with Applications to Deep Learning
February 7, 2020 @ 11:00 am - 12:00 pm
Weijie Su (University of Pennsylvania)
Privacy-preserving data analysis has been put on a firm mathematical foundation since the introduction of differential privacy (DP) in 2006. This privacy definition, however, has some well-known weaknesses: notably, it does not tightly handle composition. This weakness has inspired several recent relaxations of differential privacy based on Rényi divergences. We propose an alternative relaxation we term “f-DP”, which has a number of nice properties and avoids some of the difficulties associated with divergence-based relaxations. First, f-DP preserves the hypothesis testing interpretation of differential privacy, which makes its guarantees easily interpretable. Second, it allows for lossless reasoning about composition and post-processing, and notably, a direct way to analyze privacy amplification by subsampling. We define a canonical single-parameter family of definitions within our class, termed “Gaussian Differential Privacy”, based on hypothesis testing of two shifted normal distributions. We prove that this family is focal to f-DP by introducing a central limit theorem, which shows that the privacy guarantees of any hypothesis-testing-based definition of privacy (including differential privacy) converge to Gaussian differential privacy in the limit under composition. This central limit theorem also gives a tractable analysis tool. We demonstrate the use of the tools we develop by giving an improved analysis of the privacy guarantees of noisy stochastic gradient descent. This is joint work with Jinshuo Dong and Aaron Roth.
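To make the "hypothesis testing of two shifted normal distributions" concrete, here is a minimal sketch (not from the talk itself, and with function names of our own choosing) of the μ-GDP trade-off curve G_μ(α) = Φ(Φ⁻¹(1 − α) − μ), i.e. the type II error of the optimal test distinguishing N(0, 1) from N(μ, 1) at type I error level α, using only the Python standard library:

```python
# Sketch of the Gaussian DP trade-off function; assumes the trade-off-function
# formulation from the accompanying Dong-Roth-Su paper. Names are ours.
from statistics import NormalDist

_phi = NormalDist()  # standard normal distribution

def gdp_tradeoff(mu: float, alpha: float) -> float:
    """Type II error of the optimal test at type I error level alpha
    when distinguishing N(0, 1) from N(mu, 1): the mu-GDP trade-off curve."""
    return _phi.cdf(_phi.inv_cdf(1.0 - alpha) - mu)

# mu = 0: the curve is 1 - alpha, i.e. the two worlds are indistinguishable
# (an adversary can do no better than random guessing).
beta = gdp_tradeoff(0.0, 0.3)  # approximately 0.7

# Composition: mu1-GDP composed with mu2-GDP is sqrt(mu1^2 + mu2^2)-GDP,
# so composing two 1-GDP mechanisms yields sqrt(2)-GDP.
mu_composed = (1.0**2 + 1.0**2) ** 0.5
beta_composed = gdp_tradeoff(mu_composed, 0.05)
```

Larger μ pushes the curve down (easier to distinguish the two neighboring datasets, hence weaker privacy), and the clean square-root composition rule is exactly what the divergence-based relaxations handle only loosely.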
Weijie Su is an Assistant Professor of Statistics at the Wharton School, University of Pennsylvania. He is an associated faculty member of the Applied Mathematics and Computational Science program at the University of Pennsylvania and a co-director of Penn Research in Machine Learning. Prior to joining Penn, he received his Ph.D. in Statistics from Stanford University in 2016. His research interests span machine learning, mathematical statistics, private data analysis, large-scale optimization, and multiple hypothesis testing. He is a recipient of the 2016 Theodore Anderson Dissertation Award in Theoretical Statistics and the 2019 NSF CAREER Award.