IDS.190 Topics in Bayesian Modeling and Computation

Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox

November 13, 2019 @ 4:00 pm - 5:00 pm

Xiao-Li Meng (Harvard University)

E18-304

Abstract:

Advances in Markov chain Monte Carlo in the past 30 years have made Bayesian analysis a routine practice. However, there is virtually no practice of performing Monte Carlo integration from the Bayesian perspective; indeed, this problem has earned the “paradox” label in the context of computing normalizing constants (Wasserman, 2013). We first use the modeling-what-we-ignore idea of Kong et al. (2003) to explain that the crux of the paradox is not with the likelihood theory, which is essentially the same as for standard non-parametric probability/density estimation (Vardi, 1985), though, via group theory, it provides a richer framework for modeling the trade-off between statistical efficiency and computational efficiency. But there is a real Bayesian paradox: Bayesian analysis cannot be applied exactly to solve Bayesian computation, because performing the exact Bayesian Monte Carlo integration would require more computation than solving the original Monte Carlo problem. We then show that there is a practical resolution to this paradox using the profile likelihood obtained in Kong et al. (2006), and that this approximation is second-order valid asymptotically. We also investigate a more computationally efficient approximation via an artificial likelihood of Geyer (1994). This artificial likelihood approach is only first-order valid, but a computationally trivial adjustment renders it second-order valid. We demonstrate empirically the efficiency of these approximate Bayesian estimators, compared with the usual frequentist-based Monte Carlo estimators such as bridge sampling estimators (Meng and Wong, 1996).
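For readers unfamiliar with bridge sampling, the following is a minimal numerical sketch (not from the talk) of the Meng and Wong (1996) identity r = c1/c2 = E_p2[q1(x) α(x)] / E_p1[q2(x) α(x)], using the simple geometric bridge α(x) = 1/sqrt(q1(x) q2(x)) on two toy Gaussian densities; the density choices, sample sizes, and function names are illustrative assumptions, not part of the work presented.

```python
import numpy as np

# Illustrative sketch only: estimate the ratio of normalizing constants
# r = c1/c2 of two unnormalized densities q1, q2 via the bridge sampling
# identity, with the geometric bridge alpha(x) = 1 / sqrt(q1(x) * q2(x)).
# Toy Gaussian example, so the true ratio is known and serves as a check.

rng = np.random.default_rng(0)

def q1(x):  # unnormalized N(0, 1); true normalizing constant c1 = sqrt(2*pi)
    return np.exp(-0.5 * x**2)

def q2(x):  # unnormalized N(1, 2^2); true normalizing constant c2 = sqrt(8*pi)
    return np.exp(-0.5 * (x - 1)**2 / 4.0)

# Draws from the corresponding normalized densities p1 and p2
x1 = rng.normal(0.0, 1.0, size=100_000)  # x1 ~ p1
x2 = rng.normal(1.0, 2.0, size=100_000)  # x2 ~ p2

alpha = lambda x: 1.0 / np.sqrt(q1(x) * q2(x))  # geometric bridge function

# Bridge sampling identity: r = E_p2[q1 * alpha] / E_p1[q2 * alpha]
r_hat = np.mean(q1(x2) * alpha(x2)) / np.mean(q2(x1) * alpha(x1))

true_r = np.sqrt(2 * np.pi) / np.sqrt(8 * np.pi)  # = 0.5
print(f"bridge sampling estimate: {r_hat:.4f}, truth: {true_r:.4f}")
```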

[This is a joint work with Masatoshi Uehara.]

References:

Wasserman, L. (2013). All of Statistics: A Concise Course in Statistical Inference. Springer Science & Business Media. See also https://normaldeviate.wordpress.com/2012/10/05/the-normalizing-constant-paradox/

Kong, A., P. McCullagh, X.-L. Meng, D. Nicolae, and Z. Tan (2003). A theory of statistical models for Monte Carlo integration (with discussion). J. R. Statist. Soc. B 65, 585-604. http://stat.harvard.edu/XLM/JRoyStatSoc/JRoyStatSocB65-3_585-618_2003.pdf

Vardi, Y. (1985). Empirical distributions in selection bias models. Ann. Statist. 13(1), 178-203. https://projecteuclid.org/download/pdf_1/euclid.aos/1176346585

Kong, A., P. McCullagh, X.-L. Meng, and D. Nicolae (2006). Further explorations of likelihood theory for Monte Carlo integration. In Advances in Statistical Modeling and Inference: Essays in Honor of Kjell A. Doksum (Ed: V. Nair), 563-592. World Scientific Press. http://www.stat.harvard.edu/XLM/books/kmmn.pdf

Geyer, C. J. (1994). Estimating normalizing constants and reweighting mixtures in Markov chain Monte Carlo. Technical Report 568, School of Statistics, University of Minnesota, Minneapolis. https://scholar.google.com/scholar?cluster=6307665497304333587&hl=en&as_sdt=0,22

Meng, X.-L. and Wong, W. H. (1996). Simulating ratios of normalizing constants via a simple identity: A theoretical exploration. Statistica Sinica 6, 831-860. http://stat.harvard.edu/XLM/StatSin/StatSin6-4_831-860_1996.pdf

Biography:

Xiao-Li Meng, the Whipple V. N. Jones Professor of Statistics and the Founding Editor-in-Chief of the Harvard Data Science Review, is well known for his depth and breadth in research, his innovation and passion in pedagogy, his vision and effectiveness in administration, as well as for his engaging and entertaining style as a speaker and writer. Meng was named the best statistician under the age of 40 by COPSS (Committee of Presidents of Statistical Societies) in 2001, and he is the recipient of numerous awards and honors for his more than 150 publications in at least a dozen theoretical and methodological areas, as well as in areas of pedagogy and professional development. He has delivered more than 400 research presentations and public speeches on these topics, and he is the author of “The XL-Files,” a thought-provoking and entertaining column in the IMS (Institute of Mathematical Statistics) Bulletin. His interests range from the theoretical foundations of statistical inference (e.g., the interplay among Bayesian, fiducial, and frequentist perspectives; frameworks for multi-source, multi-phase, and multi-resolution inferences) to statistical methods and computation (e.g., posterior predictive p-values; the EM algorithm; Markov chain Monte Carlo; bridge and path sampling) to applications in natural, social, and medical sciences and engineering (e.g., complex statistical modeling in astronomy and astrophysics, assessing disparity in mental health services, and quantifying statistical information in genetic studies). Meng received his BS in mathematics from Fudan University in 1982 and his PhD in statistics from Harvard in 1990. He was on the faculty of the University of Chicago from 1991 to 2001 before returning to Harvard, where he served as Chair of the Department of Statistics (2004-2012) and Dean of the Graduate School of Arts and Sciences (2012-2017).

For more information and an up-to-date schedule, please see https://stellar.mit.edu/S/course/IDS/fa19/IDS.190/

**Taking IDS.190 satisfies the seminar requirement for students in MIT’s Interdisciplinary Doctoral Program in Statistics (IDPS), but formal registration is open to any graduate student who can register for MIT classes, and the meetings are open to any interested researcher. Talks will be followed by 30 minutes of tea, snacks, and informal discussion.**
