
IDS.190 – Topics in Bayesian Modeling and Computation: Past Events


September 2019

Automated Data Summarization for Scalability in Bayesian Inference

Tamara Broderick (MIT)

September 11 @ 4:00 pm - 5:00 pm
E18-304

IDS.190 – Topics in Bayesian Modeling and Computation. Abstract: Many algorithms take prohibitively long to run on modern, large datasets. But even in complex datasets, many data points may be at least partially redundant for some task of interest. So one might instead construct and use a weighted subset of the data (called a "coreset") that is much smaller than the original dataset. Typically, running algorithms on a much smaller dataset will take much less computing time, but…
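
The coreset idea can be made concrete with a toy sketch. This is not Broderick's construction (her group's coresets are built by more careful weighting schemes); it is the naive uniform-subsampling baseline, in which each of the m retained points gets weight n/m so the weighted log-likelihood matches the full-data one in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Full dataset and a toy log-likelihood for a Gaussian location model.
data = rng.normal(3.0, 1.0, size=100_000)

def log_lik(theta, points, weights):
    """Weighted log-likelihood: each point counts `weight` times."""
    return np.sum(weights * (-0.5 * (points - theta) ** 2))

# Naive "coreset": m uniformly subsampled points, each weighted n/m so the
# weighted sum matches the full-data log-likelihood in expectation.
n, m = len(data), 500
idx = rng.choice(n, size=m, replace=False)
coreset, weights = data[idx], np.full(m, n / m)

print(log_lik(3.0, data, np.ones(n)))   # full-data value
print(log_lik(3.0, coreset, weights))   # coreset approximation
```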


Probabilistic Modeling meets Deep Learning using TensorFlow Probability

Brian Patton (Google AI)

September 18 @ 4:00 pm - 5:00 pm
E18-304

IDS.190 – Topics in Bayesian Modeling and Computation. Abstract: TensorFlow Probability provides a toolkit that enables researchers and practitioners to integrate uncertainty with gradient-based deep learning on modern accelerators. In this talk we'll walk through some practical problems addressed using TFP; discuss the high-level interfaces, goals, and principles of the library; and touch on some recent innovations in describing probabilistic graphical models. Time permitting, we may touch on a couple of areas of research interest for the…
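
For a flavor of the library (a minimal sketch, not from the talk; the toy regression, its data, and its parameters are invented for illustration), TFP's JointDistributionSequential lets a small probabilistic graphical model be written as a list of distributions and lambdas:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy Bayesian linear regression written as a joint distribution.
x = tf.constant([1.0, 2.0, 3.0])

model = tfd.JointDistributionSequential([
    tfd.Normal(loc=0.0, scale=1.0),            # prior on the slope w
    lambda w: tfd.Independent(                 # likelihood y | w
        tfd.Normal(loc=w * x, scale=0.5),
        reinterpreted_batch_ndims=1),
])

w_draw, y_draw = model.sample()                # forward simulation
joint_lp = model.log_prob([w_draw, y_draw])    # joint log-density
```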


October 2019

Behavior of the Gibbs Sampler in the Imbalanced Case/Bias Correction from Daily Min and Max Temperature Measurements

Natesh Pillai (Harvard)

October 2 @ 4:00 pm - 5:00 pm
E18-304

IDS.190 – Topics in Bayesian Modeling and Computation. Note: The speaker this week will give two shorter talks within the usual session. Title: Behavior of the Gibbs sampler in the imbalanced case. Abstract: Many modern applications collect highly imbalanced categorical data, with some categories relatively rare. Bayesian hierarchical models combat data sparsity by borrowing information, while also quantifying uncertainty. However, posterior computation presents a fundamental barrier to routine use; a single class of algorithms does not work well in all settings and…
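
For concreteness, here is a hedged sketch (the toy data and parameters are invented, and this is not the speaker's algorithm) of the classic Albert–Chib Gibbs sampler for an intercept-only probit model, the kind of sampler whose mixing degrades as the binary outcome becomes rare:

```python
import numpy as np
from scipy.stats import truncnorm

np.random.seed(0)

# Imbalanced binary data: only a handful of successes out of n.
n, n_ones = 1000, 5
y = np.zeros(n)
y[:n_ones] = 1.0

# Probit model: y_i = 1{z_i > 0}, z_i ~ N(theta, 1), prior theta ~ N(0, B).
B, theta = 25.0, 0.0
draws = []
for _ in range(1000):
    # z | theta, y: truncated normal, positive where y = 1, negative where y = 0.
    lo = np.where(y == 1, -theta, -np.inf)   # bounds standardized by loc/scale
    hi = np.where(y == 1, np.inf, -theta)
    z = truncnorm.rvs(lo, hi, loc=theta, scale=1.0)
    # theta | z: conjugate normal update.
    V = 1.0 / (1.0 / B + n)
    theta = np.random.normal(V * z.sum(), np.sqrt(V))
    draws.append(theta)
# A trace plot of `draws` exhibits the sticky, slow mixing typical of
# Gibbs samplers on heavily imbalanced binary data.
```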


Probabilistic Programming and Artificial Intelligence

Vikash Mansinghka (MIT)

October 9 @ 4:00 pm - 5:00 pm
E18-304

IDS.190 – Topics in Bayesian Modeling and Computation Abstract: Probabilistic programming is an emerging field at the intersection of programming languages, probability theory, and artificial intelligence. This talk will show how to use recently developed probabilistic programming languages to build systems for robust 3D computer vision, without requiring any labeled training data; for automatic modeling of complex real-world time series; and for machine-assisted analysis of experimental data that is too small and/or messy for standard approaches from machine learning and…
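
None of the languages from the talk are shown here; as a generic, hedged illustration of the core idea (the model is an ordinary program, and inference is a reusable procedure applied to it), here is importance sampling over a tiny generative program in plain Python:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])

def simulate(rng):
    """The model is just a program: sample a latent slope, then noisy data."""
    w = rng.normal(0.0, 1.0)
    y = rng.normal(w * x, 0.5)
    return w, y

def log_lik(w, y_obs):
    """Density of the observables given the latent, read off the same program."""
    return -0.5 * np.sum(((y_obs - w * x) / 0.5) ** 2)

# Generic inference (importance sampling): propose latents from the prior,
# reweight by how well each explains the observed data.
y_obs = np.array([1.1, 2.2, 2.9])
ws = rng.normal(0.0, 1.0, size=5000)
logw = np.array([log_lik(w, y_obs) for w in ws])
wts = np.exp(logw - logw.max())
print("posterior mean of w ~", np.sum(wts * ws) / wts.sum())
```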


Markov Chain Monte Carlo Methods and Some Attempts at Parallelizing Them

Pierre E. Jacob (Harvard University)

October 16 @ 4:00 pm - 5:00 pm
E18-304

IDS.190 – Topics in Bayesian Modeling and Computation Abstract: MCMC methods yield approximations that converge to quantities of interest in the limit of the number of iterations. This iterative asymptotic justification is not ideal: it stands at odds with current trends in computing hardware. Namely, it would often be computationally preferable to run many short chains in parallel, but such an approach is flawed because of the so-called "burn-in" bias.  This talk will first describe that issue and some known…
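
A small numerical illustration of the burn-in bias (a sketch with an invented toy target, not the speaker's coupling construction): averaging many short random-walk Metropolis chains started far from a N(0, 1) target is noticeably biased, while one long chain with the same total budget is not:

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_chain(n_iters, x0, rng):
    """Random-walk Metropolis targeting N(0, 1)."""
    x, xs = x0, np.empty(n_iters)
    for t in range(n_iters):
        prop = x + rng.normal(0.0, 1.0)
        if np.log(rng.uniform()) < 0.5 * (x ** 2 - prop ** 2):
            x = prop
        xs[t] = x
    return xs

# Many short chains from a bad starting point: each average inherits the
# transient, so the grand mean is biased away from the true mean 0.
short_means = [mh_chain(50, 10.0, rng).mean() for _ in range(200)]
print("average of 200 length-50 chains:", np.mean(short_means))
# One long chain of the same total budget is far less affected.
print("single length-10000 chain:     ", mh_chain(10_000, 10.0, rng).mean())
```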


Esther Williams in the Harold Holt Memorial Swimming Pool: Some Thoughts on Complexity

Daniel Simpson (University of Toronto)

October 23 @ 4:00 pm - 5:00 pm
E18-304

IDS.190 – Topics in Bayesian Modeling and Computation. Abstract: As data becomes more complex and computational modelling becomes more powerful, we rapidly find ourselves beyond the scope of traditional statistical theory. As we venture beyond the traditional thunderdome, we need to think about how to cope with this additional complexity in our model building. In this talk, I will discuss a few techniques that are useful when specifying prior distributions and building Bayesian models…
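
One such technique, chosen here purely as an illustration (the linear model and the prior scales are invented, not from the talk), is the prior predictive check: simulate data implied by the prior alone and ask whether it is even remotely plausible:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)

# Prior predictive check: simulate datasets implied by the prior alone and
# ask whether they look remotely plausible for the application at hand.
for prior_sd in (100.0, 1.0):
    slopes = rng.normal(0.0, prior_sd, size=1000)
    y_sim = slopes[:, None] * x[None, :]
    print(f"slope prior sd {prior_sd:5.0f}: typical |y| at x = 10 is "
          f"{np.abs(y_sim[:, -1]).mean():7.1f}")
# A "vague" N(0, 100^2) slope prior implies wildly extreme data, a signal
# that the prior smuggles in unintended complexity.
```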


Using Bagged Posteriors for Robust Inference

Jonathan Huggins (Boston University)

October 30 @ 4:00 pm - 5:00 pm
37-212

IDS.190 – Topics in Bayesian Modeling and Computation. **PLEASE NOTE ROOM CHANGE TO BUILDING 37-212 FOR THE WEEKS OF 10/30 AND 11/6** Abstract: Standard Bayesian inference is known to be sensitive to misspecification between the model and the data-generating mechanism, leading to unreliable uncertainty quantification and poor predictive performance. However, finding generally applicable and computationally feasible methods for robust Bayesian inference under misspecification has proven to be a difficult challenge. An intriguing approach is…
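
A minimal sketch of the bagged-posterior idea under stated assumptions (a conjugate Normal-mean model with heavy-tailed data standing in for misspecification; not the speaker's implementation): refit the posterior on bootstrap resamples of the data and pool the draws:

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed data standing in for model misspecification: we fit a
# Normal(theta, 1) model whose tails are too light for these observations.
y = rng.standard_t(df=2, size=100) + 1.0

def posterior_params(data, prior_var=100.0, lik_var=1.0):
    """Conjugate Normal posterior for the mean theta."""
    post_var = 1.0 / (1.0 / prior_var + len(data) / lik_var)
    return post_var * data.sum() / lik_var, post_var

# Bagged posterior: refit on bootstrap resamples and pool the draws.
draws = []
for _ in range(500):
    boot = rng.choice(y, size=len(y), replace=True)
    m, v = posterior_params(boot)
    draws.append(rng.normal(m, np.sqrt(v)))
draws = np.array(draws)
print("bagged posterior mean and sd:", draws.mean(), draws.std())
```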


November 2019

Probabilistic Inference and Learning with Stein’s Method

Lester Mackey (Microsoft Research)

November 6 @ 4:00 pm - 5:00 pm
37-212

IDS.190 – Topics in Bayesian Modeling and Computation. **PLEASE NOTE ROOM CHANGE TO BUILDING 37-212 FOR THE WEEKS OF 10/30 AND 11/6** Abstract: Stein’s method is a powerful tool from probability theory for bounding the distance between probability distributions. In this talk, I’ll describe how this tool, designed to prove central limit theorems, can be adapted to assess and improve the quality of practical inference procedures. I’ll highlight applications to Markov chain sampler selection, goodness-of-fit testing, variational…
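
As one concrete instance (a sketch with an inverse-multiquadric kernel and a N(0, 1) target chosen for simplicity; not code from the talk), the kernel Stein discrepancy compares a sample to a target using only the target's score function, so no normalizing constant is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def ksd_to_std_normal(x):
    """V-statistic kernel Stein discrepancy between a sample and N(0, 1),
    using the inverse multiquadric kernel k(x, y) = (1 + (x - y)^2)^(-1/2).
    Only the target's score s(x) = -x enters, never its normalizing constant."""
    d = x[:, None] - x[None, :]
    u = 1.0 + d ** 2
    k = u ** -0.5
    dk_dx = -d * u ** -1.5                        # derivative in the first arg
    dk_dy = d * u ** -1.5                         # derivative in the second arg
    d2k = u ** -1.5 - 3.0 * d ** 2 * u ** -2.5    # mixed second derivative
    s = -x                                        # score of N(0, 1)
    k0 = (d2k + dk_dx * s[None, :] + dk_dy * s[:, None]
          + k * s[:, None] * s[None, :])
    return np.sqrt(k0.mean())

print(ksd_to_std_normal(rng.normal(0.0, 1.0, size=500)))  # small: good fit
print(ksd_to_std_normal(rng.normal(1.0, 1.0, size=500)))  # larger: detects shift
```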


Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox

Xiao-Li Meng (Harvard University)

November 13 @ 4:00 pm - 5:00 pm
E18-304

Abstract: Advances in Markov chain Monte Carlo in the past 30 years have made Bayesian analysis a routine practice. However, there is virtually no practice of performing Monte Carlo integration from the Bayesian perspective; indeed, this problem has earned the “paradox” label in the context of computing normalizing constants (Wasserman, 2013). We first use the modeling-what-we-ignore idea of Kong et al. (2003) to explain that the crux of the paradox is not with the likelihood theory, which is essentially the same…
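
To fix ideas (a hedged toy example, not from the talk), the basic Monte Carlo route to a normalizing constant is importance sampling: write Z = ∫ q(θ) dθ = E_p[q(θ)/p(θ)] for any proposal density p that covers the support:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized density q(theta) = exp(-theta^2 / 2); exact Z = sqrt(2*pi).
def q(theta):
    return np.exp(-0.5 * theta ** 2)

# Z = integral of q = E_p[q(theta) / p(theta)] for any proposal density p
# covering the support; here p = Uniform(-10, 10) with density 1/20.
theta = rng.uniform(-10.0, 10.0, size=100_000)
Z_hat = np.mean(q(theta) / (1.0 / 20.0))
print(Z_hat, "vs exact", np.sqrt(2.0 * np.pi))
```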
