New provable techniques for learning and inference
in probabilistic graphical models
Andrej Risteski (MIT)
September 8 @ 11:00 am
A common theme in machine learning is succinct modeling of distributions over large domains. Probabilistic graphical models are one of the most expressive frameworks for doing so. The two major tasks involving graphical models are learning and inference. Learning is the task of estimating the “best fit” model parameters from raw data, while inference is the task of answering probabilistic queries in a model with known parameters (e.g. what is the marginal distribution of a subset of variables, after conditioning on the values of some other variables?). Learning can be thought of as finding a graphical model that “explains” the raw data, while inference queries extract the “knowledge” the graphical model contains.
I will focus on a few vignettes from my research which give new provable techniques for these tasks:
– In the context of learning, I will talk about method-of-moments techniques for learning noisy-or Bayesian networks, which are used for modeling the causal structure of diseases and symptoms.
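To make the noisy-or model concrete, here is a minimal generative sketch with made-up parameters (3 hypothetical diseases, 4 hypothetical symptoms): each symptom stays off only if a leak term and every active disease independently fail to trigger it. This illustrates the model being learned, not the method-of-moments algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 3 latent diseases, 4 observed symptoms.
prior = np.array([0.1, 0.2, 0.05])         # P(disease_i = 1)
# fail[i, j]: probability that disease i FAILS to trigger symptom j when present
fail = np.array([[0.2, 0.9, 0.9, 0.5],
                 [0.9, 0.3, 0.9, 0.9],
                 [0.9, 0.9, 0.1, 0.9]])
leak = np.array([0.99, 0.99, 0.99, 0.99])  # P(symptom_j = 0 | no diseases)

def sample():
    d = rng.random(3) < prior  # sample latent diseases
    # noisy-or: P(symptom_j = 0 | d) = leak_j * prod_i fail[i, j]^{d_i}
    p_off = leak * np.prod(np.where(d[:, None], fail, 1.0), axis=0)
    s = rng.random(4) >= p_off
    return d.astype(int), s.astype(int)

d, s = sample()
```

Only the binary symptom vector `s` is observed in practice; the learning task is to recover `prior`, `fail`, and `leak` from many such samples.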
– In the context of inference, I will talk about a new understanding of a class of algorithms for calculating partition functions, called variational methods, through the lens of convex programming hierarchies. Time permitting, I will also speak about MCMC methods for sampling from highly multimodal distributions using simulated tempering.
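As a toy illustration of the simulated tempering idea (not the analysis in the talk), the sketch below runs a Metropolis chain on a two-mode target while also taking Metropolis moves over a ladder of inverse temperatures; at high temperature the flattened landscape lets the chain cross between modes. The target, ladder, and step sizes are all hypothetical choices for this demo.

```python
import math, random

random.seed(0)

# Hypothetical multimodal target: equal mixture of two well-separated Gaussians.
def log_p(x):
    return math.log(0.5 * math.exp(-0.5 * (x - 5.0) ** 2)
                    + 0.5 * math.exp(-0.5 * (x + 5.0) ** 2))

betas = [1.0, 0.5, 0.25, 0.1]  # inverse temperatures; beta = 1 is the target

x, k = 0.0, 0                  # extended state: position and temperature index
samples = []
for step in range(20000):
    # Metropolis move in x at the current temperature
    y = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < betas[k] * (log_p(y) - log_p(x)):
        x = y
    # Metropolis move on the temperature index (neighboring level)
    j = k + random.choice([-1, 1])
    if 0 <= j < len(betas):
        if math.log(random.random()) < (betas[j] - betas[k]) * log_p(x):
            k = j
    if k == 0:
        samples.append(x)      # keep only samples taken at the target temperature

# with tempering, the chain should visit both modes
frac_right = sum(s > 0 for s in samples) / len(samples)
```

One caveat: full simulated tempering weights each temperature level by an estimate of its partition function so the chain spends comparable time at every level; this unweighted toy still yields correct samples at the `beta = 1` level, just with uneven level occupancy.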
The talk will assume no background and is meant as a “meet and greet” talk surveying various questions I’ve worked on and am interested in.