IDSS Special Seminar


  • Graphical models under total positivity

    On September 27, 2018, from 4:00 pm to 5:00 pm
    Caroline Uhler (MIT)
    32-D677

    Title: Graphical models under total positivity

    Abstract: We discuss properties of distributions that are multivariate totally positive of order two (MTP2). Such distributions appear in the context of positive dependence, ferromagnetism in the Ising model, and various latent models. While these distributions have a long history in probability theory and statistical physics, in this talk I will discuss them in the context of high-dimensional statistics and graphical models. In particular, I will show that in the Gaussian setting MTP2 implies sparsity and guarantees existence of the MLE with as few as 2 observations, independent of the number of variables, making it an interesting alternative to the graphical lasso for various applications.

    A reference formulation of the MTP2 condition is included at the end of this entry.

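    For reference, the MTP2 condition and its standard Gaussian characterization (well-known results in this literature, not claims specific to the talk) can be stated as follows, where $\wedge$ and $\vee$ denote the coordinate-wise minimum and maximum. A density $f$ on $\mathbb{R}^p$ is MTP2 if

        $$ f(x)\, f(y) \;\le\; f(x \wedge y)\, f(x \vee y) \quad \text{for all } x, y \in \mathbb{R}^p, $$

    and a Gaussian distribution $N(\mu, \Sigma)$ is MTP2 if and only if its precision matrix $K = \Sigma^{-1}$ is an M-matrix, i.e.

        $$ K_{ij} \le 0 \quad \text{for all } i \ne j. $$

    Since the zero pattern of $K$ encodes the conditional independence graph of the Gaussian graphical model, this sign constraint is the structure behind the sparsity and MLE-existence claims in the abstract.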
  • Strong data processing inequalities and information percolation

    On September 20, 2018, from 4:00 pm to 5:00 pm
    Yury Polyanskiy (MIT)
    32-D677

    Title: Strong data processing inequalities and information percolation
    Abstract: The data-processing inequality, that is, $I(U;Y) \le I(U;X)$ for a Markov chain $U \to X \to Y$, has been the method of choice for proving impossibility (converse) results in information theory and many other disciplines. A channel-dependent improvement is called the strong data-processing inequality (or SDPI). In this talk we will: a) review SDPIs; b) show how point-to-point SDPIs can be combined into an SDPI for a network; c) show recent applications to problems of statistical inference on graphs (spiked Wigner model, community detection, etc.).

    A precise statement of the SDPI is included at the end of this entry.

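    For reference (standard SDPI notation, not taken from the talk itself): for a fixed channel $P_{Y|X}$, the strong data-processing inequality sharpens the inequality above to

        $$ I(U;Y) \;\le\; \eta(P_{Y|X})\, I(U;X) \quad \text{for every Markov chain } U \to X \to Y, $$

    where the contraction coefficient

        $$ \eta(P_{Y|X}) \;=\; \sup_{\mu \ne \nu} \frac{D(P_{Y|X} \circ \mu \,\|\, P_{Y|X} \circ \nu)}{D(\mu \,\|\, \nu)} \;\le\; 1, $$

    with $P_{Y|X} \circ \mu$ denoting the output distribution when the input is drawn from $\mu$, is often strictly smaller than 1. Combining such per-channel coefficients across the edges of a network is what produces the network-level SDPIs mentioned in the abstract.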
  • Learning in Strategic Environments: Theory and Data

    On February 24, 2016, from 2:00 pm to 3:00 pm

    Abstract: The strategic interaction of multiple parties with different objectives is at the heart of modern large-scale computer systems and electronic markets. Participants face such complex decisions in these settings that the classic economic equilibrium is not a good predictor of their behavior. The analysis and design of these systems have to go beyond equilibrium assumptions. Evidence from online auction marketplaces suggests that participants instead rely on algorithmic learning. In the first part of the talk, I will describe a theoretical framework for the analysis and design of efficient market mechanisms, with robust guarantees that hold under learning behavior, incomplete information, and in complex environments with many mechanisms running at the same time. In the second part of the talk, I will describe a method for analyzing datasets from such marketplaces and inferring private parameters of participants under the assumption that their observed behavior is the outcome of a learning algorithm. I will give an example application on datasets from Microsoft’s sponsored search auction system.

    An illustrative sketch of such a learning rule is included at the end of this entry.

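    The sketch below illustrates the kind of algorithmic learning behavior the abstract refers to; it is not the speaker’s framework or inference method. A bidder with a private value runs a multiplicative-weights (no-regret) rule over a discrete bid grid in a simulated repeated second-price auction; the bid grid, payoff model, and all parameter values are assumptions made for this example.

        # Illustrative only: multiplicative-weights (Hedge) bidding over a discrete
        # bid grid in a repeated second-price auction. The payoff model, grid, and
        # parameters are assumptions for this sketch, not the speaker's framework.
        import math
        import random

        def hedge_bidder(value=1.0, grid_size=11, rounds=5000, eta=0.2, seed=0):
            rng = random.Random(seed)
            bids = [value * k / (grid_size - 1) for k in range(grid_size)]  # candidate bids
            weights = [1.0] * grid_size
            realized_utility = 0.0
            for _ in range(rounds):
                total = sum(weights)
                probs = [w / total for w in weights]
                played = rng.choices(range(grid_size), weights=probs)[0]  # bid actually submitted
                competing = rng.uniform(0.0, value)  # highest competing bid this round
                # Second-price payoff: win iff you outbid the competition, pay the competing bid.
                realized_utility += (value - competing) if bids[played] > competing else 0.0
                # Full-information Hedge update: reweight every candidate bid by its payoff.
                utilities = [(value - competing) if b > competing else 0.0 for b in bids]
                weights = [w * math.exp(eta * u) for w, u in zip(weights, utilities)]
                top = max(weights)
                weights = [w / top for w in weights]  # rescale to avoid numerical overflow
            total = sum(weights)
            return bids, [w / total for w in weights], realized_utility / rounds

        if __name__ == "__main__":
            bids, probs, avg_utility = hedge_bidder()
            best = max(range(len(bids)), key=lambda i: probs[i])
            # The weights should concentrate near truthful bidding (bid close to the value).
            print(f"highest-weight bid: {bids[best]:.2f}, average utility: {avg_utility:.3f}")

    Under this payoff model truthful bidding is optimal, so one expects the learned weights to concentrate near the bidder’s value, which is easy to check by running the script.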
  • Overcoming Overfitting with Algorithmic Stability

    On February 23, 2016, from 2:00 pm to 3:00 pm

    Abstract: Most applications of machine learning across science and industry rely on the holdout method for model selection and validation. Unfortunately, the holdout method often fails in the now-common scenario where the analyst works interactively with the data, iteratively choosing which methods to use by probing the same holdout data many times. In this talk, we apply the principle of algorithmic stability to design reusable holdout methods, which can be used many times without losing the guarantees of fresh data. Applications include a model benchmarking tool that detects and prevents overfitting at scale. We conclude with a bird’s-eye view of what algorithmic stability says about machine learning at large, including new insights into stochastic gradient descent, the most popular optimization method in contemporary machine learning.

    A minimal sketch of a reusable-holdout mechanism is included at the end of this entry.

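    As a companion to the abstract above, here is a minimal sketch of a Thresholdout-style reusable holdout in the spirit of this line of work; the threshold, noise scale, and Gaussian noise used here are illustrative assumptions, not the exact mechanism or constants from the talk.

        # Minimal Thresholdout-style sketch. The threshold, noise scale, and choice
        # of Gaussian noise are illustrative assumptions, not the talk's mechanism.
        import random

        class ReusableHoldout:
            """Answers repeated analyst queries about holdout performance.

            A query is a statistic (e.g. a candidate model's accuracy) computed on
            both the training set and the holdout set. When the two agree, the
            training value is returned and the holdout is left untouched; when they
            disagree by more than a noisy threshold, a noised holdout value is
            returned instead, limiting how much the analyst can learn about (and
            overfit to) the holdout set across many queries.
            """

            def __init__(self, threshold=0.04, noise_scale=0.01, seed=0):
                self.threshold = threshold
                self.noise_scale = noise_scale
                self.rng = random.Random(seed)

            def query(self, train_value, holdout_value):
                noisy_threshold = self.threshold + self.rng.gauss(0.0, self.noise_scale)
                if abs(train_value - holdout_value) > noisy_threshold:
                    return holdout_value + self.rng.gauss(0.0, self.noise_scale)
                return train_value

        if __name__ == "__main__":
            # An analyst probes increasingly optimistic candidate models through the
            # same holdout: small train/holdout gaps pass through unchanged, larger
            # gaps are replaced by a noised holdout value.
            holdout = ReusableHoldout()
            print([round(holdout.query(0.74 + 0.02 * k, 0.75), 3) for k in range(5)])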