On a High-Dimensional Random Graph Process

Gábor Lugosi (Pompeu Fabra University)
32-141

We introduce a model for a high-dimensional random graph process and ask how "rich" the process has to be so that one finds atypical behavior. In particular, we study a natural process of Erdős–Rényi random graphs indexed by unit vectors in R^d. We investigate the deviations of the process with respect to three fundamental properties: clique number, chromatic number, and connectivity. The talk is based on joint work with Louigi Addario-Berry, Shankar Bhamidi, Sebastien Bubeck, Luc Devroye, and Roberto…
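
As a rough illustration of the three properties mentioned in the abstract, the short sketch below samples a single Erdős–Rényi graph G(n, p) with networkx and computes its clique number, a greedy upper bound on its chromatic number, and whether it is connected; the parameters are arbitrary, and the vector-indexed process studied in the talk is not modeled here.

```python
# Minimal sketch: clique number, greedy chromatic bound, and connectivity
# of a single Erdos-Renyi sample G(n, p). Parameters are illustrative only;
# the unit-vector-indexed process from the talk is not modeled here.
import networkx as nx

n, p = 200, 0.05                      # illustrative size and edge probability
G = nx.erdos_renyi_graph(n, p, seed=0)

clique_number = max(len(c) for c in nx.find_cliques(G))        # size of a largest clique
greedy_coloring = nx.greedy_color(G, strategy="largest_first")
chromatic_upper = max(greedy_coloring.values()) + 1            # upper bound on chi(G)
connected = nx.is_connected(G)

print("clique number:      ", clique_number)
print("chromatic number <= ", chromatic_upper)
print("connected:          ", connected)
```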

Find out more »

Gossip: Identifying Central Individuals in a Social Network

How can we identify the most influential nodes in a network for initiating diffusion? Are people able to easily identify those people in their communities who are best at spreading information, and if so, how? Using theory and recent data, we examine these questions and see how the structure of social networks affects information transmission ranging from gossip to the diffusion of new products. In particular, a model of diffusion is used to define centrality and shown to nest other…
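
The abstract does not spell out the diffusion model, but a standard way to turn one into a centrality measure is diffusion centrality: the expected number of times information seeded at a node reaches others within T rounds, when each link passes it along independently with probability q. A minimal numpy sketch on a made-up toy network (the network, q, and T are all illustrative assumptions):

```python
# Hedged sketch of diffusion centrality: expected hearings generated by seeding
# each node, over T rounds of passing along each edge with probability q.
# DC(i) = [ sum_{t=1..T} (q*A)^t 1 ]_i  -- the network, q, and T are toy choices.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # toy adjacency matrix
q, T = 0.4, 3

M = q * A
power = np.eye(len(A))
dc = np.zeros(len(A))
for _ in range(T):
    power = power @ M                  # (qA)^t
    dc += power @ np.ones(len(A))      # expected hearings added in round t

print(dict(enumerate(np.round(dc, 2))))
```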

Find out more »

Expansion of Biological Pathways by Integrative Genomics

Jun Liu (Harvard University)
32-141

The number of publicly available gene expression datasets has been growing dramatically. Various methods have been proposed to predict gene co-expression by integrating the publicly available datasets. These methods assume that the genes in the query gene set are homogeneously correlated and do not account for gene-specific correlation tendencies, background intra-experimental correlations, or quality variation across experiments. We propose a two-step algorithm called CLIC (CLustering by Inferred Co-expression), based on a coherent Bayesian model, to overcome these limitations. CLIC…
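
CLIC itself cannot be reconstructed from this summary, but the naive pooled co-expression score it improves upon, which treats every query gene and every dataset as exchangeable, can be sketched in a few lines; the data and gene indices below are made up for illustration, and this is not the CLIC algorithm:

```python
# Naive co-expression scoring across several expression datasets (NOT CLIC):
# average the correlation of a candidate gene with each query-set gene,
# pooled uniformly over datasets -- exactly the homogeneity assumption CLIC relaxes.
import numpy as np

rng = np.random.default_rng(0)
datasets = [rng.normal(size=(4, 30)) for _ in range(3)]  # 3 fake datasets, 4 genes x 30 samples
query_rows, candidate_row = [0, 1, 2], 3                 # rows of query genes / candidate gene

scores = []
for X in datasets:
    R = np.corrcoef(X)                                   # gene-by-gene correlation matrix
    scores.append(R[candidate_row, query_rows].mean())   # pooled over the query set

print("naive co-expression score:", np.mean(scores))     # pooled over datasets
```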

Find out more »

Minimax Estimation of Nonlinear Functionals with Higher Order Influence Functions: Results and Applications

James Robins (Harvard University)
32-141

Professor Robins describes a novel approach to point and interval estimation of nonlinear functionals in parametric, semi-, and non-parametric models based on higher order influence functions. Higher order influence functions are higher order U-statistics. The approach applies equally to both √n and non-√n problems. It reproduces results previously obtained by the modern theory of non-parametric inference, produces many new √n and non-√n results, and opens up the ability to perform non-√n inference in complex high dimensional models, such as models…
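
Independently of the estimators in the talk, it may help to recall what a higher order U-statistic is: an average of a symmetric kernel over all subsets of observations of a given size. The sketch below computes a second-order U-statistic with the kernel h(x, y) = (x − y)²/2, which reproduces the unbiased sample variance:

```python
# A second-order U-statistic: average a symmetric kernel h over all unordered pairs.
# With h(x, y) = (x - y)^2 / 2 this reproduces the unbiased sample variance.
from itertools import combinations
import random

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(200)]

def u_statistic(data, h):
    """Second-order U-statistic: average of h over all unordered pairs."""
    pairs = list(combinations(data, 2))
    return sum(h(a, b) for a, b in pairs) / len(pairs)

var_u = u_statistic(x, lambda a, b: (a - b) ** 2 / 2)

mean = sum(x) / len(x)
var_unbiased = sum((xi - mean) ** 2 for xi in x) / (len(x) - 1)
print(var_u, var_unbiased)   # the two agree up to floating point
```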

Find out more »

Next Generation Missing Data Methodology

Eric Tchetgen Tchetgen (Harvard University)
32-141

Missing data is a reality of the empirical sciences and can rarely be prevented entirely. It is often assumed that incomplete data are missing completely at random (MCAR) or missing at random (MAR). When the data are neither MCAR nor MAR, the missingness is said to be not missing at random (NMAR). Under MAR, there are two main approaches to inference: likelihood/Bayesian inference (e.g., EM or MI) and semiparametric approaches such as inverse probability weighting (IPW). In several important settings, likelihood-based inferences suffer the difficulty of…
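
As a generic textbook reminder of the IPW idea under MAR (not the methodology of the talk): model the probability that the outcome is observed given fully observed covariates, then reweight the complete cases by the inverse of that probability. A small simulated sketch:

```python
# Generic IPW sketch under MAR: outcome Y is missing with probability depending
# only on the observed covariate X; weight complete cases by 1 / P(observed | X).
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)          # true mean of Y is 2.0
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))        # missingness depends on X only (MAR)
observed = rng.random(n) < p_obs

# The true propensity is used here; in practice it would be estimated,
# e.g. by logistic regression of the missingness indicator on X.
weights = 1.0 / p_obs[observed]
ipw_mean = np.sum(weights * y[observed]) / np.sum(weights)
naive_mean = y[observed].mean()                 # biased: ignores the missingness mechanism

print(f"naive complete-case mean: {naive_mean:.3f}   IPW mean: {ipw_mean:.3f}   truth: 2.0")
```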

Find out more »

Wiki Surveys: Open and Quantifiable Social Data Collection

In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. Advances in technology now enable new, hybrid methods that can combine some of the benefits of both approaches. Drawing inspiration both from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia evolves over time based on contributions from participants, we…

Find out more »

Efficient Optimal Strategies for Universal Prediction

Peter Bartlett (UC Berkeley)
32-141

In game-theoretic formulations of prediction problems, a strategy makes a decision, observes an outcome and pays a loss. The aim is to minimize the regret, which is the amount by which the total loss incurred exceeds the total loss of the best decision in hindsight. This talk will focus on the minimax optimal strategy, which minimizes the regret, in three settings: prediction with log loss (a formulation of sequential probability density estimation that is closely related to sequential compression, coding,…
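
The regret defined above is straightforward to compute once per-round losses are tabulated: it is the strategy's cumulative loss minus the cumulative loss of the single best fixed decision in hindsight. The toy sketch below uses random losses and a simple follow-the-leader strategy purely to illustrate the bookkeeping, not the minimax strategies of the talk:

```python
# Toy regret computation: cumulative loss of a strategy minus the cumulative
# loss of the best fixed decision in hindsight. Losses here are random toys.
import numpy as np

rng = np.random.default_rng(0)
T, K = 500, 4
losses = rng.random((T, K))          # losses[t, k]: loss of decision k at round t

# Illustrative strategy: "follow the leader" -- play the decision with the
# smallest cumulative loss so far (decision 0 in the first round).
cum = np.zeros(K)
strategy_loss = 0.0
for t in range(T):
    k = int(np.argmin(cum))
    strategy_loss += losses[t, k]
    cum += losses[t]

best_in_hindsight = losses.sum(axis=0).min()
print("regret:", strategy_loss - best_in_hindsight)
```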

Find out more »

Principal Components Analysis in Light of the Spiked Model

Principal components analysis is a true workhorse of science and technology, applied everywhere from radio frequency signal processing to financial econometrics, genomics, and social network analysis. In this talk, I will review some of these applications and then describe the challenge posed by modern 'big data asymptotics', where there are roughly as many dimensions as observations; this setting has in the past seemed full of mysteries. Over the last ten years random matrix theory has developed a host of new tools…
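
A quick simulation conveys what the spiked model looks like in the proportional regime where the dimension p is comparable to the sample size n: the sample covariance eigenvalues form a Marchenko–Pastur bulk, and a sufficiently strong population spike produces a detached top eigenvalue. The dimensions and spike strength below are arbitrary choices:

```python
# Spiked covariance simulation: one strong population spike plus identity noise,
# with p comparable to n. The top sample eigenvalue separates from the bulk,
# whose right edge is approximately (1 + sqrt(p/n))^2 (Marchenko-Pastur).
import numpy as np

rng = np.random.default_rng(0)
n, p, spike = 1000, 500, 10.0                  # illustrative choices
u = np.zeros(p)
u[0] = 1.0                                     # spike direction
cov_sqrt = np.eye(p) + (np.sqrt(1.0 + spike) - 1.0) * np.outer(u, u)
X = rng.normal(size=(n, p)) @ cov_sqrt         # rows ~ N(0, I + spike * u u^T)

eigvals = np.linalg.eigvalsh(X.T @ X / n)      # ascending order
bulk_edge = (1 + np.sqrt(p / n)) ** 2
print("top eigenvalue:", round(eigvals[-1], 2),
      "  next:", round(eigvals[-2], 2),
      "  MP bulk edge ~", round(bulk_edge, 2))
```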

Find out more »

Large Average Submatrices of a Gaussian Random Matrix: Landscapes and Local Optima

Andrew Nobel (UNC)

The problem of finding large average submatrices of a real-valued matrix arises in the exploratory analysis of data from disciplines as diverse as genomics and social sciences. This talk will present several new theoretical results concerning large average submatrices of an n x n Gaussian random matrix that are motivated in part by previous work on biomedical applications. We will begin by considering the average and distribution of the k x k submatrix having largest average value (the global maximum),…
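
At toy scale the object of study can be computed by brute force: among all choices of k rows and k columns of an n x n Gaussian matrix, find the k x k submatrix with the largest average entry. The exhaustive search below only illustrates the definition (it is exponential in k and says nothing about the landscape results of the talk):

```python
# Brute-force search for the k x k submatrix with the largest average entry
# in a small n x n Gaussian matrix (illustrates the object, not the theory).
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 2                                   # kept tiny: the search is exponential in k
W = rng.normal(size=(n, n))

best = (-np.inf, None, None)
for rows in combinations(range(n), k):
    for cols in combinations(range(n), k):
        avg = W[np.ix_(rows, cols)].mean()
        if avg > best[0]:
            best = (avg, rows, cols)

print("largest k x k average:", round(best[0], 3), "rows:", best[1], "cols:", best[2])
```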

Find out more »

Incremental Methods for Additive Convex Cost Optimization

David Donoho (Stanford)
32-123

Motivated by machine learning problems over large data sets and distributed optimization over networks, we consider the problem of minimizing the sum of a large number of convex component functions. We study incremental gradient methods for solving such problems, which process component functions sequentially one at a time. We first consider deterministic cyclic incremental gradient methods (that process the component functions in a cycle) and provide new convergence rate results under some assumptions. We then consider a randomized incremental gradient…
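
A minimal sketch of the cyclic incremental gradient idea, using least-squares components f_i(w) = ½(a_i·w − b_i)² as stand-ins for the convex component functions (the step size and data are illustrative, not the schemes analyzed in the talk):

```python
# Cyclic incremental gradient for min_w sum_i f_i(w), with quadratic components
# f_i(w) = 0.5 * (a_i . w - b_i)^2. One component gradient is used per update.
import numpy as np

rng = np.random.default_rng(0)
m, d = 200, 5
A = rng.normal(size=(m, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=m)

w = np.zeros(d)
step = 0.01                                   # illustrative constant step size
for epoch in range(50):                       # each epoch cycles once through the components
    for i in range(m):                        # process component functions one at a time
        grad_i = (A[i] @ w - b[i]) * A[i]     # gradient of f_i at the current iterate
        w -= step * grad_i

print("distance to w_true:", round(np.linalg.norm(w - w_true), 4))
```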

Find out more »


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764