Matrix estimation by Universal Singular Value Thresholding

Sourav Chatterjee (Stanford)
E18-304

Consider the problem of estimating the entries of a large matrix, when the observed entries are noisy versions of a small random fraction of the original entries. This problem has received widespread attention in recent times. I will describe a simple estimation procedure, called Universal Singular Value Thresholding (USVT), that works for any matrix that has "a little bit of structure". Surprisingly, this simple estimator achieves the minimax error rate up to a constant factor. The method is applied to…
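A minimal numpy sketch of the USVT idea as it is usually described: fill unobserved entries with zeros, threshold the singular values at a universal level, rescale by the estimated sampling fraction, and clip to the known entry range. The constant eta and the entry range below are illustrative assumptions, not values taken from the talk.

```python
import numpy as np

def usvt(Y, mask, eta=2.02, value_range=(-1.0, 1.0)):
    """Sketch of Universal Singular Value Thresholding.

    Y    : (m, n) array; values at unobserved positions are ignored.
    mask : boolean (m, n) array, True where the entry was observed.
    eta  : threshold constant slightly above 2 (assumes entries scaled to [-1, 1]).
    """
    m, n = Y.shape
    p_hat = mask.mean()                               # estimated sampling fraction
    Y0 = np.where(mask, Y, 0.0)                       # fill unobserved entries with 0
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    thresh = eta * np.sqrt(max(m, n) * p_hat)         # universal singular value threshold
    keep = s >= thresh
    W = (U[:, keep] * s[keep]) @ Vt[keep, :] / p_hat  # rescaled low-rank reconstruction
    return np.clip(W, *value_range)                   # project back to the known entry range
```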

Influence maximization in stochastic and adversarial settings

Po-Ling Loh (University of Pennsylvania)
E18-304

Abstract: We consider the problem of influence maximization in fixed networks, for both stochastic and adversarial contagion models. In the stochastic setting, nodes are infected in waves according to linear threshold or independent cascade models. We establish upper and lower bounds for the influence of a subset of nodes in the network, where the influence is defined as the expected number of infected nodes at the conclusion of the epidemic. We quantify the gap between our upper and lower bounds…
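For illustration, a small Monte Carlo sketch of the independent cascade model mentioned in the abstract, estimating the influence of a seed set as the expected number of infected nodes; the uniform transmission probability p and the adjacency-list representation are assumptions made only for this example.

```python
import random

def independent_cascade(adj, seeds, p=0.1, rng=random):
    """One run of the independent cascade model.

    adj   : dict mapping node -> list of neighbours
    seeds : initially infected nodes
    p     : assumed uniform transmission probability per edge (illustrative)
    Returns the set of nodes infected when the epidemic stops.
    """
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return infected

def estimate_influence(adj, seeds, p=0.1, runs=1000):
    """Monte Carlo estimate of the expected number of infected nodes."""
    return sum(len(independent_cascade(adj, seeds, p)) for _ in range(runs)) / runs
```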

Interpretable prediction models for network-linked data

Liza Levina (University of Michigan)
E18-304

Prediction problems typically assume the training data are independent samples, but in many modern applications samples come from individuals connected by a network. For example, in adolescent health studies of risk-taking behaviors, information on the subjects’ social networks is often available and plays an important role through network cohesion, the empirically observed phenomenon of friends behaving similarly. Taking cohesion into account should allow us to improve prediction. Here we propose a regression-based framework with a network penalty on individual node…
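A rough sketch of the general idea of a network cohesion penalty: give each node an individual effect and penalize those effects by a quadratic form in the graph Laplacian, so that connected nodes are encouraged to behave similarly. This is an illustration of the concept under simple assumptions (symmetric adjacency matrix, squared-error loss), not necessarily the exact estimator presented in the talk.

```python
import numpy as np

def cohesion_regression(X, y, A, lam=1.0, eps=1e-3):
    """Sketch: least squares with a Laplacian network-cohesion penalty.

    Solves  min_{alpha, beta} ||y - X beta - alpha||^2 + lam * alpha' (L + eps I) alpha,
    where L is the Laplacian of the adjacency matrix A, alpha holds one individual
    effect per node, and eps is a small ridge term added for identifiability.
    """
    n, p = X.shape
    L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian of the network
    D = np.hstack([np.eye(n), X])                    # design for theta = [alpha; beta]
    P = np.zeros((n + p, n + p))
    P[:n, :n] = lam * (L + eps * np.eye(n))          # penalize only the node effects alpha
    theta = np.linalg.solve(D.T @ D + P, D.T @ y)
    return theta[:n], theta[n:]                      # alpha_hat, beta_hat
```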

Shotgun Assembly of Graphs

Elchanan Mossel (MIT)
E18-304

We will present some results and some open problems related to shotgun assembly of graphs for random generative models. Shotgun assembly of graphs is the problem of recovering a random graph or a randomly labelled graph from small pieces. This problem generalizes the theoretically elegant and practically important problem of shotgun assembly of DNA sequences. The general problem of shotgun assembly presents novel problems in random graphs, percolation, and random constraint satisfaction problems. Based on joint works with Nathan Ross, with…
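As a concrete illustration of the "small pieces" in shotgun assembly, here is a sketch that extracts the radius-r neighborhood of a vertex from an adjacency-list graph; in the assembly problem one observes such pieces (up to relabelling) and tries to reconstruct the whole graph from them.

```python
from collections import deque

def r_neighborhood(adj, root, r):
    """Vertices within graph distance r of root, found by breadth-first search.

    adj : dict mapping vertex -> list of neighbours (an undirected graph).
    The observed 'piece' is the subgraph these vertices induce; labels are kept
    here only for illustration, since in the assembly problem pieces are unlabelled.
    """
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)
```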

Sparse PCA via covariance thresholding

Yash Deshpande (Microsoft Research)
E18-304

Abstract: In sparse principal component analysis (PCA), the task is to infer a sparse, low-rank matrix from noisy observations. Johnstone and Lu proposed the popular “spiked covariance” model, wherein the population distribution is rotationally invariant except along a single direction, called the spike. Assuming that the spike direction is sparse in some basis, they also proposed a simple scheme to estimate its support based on the diagonal entries of the sample covariance. Indeed, later information-theoretic analysis demonstrated that the…
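A hedged sketch of the covariance-thresholding idea named in the title: soft-threshold the entries of the sample covariance and take the leading eigenvector of the result, whose largest coordinates indicate the spike support. The threshold level and the treatment of the diagonal below are illustrative choices, not the exact algorithm or constants from the talk.

```python
import numpy as np

def covariance_thresholding_pca(X, tau=3.0):
    """Sketch of sparse PCA by covariance thresholding.

    X   : (n, p) data matrix, rows are samples (mean assumed zero).
    tau : soft-threshold level in units of 1/sqrt(n) (illustrative).
    """
    n, p = X.shape
    S = X.T @ X / n                                     # sample covariance
    level = tau / np.sqrt(n)
    T = np.sign(S) * np.maximum(np.abs(S) - level, 0)   # entrywise soft threshold
    np.fill_diagonal(T, np.diag(S))                     # keep the diagonal (an assumption)
    vals, vecs = np.linalg.eigh(T)
    return vecs[:, -1]                                  # leading eigenvector of thresholded matrix
```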

Non-classical Berry-Esseen inequality and accuracy of the weighted bootstrap

Mayya Zhilova (Georgia Tech)
E18-304

Abstract: In this talk, we will study the higher-order accuracy of the weighted bootstrap procedure for estimating the distribution of a sum of independent random vectors with bounded fourth moments, on the set of all Euclidean balls. Our approach is based on a Berry-Esseen type inequality which extends the classical normal approximation bound. These results justify, in a non-asymptotic setting, that the weighted bootstrap can outperform the Gaussian (or chi-squared) approximation in accuracy with respect to dimension and sample size. In addition, the presented results lead…
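A small sketch of a multiplier (weighted) bootstrap for the normalized sum of independent random vectors, the statistic whose distribution over Euclidean balls is discussed above. Gaussian multiplier weights are one standard choice and are an assumption of this example, not a detail taken from the talk.

```python
import numpy as np

def weighted_bootstrap_norms(X, B=2000, rng=None):
    """Bootstrap replicates of the Euclidean norm of the normalized sum of rows of X.

    X : (n, d) array of independent random vectors.
    Returns B draws of || n^{-1/2} * sum_i w_i (X_i - Xbar) ||, where the w_i are
    mean-zero, unit-variance multiplier weights; their quantiles approximate those
    of || n^{-1/2} * sum_i (X_i - E X_i) ||.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    Xc = X - X.mean(axis=0)                 # centered observations
    W = rng.standard_normal((B, n))         # Gaussian multiplier weights (assumption)
    sums = W @ Xc / np.sqrt(n)              # (B, d) bootstrap sums
    return np.linalg.norm(sums, axis=1)
```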

Slope meets Lasso in sparse linear regression

Pierre Bellec (Rutgers)
E18-304

Abstract: We will present results in sparse linear regression on two convex regularized estimators, the Lasso and the recently introduced Slope estimator, in the high-dimensional setting where the number of covariates p is larger than the number of observations n. The estimation and prediction performance of these estimators will be presented, as well as a comparative study of the assumptions on the design matrix. https://arxiv.org/pdf/1605.08651.pdf

Biography: I am an Assistant Professor of statistics at Rutgers, the State University of New Jersey. I obtained my PhD…
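For reference, minimal Python definitions of the two objectives compared in the abstract above; the Slope weight sequence shown in the docstring is only an illustrative choice, not the one analyzed in the talk.

```python
import numpy as np

def lasso_objective(X, y, beta, lam):
    """(1/(2n)) ||y - X beta||^2 + lam * ||beta||_1  (the Lasso objective)."""
    n = X.shape[0]
    return np.sum((y - X @ beta) ** 2) / (2 * n) + lam * np.sum(np.abs(beta))

def slope_objective(X, y, beta, lams):
    """(1/(2n)) ||y - X beta||^2 + sum_j lams_j * |beta|_(j)  (the Slope objective).

    lams is a non-increasing weight sequence, e.g. lams_j proportional to
    sqrt(log(2p/j) / n) as an illustrative choice, and |beta|_(j) denotes the
    coordinates of beta sorted by decreasing magnitude.
    """
    n = X.shape[0]
    sorted_abs = np.sort(np.abs(beta))[::-1]
    return np.sum((y - X @ beta) ** 2) / (2 * n) + np.sum(np.asarray(lams) * sorted_abs)
```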

Causal Discovery in Systems with Feedback Cycles

Frederick Eberhardt (CalTech)
E18-304

Abstract: While causal relations are generally considered to be anti-symmetric, we often find that over time there are feedback systems such that a variable can have a causal effect on itself. Such "cyclic" causal systems pose significant challenges for causal analysis, both in terms of the appropriate representation of the system under investigation, and for the development of algorithms that attempt to infer as much as possible about the underlying causal system from statistical data. This talk will aim to provide some theoretical insights about…
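One common way to make the feedback setting concrete is a linear structural equation model with cycles, sampled at equilibrium. The sketch below illustrates that representation under simple assumptions (Gaussian noise, invertible I - B); it is not the speaker's inference algorithm.

```python
import numpy as np

def simulate_linear_cyclic_sem(B, n, noise_scale=1.0, rng=None):
    """Equilibrium samples from a linear SEM x = B x + e that may contain cycles.

    B : (d, d) coefficient matrix; B[i, j] is the effect of x_j on x_i.
        Cycles are allowed as long as I - B is invertible (e.g. spectral radius < 1).
    Returns an (n, d) sample using x = (I - B)^{-1} e with independent Gaussian noise e.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = B.shape[0]
    E = rng.normal(scale=noise_scale, size=(n, d))       # exogenous noise
    return E @ np.linalg.inv(np.eye(d) - B).T             # solve the feedback system
```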

Estimating the number of connected components of large graphs based on subgraph sampling

Yihong Wu (Yale)
E18-304

Abstract: Learning properties of large graphs from samples is an important problem in statistical network analysis, dating back to the early work of Goodman and Frank. We revisit the problem formulated by Frank (1978) of estimating the number of connected components in a graph of N vertices based on the subgraph sampling model, where we observe the subgraph induced by n vertices drawn uniformly at random. The key question is whether it is possible to achieve accurate estimation, i.e., vanishing normalized mean-square error,…
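A small sketch of the subgraph sampling model described above, together with a component count on the observed piece. Note that naively scaling the observed count by N/n is generally biased, which is part of what makes the estimation question nontrivial; the code only sets up the observation model, it is not the estimator from the talk.

```python
import random

def sample_induced_subgraph(adj, n, rng=random):
    """Subgraph sampling model: observe the subgraph induced by n uniformly
    random vertices of the parent graph (adj maps vertex -> neighbour list)."""
    sampled = set(rng.sample(list(adj), n))
    return {u: [v for v in adj[u] if v in sampled] for u in sampled}

def count_components(adj):
    """Number of connected components of the observed subgraph, by depth-first search."""
    seen, count = set(), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u])
    return count
```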


MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764