BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIT Statistics and Data Science Center - ECPv5.6.0//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIT Statistics and Data Science Center
X-ORIGINAL-URL:https://stat.mit.edu
X-WR-CALDESC:Events for MIT Statistics and Data Science Center
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200501T110000
DTEND;TZID=America/New_York:20200501T120000
DTSTAMP:20210514T193717Z
CREATED:20200407T170547Z
LAST-MODIFIED:20200408T141715Z
UID:4143-1588330800-1588334400@stat.mit.edu
SUMMARY:Naive Feature Selection: Sparsity in Naive Bayes
DESCRIPTION:Abstract: Due to its linear complexity\, naive Bayes classification remains an attractive supervised learning method\, especially in very large-scale settings. We propose a sparse version of naive Bayes\, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem\, for which we provide an exact solution in the case of binary data\, or a bound in the multinomial case. We prove that our bound becomes tight as the marginal contribution of additional features decreases. Both the binary and multinomial sparse models are solvable in time almost linear in problem size\, representing a very small additional cost relative to classical naive Bayes. Numerical experiments on text data show that naive Bayes feature selection is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination\, l1-penalized logistic regression and LASSO\, while being orders of magnitude faster. For a large data set with more than 1.6 million training points and about 12 million features\, and with a non-optimized CPU implementation\, our sparse naive Bayes model can be trained in less than 15 seconds. Authors: A. Askari\, A. d’Aspremont\, L. El Ghaoui. \n– \nBiography: \nAfter dual PhDs in optimization and finance from Ecole Polytechnique and Stanford University\, followed by a postdoc at U.C. Berkeley\, Alexandre d’Aspremont joined the faculty at Princeton University as an assistant and then associate professor. He returned to Europe in 2011 and is now a research director at CNRS\, attached to Ecole Normale Supérieure in Paris. He received the SIAM Optimization Prize\, an NSF CAREER award\, and an ERC starting grant. He co-founded and is co-scientific director of MASH\, an MSc program at PSL focused on machine learning and its applications in digital marketing\, journalism and public policy. He also co-founded Kayrros SAS\, which focuses on energy markets and earth observation. \nHis work is focused on optimization and its applications in machine learning\, statistics\, bioinformatics\, signal processing and finance. He collaborates with several companies on projects linked to earth observation\, insurance pricing\, statistical arbitrage\, etc. \n
URL:https://stat.mit.edu/calendar/naive-feature-selection-sparsity-in-naive-bayes/
LOCATION:Online
CATEGORIES:Stochastics and Statistics Seminar Series,Online events
END:VEVENT
END:VCALENDAR