BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MIT Statistics and Data Science Center - ECPv5.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:MIT Statistics and Data Science Center
X-ORIGINAL-URL:https://stat.mit.edu
X-WR-CALDESC:Events for MIT Statistics and Data Science Center
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210312T110000
DTEND;TZID=America/New_York:20210312T120000
DTSTAMP:20220128T120136Z
CREATED:20210112T211220Z
LAST-MODIFIED:20210308T212627Z
UID:4481-1615546800-1615550400@stat.mit.edu
SUMMARY:On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning
DESCRIPTION:Abstract: For many causal effect parameters of interest\, doubly robust machine learning (DRML) estimators ψ̂₁ are the state of the art\, incorporating the good prediction performance of machine learning\; the decreased bias of doubly robust estimators\; and the analytic tractability and bias reduction of sample splitting with cross-fitting. Nonetheless\, even in the absence of confounding by unmeasured factors\, the nominal (1−α) Wald confidence interval ψ̂₁ ± z_{α/2}·ŝe[ψ̂₁] may still undercover even in large samples\, because the bias of ψ̂₁ may be of the same or even larger order than its standard error of order n^{−1/2}. \nIn this paper\, we introduce essentially assumption-free tests that (i) can falsify the null hypothesis that the bias of ψ̂₁ is of smaller order than its standard error\, (ii) can provide an upper confidence bound on the true coverage of the Wald interval\, and (iii) are valid under the null under no smoothness/sparsity assumptions on the nuisance parameters. The tests\, which we refer to as Assumption-Free Empirical Coverage Tests (AFECTs)\, are based on a U-statistic that estimates part of the bias of ψ̂₁. \nOur claims need to be tempered in several important ways. First\, no test\, including ours\, of the null hypothesis that the ratio of the bias to its standard error is smaller than some threshold δ can be consistent [without additional assumptions (e.g. smoothness or sparsity) that may be incorrect]. Second\, the above claims apply only to certain parameters in a particular class. For most of the others\, our results are unavoidably less sharp. \nJoint work with Lin Liu and Rajarshi Mukherjee. \nBio: \nThe principal focus of Dr. Robins’ research has been the development of analytic methods appropriate for drawing causal inferences from complex observational and randomized studies with time-varying exposures or treatments.
 The new methods are to a large extent based on the estimation of the parameters of a new class of causal models\, the structural nested models\, using a new class of estimators\, the G-estimators. The usual approach to estimating the effect of a time-varying treatment or exposure on time to disease is to model the hazard (incidence) of failure at time t as a function of past treatment history\, using a time-dependent Cox proportional hazards model. \nMore information available here.
URL:https://stat.mit.edu/calendar/robins/
LOCATION:online
CATEGORIES:Stochastics and Statistics Seminar
END:VEVENT
END:VCALENDAR