Individual Submission Summary

Experimental Evaluation of Individualized Dynamic Treatment Rules

Fri, September 6, 2:00 to 3:30pm, Pennsylvania Convention Center (PCC), 111A

Abstract

Across a wide range of disciplines, researchers have developed and applied machine learning algorithms to construct individualized treatment rules (ITRs). What has been lacking in the literature, however, is a robust methodology to evaluate the empirical performance of ITRs before implementing them in practice. Recently, Imai and Li (2023) introduced a general assumption-lean framework for the experimental evaluation of ITRs, requiring only the randomization of treatment assignment and random sampling of units. We extend this methodology to the evaluation of individualized dynamic treatment rules (IDTRs) based on a sequential, multiple assignment, randomized trial (SMART). In addition to the standard evaluation metric, we introduce a new metric that decomposes the empirical performance of an IDTR into separate time periods while accounting for a budget constraint. We propose an unbiased estimator of these evaluation metrics and derive their finite-sample variances. We further extend our methodology to the setting in which the same experimental data are used to both estimate and evaluate IDTRs via cross-fitting. Our methodology is applicable to IDTRs derived using any machine learning algorithm. Simulation results show that the confidence intervals based on the proposed finite-sample variance estimator have good coverage even when the sample size is as small as 500. Finally, we apply our methodology to the experimental data from Tennessee's Student/Teacher Achievement Ratio (STAR) project.
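To make the evaluation task concrete, the sketch below shows a standard inverse-probability-weighting (IPW) value estimate for a two-stage dynamic treatment rule under SMART randomization. This is an illustrative baseline, not the paper's proposed estimator: the function names, the synthetic data-generating process, and the 0.5 randomization probabilities are all assumptions for the example. Because treatment is randomized with known probabilities at each stage, this IPW estimator is unbiased for the rule's value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # matches the smallest sample size considered in the simulations

# Hypothetical two-stage SMART: binary treatments randomized with
# probability 0.5 at each stage (probabilities known by design).
X1 = rng.normal(size=n)                     # baseline covariate
A1 = rng.binomial(1, 0.5, size=n)           # stage-1 treatment
X2 = X1 + A1 + rng.normal(size=n)           # intermediate covariate
A2 = rng.binomial(1, 0.5, size=n)           # stage-2 treatment
Y = X2 + A2 * (X2 > 0) + rng.normal(size=n) # final outcome

def ipw_value(rule1, rule2, p1=0.5, p2=0.5):
    """IPW estimate of the value of the rule (rule1, rule2).

    rule1(X1) and rule2(X1, A1, X2) return the treatments the rule
    would assign; p1 and p2 are the known randomization probabilities
    of the observed treatments at each stage.
    """
    # Indicator that the observed treatment path agrees with the rule.
    follows = (A1 == rule1(X1)) & (A2 == rule2(X1, A1, X2))
    # Weight concordant units by the inverse assignment probability;
    # discordant units contribute zero.
    return np.mean(follows / (p1 * p2) * Y)

# Example rule: never treat at stage 1, treat at stage 2 only when the
# intermediate covariate is positive.
v_hat = ipw_value(lambda x1: np.zeros_like(x1, dtype=int),
                  lambda x1, a1, x2: (x2 > 0).astype(int))
print(f"Estimated value: {v_hat:.3f}")
```

The paper's contributions sit on top of this basic setup: finite-sample variances for such value estimates, a time-decomposed metric under a budget constraint, and cross-fitting so that the same SMART data can both estimate and evaluate the rule.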

Authors