Individual Submission Summary

Difference-in-Differences Designs with Attrition Bias Correction

Fri, September 6, 12:00 to 1:30pm, Pennsylvania Convention Center (PCC), 110B

Abstract

The aim of this paper is to address one of the most prevalent problems encountered by political scientists working with difference-in-differences (DID) designs: missingness in panel data. The problem is particularly acute in DID studies that measure outcome variables with survey data, where respondent attrition is a frequent concern. To correct for attrition bias, practitioners typically resort to bounding causal effects and/or inverse probability weighting using baseline covariates. In this paper, I discuss how these two most widely discussed methods, being general remedies, under-utilize the assumptions already imposed on the panel structure for causal identification.
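
As a concrete point of reference, consider a minimal sketch of the inverse probability weighting correction mentioned above, in notation introduced here rather than taken from the paper. Let $D_i$ denote treatment, $R_i$ an indicator that unit $i$ is observed in the follow-up wave, $X_i$ baseline covariates, and $Y_{it}$ the outcome in periods $t \in \{0, 1\}$. If attrition is assumed ignorable given $(D_i, X_i)$, a weighted two-period DID estimator of the ATT reweights complete cases by the inverse of an estimated response probability $\hat{\pi}_i = \widehat{\Pr}(R_i = 1 \mid D_i, X_i)$:

\[
\hat{\tau}_{\mathrm{ATT}}
= \frac{\sum_i D_i R_i \,\hat{\pi}_i^{-1} (Y_{i1} - Y_{i0})}{\sum_i D_i R_i \,\hat{\pi}_i^{-1}}
- \frac{\sum_i (1 - D_i) R_i \,\hat{\pi}_i^{-1} (Y_{i1} - Y_{i0})}{\sum_i (1 - D_i) R_i \,\hat{\pi}_i^{-1}}.
\]

The correction leans entirely on baseline covariates predicting who drops out; it does not exploit the parallel trends structure that the DID design already assumes, which is the under-utilization the paper takes up.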

My discussion aligns with recent work in disciplines adjacent to political science that uses parallel trends assumptions and the changes-in-changes approach to correct for attrition bias, but it differs from that work by offering a different way of interpreting such assumptions that exploits the panel structure. What must not be overlooked is that attrition is a post-treatment intermediate variable. In this vein, I introduce principal stratification, classifying units as always-reporters, if-treated-reporters, if-control-reporters, or never-reporters, and formulate assumptions based on these latent groups defined by the joint distribution of the potential outcomes of attrition.
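
To fix ideas, the strata can be written in terms of potential response indicators (notation introduced here for illustration). Let $R_i(d)$ denote whether unit $i$ would remain in the panel under treatment status $d \in \{0, 1\}$. The four latent groups are

\[
\begin{aligned}
\text{always-reporters:} &\quad R_i(1) = 1,\ R_i(0) = 1, \\
\text{if-treated-reporters:} &\quad R_i(1) = 1,\ R_i(0) = 0, \\
\text{if-control-reporters:} &\quad R_i(1) = 0,\ R_i(0) = 1, \\
\text{never-reporters:} &\quad R_i(1) = 0,\ R_i(0) = 0.
\end{aligned}
\]

Since only one of $R_i(1)$ and $R_i(0)$ is ever observed, stratum membership is latent: observed respondents in the treated group are a mixture of always-reporters and if-treated-reporters, while observed respondents in the control group mix always-reporters with if-control-reporters.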

The contributions of this paper can be summarized as follows. First, I outline a set of assumptions that may justify the common practice of listwise deletion (complete case analysis) and highlight their pitfalls. Second, I clarify the main identification challenges posed by attrition: (1) potential dependence between selection into treatment and the principal strata, and (2) heterogeneous effects across principal strata. Third, I present identification strategies based on principal-stratum-specific parallel trends assumptions for partial identification and sensitivity analysis of the average treatment effect on the treated (ATT). Lastly, I propose alternative causal estimands that can be identified using auxiliary variables such as other survey questions and/or multi-wave data from the pre-treatment period.
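
One illustrative form such a stratum-specific assumption could take, sketched in my own notation rather than as the paper's exact condition, restricts parallel trends to the always-reporter stratum:

\[
\mathbb{E}\!\left[ Y_{i1}(0) - Y_{i0}(0) \mid D_i = 1,\ R_i(1) = R_i(0) = 1 \right]
= \mathbb{E}\!\left[ Y_{i1}(0) - Y_{i0}(0) \mid D_i = 0,\ R_i(1) = R_i(0) = 1 \right].
\]

Because always-reporters cannot be separated from if-treated-reporters and if-control-reporters among the observed respondents, an assumption of this kind typically yields bounds on the ATT together with sensitivity parameters governing the unknown mixture, rather than a point estimate.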
