Individual Submission Summary

Understanding Order Effects in Within-Subjects Experimental Designs

Sat, September 7, 4:00 to 5:30pm, Marriott Philadelphia Downtown, 414

Abstract

Recent work has advocated for repeated measures experimental designs, such as pretest-posttest and within-subjects designs, on the grounds that they dramatically increase statistical power. The evidence suggests that, contrary to common concerns, pretest-posttest designs do not bias treatment effects relative to a standard posttest-only design. There is less systematic evidence regarding within-subjects designs, however, and some studies find order effects in these designs. Related research suggests that these order effects are unlikely to be driven by either consistency effects or demand effects. This points to contrast effects as the likely mechanism: respondents use the first experimental condition as a baseline for comparison when evaluating the next condition. In this project, we make several contributions. First, across several experiments, we provide additional evidence on the prevalence of order effects in within-subjects designs and on how they affect substantive results. Second, we argue that order effects are not an inherent problem of within-subjects designs but are instead diagnostic of problems with the treatment itself. Specifically, order effects occur when respondents are given insufficient context to interpret the treatment, leading to non-compliance. In a within-subjects design, the initial round of the experiment supplies that missing context, changing treatment effects by changing how respondents interpret the treatment itself.
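
The power claim can be made concrete with a simple simulation. The sketch below (all parameter values are hypothetical, not drawn from our studies) compares the power of a between-subjects t-test with that of a paired within-subjects test for the same effect size, under the assumption that responses from the same respondent are correlated across conditions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical sample size, effect size, SD, and within-respondent correlation
n, effect, sd, rho = 100, 0.3, 1.0, 0.7

def one_draw():
    # Between-subjects: independent control and treatment groups of n each
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(effect, sd, n)
    p_between = stats.ttest_ind(treated, control).pvalue

    # Within-subjects: the same n respondents answer in both conditions,
    # with responses correlated at rho across conditions
    cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
    both = rng.multivariate_normal([0.0, effect], cov, size=n)
    p_within = stats.ttest_rel(both[:, 1], both[:, 0]).pvalue
    return p_between < 0.05, p_within < 0.05

rejections = np.array([one_draw() for _ in range(2000)])
print(f"power, between-subjects: {rejections[:, 0].mean():.2f}")
print(f"power, within-subjects:  {rejections[:, 1].mean():.2f}")
```

Because the paired test differences out the respondent-level variance shared across conditions, it detects the same effect far more often; this is the power advantage the repeated measures literature emphasizes.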
In an initial pilot study on trade attitudes, we have already documented order effects in a within-subjects design. We will carry out multiple studies to test (1) the prevalence of order effects, (2) whether order effects extend to the manipulation check, which would indicate a role for non-compliance, (3) whether providing additional context for the treatment prior to the experiment reduces order effects, and (4) whether order effects change the substantive conclusions of the experiment. Taken together, these studies aim to show that order effects are not intrinsic to within-subjects designs but can be avoided through better treatment design. This would imply that within-subjects designs should be used more widely and that researchers who observe order effects should improve their treatments rather than abandon the design.
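
As an illustration of how the first planned test could be run, the following sketch (simulated data; all variable names and effect sizes are hypothetical) estimates a treatment-by-order interaction with respondent-clustered standard errors. A nonzero interaction means the treatment effect depends on the round in which the condition is seen, i.e., an order effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of respondents

rows = []
for i in range(n):
    baseline = rng.normal(0.0, 1.0)           # respondent-level intercept
    treated_first = rng.integers(0, 2) == 1   # randomized condition order
    for rnd in (0, 1):
        treated = treated_first == (rnd == 0)
        # Hypothetical contrast effect: the treatment moves responses more
        # in the second round, after the first round supplies a baseline.
        tau = 0.3 + (0.2 if treated and rnd == 1 else 0.0)
        y = baseline + (tau if treated else 0.0) + rng.normal(0.0, 0.5)
        rows.append({"id": i, "y": y, "treated": int(treated), "second": rnd})

df = pd.DataFrame(rows)

# The treated:second coefficient is the order effect; clustering on
# respondent id accounts for the repeated measures.
fit = smf.ols("y ~ treated * second", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["id"]}
)
print(fit.summary().tables[1])
```

In this framing, a significant interaction alongside an order-dependent manipulation check would support the non-compliance account rather than an inherent flaw in the design.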

Authors