Recent research highlights substantial levels of inattentiveness in online, survey-based experiments. Importantly, such inattentiveness threatens to bias treatment effect estimates toward zero. This problem is likely even more pronounced in conjoint experiments, which ask respondents to attend to an even larger amount of information. In this paper, we explore potential ways to both measure and account for respondent inattentiveness in conjoint experiments. Replicating published conjoint experiments with large national samples, we ultimately propose a novel method, "conjoint attention checks" (CACs), to measure respondents' level of attentiveness and to provide more robust tests of hypotheses in conjoint experiments.