Model uncertainty is a long-standing challenge in quantitative social science. Sensitivity analyses often focus on assessing robustness to changes in the covariate space, while neglecting other, potentially crucial, empirical assumptions. This article develops an Augmented Extreme Bounds Analysis (A-EBA) to assess model uncertainty across multiple empirical assumptions, including the control set, fixed-effects structure, standard error type, sample selection, and dependent variable operationalization. Applying A-EBA to the fields of democracy, institutional trust, populism, and welfare generosity, the results based on over 3.57 billion estimates reveal widespread model uncertainty: most independent variables yield a substantial share of statistically significant coefficients pointing in opposite (positive and negative) directions depending on model specification. We adopt a machine learning approach to assess the relative importance of different model specification choices and demonstrate that the impact of the covariate space is relatively modest compared to sample selection and dependent variable operationalization. Thus, model uncertainty seems to stem primarily from sampling and measurement rather than conditioning, which has crucial implications for the robustness checks that should be carried out in the social sciences.
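The core idea of an extreme-bounds-style analysis can be illustrated with a minimal sketch: estimate the focal coefficient under every combination of one specification choice (here, only the control set) and tally the share of significant positive and negative estimates. This is a toy illustration on synthetic data with plain-NumPy OLS, not the paper's A-EBA implementation; all variable names and the data-generating process are assumptions for the example.

```python
# Minimal extreme-bounds-style loop over control-set choices (synthetic data).
# A-EBA varies many more dimensions (fixed effects, SE types, samples,
# DV operationalizations); this sketch varies only the control set.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # focal independent variable
controls = rng.normal(size=(n, 3))           # candidate control variables
y = 0.3 * x + controls @ np.array([0.5, -0.2, 0.0]) + rng.normal(size=n)

results = []
for k in range(controls.shape[1] + 1):       # all subsets of the controls
    for subset in combinations(range(controls.shape[1]), k):
        X = np.column_stack([np.ones(n), x, controls[:, subset]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        results.append((beta[1], beta[1] / se))  # focal coef and t-statistic

coefs, tstats = np.array(results).T
share_sig_pos = np.mean((np.abs(tstats) > 1.96) & (coefs > 0))
share_sig_neg = np.mean((np.abs(tstats) > 1.96) & (coefs < 0))
print(f"{len(results)} specifications; "
      f"significant positive: {share_sig_pos:.0%}, "
      f"significant negative: {share_sig_neg:.0%}")
```

In a full specification-curve or A-EBA setting, the loop would additionally iterate over sample definitions and dependent-variable operationalizations, which the abstract identifies as the dominant sources of model uncertainty.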