Individual Submission Summary
Improving External Validity of Lessons Learned from Case Studies in Evaluation

Thu, September 5, 8:00 to 9:30am, Marriott Philadelphia Downtown, 414

Abstract

A key issue in the social sciences is whether we can expect the findings of a particular study to travel to a new, unstudied case, i.e., the problems of generalization and external validity. This question becomes even more prominent when we move from academic research, done for the sake of gaining knowledge (where it nonetheless remains central), to the realm of policies and interventions, where research has applied consequences. The ‘will it work here?’ question becomes pressing, and it has been the subject of discussion and of recommendations that seek to increase the external validity of a study (or evaluation) so that we can learn whether a similar outcome can be obtained in a target destination (Bardach 2004; Barzelay 2007; Bennett 2022; Capano and Howlett 2021; Cartwright 2011, 2022; Cartwright and Hardie 2012; Woolcock 2022). Despite these useful advances, and despite the progress of ‘Evidence-Based Policy’ in recent years, most existing approaches involve studying one case and its contextual conditions and then claiming that the intervention could work elsewhere, without actually exploring whether it works in a similar way in other places (it will never be exactly the same). Further, even when researchers do evaluate ‘how it works’ in multiple cases, we lack a conceptual framework for assessing whether the processes in those cases were ‘similar enough’.

As a result, most ‘Evidence-Based Policies’ lack actual empirical warrant for the new sites in which they are implemented: the emphasis on internal validity (important as it is) means that most collected evidence pertains only to the source site, with no assessment of whether the intervention works in similar ways in other cases. The evaluation of successful policies therefore leads to overlearning from one or two cases, from which we often conclude that a policy is a silver bullet, readily applicable with little change in other contexts. Scope conditions, if they are added at all, come as an afterthought, without a real assessment of the role they play in the causal pathway, so policies are replicated without adaptation and end in failure. What we need is a conceptual language that allows us to learn from success in a productive way through processual comparisons of two or more cases.

This paper proposes a three-step procedure for exploring the external validity of ‘how it works’ research questions. First, an initial source case is selected and studied in order to develop a processual theory that focuses on the key episodes of interaction between actors that link the intervention and the outcome. Second, another case that appears similar in terms of contextual conditions and intervention/outcome is selected, and this second case study assesses whether functionally analogous activities also took place there, or whether the process worked in different ways (or even broke down). Third, if the two cases exhibited similar processes, the researcher can identify other cases that might form a target population in which the process could also work; further cases can then be selected to strengthen confidence that the process operates in functionally equivalent ways across that population.
