Individual Submission Summary

1,000 Speeches vs. GPT-4: Analyzing Position Shifting in the European Parliament

Thu, September 5, 10:00 to 11:30am, Pennsylvania Convention Center (PCC), 112A

Abstract

Substantively, our paper aims to explain variation in the positions of Members of the European Parliament (MEPs) and their national parties and European party groups on asylum-related issues. However, the paper’s main contribution is our novel, AI-based approach to analyzing text as data and a systematic comparison of the advantages, similarities, and differences of using a large language model instead of more traditional human or computer-based coding procedures.
Prior research shows that the positions of MEPs tend to reflect the views expressed by their national parties and vary along the traditional left-right dimension, with Euroskeptic MEPs clearly preferring domestic over European policy solutions. This research has also focused exclusively on variation between politicians and parties, whereas we are more interested in explaining changes in the positions of individual MEPs over the course of the European election cycle. Drawing on insights from American politics and the literature on shifts in campaign positions between primaries and general elections, we advance a theoretical argument for when MEPs portray themselves as concerned about either the security implications of refugee migration or the rights of asylum seekers. Controlling for national- and European Union-level variables, we find that MEPs align their statements with their parties' migration positions prior to their intra-party nomination but subsequently, in the run-up to the elections for the European Parliament, adjust their positions to be more in line with national public opinion. Our results are based on the analysis of almost 1,000 speeches by over 200 MEPs in European Parliament debates across two election cycles.
Where previous research has relied on roll-call vote data, the manual coding of MEP speeches, or the automated analysis of heavily preprocessed chunks of words, we use a large language model to code entire speeches, and our paper details the conditions under which GPT-4 outperforms unsupervised scaling methods such as Wordfish. We also introduce alternative measures of inter-coder reliability and incorporate the uncertainty of the coding process into our analysis. The paper concludes with a brief exploration of the potential of prompt engineering and its implications for future research on how political speech is perceived and how audiences engage with it.

Authors