This paper analyzes variation in the positions of Members of the European Parliament (MEPs) on migration-related issues: not only variation between MEPs, their European party groups, and their national political parties, but specifically variation within individual MEPs across speeches and over time.
Prior research has treated MEPs’ positions as fixed and shown that these positions largely reflect the views expressed by national parties in the domestic political setting and vary along the traditional left-right dimension, with Euroskeptic MEPs showing a clear preference for domestic over European policy solutions. Instead of focusing on variation between politicians and their parties, we are primarily interested in explaining changes in the positions MEPs take over the course of the European election cycle. We investigate when MEPs portray themselves as concerned about the security implications of refugee migration and when they emphasize the rights of asylum seekers.
Drawing on insights from American politics and the literature on shifts in campaign positions between primaries and general elections, we advance a theoretical argument about how MEPs, controlling for socioeconomic variables, party affiliation, and public opinion, adjust their statements on migration ahead of both their national intra-party nomination and the elections for the European Parliament.
To test our claims empirically, we develop a novel, AI-based approach to analyzing text as data. Where previous research has relied on roll-call vote data, the manual coding of MEP speeches, or the automated analysis of heavily preprocessed chunks of words, we use a large language model to code almost 1,000 entire speeches by over 200 MEPs in European Parliament debates across two election cycles. To highlight the advantages, similarities, and differences of our AI-based approach, we systematically compare our results to those produced by human coders and by more traditional computer-based coding. We show the conditions under which GPT-4 outperforms unsupervised scaling methods such as Wordfish, introduce alternative measures of inter-coder reliability and incorporate the uncertainty of the coding process into our analysis, and explore the potential and implications of prompt engineering for future research on political speech that moves beyond content analysis to sentiment analysis and to the study of audience perception, reception, and engagement.
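As a rough illustration of the kind of LLM-based coding step described above (not the authors' actual pipeline), the sketch below shows how a speech might be placed on a security-versus-rights scale with the OpenAI chat API. The model name, prompt wording, the -5 to +5 scale, and the repeated-query heuristic for gauging coding uncertainty are all assumptions made for this example.

```python
# Minimal sketch: coding one European Parliament speech on a hypothetical
# security-vs-rights migration scale with GPT-4. Prompt text, scale, and
# repetition strategy are illustrative assumptions, not the paper's design.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CODING_PROMPT = (
    "You are coding speeches from European Parliament debates on migration. "
    "Return a single integer from -5 (frames refugee migration purely as a "
    "security threat) to +5 (focuses purely on the rights of asylum seekers). "
    "Return only the number."
)

def code_speech(speech_text: str, model: str = "gpt-4") -> int:
    """Ask the model to place one speech on the security-vs-rights scale."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep the coding as deterministic as possible
        messages=[
            {"role": "system", "content": CODING_PROMPT},
            {"role": "user", "content": speech_text},
        ],
    )
    return int(response.choices[0].message.content.strip())

def code_with_uncertainty(speech_text: str, n: int = 5) -> list[int]:
    """Repeat the coding n times; the spread of scores is a crude
    proxy for uncertainty in the model's coding decision."""
    return [code_speech(speech_text) for _ in range(n)]

# Usage: code a batch of speeches and keep scores for downstream analysis.
speeches = ["Honourable Members, the situation at our external borders ..."]
scores = [code_speech(s) for s in speeches]
```

In practice, one would compare such model-assigned scores against human codes (for example with an inter-coder reliability statistic) and against scores from scaling methods like Wordfish, which is the kind of validation exercise the abstract describes.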