Recent advancements have established large language models (LLMs) as significant tools in various fields, including political science. However, their integration into political science research presents challenges, particularly regarding linguistic diversity and computational demands. This paper explores these challenges, focusing on low-resource languages and the computational intensity of model fine-tuning. We propose a novel approach using transfer learning on a BERT model, tailored to the Flemish language, a Dutch dialect cluster. Our methodology involves initially adapting existing Dutch language models to Flemish nuances using news corpora, followed by a more focused adaptation to the political domain utilizing Flemish parliamentary and political reporting sources. The final stage involves training these models for a custom sentiment analysis task aimed at assessing political sentiment in Flemish newspapers. We compare the performance of our Flemish-specific model against general Dutch-language models, offering insights into the effectiveness of LLMs in political science for low-resource languages. This study not only provides a practical guide for adapting LLMs to underrepresented languages in political contexts but also contributes empirical data on their utility in political science research.
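
The staged pipeline the abstract describes (continued masked-language-model pretraining on Flemish news text, further adaptation on political-domain text, then supervised fine-tuning for sentiment classification) can be illustrated with a minimal sketch using the Hugging Face transformers and datasets libraries. The sketch assumes BERTje (GroNLP/bert-base-dutch-cased) as the Dutch starting checkpoint; the corpus file names, hyperparameters, and three-way label set are illustrative placeholders, not the authors' actual configuration.

```python
# Hypothetical sketch: staged domain adaptation of a Dutch BERT checkpoint,
# followed by sentiment fine-tuning. File names and hyperparameters are
# placeholders, not the settings used in the paper.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

BASE = "GroNLP/bert-base-dutch-cased"  # assumed Dutch starting model (BERTje)
tokenizer = AutoTokenizer.from_pretrained(BASE)


def mlm_adapt(model_name, text_file, output_dir):
    """Continue masked-language-model pretraining on a raw text corpus."""
    ds = load_dataset("text", data_files={"train": text_file})["train"]
    ds = ds.map(
        lambda b: tokenizer(b["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    Trainer(
        model=model,
        args=TrainingArguments(output_dir, num_train_epochs=1,
                               per_device_train_batch_size=16),
        train_dataset=ds,
        data_collator=collator,
    ).train()
    model.save_pretrained(output_dir)
    tokenizer.save_pretrained(output_dir)
    return output_dir


# Stage 1: adapt the Dutch model to Flemish usage with news text.
# Stage 2: adapt further to the political domain with parliamentary and
# political-reporting text.
stage1 = mlm_adapt(BASE, "flemish_news.txt", "bert-flemish")
stage2 = mlm_adapt(stage1, "flemish_politics.txt", "bert-flemish-politics")

# Stage 3: fine-tune the adapted encoder for sentiment classification.
# The CSV is assumed to have "text" and "label" columns.
sent = load_dataset("csv", data_files={"train": "labeled_sentiment.csv"})["train"]
sent = sent.map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=512),
    batched=True,
)
clf = AutoModelForSequenceClassification.from_pretrained(stage2, num_labels=3)
Trainer(
    model=clf,
    args=TrainingArguments("bert-flemish-sentiment", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=sent,
).train()
```

Keeping the two adaptation stages separate, as sketched above, makes it possible to compare the Flemish-adapted and politics-adapted checkpoints against the original Dutch model on the same downstream sentiment task, which is the comparison the paper reports.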