Session Submission Type: Created Panel
This panel offers critical perspectives on the use of generative Large Language Models (LLMs) in political science research. The papers examine the impact of different learning approaches (zero-shot, few-shot, and fine-tuning), the effect of uncertainty and robustness quantification on LLM outputs, the differences between open-source and proprietary models, the extent to which biases can be identified in model outputs, how useful LLMs are in processing open-ended responses in political surveys, and the challenges posed by identifying frames.
Detecting Policy Frames from Text - Meysam Alizadeh, University of Zurich; Maël Dominique Kubli, University of Zurich; Fabrizio Gilardi, University of Zurich
Beyond Human Judgment: How to Evaluate Language Model Uncertainty - Arthur Spirling, Princeton University; Christopher Barrie; Alexis Palmer, Dartmouth College
Negativity Bias in Political Perception Revisited - Richard R. Lau, Rutgers University, New Brunswick; Barea M. Sinno, Rutgers University; George D. Quinn, Rutgers University
Is GPT-4 Right Wing or Left Wing? - Joan C. Timoneda, Purdue University; Christina Walker, Purdue University