Generative AI tools like ChatGPT are transforming how people seek information, acquire knowledge, and even understand politics. This paper presents the first systematic evidence that the training and output of generative AI are contaminated by political propaganda. We show that several popular generative AI systems can produce propagandistic responses to a wide variety of questions about political institutions, leaders, and historical events. Using a rare dataset of known propaganda news articles and text analysis of billions of texts in popular open-source training data for AI, we quantify the amount of propaganda that AI encounters in training. To establish the causal link between the presence of propaganda in the training data and generative AI's output, we use a large-scale lab experiment to show how varying the amount of propaganda in the training data can influence the output of generative AI.
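The abstract does not describe the authors' measurement pipeline in detail. As a minimal illustrative sketch only, the snippet below shows one generic way such a quantification could be approached: estimating the share of documents in a training corpus that closely resemble a reference set of known propaganda articles via TF-IDF cosine similarity. The function name, the similarity threshold, and the toy data are hypothetical and are not drawn from the paper.

```python
# Illustrative sketch (not the authors' method): estimate the fraction of
# corpus documents whose closest match in a set of known propaganda articles
# exceeds a similarity threshold. Threshold and data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def propaganda_share(corpus_docs, propaganda_docs, threshold=0.8):
    """Return the fraction of corpus_docs whose nearest known propaganda
    article has cosine similarity at or above `threshold`."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on both collections so they share a single vocabulary.
    vectorizer.fit(corpus_docs + propaganda_docs)
    corpus_tfidf = vectorizer.transform(corpus_docs)
    prop_tfidf = vectorizer.transform(propaganda_docs)

    # For each corpus document, similarity to its closest propaganda article.
    sims = cosine_similarity(corpus_tfidf, prop_tfidf).max(axis=1)
    flagged = (sims >= threshold).sum()
    return flagged / len(corpus_docs)


if __name__ == "__main__":
    # Toy data for illustration only.
    corpus = [
        "The committee approved the new transit budget after public comment.",
        "The great leader's wise policies brought unprecedented prosperity.",
    ]
    known_propaganda = [
        "Under the great leader's wise policies the nation enjoys unprecedented prosperity.",
    ]
    print(f"Estimated propaganda share: {propaganda_share(corpus, known_propaganda):.2f}")
```

In practice, corpus-scale auditing of "billions of texts" would require approximate methods (for example, document hashing or nearest-neighbor indexes) rather than dense pairwise similarity; this sketch only conveys the basic idea of matching training documents against a labeled propaganda reference set.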