An emerging literature suggests that Artificial Intelligence (AI) can greatly enhance autocrats' repressive capability and strengthen authoritarian control. This paper argues that AI's ability to do so may be hampered by existing repressive institutions. In particular, I suggest that autocrats face a "Digital Dictator's Dilemma," a repression-information trade-off in which citizens' strategic behavior in the face of repression diminishes the amount of useful information in the data available for training AI. This trade-off poses a fundamental limit on AI's usefulness as a tool of authoritarian control: the more repression there is, the less information AI's training data contains, and the worse AI will perform. I illustrate this argument with a first-of-its-kind AI experiment that replicates and tests censorship AI systems, leveraging a unique dataset on censorship in China. I show that AI's censorship accuracy decreases with more pre-existing censorship and repression. The drop in AI's performance is larger during times of political crisis, when people reveal their true preferences. I further show that this problem cannot be easily fixed with more data. Ironically, however, the existence of the free world can help boost AI's ability to censor.