As advances in artificial intelligence technology accelerate, governments have struggled to keep pace in developing rules and procedures to protect rights and balance societal-level tradeoffs surrounding the new technology. This is especially true in the area of AI-enhanced surveillance. As surveillance technologies become more powerful, what determines how citizens and governments understand appropriate safeguards? This paper presents the results of a pre-registered survey experiment that randomizes exposure to seven commonly cited issues surrounding AI surveillance technology: (1) its compatibility with international human rights obligations, (2) the possibility of limited use leading to more expansive and intrusive practices through “mission creep,” (3) the promise of using AI surveillance to combat crime and terrorism, (4) the concern that widespread adoption of Chinese-produced AI surveillance technology may facilitate intelligence gathering on the part of the Chinese government, (5) the possibility that enhanced surveillance will be used to monitor civil society, dissidents, and journalists, (6) questionable reliability, including the potential for racial and other forms of bias, and (7) the concern that overregulation of this new technology may stifle innovation. Respondents are then asked about their overall concern about and trust in the technology, their attitudes about regulation, and their willingness to engage in political participation to advocate for government regulation of new surveillance technologies.