Individual Submission Summary

Democracy and Responsible Intelligence in the Age of AI

Thu, September 5, 12:00 to 1:30pm, Marriott Philadelphia Downtown, Franklin 1

Abstract

Machine learning and artificial intelligence (AI) technologies have disrupted a number of fields, including national intelligence. Generative AI platforms like ChatGPT and its successors have stoked fears of election interference, while enhanced biometric scanning technologies raise privacy concerns among civil libertarians. Yet these same technologies also promise more efficient data collection, deeper analysis, and quicker dissemination of intelligence to policymakers and other consumers. Resolving the tension between democratic principles of transparency and governmental accountability, on one hand, and increasingly powerful state capacity, on the other, is especially important for the intelligence communities of liberal democracies around the world. Insofar as citizens confer trust on governments that are both effective and accountable, AI and related technologies add new dimensions to an old question: how can democracies responsibly conduct intelligence operations on internal populations?

This paper reframes ongoing debates about technology and intelligence as questions of responsible democratic government. In this work of applied political theory, we draw on liberal political philosophy and a series of case studies to argue that AI is an amplifying force, capable either of exacerbating pathologies within existing democracies (e.g., racially biased law enforcement practices and invasive policing) or of enhancing effective, trustworthy governance. Which outcome prevails depends largely on how responsibly democratic governments deploy these technologies as they emerge.

The paper proceeds in three sections. Section I defines responsible intelligence by appeal to liberal democratic principles, specifically fairness and transparency, and from these builds a responsible intelligence framework. Section II applies this framework to a study of AI's use in border security in the United States and the European Union. By comparing how different technologies, deployed under different regulatory regimes, are brought to bear on the common challenge of cross-border migration, we demonstrate that AI's effects are neither uniform nor universally positive or negative. Our concluding section recommends concrete policy reforms to promote responsible intelligence practices in democratic governments, consistent with their normative foundations.

Authors