Session Submission Type: Full Paper Panel
Citizens are increasingly interacting on digital platforms, which have become sites for public discourse and community building. The interactions that take place within them, however, are far from spontaneous. Quite the contrary: they are thoroughly mediated by artificial intelligence, either partially or entirely, with deep and lasting effects both within and outside these platforms. This panel scrutinizes the impact of such mediation on democracy and justice. It discusses the modalities of content moderation (top-down or bottom-up, corporate or community-directed, fully automated or carried out by humans), the normative implications of each, and the variety of impacts they can have on society.
In his paper, Michael Blake notes that platform users manufacture a personality for their online interactions, just as they do for their offline ones. These personalities then become a critical part of their moral horizons, in both public and private life. Online and offline public personalities differ, however, in that the former are shaped by complex AI algorithms. The problem is that such curation is opaque and likely to breed conspiratorial thinking: users may come to believe that all their encounters, including those offline, are curated by some inscrutable agency, and that no social interaction is random anymore.
Jennifer Forestal addresses a challenge for digital platforms in democratic environments: fostering marginalized social groups while preventing them from turning violent and anti-democratic. The most common strategies for achieving this dual goal are technological affordances and corporate content moderation policies. Forestal considers an alternative: bottom-up, user-directed moderation practices. Focusing on the case of incels (short for 'involuntary celibates') in Reddit communities, she argues that even when communities share the same misogynistic ideology, those moderated in this way are less likely to display extremist tendencies.
Juan Espindola explores the morality of human content moderation, a practice documented to inflict severe psychological harm on moderators, who are exposed to a steady flow of disturbing imagery, including child sexual abuse material and extreme violence. Human moderation is commonly criticized for the working conditions of moderators, but that is only the surface of the problem. Espindola argues that the morality of the practice is dubious at best for hitherto unrecognized reasons that go beyond improper working conditions, including the erosion of the moderator's moral integrity.
Jeffrey Howard notes that public discourse now largely takes place on digital platforms and that most content moderation deploys machine intelligence trained on human-labeled datasets. Notwithstanding the merits of automated moderation (its speed, and its sparing humans from exposure to harmful content), Howard examines several normative objections to it. He makes the case that some concerns about using AI for content moderation are overstated, while others are potentially redressable, and concludes that ethical mechanized content moderation is attainable.
“A Mutual Strangeness and Repulsion”: Content Curation and Moral Personality - Michael Blake, University of Washington
Combating Hate and Extremism through User-Directed Practices: The Case of Incels - Jennifer Forestal, Loyola University Chicago
Algorithms without Trauma: Against Human Content Moderation - Juan Espindola, UNAM
Moderation by Machine: The Political Morality of Automated Censorship - Jeffrey Howard, University College London