Session Submission Summary

Promise and Perils of Content Moderation on Digital Platforms

Sat, September 7, 2:00 to 3:30pm, Loews Philadelphia Hotel, Commonwealth B

Session Submission Type: Full Paper Panel

Session Description

Citizens increasingly interact on digital platforms, which have become sites for public discourse and community building. Yet the interactions that take place within them are far from spontaneous. Quite the contrary: they are mediated, partially or entirely, by artificial intelligence, with deep and lasting effects both within and beyond these platforms. This panel scrutinizes the impact of such mediation on democracy and justice. It discusses the modalities of content moderation (top-down or bottom-up, corporate or community-directed, fully automated or human-performed), the normative implications of each, and the variety of impacts they can have on society.


In his paper, Michael Blake notes that platform users manufacture a personality for their online interactions, just as they do offline. These manufactured personalities then become a critical part of their moral horizons, in both public and private life. Online and offline public personalities differ, however, in that the former are shaped by complex AI algorithms. The problem is that such curation is opaque and likely to breed conspiratorial thinking: users may come to believe that all their encounters, including those offline, are curated by an opaque agency, and that no social interaction is random anymore.

Jennifer Forestal addresses a challenge for digital platforms in democratic environments: fostering marginalized social groups while preventing them from turning violent and anti-democratic. The most common strategies for achieving this dual goal are technological affordances and corporate content moderation policies. Forestal considers an alternative: bottom-up, user-directed moderation practices. Focusing on the case of incels (short for 'involuntary celibates') in Reddit communities, she argues that while such communities may share the same misogynistic ideology, those moderated from the bottom up are less likely to display extremist tendencies.

Juan Espindola explores the morality of human content moderation. Human moderation has been documented to inflict great psychological harm on moderators, who are routinely exposed to a steady flow of disturbing imagery, including child sexual abuse material and extreme violence. The practice is commonly criticized for moderators' working conditions, but that is only the surface of the problem. Its morality is dubious at best for reasons that go beyond improper working conditions and have hitherto gone unrecognized, including the erosion of moderators' moral integrity.

Jeffrey Howard notes that public discourse now largely takes place on digital platforms and that most content moderation deploys machine intelligence (models trained on human-labeled data). Notwithstanding the merits of automated moderation (its speed, its sparing humans exposure to harmful content), Howard examines several normative objections to it. He makes the case that some concerns about using AI for content moderation are overstated, while others are potentially redressable. He concludes that ethical mechanized content moderation is attainable.

Sub Unit

Individual Presentations

Chair

Discussant