Individual Submission Summary

Social Media's New Referees? Democratic Governance of AI Content Moderation Bots

Thu, September 5, 2:00 to 3:30pm, Marriott Philadelphia Downtown, 310

Abstract

The release of ChatGPT in November 2022 thrust artificial intelligence (AI) into the public sphere, prompting debates about its effects on different sectors of the economy, politics, and everyday life, including online content moderation. Alongside the moderation triad of reduce, remove, and inform comes the application of large language model (LLM) automated agents that act as liaisons between social media platforms and their users. However, it is unclear what the public thinks about automated agents, which raises the question: is there a plausible future in which humans and company-supported bots interact more regularly around platform rules and community guidelines? We argue that if such AI solutions are deployed, there must be consultation with public stakeholders over alignment with democratic values. Based on nationally representative surveys in the United Kingdom, the United States, and Canada, we assess how people view the use of new AI technologies, primarily the use of LLMs by social media companies for content moderation. We find that about half of respondents indicate that it would be acceptable for company chatbots to start public conversations with users who appear to violate platform rules or community guidelines. Respondents with more regular experience of consumer-facing chatbots are less worried in general about the use of these technologies on social media. However, most respondents across all three countries worry that if companies deploy chatbots supported by generative AI to engage in conversations with users, the chatbots may not understand context, may ruin the social experience of connecting with other humans, and may make flawed decisions. This study raises questions about a potential future in which humans and machines interact far more often as common actors on the same technical surfaces, such as social media platforms. We conclude with a framework for democratic governance of AI content moderation technologies, including processes for accountability and transparency.

Authors