Individual Submission Summary

Misinformation Detection with Generative AI

Sun, September 8, 8:00 to 9:30am, Pennsylvania Convention Center (PCC), 110B

Abstract

Misinformation is a serious challenge for democracies today. Recent breakthroughs in generative AI have raised concerns that the technology will worsen the problem by increasing both the volume and the quality of fake news stories spread on social media. Can the same technologies, however, be used to identify and counteract misinformation effectively? In this paper, we propose to build an AI agent around a Large Language Model core that helps the general public, journalists, and researchers evaluate the information they encounter and the data they work with. By leveraging web retrieval and uncertainty-resolution methods, the agent will be able to identify and retrieve the information and evidence it needs to evaluate fake news stories accurately and in real time as they emerge. By also adding uncertainty quantification and explainability, our system will provide the trustworthiness and graceful failure modes that previous approaches lacked in a domain where perfect accuracy is impossible. We will address the generalization and practical failures of previous systems by carefully refining existing misinformation datasets and testing in real-world settings. Furthermore, we will conduct a survey of the quality and limitations of these datasets to benchmark the most reliable ones for research. We will then test the agent's ability to detect misinformation on these large datasets in the context of the upcoming 2024 US primary election campaign, focusing on Reddit's political discussions, which will let us study patterns of misinformation spread with previously unattainable scale and accuracy. Together, these components provide a path toward a more effective approach to mitigating misinformation in online communities.
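
As a rough illustration of the architecture the abstract describes, the sketch below outlines a retrieval-augmented fact-checking loop with sample-agreement uncertainty quantification. It is a minimal sketch, not the authors' implementation: the Retriever and Generator callables, the prompt format, and the Verdict fields are assumed placeholders that a real system would replace with an actual web-search component and an LLM API.

```python
# Minimal sketch (illustrative only): an LLM-core agent that retrieves web
# evidence for a claim, issues several independent verdicts, and reports an
# uncertainty score based on how strongly the samples agree.

from collections import Counter
from dataclasses import dataclass
from typing import Callable, List

# Assumed pluggable interfaces (not part of the paper):
#   retrieve(claim) -> list of evidence snippets fetched from the web
#   generate(prompt) -> one model completion (e.g., a wrapped LLM API call)
Retriever = Callable[[str], List[str]]
Generator = Callable[[str], str]


@dataclass
class Verdict:
    label: str         # "true", "false", or "unverifiable"
    confidence: float   # fraction of samples agreeing with the majority label
    rationale: str      # model-written explanation, kept for transparency


def check_claim(claim: str, retrieve: Retriever, generate: Generator,
                n_samples: int = 5) -> Verdict:
    """Fact-check one claim with retrieval-augmented prompting and
    sample-agreement uncertainty quantification."""
    evidence = retrieve(claim)
    prompt = (
        "Evidence:\n" + "\n".join(f"- {e}" for e in evidence) +
        f"\n\nClaim: {claim}\n"
        "Answer with one word (true/false/unverifiable) on the first line, "
        "then a one-sentence justification."
    )

    labels, rationales = [], []
    for _ in range(n_samples):                    # repeated independent samples
        reply = generate(prompt)
        first_line, _, rest = reply.partition("\n")
        labels.append(first_line.strip().lower().rstrip("."))
        rationales.append(rest.strip())

    top_label, count = Counter(labels).most_common(1)[0]
    return Verdict(
        label=top_label,
        confidence=count / n_samples,             # low agreement -> low confidence
        rationale=rationales[labels.index(top_label)],
    )
```

A low confidence value signals disagreement across samples; a downstream consumer can treat such cases as unverifiable rather than reporting a confident verdict, which is the graceful-failure behaviour the abstract calls for.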

Authors