Researchers are intrigued by AI’s dual impact on governance, both as a disruptor and a contributor. Studies indicate that AI may exacerbate the spread of misinformation in the digital realm and radicalize the public, adversely affecting governance. In contrast, some research demonstrates that AI tools can be employed to mitigate misinformation and support public policies. This proposed study explores how generative AI contributes to debunking misinformation and facilitating governance.
Research on the credibility and effect of AI-generated content on governance has not yielded conclusive results (Huang & Wang, 2023). Studies found that narratives attributed to AI could reduce hostile bias toward the media (Cloudy et al., 2021), and that a text’s perceived credibility was not undermined by AI authorship (Henestrosa et al., 2023). Further, scholars have suggested that AI-generated content can be more effective in shaping public opinion than human-authored political news (Stefanone et al., 2019) and climate change information (Samantray & Pin, 2019), leading to Hypothesis 1 (H1).
H1: The information generated by AI is more credible and more effective than real-world news in debunking misinformation and prioritizing climate change policies.
Secondly, when the origin of the information is concealed, individuals are usually unable to differentiate between human-authored text and text generated by GPT-2 (Kreps et al., 2022). Meanwhile, research shows that people may perceive AI-generated news headlines as less trustworthy than human-authored ones, regardless of their authenticity (Longoni et al., 2022). However, in a 2023 survey, 73% of respondents reported trusting content created by generative AI (Capgemini Research Institute, 2023). Therefore, the effect of the latest generation of AI needs to be explored (H2).
H2: Given the same information, disclosing that it is generated by AI will reduce the effect expected in H1, i.e., its effectiveness in debunking misinformation and prioritizing climate change policies.
Thirdly, content modality matters. Current research focuses mainly on the public opinion effects of textual content (Schreiner et al., 2021; Tolochko et al., 2019). By comparison, visual information is often perceived as more credible (Hameleers et al., 2020) and more persuasive (Zhou et al., 2021). However, for generative AI, the distinct effects of visuals compared to texts have yet to be systematically investigated (H3).
H3: Generative AI can debunk misinformation and sway public opinion more effectively by producing visuals rather than texts.
To examine these hypotheses, we will collect 5,945 YouTube videos debunking misinformation about climate change, identified with a validated list of keywords; retrieve their thumbnails and titles; slice them into tens of thousands of image frames; and download their subtitles with the corresponding time stamps. We will then apply OpenAI’s ChatGPT to generate texts from the titles and OpenAI’s DALL-E to generate images from the same titles. Image frames and subtitles are saved for further analysis.
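To illustrate the preprocessing involved, a minimal sketch of two steps implied above: parsing SRT-style subtitle time stamps and computing the offsets at which a video would be sliced into frames so that frames can be aligned with subtitle cues. All function names and the sampling interval are illustrative assumptions; the API calls to YouTube, ChatGPT, and DALL-E are omitted.

```python
import re

def srt_timestamp_to_seconds(ts: str) -> float:
    """Convert an SRT time stamp ("HH:MM:SS,mmm") into seconds."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

# One SRT cue: "start --> end" on one line, cue text on the next.
CUE = re.compile(
    r"(\d{2}:\d{2}:\d{2},\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2},\d{3})\s*\n(.+)",
    re.S,
)

def parse_cue(block: str):
    """Extract (start_seconds, end_seconds, text) from one SRT cue block."""
    m = CUE.search(block)
    start, end, text = m.group(1), m.group(2), m.group(3)
    return (srt_timestamp_to_seconds(start),
            srt_timestamp_to_seconds(end),
            text.strip())

def frame_offsets(duration: float, interval: float = 5.0):
    """Offsets (seconds) at which to extract frames at a fixed interval."""
    t, offsets = 0.0, []
    while t < duration:
        offsets.append(round(t, 3))
        t += interval
    return offsets
```

Each extracted frame can then be matched to the subtitle cue whose `[start, end]` interval contains its offset, yielding aligned image–text pairs for the multimodal analysis.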
We are conducting an automated visual and textual analysis through machine learning to compare AI-generated information with real-world information and to assess the dynamics of their dissemination. Complementarily, we are designing a survey experiment with six provisional steps: 1) recruiting 1,140 respondents (based on our statistical assumptions) from the general public; 2) collecting pre-treatment data on the respondents’ attitudes toward climate change and generative AI, and their sociopolitical characteristics; 3) randomizing participants and assigning them equally to six groups; 4) treating the six groups with irrelevant texts (placebo), real-world texts, AI-generated images (concealed), AI-generated texts (concealed), AI-generated images (revealed), and AI-generated texts (revealed), respectively; 5) collecting the respondents’ post-treatment perceived significance of climate change and their endorsement of associated governmental policies; and 6) tracking how persistent the public opinion effect is among respondents over the subsequent three months.
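The balanced random assignment in step 3 can be sketched as follows. This is a minimal illustration under the design described above; the group labels and the fixed seed are assumptions added for the sketch, mirroring the six treatment arms of step 4.

```python
import random

# Six treatment arms mirroring step 4 (labels are illustrative).
GROUPS = [
    "placebo_text",        # irrelevant texts
    "real_world_text",
    "ai_image_concealed",
    "ai_text_concealed",
    "ai_image_revealed",
    "ai_text_revealed",
]

def assign_groups(respondent_ids, seed=42):
    """Shuffle respondents, then deal them round-robin into the six
    treatment arms so that group sizes are exactly equal."""
    ids = list(respondent_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    rng.shuffle(ids)
    return {g: ids[i::len(GROUPS)] for i, g in enumerate(GROUPS)}

# With 1,140 respondents, each arm receives 190 participants.
assignment = assign_groups(range(1140))
```

Round-robin dealing after a shuffle guarantees equal group sizes, which simple independent randomization does not; a production design might instead use block randomization stratified on pre-treatment covariates.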
Combining machine learning and a survey experiment, this study delves into how generative AI can build public support for climate change policies. Firstly, it pioneers the exploration of generative AI as a constructive force of governance. Secondly, it sheds light on how generative AI enables governments to effectively foster consensus on public issues at scale. Thirdly, the combined analysis of visual and textual information allows a nuanced understanding of the role of AI in public opinion. More broadly, this research highlights the potential for governments to leverage emerging technologies in combating misinformation and fostering citizens’ support for socially beneficial policies.
Keywords: Generative AI; Governance; Misinformation; Public Opinion; Climate Change; Multimodal Analysis
(Reference list and dataset available upon request)