In 2020, Twitter (now called X) began adding banners to tweets that contained misinformation. The labels were applied to a variety of topics, including COVID-19, absentee voting, and election misinformation. A 2021 poll conducted by the Pew Research Center found that 69% of American Twitter users said they got news from the platform. 59% of Twitter (X) users said the site is an important, but not their most important, way to keep up with news, while 8% of respondents said it is their most important way to keep up with news (Mitchell, 2021).
In the two most recent presidential elections, misinformation has had a drastic effect on voters' perceptions of and attitudes toward elections. This study examines how misinformation labels affect an individual's trust when confronted with election-related tweets. It uses an experimental design to test whether a misinformation label on a tweet affects the trust an individual places in the factuality of the information. Participants will be randomly assigned to see tweets either with no name attached or attributed to a leader of their own party. The study expects to find that individuals are more likely to believe a tweet is true if it comes from their party elite, even when it carries a misinformation label.
As the polling demonstrates, a substantial share of Americans get their news from Twitter. However, Twitter is also known to spread misinformation and misleading information. In a 2020 press release about its misinformation policy, Twitter acknowledged its role in spreading misinformation about COVID-19. The release identified three types of misleading content: misleading information, disputed claims, and unverified claims. The level of potential harm a tweet posed determined how it was treated. For example, tweets containing misleading information judged to pose only moderate harm were labeled, whereas tweets whose misleading information posed severe harm were removed.
This study seeks to determine whether misinformation tags on election-related tweets affect the degree to which voters trust election information from news and social media sources. It hypothesizes that moderate users will not trust tweets marked with a misinformation tag, whereas more active Twitter users and strongly aligned partisans will be less swayed by the tag.