Social media platforms play a pivotal role in contemporary political discourse, serving as arenas for public engagement. Yet while platforms have become important in shaping political debates, the prevalence of hateful language and misinformation poses pressing challenges. The effective implementation and transparent communication of content moderation policies and community guidelines are therefore important not only for fostering constructive political discussion but also for helping users understand what is and is not admissible in online debate. But how do users interpret platforms' guidelines when assessing potentially harmful content? We argue that comprehensibility depends on the accessibility and visibility of the information presented in the platforms' guidelines. We address this question with a sample of 300 subjects in a laboratory setting, asking respondents to adjudicate potentially harmful content on ten social media platforms with respect to whether it violates the platform's guidelines. We further include questions about a range of social and political attitudes that may condition respondents' performance in the task. By tracking respondents' browsing behavior as they navigate the platforms' community guidelines, we measure temporal variation, stability, and consistency in the adjudication of harmful content, allowing us to better understand the efficacy of content moderation guidelines of varying transparency.