The EU’s Digital Services Act (DSA) aims to increase transparency in the largely opaque content moderation practices of social media platforms. But what exactly does transparency mean in this context? Existing research and policies conceptualize transparency as the provision of information about the procedures, measures, tools, and scope of content moderation. We argue that this is too narrow and that it is essential to also consider how information about prohibited content is communicated to users. Community guidelines are written to familiarize users with a platform’s rules in a comprehensible manner, which means that a transparent provision of information requires that users can clearly understand which behavior is prohibited and which is admissible. Building on a novel dataset comprising the content moderation policies of the most popular social media platforms, we measure the complexity of community guidelines for comparative analysis. We rely on computational methods, including large language models (LLMs), to categorize the prohibited content across platforms and to determine the complexity of the guidelines’ structure and language. We derive recommendations for where current regulation of transparency in content moderation could be improved.
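The abstract does not specify which complexity measures the analysis uses; as a minimal sketch of the kind of language-complexity scoring it might involve, the snippet below computes the standard Flesch Reading Ease formula for a guideline excerpt. The excerpt and the syllable heuristic are illustrative assumptions, not the authors' actual pipeline.

```python
import re

def estimate_syllables(word: str) -> int:
    # Crude vowel-group heuristic; adequate for comparative scoring.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(estimate_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical excerpt from a platform's community guidelines.
guideline = ("You may not post content that threatens, harasses, "
             "or incites violence against any individual or group.")
print(f"Flesch Reading Ease: {flesch_reading_ease(guideline):.1f}")
```

Lower scores on such a metric would indicate guidelines that are harder for users to parse, which is one way to operationalize the comprehensibility dimension of transparency discussed above.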