Framing, as defined by Baumgartner (2008), is "the representation of an issue from a specific perspective, often sidelining alternate viewpoints." It plays a pivotal role in political science, especially in public policy and political communication. The concept of policy frames underscores that public problems have neither a single definition nor universally accepted solutions; rather, political actors champion different understandings rooted in their own interests and ideologies, and the dominant frames significantly influence policy outcomes. Despite their importance, policy frames have been difficult to measure empirically because of the multifaceted ways in which political figures articulate them. Frames have long been an essential tool in political communication research, aiding in the understanding of media coverage, its perspectives, and its evolution over time. Political framing studies often examine the biases introduced into political discourse to influence perspectives on specific issues.
Framing analysis methodologies are applied across disciplines such as communication science and political science. Traditionally, frames have been analyzed manually with the support of a predefined codebook. With the digital age ushering in vast streams of data, however, automated frame analysis has become a central goal. Several approaches exist for studying frames in large corpora, but a universally accepted and efficient method remains elusive. Unsupervised methods in particular are still under scrutiny regarding their theoretical fidelity to traditional frame analysis.
Existing research often relies on well-established but potentially limited methods of frame measurement, applied primarily to domain-specific data. While such studies offer crucial insights into specific issues, they often fall short because the underlying models may not capture frames in their full theoretical sense. Exceptions exist in which models tailored to the data of interest capture frames more precisely using supervised methods, yet even these studies tend to confine their scope to framing along a single dimension or within a specific domain.
This study pursues two objectives. First, we compare the computational methods used for frame measurement and elucidate their strengths and limitations. Second, we outline a framework for subsequent research on frame detection that combines a range of methods to automatically identify and measure relevant frames across diverse contexts, addressing the challenges faced by existing methodologies. Our methodological comparison rests on three datasets of tweets and newspaper articles related to content moderation and AI regulation.
Preliminary results show that, on the news article data, Large Language Models (LLMs) such as ChatGPT and HuggingChat yielded inconsistent outcomes depending on their learning paradigm. Traditional classifiers such as RandomForest, SVM, and Logistic Regression varied considerably in how well they handled the framing task. Despite significant advances in unsupervised techniques, tools such as LDA and BERTopic gravitate toward classic topic modeling and miss the subtler distinctions between frames. For tweets, the brevity of the texts demands a distinct approach to frame detection. The patterns observed among transformer models such as TweetBert and DistilRoberta shed light on their efficacy in different scenarios, yet even cutting-edge LLMs struggle in this setting, underscoring the complexity of the task. Our investigation into unsupervised models for tweets likewise revealed a consistent pattern: the nuances of framing remain elusive to these methods.
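To make the comparison of traditional classifiers concrete, the sketch below shows a minimal supervised frame-classification baseline of the kind referred to above: TF-IDF features fed into RandomForest, SVM, and Logistic Regression via scikit-learn. The toy documents, frame labels, and evaluation setup are illustrative assumptions, not the datasets or pipeline used in this study.

```python
# Minimal sketch of a supervised frame-classification baseline.
# The documents and frame labels below are invented placeholders;
# a real study would load annotated news articles or tweets instead.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "New AI rules risk stifling innovation and burdening small firms.",
    "Compliance costs of the regulation will hit startups hardest.",
    "Industry warns that strict moderation rules hurt competitiveness.",
    "Regulation is needed to protect citizens from opaque algorithms.",
    "Content moderation safeguards users' rights and public discourse.",
    "Oversight of platforms defends fundamental rights online.",
]
labels = ["economic", "economic", "economic", "rights", "rights", "rights"]

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

for name, clf in classifiers.items():
    # TF-IDF unigrams and bigrams feed each classifier; cross-validated
    # macro-F1 gives a rough, class-balanced comparison across models.
    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    scores = cross_val_score(pipeline, texts, labels, cv=3, scoring="f1_macro")
    print(f"{name}: macro-F1 = {scores.mean():.2f}")
```

The same cross-validation split could in principle be reused to score topic-model or LLM-based frame assignments, which is what makes a side-by-side comparison across method families feasible.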