Individual Submission Summary

Cross-Domain Transfer Learning for Polarized Text

Thu, September 5, 2:00 to 3:30pm, Pennsylvania Convention Center (PCC), 112A

Abstract

Much social science research relies on supervised learning to analyze political text, which requires substantial amounts of human-labeled data. Stance detection makes this even more intricate because of task diversity: the focus is not on the general polarity of the unit of analysis, but on the polarity expressed towards a specific target. Scholars therefore need to allocate resources to annotate training data for each new target, which can be exhausting and cost-ineffective for researchers interested in a new task. In response to these challenges, scholars have turned to transfer learning, the idea being to use the knowledge acquired on a source task to enhance a model's performance on a target task. This paper evaluates the effectiveness of cross-domain transfer learning for stance detection on polarized text. More specifically, I focus on the stance towards China in US media. Since no study has previously examined stance towards China, no annotated datasets exist. Instead, I leverage sources known for their stance towards China and use them as training data: Global Times, a Chinese international daily newspaper affiliated with the CCP, and Breitbart News, a far-right conservative media outlet. I report model performance using confusion matrices. While researchers have proposed cross-domain transfer learning to address the data scarcity problem in topic classification, we know little about its performance in stance detection. This research sheds light on the application of cross-domain transfer learning, offering insights into its effectiveness in addressing data-scarcity challenges in a distinctive social science context.
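The abstract's core design, training on outlets whose stance serves as a proxy label and evaluating transfer to the target domain with a confusion matrix, can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the file names, label scheme, and the simple TF-IDF classifier are assumptions for illustration only.

```python
# Minimal illustrative sketch of the distant-supervision, cross-domain setup
# described in the abstract (hypothetical file names and label scheme).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Source domain: articles whose outlet serves as a proxy stance label,
# e.g. Global Times -> "pro" China, Breitbart News -> "anti" China.
source = pd.read_csv("source_articles.csv")      # columns: text, outlet
source["stance"] = source["outlet"].map(
    {"Global Times": "pro", "Breitbart News": "anti"}
)

# Target domain: a small hand-annotated sample of US media coverage of
# China, used only for evaluation.
target = pd.read_csv("target_sample.csv")        # columns: text, stance

vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_src = vectorizer.fit_transform(source["text"])
X_tgt = vectorizer.transform(target["text"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_src, source["stance"])                 # train on the source domain only

# Cross-domain evaluation: how well does the source-trained model transfer?
pred = clf.predict(X_tgt)
print(confusion_matrix(target["stance"], pred, labels=["pro", "anti"]))
```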

Author