Public deliberations play a crucial role in democratic systems. However, their unstructured nature makes it difficult for moderators to analyze the large volume of data produced. This paper addresses this challenge by automatically identifying the subjective topics behind public discourse using Large Language Models (LLMs). The study is structured around two core objectives: Identifying Gold Labels and Exploring Subjective Human Labels. The results show that fine-tuning the LLaMA-2 model with QLoRA outperforms other methods for Identifying Gold Labels, while the Few-Shot Chain-of-Thought method, enhanced with EmotionPrompt, is particularly effective at capturing subjective variation in human annotations. However, the study also underscores significant limitations, such as the dependency on large, high-quality annotated datasets and the models' tendency to hallucinate. These findings highlight the potential of LLMs to identify the subjective topics behind public discourse, while emphasizing the need for further research to address these challenges.