This work investigates how the rich semantic embeddings of pre-trained language models can be used to help understand the general attitudes of an online community. We describe a novel prediction model that ingests statements describing an arbitrary context together with a piece of content, and outputs answers to a set of 'attitude questions' describing the relationship between them. Annotating answers to questions such as "Does this contain sarcasm?" or "Is this content positive with respect to this context?" typically requires costly human interaction. In this work, we examine the ability of large language models to answer these questions under the constraint of a small dataset, using a novel prediction head. We show that this methodology can accurately answer attitude questions, compare the model to off-the-shelf language model approaches, and describe a method for collecting and annotating attitude-question datasets. The novel attitude question answering model achieves an 89% accuracy on the attitude question answering task, outperforming the ablated models (87%) as well as the off-the-shelf models using BERT-based sequence classification (13%), BART-based natural language inference (88%), and RoBERTa-based question answering (87%).
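The abstract describes a lightweight prediction head trained on top of pre-trained language-model embeddings for the small-dataset regime. As an illustration only (the paper's actual head architecture is not specified here, and all names, dimensions, and data below are hypothetical), a minimal sketch of a binary attitude-question head could look like a logistic-regression layer over frozen embeddings:

```python
import numpy as np

# Hypothetical sketch: a binary "attitude question" head sitting on top of
# frozen pre-trained LM embeddings. All names and dimensions are illustrative;
# the paper's actual prediction head may differ.

rng = np.random.default_rng(0)
EMB_DIM = 16  # stand-in for a real LM hidden size (e.g. 768 for BERT-base)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AttitudeHead:
    """Logistic-regression head: embedding -> P(answer == 'yes')."""

    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return sigmoid(x @ self.w + self.b)

    def fit(self, X, y, epochs=500):
        # Plain gradient descent on binary cross-entropy; a small parameter
        # count suits the small-dataset constraint the abstract mentions.
        for _ in range(epochs):
            p = self.predict_proba(X)
            self.w -= self.lr * (X.T @ (p - y)) / len(y)
            self.b -= self.lr * np.mean(p - y)

# Toy stand-in for embeddings of (context, content) pairs, labeled for a
# question such as "Is this content positive with respect to this context?"
X = rng.normal(size=(40, EMB_DIM))
true_w = rng.normal(size=EMB_DIM)
y = (X @ true_w > 0).astype(float)

head = AttitudeHead(EMB_DIM)
head.fit(X, y)
acc = float(np.mean((head.predict_proba(X) > 0.5) == y))
```

In practice the input `X` would come from a pre-trained encoder rather than random draws; the point of the sketch is that the trainable head itself can remain very small.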