JOURNAL ARTICLE

Understanding Online Attitudes with Pre-Trained Language Models

Abstract

This work investigates how the rich semantic embeddings of pre-trained language models can be used to help understand the general attitudes of an online community. We describe a novel prediction model that ingests statements describing an arbitrary context and a piece of content, and outputs answers to a set of 'attitude questions' describing the relationship between them. Annotating answers to questions like "Does this contain sarcasm?" or "Is this content positive with respect to this context?" typically requires costly human interaction. We examine the ability of large language models to answer these questions under the constraint of a small dataset, using a novel prediction head. We show that this methodology can accurately answer attitude questions, compare the model to off-the-shelf language model approaches, and describe a method for collecting and annotating attitude question datasets. The novel attitude question answering model achieves 89% accuracy on the attitude question answering task, outperforming the ablated models (87%) as well as the off-the-shelf models using BERT-based sequence classification (13%), BART-based natural language inference (88%), and RoBERTa-based question answering (87%).
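For illustration, the following is a minimal sketch of how one of the off-the-shelf baselines named above, a BART model fine-tuned for natural language inference, could answer an attitude question via zero-shot classification using the Hugging Face transformers pipeline. The context, content, and label phrasings are illustrative assumptions, not the paper's actual dataset or prompt design.

```python
# Sketch of an off-the-shelf NLI baseline for attitude questions,
# assuming the standard zero-shot-classification pipeline with a
# BART model fine-tuned on MNLI. All example text is hypothetical.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical context/content pair in the style the abstract describes.
context = "A forum thread discussing a new city cycling policy."
content = "Oh great, another brilliant idea from city hall."

# Each attitude question is posed as a pair of candidate hypotheses;
# the NLI model scores which one the text entails more strongly.
result = classifier(
    f"Context: {context} Content: {content}",
    candidate_labels=["This content contains sarcasm",
                      "This content does not contain sarcasm"],
)
print(result["labels"][0], result["scores"][0])
```

In this framing, each yes/no attitude question maps to two candidate labels, so the same pipeline can cover an arbitrary question set without task-specific training.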

Keywords:
Computer science, Natural language processing, Artificial intelligence

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
Refs: 26
Citation Normalized Percentile: 0.33

Topics

Social Media and Politics
Social Sciences → Social Sciences → Communication
Misinformation and Its Impacts
Social Sciences → Social Sciences → Sociology and Political Science
Topic Modeling
Physical Sciences → Computer Science → Artificial Intelligence

Related Documents

JOURNAL ARTICLE

Robustness of Pre-trained Language Models for Natural Language Understanding

Prasetya Ajie Utama

Journal: TUbiblio (Technical University of Darmstadt) Year: 2024
BOOK-CHAPTER

Pre-trained Language Models

Huaping Zhang, Jianyun Shang

Year: 2025 Pages: 73-90
BOOK-CHAPTER

Pre-trained Language Models

Gerhard Paaß, Sven Giesselbach

Artificial Intelligence: Foundations, Theory, and Algorithms Year: 2023 Pages: 19-78