JOURNAL ARTICLE

Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations

Abstract

Extracting generalized and robust representations is a major challenge in emotion recognition in conversations (ERC). To address this, we propose a supervised adversarial contrastive learning (SACL) framework for learning class-spread structured representations in a supervised manner. SACL applies contrast-aware adversarial training to generate worst-case samples and uses joint class-spread contrastive learning to extract structured representations. It can effectively utilize label-level feature consistency and retain fine-grained intra-class features. To avoid the negative impact of adversarial perturbations on context-dependent data, we design a contextual adversarial training (CAT) strategy to learn more diverse features from context and enhance the model's context robustness. Under the framework with CAT, we develop a sequence-based SACL-LSTM to learn label-consistent and context-robust features for ERC. Experiments on three datasets show that SACL-LSTM achieves state-of-the-art performance on ERC. Extended experiments prove the effectiveness of SACL and CAT.
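The class-spread contrastive objective described in the abstract builds on supervised contrastive learning, which pulls same-label representations together and pushes different-label ones apart. As a rough illustration only, here is a minimal pure-Python sketch of that base supervised contrastive loss (in the style of Khosla et al., 2020). It is an assumption for illustration, not the paper's exact joint class-spread formulation or its adversarial component:

```python
import math

def sup_con_loss(embeddings, labels, tau=0.1):
    """Minimal supervised contrastive loss.

    For each anchor, same-label samples are positives; the loss is the
    mean negative log-probability of positives under a softmax over
    cosine similarities (temperature tau) with all other samples.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def normalize(v):
        n = math.sqrt(dot(v, v)) or 1.0
        return [x / n for x in v]

    z = [normalize(e) for e in embeddings]
    n = len(z)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positive pair are skipped
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        total += -sum(math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
                      for p in positives) / len(positives)
        anchors += 1
    return total / max(anchors, 1)
```

Sanity check of the intended behavior: when labels agree with geometric clusters, the loss should be lower than when labels cut across them, e.g. `sup_con_loss([[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]], [0, 0, 1, 1])` is smaller than the same call with labels `[0, 1, 0, 1]`. The SACL framework additionally perturbs inputs adversarially (and, with CAT, at the context level) before applying its contrastive objective; those details are in the original paper.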


