Dou Hu, Yinan Bao, Lingwei Wei, Wei Zhou, Songlin Hu
Extracting generalized and robust representations is a major challenge in emotion recognition in conversations (ERC). To address this, we propose a supervised adversarial contrastive learning (SACL) framework for learning class-spread structured representations in a supervised manner. SACL applies contrast-aware adversarial training to generate worst-case samples and uses joint class-spread contrastive learning to extract structured representations. It can effectively utilize label-level feature consistency while retaining fine-grained intra-class features. To avoid the negative impact of adversarial perturbations on context-dependent data, we design a contextual adversarial training (CAT) strategy to learn more diverse features from context and enhance the model's context robustness. Under the framework with CAT, we develop a sequence-based SACL-LSTM to learn label-consistent and context-robust features for ERC. Experiments on three datasets show that SACL-LSTM achieves state-of-the-art performance on ERC. Extended experiments further demonstrate the effectiveness of SACL and CAT.
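The label-level feature consistency mentioned above is the core idea behind supervised contrastive objectives: representations with the same label are pulled together, while those with different labels are pushed apart. The sketch below shows a generic SupCon-style loss in NumPy for illustration only; it is not the paper's exact SACL objective, and the temperature value and function name are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Generic SupCon-style loss sketch (not the exact SACL objective):
    pull same-label features together, push different-label ones apart."""
    # L2-normalize features so dot products become cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature  # pairwise similarity logits
    n = len(labels)
    # Exclude self-similarity from the softmax denominator.
    logits_mask = np.ones((n, n)) - np.eye(n)
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Positive pairs: same label, excluding the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    # Average log-probability over each anchor's positives.
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / np.maximum(pos_mask.sum(1), 1)
    return -mean_log_prob_pos.mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))                # 8 utterance embeddings
labels = np.array([0, 0, 1, 1, 2, 2, 0, 1])    # emotion class labels
loss = supervised_contrastive_loss(feats, labels)
```

In the full SACL framework, this kind of contrastive term would be combined with adversarially perturbed (worst-case) samples generated via contrast-aware adversarial training, which this sketch omits.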