Understanding fine-grained sentiment dynamics in human conversations is a central goal for next-generation artificial intelligence, especially in scenarios where interactions are rich in both modalities and context. To advance research in this area, we organize the Multimodal Conversational Aspect-based Sentiment Analysis (MCABSA) challenge for the aspect-based sentiment analysis community. The MCABSA challenge introduces two novel subtasks: 1) Panoptic Sentiment Sextuple Extraction, which panoramically recognizes the holder, target, aspect, opinion, sentiment, and rationale in multi-turn, multi-party multimodal dialogues; and 2) Sentiment Flipping Analysis, which detects dynamic sentiment transformations throughout a conversation along with their causal triggers. To support these tasks, we present the PanoSent dataset, a high-quality, large-scale benchmark of multi-turn, multi-party dialogues annotated with both explicit and implicit sentiment elements across text, image, audio, and video modalities. PanoSent covers a wide range of real-world scenarios, providing a comprehensive testbed for multimodal conversational sentiment analysis. The challenge has attracted widespread participation from both academia and industry, with over 30 teams registered and more than 100 successful submissions. In this paper, we introduce the tasks, dataset, and evaluation settings, summarize the systems of the top teams, and discuss the participants' findings. Further details of the challenge can be found at https://panosent.github.io/MM25-challenge.
Han Zhang, Hao Fei, Hong Han, Lizi Liao, Erik Cambria, Min Zhang
Zhiqiang Gao, Shihao Gao, Zixing Zhang, Yihao Guo, Hongyu Chen, Jing Han
Xudong Han, Kai Liu, Yanlin Li, Hao Li, Zheng Wang
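To make the targets of the two subtasks concrete, the following is a minimal Python sketch of how a sextuple record (Subtask 1) and a sentiment-flip record (Subtask 2) might be represented. The field names mirror the element names listed in the task descriptions above; the schema and the example values are illustrative assumptions, not the official PanoSent annotation format.

```python
from dataclasses import dataclass

# Hypothetical record for Subtask 1 (Panoptic Sentiment Sextuple Extraction).
# Field names follow the six elements named in the task description.
@dataclass
class SentimentSextuple:
    holder: str      # who expresses the opinion, e.g. a speaker in the dialogue
    target: str      # the entity being discussed
    aspect: str      # the aspect of the target under discussion
    opinion: str     # the opinion expression (may be implicit)
    sentiment: str   # e.g. "positive", "negative", or "neutral"
    rationale: str   # the evidence or reasoning behind the sentiment

# Hypothetical record for Subtask 2 (Sentiment Flipping Analysis): a change of
# sentiment on the same target/aspect together with its causal trigger.
@dataclass
class SentimentFlip:
    holder: str
    target: str
    aspect: str
    initial_sentiment: str
    flipped_sentiment: str
    trigger: str     # the causal reason behind the flip

if __name__ == "__main__":
    # Illustrative example only; not drawn from the dataset.
    example = SentimentSextuple(
        holder="Speaker A",
        target="the new phone",
        aspect="battery life",
        opinion="drains too fast",
        sentiment="negative",
        rationale="it only lasted half a day on a single charge",
    )
    print(example)
```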