Multi-modal Anchor Gated Transformer with Knowledge Distillation for Emotion Recognition in Conversation

Jie Li, Shifei Ding, Lili Guo, Xuan Li

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 8141-8149. https://doi.org/10.24963/ijcai.2025/905

Emotion Recognition in Conversation (ERC) aims to detect the emotion of each individual utterance within a conversation. Generating efficient, modality-specific representations for each utterance remains a significant challenge. Previous studies have proposed various models to integrate features extracted by different modality-specific encoders. However, they neglect the varying contributions of the modalities to this task and introduce high complexity by aligning modalities at the frame level. To address these challenges, we propose the Multi-modal Anchor Gated Transformer with Knowledge Distillation (MAGTKD) for the ERC task. Specifically, prompt learning is employed to enhance textual modality representations, while knowledge distillation is utilized to strengthen the representations of weaker modalities. Furthermore, we introduce a multi-modal anchor gated transformer to effectively integrate utterance-level representations across modalities. Extensive experiments on the IEMOCAP and MELD datasets demonstrate the effectiveness of knowledge distillation in enhancing modality representations and show that our model achieves state-of-the-art performance in emotion recognition. Our code is available at: https://github.com/JieLi-dd/MAGTKD.
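The abstract does not give the distillation objective, but cross-modal knowledge distillation of this kind is typically trained with a temperature-scaled KL-divergence between teacher and student logits. The following is a minimal, hypothetical NumPy sketch of that standard formulation (function names, temperature value, and the choice of text as teacher are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softened_probs(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = (logits - logits.max(axis=-1, keepdims=True)) / temperature
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits: np.ndarray,
            teacher_logits: np.ndarray,
            temperature: float = 2.0) -> float:
    """Standard distillation loss: T^2 * KL(teacher || student) on softened
    distributions, averaged over the batch. Here the teacher would be the
    strong (e.g. text) modality and the student a weaker (audio/visual) one;
    this pairing is an assumption for illustration."""
    p_t = softened_probs(teacher_logits, temperature)
    p_s = softened_probs(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(temperature ** 2 * kl.mean())

# Toy usage: 2 utterances, 4 emotion classes.
teacher = np.array([[3.0, 0.5, -1.0, 0.2], [0.1, 2.5, 0.3, -0.5]])
student = np.array([[1.0, 0.8, -0.2, 0.1], [0.2, 1.1, 0.4, -0.1]])
loss = kd_loss(student, teacher)
```

The loss is zero when student and teacher logits coincide and grows as their softened distributions diverge; the T^2 factor keeps gradient magnitudes comparable across temperatures, as in standard distillation setups.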
Keywords:
Natural Language Processing: NLP: Sentiment analysis, stylistic analysis, and argument mining
Computer Vision: CV: Multimodal learning
Machine Learning: ML: Multi-modal learning