MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning
Ziyou Jiang, Lin Shi, Celia Chen, Fangwen Mu, Yumin Zhang, Qing Wang
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 4164-4170.
https://doi.org/10.24963/ijcai.2022/578
The main goal of dialogue disentanglement is to separate the mixed utterances in a chat slice into independent dialogues. Existing models typically adopt either utterance-to-utterance (U2U) prediction, which determines whether two utterances have a "reply-to" relationship and thus belong to the same dialogue, or utterance-to-thread (U2T) prediction, which determines which dialogue thread a given utterance belongs to. Inspired by mutual learning, we propose MuiDial, a novel dialogue disentanglement model that exploits the intent of each utterance and feeds the intents into a mutual-learning U2U-U2T disentanglement model. Experimental results and in-depth analysis on several benchmark datasets demonstrate the effectiveness and generalizability of our approach.
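As a rough illustration of the mutual-learning idea described above, the sketch below couples a U2U view (reply-to scores of a new utterance against each previous utterance) with a U2T view (scores against each candidate thread). The U2U scores are max-pooled per thread so that both views yield a distribution over threads; each view is supervised on the gold thread, and a symmetric KL term encourages the two views to teach each other. The function name, the max-pooling aggregation, and the loss weighting are illustrative assumptions, not the paper's exact formulation (which additionally conditions on utterance intent).

```python
import torch
import torch.nn.functional as F

def mutual_u2u_u2t_loss(u2u_scores, thread_ids, u2t_scores, gold_thread, alpha=1.0):
    """Sketch of a mutual-learning objective coupling U2U and U2T views.

    u2u_scores:  (n_prev,) reply-to scores of the new utterance vs. each
                 previous utterance in the chat slice.
    thread_ids:  (n_prev,) LongTensor, thread index of each previous utterance.
    u2t_scores:  (n_threads,) scores of the new utterance vs. each thread.
    gold_thread: int, index of the correct thread.
    """
    n_threads = u2t_scores.size(0)

    # Pool U2U scores to the thread level: the best reply-to candidate
    # inside each thread represents that thread (a max-pooling assumption).
    pooled = u2u_scores.new_full((n_threads,), float("-inf"))
    pooled = pooled.scatter_reduce(0, thread_ids, u2u_scores, reduce="amax")

    log_p_u2u = F.log_softmax(pooled, dim=0)
    log_p_u2t = F.log_softmax(u2t_scores, dim=0)

    # Supervised loss for each view on the gold thread label.
    target = torch.tensor([gold_thread])
    ce = F.nll_loss(log_p_u2u.unsqueeze(0), target) \
       + F.nll_loss(log_p_u2t.unsqueeze(0), target)

    # Mutual-learning term: symmetric KL pulls the two views together,
    # so each view acts as a soft teacher for the other.
    kl = F.kl_div(log_p_u2u, log_p_u2t, log_target=True, reduction="sum") \
       + F.kl_div(log_p_u2t, log_p_u2u, log_target=True, reduction="sum")

    return ce + alpha * kl
```

In this sketch, `alpha` trades off agreement between the two views against their supervised fits; setting it to zero recovers two independently trained predictors.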
Keywords:
Natural Language Processing: Dialogue and Interactive Systems
Natural Language Processing: Applications
Natural Language Processing: Knowledge Extraction