Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization
Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 3808-3814. https://doi.org/10.24963/ijcai.2021/524

Meeting summarization is a challenging task due to the dynamic interactions among multiple speakers and the lack of sufficient training data. Existing methods view a meeting as a linear sequence of utterances, ignoring the diverse relations between utterances. In addition, the limited labeled data further hinders data-hungry neural models. In this paper, we mitigate the above challenges by introducing dialogue-discourse relations. First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) that explicitly models the interactions between utterances in a meeting by modeling their discourse relations. The core module is a relational graph encoder, in which utterances and discourse relations are modeled in a graph interaction manner. Second, we devise a Dialogue Discourse-Aware Data Augmentation (DDADA) strategy that constructs a pseudo-summarization corpus from existing input meetings; it is 20 times larger than the original dataset and can be used to pretrain DDAMS. Experimental results on the AMI and ICSI meeting datasets show that our full system achieves state-of-the-art performance. Our code and outputs are available at https://github.com/xcfcode/DDAMS/.
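The relational graph encoder mentioned above can be illustrated with a minimal RGCN-style message-passing step: utterance vectors are nodes, and each discourse relation type gets its own transformation. This is an illustrative sketch, not the paper's implementation; the function name, dimensions, and normalization scheme are assumptions.

```python
import numpy as np

def relational_graph_layer(H, edges, W_rel, W_self):
    """One relation-aware message-passing step (illustrative sketch).

    H      : (n, d) node representations, one per utterance
    edges  : list of (src, dst, rel) typed discourse-relation edges
    W_rel  : (num_rels, d, d) one weight matrix per relation type
    W_self : (d, d) self-loop weight matrix
    """
    n, d = H.shape
    out = H @ W_self                  # self-loop message for every node
    counts = np.ones(n)               # per-node message count (incl. self-loop)
    for src, dst, rel in edges:
        out[dst] += H[src] @ W_rel[rel]  # relation-specific message
        counts[dst] += 1
    out = out / counts[:, None]       # mean aggregation
    return np.tanh(out)               # nonlinearity

# Toy usage: 4 utterances, hidden size 8, 2 relation types.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
edges = [(0, 1, 0), (1, 2, 1), (2, 3, 0)]
W_rel = rng.normal(size=(2, 8, 8))
W_self = rng.normal(size=(8, 8))
H_next = relational_graph_layer(H, edges, W_rel, W_self)
```

Stacking several such layers lets information flow along multi-hop discourse paths between utterances.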
Keywords:
Natural Language Processing: Natural Language Summarization
Natural Language Processing: Natural Language Generation