Generating Reasonable Legal Text through the Combination of Language Modeling and Question Answering

Weijing Huang, Xianfeng Liao, Zhiqiang Xie, Jiang Qian, Bojin Zhuang, Shaojun Wang, Jing Xiao

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3687-3693. https://doi.org/10.24963/ijcai.2020/510

Thanks to advances in language modeling, emerging NLP tools for text generation greatly reduce the human workload of writing documents. However, generating legal text poses greater challenges than ordinary text because of its strict requirement for logical soundness, which language modeling alone cannot currently guarantee. To generate reasonable legal documents, we propose a novel method, CoLMQA, which (1) combines Language Modeling and Question Answering, (2) generates text containing slots via Language Modeling, and (3) fills the slots with our proposed Question Answering method, Transformer-based Key-Value Memory Networks. In CoLMQA, the slots represent the parts of the text that must be tightly constrained by logic, such as the name of a law or the number of a law article. Question Answering then fills the slots in context, drawing on a Legal Knowledge Base to keep the logic sound. Experiments verify the quality of the legal documents generated by CoLMQA, which surpasses that of documents generated by pure Language Modeling.
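To make the two-stage pipeline concrete, below is a minimal, hypothetical Python sketch of the data flow the abstract describes: a language model first emits text containing typed slots, and each slot is then treated as a question answered against a key-value legal knowledge base. The function names, the toy knowledge base, and the plain string-matching retrieval are illustrative assumptions; the paper's actual slot filler is a Transformer-based Key-Value Memory Network, not the lookup shown here.

import re

# Stage 1 (stand-in): a language model emits text containing typed slots.
# One output is hard-coded here; in the paper this comes from a trained LM.
def generate_with_slots(prompt: str) -> str:
    return ("According to [LAW_NAME] Article [ARTICLE_NO], "
            "the defendant shall bear tort liability.")

# Toy legal knowledge base as key-value pairs (key: topic plus slot type,
# value: the answer that fills the slot). The paper scores such memories
# with a Transformer encoder; a dictionary is used here only to show the
# data flow.
KNOWLEDGE_BASE = {
    ("tort liability", "LAW_NAME"): "Tort Liability Law of the PRC",
    ("tort liability", "ARTICLE_NO"): "6",
}

def fill_slot(context: str, slot_type: str) -> str:
    # Retrieve the value whose key matches the surrounding context; a real
    # system would rank all memories by learned relevance to the context.
    for (topic, s_type), value in KNOWLEDGE_BASE.items():
        if s_type == slot_type and topic in context.lower():
            return value
    return f"[{slot_type}]"  # leave the slot unresolved if nothing matches

def colmqa_generate(prompt: str) -> str:
    draft = generate_with_slots(prompt)
    # Stage 2: treat each slot as a question answered against the KB,
    # so the logically constrained parts come from the knowledge base
    # rather than from free-form language-model sampling.
    for slot_type in re.findall(r"\[([A-Z_]+)\]", draft):
        draft = draft.replace(f"[{slot_type}]", fill_slot(draft, slot_type), 1)
    return draft

print(colmqa_generate("Draft a judgment on a tort dispute."))

Running this prints "According to Tort Liability Law of the PRC Article 6, the defendant shall bear tort liability." The point of the design is that the law name and article number, which must be logically correct, never depend on the language model's sampling.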
Keywords:
Natural Language Processing: Natural Language Generation
Natural Language Processing: Question Answering
Natural Language Processing: NLP Applications and Tools