Triple-to-Text Generation with an Anchor-to-Prototype Framework

Ziran Li, Zibo Lin, Ning Ding, Hai-Tao Zheng, Ying Shen

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3780-3786. https://doi.org/10.24963/ijcai.2020/523

Generating a textual description from a set of RDF triples is a challenging task in natural language generation. Recent neural methods, which typically generate sentences from scratch, have become the mainstream approach. However, due to the large gap between the structured input and the unstructured output, the input triples alone are often insufficient to determine an expressive and specific description. In this paper, we propose a novel anchor-to-prototype framework to bridge the gap between structured RDF triples and natural text. The model retrieves a set of prototype descriptions from the training data and extracts writing patterns from them to guide the generation process. Furthermore, to make more precise use of the retrieved prototypes, we employ a triple anchor that aligns the input triples into groups so as to better match the prototypes. Experimental results on both English and Chinese datasets show that our method significantly outperforms state-of-the-art baselines in terms of both automatic and manual evaluation, demonstrating the benefit of learning guidance from retrieved prototypes to facilitate triple-to-text generation.
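
To illustrate the retrieval step described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes prototypes are selected by predicate overlap between the input triple set and the triples of training examples, and all names, data, and scoring choices here are hypothetical.

```python
# Minimal sketch of prototype retrieval for triple-to-text generation.
# Assumption (not from the paper): prototypes are training descriptions
# whose source triples share the most predicates with the input triples.

from collections import Counter
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# Hypothetical toy training corpus: (triples, reference description) pairs.
TRAIN = [
    ([("Alan_Bean", "occupation", "astronaut"),
      ("Alan_Bean", "birthPlace", "Wheeler_Texas")],
     "Alan Bean is an astronaut who was born in Wheeler, Texas."),
    ([("Elliot_See", "almaMater", "University_of_Texas"),
      ("Elliot_See", "occupation", "test_pilot")],
     "Elliot See, a test pilot, graduated from the University of Texas."),
]

def predicate_overlap(a: List[Triple], b: List[Triple]) -> int:
    """Score two triple sets by the number of predicates they share."""
    pa = Counter(p for _, p, _ in a)
    pb = Counter(p for _, p, _ in b)
    return sum((pa & pb).values())

def retrieve_prototypes(query: List[Triple], k: int = 1) -> List[str]:
    """Return the k training descriptions whose triples best match the query."""
    scored = sorted(TRAIN, key=lambda ex: predicate_overlap(query, ex[0]),
                    reverse=True)
    return [desc for _, desc in scored[:k]]

if __name__ == "__main__":
    query = [("Buzz_Aldrin", "occupation", "astronaut"),
             ("Buzz_Aldrin", "birthPlace", "Glen_Ridge_New_Jersey")]
    # The retrieved description acts as a writing-pattern prototype
    # that a generator could condition on alongside the input triples.
    print(retrieve_prototypes(query))
```

In the paper's framework, the retrieved prototypes are further matched against anchor-aligned groups of input triples; the sketch above only shows the simpler idea of pulling similar training descriptions to serve as writing-pattern guidance.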
Keywords:
Natural Language Processing: Natural Language Generation
Natural Language Processing: Natural Language Processing
Natural Language Processing: NLP Applications and Tools
Natural Language Processing: Other