Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation

Yang Gao, Christian M. Meyer, Mohsen Mesgar, Iryna Gurevych

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2350-2356. https://doi.org/10.24963/ijcai.2019/326

Document summarisation can be formulated as a sequential decision-making problem, which can be solved by Reinforcement Learning (RL) algorithms. The predominant RL paradigm for summarisation learns a cross-input policy, which requires considerable time, data, and parameter tuning due to the huge search spaces and the delayed rewards. Learning input-specific RL policies is a more efficient alternative, but so far it depends on handcrafted rewards, which are difficult to design and yield poor performance. We propose RELIS, a novel RL paradigm that learns a reward function with Learning-to-Rank (L2R) algorithms at training time and uses this reward function to train an input-specific RL policy at test time. We prove that RELIS is guaranteed to generate near-optimal summaries with appropriate L2R and RL algorithms. Empirically, we evaluate our approach on extractive multi-document summarisation. We show that RELIS reduces the training time by two orders of magnitude compared to the state-of-the-art models while performing on par with them.
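
To make the two-phase idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation): at training time, a linear reward function is learned from summary preference pairs with a simple pairwise hinge loss (one possible L2R choice); at test time, a REINFORCE-style policy over sentence-inclusion decisions is optimised for a single input document against that learned reward. The feature vectors, hyperparameters, and the specific L2R and RL algorithms are illustrative assumptions, not the paper's choices.

    # Minimal RELIS-style sketch (illustrative only; all names and choices are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)

    # ---- Training time: learn a reward function from ranked summary pairs (L2R) ----
    def train_reward(features, pref_pairs, lr=0.1, epochs=50):
        """Learn linear reward weights w so that preferred summaries score higher.
        features: (n_summaries, d) array of summary feature vectors.
        pref_pairs: list of (better_idx, worse_idx) preference pairs."""
        w = np.zeros(features.shape[1])
        for _ in range(epochs):
            for better, worse in pref_pairs:
                margin = features[better] @ w - features[worse] @ w
                if margin < 1.0:  # hinge-style update on violated pairs
                    w += lr * (features[better] - features[worse])
        return w

    # ---- Test time: train an input-specific policy against the learned reward ----
    def summarise(sent_feats, w, budget=3, iters=200, lr=0.5, samples=16):
        """REINFORCE-style optimisation of per-sentence inclusion logits for ONE
        input document, using the learned reward as the optimisation objective."""
        n = sent_feats.shape[0]
        logits = np.zeros(n)
        for _ in range(iters):
            probs = 1.0 / (1.0 + np.exp(-logits))
            grads, rewards = [], []
            for _ in range(samples):
                pick = rng.random(n) < probs          # sample an extractive summary
                if pick.sum() == 0 or pick.sum() > budget:
                    reward = -1.0                      # penalise invalid summaries
                else:
                    reward = sent_feats[pick].sum(axis=0) @ w
                grads.append(pick - probs)             # grad of log-prob w.r.t. logits
                rewards.append(reward)
            rewards = np.array(rewards)
            baseline = rewards.mean()                  # simple variance-reduction baseline
            logits += lr * np.mean((rewards - baseline)[:, None] * np.array(grads), axis=0)
        probs = 1.0 / (1.0 + np.exp(-logits))
        return np.argsort(-probs)[:budget]             # highest-probability sentences

    # Toy usage with random vectors standing in for real summary/sentence features.
    feats = rng.normal(size=(10, 5))
    pairs = [(i, j) for i in range(5) for j in range(5, 10)]  # first 5 "preferred"
    w = train_reward(feats, pairs)
    doc_sent_feats = rng.normal(size=(8, 5))                  # one test-time document
    print(summarise(doc_sent_feats, w))

The point of the sketch is the division of labour the abstract describes: the (potentially expensive) reward learning happens once at training time, while the RL step at test time only has to search the small, input-specific space of extractive summaries for one document.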
Keywords:
Machine Learning: Learning Preferences or Rankings
Machine Learning: Reinforcement Learning
Natural Language Processing: Natural Language Generation
Natural Language Processing: Natural Language Summarization
Machine Learning Applications: Applications of Reinforcement Learning