Reasoning Like Human: Hierarchical Reinforcement Learning for Knowledge Graph Reasoning

Guojia Wan, Shirui Pan, Chen Gong, Chuan Zhou, Gholamreza Haffari

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 1926-1932. https://doi.org/10.24963/ijcai.2020/267

Knowledge graphs typically suffer from incompleteness. A popular approach to knowledge graph completion is to infer missing knowledge by multi-hop reasoning over the information found along other paths connecting a pair of entities. However, multi-hop reasoning remains challenging because the reasoning process often encounters the multiple-semantics issue, in which a relation or an entity carries several meanings. To deal with this, we propose a novel Hierarchical Reinforcement Learning framework that automatically learns chains of reasoning from a knowledge graph. Our framework is inspired by the hierarchical process through which humans cognitively handle ambiguous cases. The whole reasoning process is decomposed into a hierarchy of two-level Reinforcement Learning policies for encoding historical information and learning a structured action space, which makes it more natural to handle the multiple-semantics issue. Experimental results show that our proposed model achieves substantial improvements on tasks involving ambiguous relations.
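The two-level decomposition described in the abstract can be illustrated with a minimal sketch: a high-level policy first commits to a relation (a sub-goal), and a low-level policy then picks an entity consistent with that relation, extending the reasoning path hop by hop. Everything below is illustrative, not the paper's implementation: the toy graph, the entity/relation names, and the untrained placeholder policies are all assumptions, standing in for learned neural policies.

```python
# Toy knowledge graph: entity -> list of (relation, tail_entity) edges.
# Names are illustrative only, not from the paper.
KG = {
    "Paris":  [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("part_of", "Europe"), ("capital", "Paris")],
    "Europe": [],
}

def high_level_policy(path, edges):
    """High-level policy: choose a relation as a sub-goal.
    A trained model would score relations from the encoded path
    history; here we deterministically pick the first relation."""
    relations = sorted({r for r, _ in edges})
    return relations[0]

def low_level_policy(path, relation, edges):
    """Low-level policy: choose a tail entity consistent with the
    relation selected by the high-level policy."""
    candidates = [e for r, e in edges if r == relation]
    return candidates[0]

def reason(start, hops=2):
    """Roll out a reasoning path by alternating the two policies."""
    path, entity = [start], start
    for _ in range(hops):
        edges = KG.get(entity, [])
        if not edges:
            break
        relation = high_level_policy(path, edges)          # pick sub-goal
        entity = low_level_policy(path, relation, edges)   # pick entity
        path += [relation, entity]
    return path

print(reason("Paris"))
```

Separating relation choice from entity choice is what lets the hierarchy disambiguate: the same surface relation can lead the low-level policy to different entities depending on the path history encoded so far.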
Keywords:
Knowledge Representation and Reasoning: Reasoning about Knowledge and Belief
Data Mining: Mining Graphs, Semi Structured Data, Complex Data
Machine Learning Applications: Applications of Reinforcement Learning