Complex Query Answering with Neural Link Predictors (Extended Abstract)*
Pasquale Minervini, Erik Arakelyan, Daniel Daza, Michael Cochez
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 5309-5313.
https://doi.org/10.24963/ijcai.2022/741
Neural link predictors are useful for identifying missing edges in large-scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries containing logical conjunctions (∧), disjunctions (∨), and existential quantifiers (∃). We propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor. We then analyse two strategies for solving the resulting optimisation problem: gradient-based and combinatorial search. In our experiments, the proposed approach produces more accurate results than state-of-the-art methods (black-box models trained on millions of generated queries) without the need for training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from 8% up to 40% in Hits@3 across multiple knowledge graphs. We find that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. All our source code and datasets are available online (https://github.com/uclnlp/cqd).
Keywords: Artificial Intelligence: General
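To make the approach concrete, below is a minimal sketch (not the authors' implementation) of how a two-hop conjunctive query ?T : r1(a, V) ∧ r2(V, T) can be answered by combining per-atom scores from a neural link predictor with the product t-norm and a beam search over the intermediate variable, in the spirit of the combinatorial-search variant described in the abstract. The DistMult-style scorer, the random embeddings, and all function names here are illustrative assumptions; a real system would load a pre-trained model such as ComplEx.

```python
# Hypothetical sketch: answering r1(a, V) AND r2(V, T) with a stand-in
# link predictor, the product t-norm, and beam search over V.
import torch

torch.manual_seed(0)

NUM_ENTITIES, NUM_RELATIONS, DIM = 50, 4, 16

# Stand-in embeddings; a real system would load pre-trained parameters.
entity_emb = torch.nn.Embedding(NUM_ENTITIES, DIM)
relation_emb = torch.nn.Embedding(NUM_RELATIONS, DIM)


def atom_scores(subject_idx: torch.Tensor, relation_idx: int) -> torch.Tensor:
    """Score r(subject, o) for every candidate object o, squashed to [0, 1].

    A simple DistMult-style scorer used only for illustration; any neural
    link predictor producing per-entity scores would do.
    """
    s = entity_emb(subject_idx)                    # [batch, DIM]
    r = relation_emb(torch.tensor(relation_idx))   # [DIM]
    scores = (s * r) @ entity_emb.weight.t()       # [batch, NUM_ENTITIES]
    return torch.sigmoid(scores)


def answer_2hop(anchor: int, r1: int, r2: int, beam_size: int = 5) -> torch.Tensor:
    """Return a score for every entity as an answer to r1(anchor, V) AND r2(V, T)."""
    # Step 1: score all candidates for the intermediate variable V.
    v_scores = atom_scores(torch.tensor([anchor]), r1).squeeze(0)

    # Step 2: keep only the top-k intermediate candidates (beam search).
    top_v_scores, top_v_idx = v_scores.topk(beam_size)

    # Step 3: score targets T for each surviving V, then combine the two
    # atom scores with the product t-norm (logical conjunction).
    t_scores = atom_scores(top_v_idx, r2)            # [beam, NUM_ENTITIES]
    combined = top_v_scores.unsqueeze(1) * t_scores  # product t-norm

    # Step 4: each target keeps its best score over intermediate candidates,
    # which also yields an explanation (the V that supports each answer).
    return combined.max(dim=0).values                # [NUM_ENTITIES]


if __name__ == "__main__":
    answer = answer_2hop(anchor=3, r1=0, r2=1)
    print("Top answers:", answer.topk(3).indices.tolist())
```

Keeping the per-atom scores explicit is what allows the intermediate variable assignments to be read off as explanations for each answer, as noted in the abstract.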