Learning First-Order Rules with Differentiable Logic Program Semantics

Kun Gao, Katsumi Inoue, Yongzhi Cao, Hanpin Wang

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3008-3014. https://doi.org/10.24963/ijcai.2022/417

Learning first-order logic programs (LPs) from relational facts, which yields intuitive insights into the data, is a challenging topic in neuro-symbolic research. We introduce a novel differentiable inductive logic programming (ILP) model, called the differentiable first-order rule learner (DFOL), which finds correct LPs from relational facts by searching for interpretable matrix representations of LPs. These interpretable matrices are treated as trainable tensors in neural networks (NNs), and the NNs are devised according to the differentiable semantics of LPs. Specifically, we first adopt a novel propositionalization method that transfers facts into NN-readable vector pairs representing interpretation pairs. We then replace the immediate consequence operator with NN constraint functions consisting of algebraic operations and a sigmoid-like activation function, mapping the symbolic forward-chained format of LPs onto operations between subsymbolic vector representations of atoms. After training by gradient descent, the well-trained parameters of the NNs can be decoded into precise symbolic LPs in forward-chained logic format. We demonstrate that DFOL performs well on several standard ILP datasets, knowledge bases, and probabilistic relational facts, and that it outperforms several well-known differentiable ILP models. Experimental results indicate that DFOL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
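
To make the mechanism described above concrete, the following is a minimal, hypothetical sketch in PyTorch, not the authors' implementation. Under the stated assumptions, interpretations are {0,1}-valued vectors over a fixed set of ground atoms, a trainable matrix W stands in for the paper's interpretable matrix representation of a logic program, and a sigmoid-like activation relaxes one step of forward chaining (the immediate consequence operator). All names, the loss, and the exact aggregation are illustrative choices.

    # Illustrative sketch only: a differentiable relaxation of one
    # forward-chaining step over a toy Herbrand base of N_ATOMS atoms.
    import torch

    N_ATOMS = 8                                        # toy value, assumption
    W = torch.randn(N_ATOMS, N_ATOMS, requires_grad=True)  # candidate rule weights

    def soft_consequence(interp, rule_weights, beta=10.0):
        # Soft body satisfaction per head atom: weighted aggregation of
        # the atoms true in the current interpretation.
        body_scores = interp @ torch.sigmoid(rule_weights).T
        # Sigmoid-like activation pushes scores toward {0,1}, standing in
        # for the discrete "head becomes true" decision.
        return torch.sigmoid(beta * (body_scores - 0.5))

    # An interpretation pair (I, J) extracted from the facts, where J is
    # the intended one-step consequence of I (values here are made up).
    I = torch.tensor([1., 0., 1., 0., 0., 1., 0., 0.])
    J = torch.tensor([1., 1., 1., 0., 0., 1., 0., 0.])

    # Train W so the soft step reproduces the target interpretation.
    loss = torch.nn.functional.mse_loss(soft_consequence(I, W), J)
    loss.backward()                                    # gradients flow into W

After training, thresholding torch.sigmoid(W) would yield a discrete rule matrix that can be read back as symbolic clauses, mirroring the decoding step the abstract describes.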
Keywords:
Machine Learning: Relational Learning
Knowledge Representation and Reasoning: Learning and Reasoning
Machine Learning: Neuro-Symbolic Methods