Learning Relational Representations with Auto-encoding Logic Programs

Sebastijan Dumancic, Tias Guns, Wannes Meert, Hendrik Blockeel

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), Special Track on Understanding Intelligence and Human-level AI in the New Machine Learning era, pages 6081-6087. https://doi.org/10.24963/ijcai.2019/842

Deep learning methods capable of handling relational data have proliferated in recent years. In contrast to traditional relational learning methods, which leverage first-order logic to represent such data, these methods aim to re-represent symbolic relational data in Euclidean space. They offer better scalability, but can only approximate rich relational structures and are less flexible in terms of reasoning. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. The framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and latent representations is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (the representation is exact rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks.
Keywords:
Knowledge representations for learning; Learning knowledge representations (Special Track on Understanding Intelligence and Human-level AI in the New Machine Learning era)
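
To make the auto-encoding idea from the abstract concrete, here is a minimal sketch of what it means to encode and decode relational data with logic programs. Everything in it is an illustrative assumption: the predicates (parent, female), the latent predicates (lat1, lat2), and the hand-written encoder/decoder clauses are not taken from the paper, which instead *learns* such clauses by solving a constraint optimisation problem.

```python
# A minimal sketch of the auto-encoding principle over relational data,
# using hand-written encoder/decoder logic programs. Predicate names
# (parent, female, lat1, lat2) and all clauses are illustrative
# assumptions; in the paper the clauses themselves are learned.

# Original representation: predicate name -> set of ground argument tuples.
facts = {
    "parent": {("ann", "bob"), ("ann", "eve"), ("bob", "carl")},
    "female": {("ann",), ("eve",)},
}

def eval_body(body, db):
    """Enumerate variable substitutions satisfying a conjunction of
    literals, where each literal is (predicate, variable-tuple)."""
    subs = [{}]
    for pred, args in body:
        next_subs = []
        for s in subs:
            for tup in db.get(pred, ()):
                s2 = dict(s)
                # Extend s2 with the bindings this tuple induces;
                # discard it on any conflict with existing bindings.
                if all(s2.setdefault(v, c) == c for v, c in zip(args, tup)):
                    next_subs.append(s2)
        subs = next_subs
    return subs

def apply_program(clauses, db):
    """Apply non-recursive clauses (head, body) to db; return derived facts."""
    out = {}
    for (hpred, hargs), body in clauses:
        for s in eval_body(body, db):
            out.setdefault(hpred, set()).add(tuple(s[v] for v in hargs))
    return out

# Hypothetical encoder: re-represent the data with latent predicates.
encoder = [
    (("lat1", ("X", "Y")), [("parent", ("X", "Y")), ("female", ("X",))]),
    (("lat2", ("X", "Y")), [("parent", ("X", "Y"))]),
]
# Hypothetical decoder: map latent facts back to the original vocabulary.
decoder = [
    (("parent", ("X", "Y")), [("lat2", ("X", "Y"))]),
    (("female", ("X",)), [("lat1", ("X", "Y"))]),
]

latent = apply_program(encoder, facts)
reconstruction = apply_program(decoder, latent)

# Reconstruction loss: original facts the decoder fails to recover.
# Here female(eve) is lost, since eve never occurs as a parent.
missed = {p: facts[p] - reconstruction.get(p, set()) for p in facts}
print("latent facts:", latent)
print("missed on decode:", missed)
```

In the actual framework the encoder and decoder clauses are the unknowns, found with existing constraint solvers as the abstract describes; this sketch fixes them by hand only to show the data flow from original facts to latent facts and back, including an imperfect reconstruction.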