Induction of Interpretable Possibilistic Logic Theories from Relational Data

Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 1153-1159. https://doi.org/10.24963/ijcai.2017/160

The field of statistical relational learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented as sets of weighted logical formulas, which makes them considerably more interpretable than models learned by, e.g., neural networks. In practice, however, these models are often still difficult to interpret correctly: they can contain many formulas that interact in non-trivial ways, and the weights do not always have an intuitive meaning. To address this, we propose a new SRL method that uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.
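To illustrate what "stratified classical theories" means in standard possibilistic logic (this is a toy sketch of possibilistic inference over propositional clauses, not the paper's learning algorithm; the example theory and all names are illustrative assumptions): a theory is a set of formulas each tagged with a certainty weight, and a conclusion holds with necessity at least λ exactly when the classical formulas of weight ≥ λ entail it.

```python
from itertools import product

# Toy possibilistic-logic inference (illustrative assumption, not the
# paper's method). A theory is a list of (clause, weight) pairs; a clause
# is a list of literals like "bird" or "-bird"; a weight in (0, 1] is a
# lower bound on the necessity of the clause.

def satisfies(assignment, clause):
    """True if the truth assignment (dict var -> bool) satisfies the clause."""
    return any(assignment[lit.lstrip("-")] != lit.startswith("-")
               for lit in clause)

def entails(clauses, query_literal, variables):
    """Brute-force classical entailment: every model of the clauses
    must also satisfy the query literal."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(satisfies(assignment, c) for c in clauses):
            if not satisfies(assignment, [query_literal]):
                return False
    return True

def necessity(theory, query_literal, variables):
    """Largest weight w such that the stratum of clauses with weight >= w
    classically entails the query (0.0 if even the full theory does not)."""
    for w in sorted({wt for _, wt in theory}, reverse=True):
        stratum = [c for c, wt in theory if wt >= w]
        if entails(stratum, query_literal, variables):
            return w
    return 0.0

# Illustrative stratified theory (the classic birds-fly example):
theory = [
    (["-bird", "flies"], 0.8),       # birds typically fly
    (["-penguin", "-flies"], 0.95),  # penguins do not fly
    (["-penguin", "bird"], 1.0),     # penguins are birds
    (["bird"], 1.0),                 # the observed individual is a bird
]
variables = ["bird", "penguin", "flies"]
print(necessity(theory, "flies", variables))  # → 0.8
```

Reading off the result: "flies" is not entailed by the weight-1.0 or weight-0.95 strata alone, but once the default rule "birds fly" (weight 0.8) is included it is, so the conclusion is derivable with certainty 0.8. This is the sense in which a stratified theory makes explicit which conclusions hold at which certainty level.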
Keywords:
Knowledge Representation, Reasoning, and Logic: Common-Sense Reasoning
Machine Learning: Relational Learning
Uncertainty in AI: Uncertainty Representations
Uncertainty in AI: Uncertainty in AI