Human-Driven FOL Explanations of Deep Learning

Gabriele Ciravegna, Francesco Giannini, Marco Gori, Marco Maggini, Stefano Melacci

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2234-2240. https://doi.org/10.24963/ijcai.2020/309

Deep neural networks are usually considered black boxes due to their complex internal architecture, which cannot straightforwardly provide human-understandable explanations of how they behave. Indeed, Deep Learning is still viewed with skepticism in those real-world domains in which incorrect predictions may produce critical effects. This is one of the reasons why, in the last few years, Explainable Artificial Intelligence (XAI) techniques have gained a lot of attention in the scientific community. In this paper, we focus on the case of multi-label classification, proposing a neural network that learns the relationships among the predictors associated with each class, yielding First-Order Logic (FOL)-based descriptions. Both the explanation-related network and the classification-related network are jointly learned, thus implicitly introducing a latent dependency between the development of the explanation mechanism and the development of the classifiers. Our model can integrate human-driven preferences that guide the learning-to-explain process, and it is presented in a unified framework. Different types of explanations are evaluated in distinct experiments, showing that the proposed approach discovers new knowledge and can improve the classifier performance.
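
To make the joint-learning idea concrete, the following is a minimal, illustrative sketch (assuming PyTorch) of training a multi-label classifier together with an auxiliary "explainer" head that models each class activation as a function of the other classes' activations, which is the kind of inter-class relationship the paper distills into FOL descriptions. All names, layer sizes, and the specific loss weighting below are hypothetical simplifications for illustration, not the authors' actual architecture, and the FOL extraction step itself is not shown.

    # Hypothetical sketch: joint training of a multi-label classifier and
    # per-class explainer heads that learn relationships among class predictors.
    import torch
    import torch.nn as nn

    N_FEATURES, N_CLASSES = 16, 4

    classifier = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                               nn.Linear(32, N_CLASSES), nn.Sigmoid())

    # One small explainer per class: it reconstructs class i's activation from
    # the remaining N_CLASSES - 1 activations, exposing inter-class dependencies.
    explainers = nn.ModuleList([
        nn.Sequential(nn.Linear(N_CLASSES - 1, 8), nn.ReLU(),
                      nn.Linear(8, 1), nn.Sigmoid())
        for _ in range(N_CLASSES)
    ])

    optimizer = torch.optim.Adam(list(classifier.parameters()) +
                                 list(explainers.parameters()), lr=1e-3)
    bce = nn.BCELoss()

    def training_step(x, y, lambda_expl=0.1):
        """One joint update: supervised multi-label loss + explanation consistency loss."""
        p = classifier(x)                    # class activations in [0, 1]
        loss = bce(p, y)                     # standard multi-label classification loss
        for i, expl in enumerate(explainers):
            others = torch.cat([p[:, :i], p[:, i + 1:]], dim=1)
            # Encourage the explainer to predict class i from the other classes.
            loss = loss + lambda_expl * bce(expl(others), p[:, i:i + 1].detach())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage with random data.
    x = torch.randn(64, N_FEATURES)
    y = (torch.rand(64, N_CLASSES) > 0.5).float()
    print(training_step(x, y))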
Keywords:
Machine Learning: Explainable Machine Learning
Machine Learning: Interpretability
Machine Learning: Neuro-Symbolic Methods