On Preferred Abductive Explanations for Decision Trees and Random Forests
Gilles Audemard, Steve Bellart, Louenas Bounia, Frederic Koriche, Jean-Marie Lagniez, Pierre Marquis
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 643-650.
https://doi.org/10.24963/ijcai.2022/91
Abductive explanations play a central role in eXplainable Artificial Intelligence (XAI): they clarify, using only a few features, why a data instance is classified as it is. However, an instance may have exponentially many minimum-size abductive explanations, and
this source of complexity holds even for "intelligible" classifiers, such as decision trees. When the number of such abductive explanations is huge,
computing just one of them is often not informative enough; in particular, better explanations than the one
derived may exist. To circumvent this issue, we propose to leverage
a model of the explainee that makes her/his preferences about explanations precise, and to compute only
preferred explanations. In this paper, several such models are presented and discussed. For each model, we present and
evaluate an algorithm for computing preferred majoritary reasons, where majoritary reasons are abductive
explanations specifically suited to random forests. We show that in practice the preferred majoritary reasons for an instance
can be far less numerous than its majoritary reasons.
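The core notion of an abductive explanation can be illustrated with a minimal sketch (not the paper's algorithm): on a hypothetical decision tree over three binary features, a greedy pass drops every feature whose value is not needed to force the prediction, yielding a subset-minimal sufficient reason. The `tree` classifier and the instance below are invented for illustration.

```python
from itertools import product

# Hypothetical toy classifier: a decision tree over three binary
# features, encoded directly as a Python function.
def tree(x):
    # x is a tuple (x0, x1, x2) of 0/1 values
    if x[0] == 1:
        return 1 if x[1] == 1 else 0
    return 1 if x[2] == 1 else 0

def is_sufficient(fixed, instance, clf, n=3):
    """Check whether fixing the features in `fixed` to their values
    in `instance` forces the classifier's prediction, whatever the
    remaining features are set to."""
    target = clf(instance)
    for completion in product([0, 1], repeat=n):
        candidate = tuple(instance[i] if i in fixed else completion[i]
                          for i in range(n))
        if clf(candidate) != target:
            return False
    return True

def abductive_explanation(instance, clf, n=3):
    """Greedily remove features to obtain one subset-minimal
    abductive explanation (a sufficient reason) for the instance."""
    fixed = set(range(n))
    for i in range(n):
        if is_sufficient(fixed - {i}, instance, clf, n):
            fixed.discard(i)
    return sorted(fixed)

# For instance (1, 1, 0), fixing x0 = 1 and x1 = 1 already forces
# the prediction, so feature 2 can be dropped.
print(abductive_explanation((1, 1, 0), tree))  # → [0, 1]
```

Note that which minimal explanation is found depends on the removal order; an instance may admit many such explanations, which is exactly the multiplicity issue the paper addresses by computing preferred ones.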
Keywords:
AI Ethics, Trust, Fairness: Explainability and Interpretability
AI Ethics, Trust, Fairness: Trustworthy AI
Constraint Satisfaction and Optimization: Constraints and Machine Learning
Knowledge Representation and Reasoning: Preference Modelling and Preference-Based Reasoning