Most General Explanations of Tree Ensembles

Yacine Izza, Alexey Ignatiev, Sasha Rubin, Joao Marques-Silva, Peter J. Stuckey

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5463-5471. https://doi.org/10.24963/ijcai.2025/608

Explainable Artificial Intelligence (XAI) is critical for attaining trust in the operation of AI systems. A key question about any decision of an AI system is "why was this decision made this way?". Formal approaches to XAI use a formal model of the AI system to identify abductive explanations. While an abductive explanation may apply to a large number of inputs sharing the same concrete values, more general explanations may be preferred for numeric inputs. So-called inflated abductive explanations give intervals for each feature, ensuring that any input whose values fall within these intervals is guaranteed to yield the same prediction. Inflated explanations cover a larger portion of the input space, and hence are deemed more general explanations. But there can be many (inflated) abductive explanations for an instance. Which is the best? In this paper, we show how to find a most general abductive explanation for an AI decision. This explanation covers as much of the input space as possible, while still being a correct formal explanation of the model's behaviour. Given that we only want to give a human one explanation for a decision, the most general explanation is the one with the broadest applicability, and hence the one most likely to seem sensible.
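As a rough illustration of the guarantee an inflated abductive explanation provides, the sketch below trains a small scikit-learn tree ensemble and empirically checks that every sampled point inside an interval box receives the same prediction as the explained instance. The feature intervals, the model, and the sampling-based check are all assumptions made for illustration; the paper's method computes and formally verifies such intervals rather than testing them by sampling.

# Illustrative sketch only: the interval box below is hypothetical and is
# checked empirically by sampling, not computed or verified by the formal
# method described in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

instance = X[0]
prediction = model.predict(instance.reshape(1, -1))[0]

# Hypothetical inflated explanation: one interval per feature around the instance.
intervals = np.stack([instance - 0.1, instance + 0.1], axis=1)  # shape (n_features, 2)

# Sample points whose feature values all fall within the intervals and check
# whether the ensemble's prediction stays the same (a necessary condition for
# the intervals to form an inflated abductive explanation).
samples = rng.uniform(intervals[:, 0], intervals[:, 1], size=(10_000, 2))
same = np.all(model.predict(samples) == prediction)
print(f"All sampled points inside the box share prediction {prediction}: {same}")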
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Constraint Satisfaction and Optimization: CSO: Constraint optimization problems
Knowledge Representation and Reasoning: KRR: Diagnosis and abductive reasoning
Constraint Satisfaction and Optimization: CSO: Constraint satisfaction