On Tackling Explanation Redundancy in Decision Trees (Extended Abstract)

Yacine Izza, Alexey Ignatiev, Joao Marques-Silva

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Journal Track. Pages 6900-6904. https://doi.org/10.24963/ijcai.2023/779

Claims about the interpretability of decision trees can be traced back to the origins of machine learning (ML). Indeed, given some input consistent with a decision tree's path, the explanation for the resulting prediction consists of the features in that path. Moreover, a growing number of works propose the use of decision trees, and of other so-called interpretable models, as a possible solution for deploying ML models in high-risk applications. This paper overviews recent theoretical and practical results which demonstrate that, for most decision trees, tree paths exhibit so-called explanation redundancy, in that logically sound explanations can often be significantly more succinct than what the features in the path dictate. More importantly, such decision tree explanations can be computed in polynomial time, and so can be produced with essentially no effort beyond traversing the decision tree. The experimental results, obtained on a large range of publicly available decision trees, support the paper's claims.
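
To make the redundancy-removal idea concrete, the following is a minimal Python sketch (not the authors' implementation) of a deletion-based reduction of a path explanation: each feature tested on the path is tentatively dropped, and it is left out of the explanation only if every leaf still reachable under the remaining fixed features yields the same prediction. The toy Node/Leaf representation and the helper names (predict_path, entails_prediction, succinct_explanation) are illustrative assumptions; the paper's polynomial-time algorithms are more general.

from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    label: int

@dataclass
class Node:
    feature: str                  # feature tested at this node
    threshold: float              # branch left if value <= threshold
    left: Union["Node", Leaf]
    right: Union["Node", Leaf]

def predict_path(node, instance):
    """Follow the path consistent with `instance`; return its prediction
    and the set of features tested along the way."""
    used = set()
    while isinstance(node, Node):
        used.add(node.feature)
        node = node.left if instance[node.feature] <= node.threshold else node.right
    return node.label, used

def entails_prediction(node, instance, fixed, target):
    """True iff every leaf reachable when only the `fixed` features are
    constrained to their values in `instance` predicts `target`."""
    if isinstance(node, Leaf):
        return node.label == target
    if node.feature in fixed:
        child = node.left if instance[node.feature] <= node.threshold else node.right
        return entails_prediction(child, instance, fixed, target)
    # Free feature: either branch could be taken, so both must agree.
    return (entails_prediction(node.left, instance, fixed, target) and
            entails_prediction(node.right, instance, fixed, target))

def succinct_explanation(root, instance):
    """Deletion-based removal of redundant path features: drop a feature
    whenever the remaining ones still entail the same prediction."""
    target, expl = predict_path(root, instance)
    for feat in sorted(expl):
        trial = expl - {feat}
        if entails_prediction(root, instance, trial, target):
            expl = trial          # `feat` is redundant
    return target, expl

# Tiny usage example on a hypothetical tree where feature "b" is redundant.
tree = Node("a", 0.5,
            Node("b", 0.5, Leaf(0), Leaf(0)),   # both leaves predict 0
            Leaf(1))
print(succinct_explanation(tree, {"a": 0.2, "b": 0.7}))   # (0, {'a'})

Each agreement check is a single traversal of the tree and at most one check is made per path feature, so the whole reduction runs in time polynomial in the size of the tree, in line with the abstract's claim.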
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
AI Ethics, Trust, Fairness: ETF: Trustworthy AI
Constraint Satisfaction and Optimization: CSO: Constraint satisfaction
Constraint Satisfaction and Optimization: CSO: Satisfiability
Knowledge Representation and Reasoning: KRR: Automated reasoning and theorem proving
Knowledge Representation and Reasoning: KRR: Diagnosis and abductive reasoning
Machine Learning: ML: Symbolic methods