Interpretable DNFs

Martin C. Cooper, Imane Bousdira, Clément Carbonnel

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4985-4993. https://doi.org/10.24963/ijcai.2025/555

A classifier is considered interpretable if each of its decisions has an explanation small enough to be easily understood by a human user. A DNF can be seen as a binary classifier κ over Boolean domains. The size of an explanation of a positive decision taken by a DNF κ is bounded by the maximum size of the terms of κ, since a positive decision can be explained by giving a term of κ that evaluates to true. Since both positive and negative decisions must be explained, we consider that interpretable DNFs are those κ for which both κ and its complement can be expressed as DNFs composed of terms of bounded size. In this paper, we investigate the family of k-DNFs whose complements can also be expressed as k-DNFs. We compare two such families, namely depth-k decision trees and nested k-DNFs, a novel family of models. Experimental evidence indicates that nested k-DNFs are an interesting alternative to decision trees in terms of interpretability and accuracy.
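The mechanism the abstract describes, explaining a positive decision by exhibiting a satisfied term, can be illustrated with a minimal sketch. This is not the authors' implementation: the term encoding and the names (Term, classify, explain_positive) are illustrative assumptions, and only the positive case is shown; negative decisions would be explained analogously via a bounded-term DNF for the complement of κ.

```python
from typing import Optional

# Assumed encoding: a term is a dict mapping feature index -> required
# Boolean value; a DNF is a list of such terms (disjunction of conjunctions).
Term = dict[int, bool]
DNF = list[Term]

def satisfies(x: list[bool], term: Term) -> bool:
    """Check whether instance x satisfies every literal of the term."""
    return all(x[i] == v for i, v in term.items())

def classify(kappa: DNF, x: list[bool]) -> bool:
    """A DNF classifies x positively iff some term evaluates to true."""
    return any(satisfies(x, t) for t in kappa)

def explain_positive(kappa: DNF, x: list[bool]) -> Optional[Term]:
    """A satisfied term is itself an explanation of a positive decision;
    for a k-DNF its size is bounded by k."""
    for term in kappa:
        if satisfies(x, term):
            return term
    return None  # negative decision: explain via a DNF for the complement

# Example: kappa = (x0 AND NOT x1) OR x2, a 2-DNF.
kappa = [{0: True, 1: False}, {2: True}]
x = [True, False, False]
print(classify(kappa, x))          # True
print(explain_positive(kappa, x))  # {0: True, 1: False}
```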
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Knowledge Representation and Reasoning: General
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Machine Learning: General