Counterfactual Explanations Under Model Multiplicity and Their Use in Computational Argumentation

Gianvincenzo Alfano, Adam Gould, Francesco Leofante, Antonio Rago, Francesca Toni

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4321-4329. https://doi.org/10.24963/ijcai.2025/481

Counterfactual explanations (CXs) are widely recognised as an essential technique for providing recourse recommendations for AI models. However, it is not obvious how to determine CXs under model multiplicity, where different but equally performing models can be obtained for the same task. In this paper, we propose novel qualitative and quantitative definitions of CXs based on explicit, nested quantification over (groups of) model decisions. We also study properties of these notions and identify decision problems of interest for them. While our CXs are broadly applicable, in this paper we instantiate them within computational argumentation, where model multiplicity naturally emerges, e.g. with incomplete and case-based argumentation frameworks. We then illustrate the suitability of our CXs for model multiplicity in legal and healthcare contexts, before analysing the complexity of the associated decision problems.
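The abstract's idea of quantifying over groups of model decisions can be made concrete with a small sketch (not taken from the paper; the models, features, and quantifier names below are hypothetical): given several equally plausible classifiers that all reject an input, a "universal" CX must flip the decision of every model, while an "existential" CX need only flip at least one, and the latter can typically be found closer to the original input.

```python
# Illustrative sketch of CXs under model multiplicity (hypothetical example,
# not the paper's formalism). Three equally plausible linear classifiers
# weigh two features differently; we search a grid for the nearest input
# change that flips "all" models vs. "any" model.
from itertools import product

# Hypothetical competing models: each maps a feature vector to 0/1.
models = [
    lambda x: int(0.8 * x[0] + 0.2 * x[1] >= 0.5),
    lambda x: int(0.5 * x[0] + 0.5 * x[1] >= 0.5),
    lambda x: int(0.2 * x[0] + 0.8 * x[1] >= 0.5),
]

def is_cx(x, x_prime, quantifier):
    """x_prime is a CX for x under the given quantification over models."""
    flipped = [m(x_prime) != m(x) for m in models]
    return all(flipped) if quantifier == "all" else any(flipped)

def l1(x, y):
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

x = (0.2, 0.2)  # rejected by every model
grid = [(a / 10, b / 10) for a, b in product(range(11), repeat=2)]

# Nearest universal CX: must be accepted by all three models.
universal = min((c for c in grid if is_cx(x, c, "all")), key=lambda c: l1(x, c))
# Nearest existential CX: needs to flip only one model's decision.
existential = min((c for c in grid if is_cx(x, c, "any")), key=lambda c: l1(x, c))

print("universal CX:", universal)    # lies farther from x ...
print("existential CX:", existential)  # ... than the existential one
```

The gap between the two distances illustrates why the choice of quantifier matters for recourse: a recommendation that is valid for one plausible model may fail under another.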
Keywords:
Knowledge Representation and Reasoning: KRR: Argumentation
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability