Efficient and Rigorous Model-Agnostic Explanations
Joao Marques-Silva, Jairo A. Lefebre-Lobaina, Maria Vanina Martinez
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 2637-2646.
https://doi.org/10.24963/ijcai.2025/294
Explainable artificial intelligence (XAI) is at the core of trustworthy AI. The best-known XAI methods are sub-symbolic and, unfortunately, offer no guarantees of rigor. Logic-based XAI addresses this lack of rigor, but in turn exhibits drawbacks of its own, including limited scalability, large explanation sizes, and the need to access the details of the machine learning (ML) model. Furthermore, access to the details of an ML model may reveal sensitive information. This paper builds on recent work on symbolic, model-agnostic XAI, which explains samples of behavior of a black-box ML model, and proposes efficient algorithms for computing such explanations. The experiments confirm the scalability of the novel algorithms.
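The paper's own algorithms are not reproduced here. As a rough, hedged illustration of the sample-based, model-agnostic setting the abstract refers to, the sketch below computes a subset-minimal set of features that is sufficient, with respect to a sample of black-box predictions, to fix the prediction on a given instance. All names (`sample_based_explanation`, `sample_consistent`) are hypothetical, and the greedy deletion loop is a generic construction, not the paper's method.

```python
import numpy as np

def sample_consistent(sample_X, sample_y, x, target, feats):
    """True iff every sampled point agreeing with x on `feats`
    receives the same prediction `target`."""
    if not feats:
        return bool(np.all(sample_y == target))
    idx = list(feats)
    mask = np.all(sample_X[:, idx] == x[idx], axis=1)
    return bool(np.all(sample_y[mask] == target))

def sample_based_explanation(predict, sample_X, x):
    """Greedy, deletion-based computation of a subset-minimal
    explanation of predict(x), relative to the sampled behavior
    of the black box (a generic sketch, not the paper's algorithm)."""
    target = predict(x.reshape(1, -1))[0]
    sample_y = predict(sample_X)          # black-box queried only on the sample
    feats = set(range(len(x)))
    for f in range(len(x)):               # one consistency check per feature
        trial = feats - {f}
        if sample_consistent(sample_X, sample_y, x, target, trial):
            feats = trial                 # feature f is not needed
    return sorted(feats)
```

Any callable returning class labels can play the role of `predict`, e.g. the `predict` method of a trained scikit-learn classifier; the explanation is only as strong as the sample it is computed against.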
Keywords:
Constraint Satisfaction and Optimization: CSO: Constraint programming
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Constraint Satisfaction and Optimization: CSO: Constraint learning and acquisition
Constraint Satisfaction and Optimization: CSO: Satisfiability
