Statistically Significant Concept-based Explanation of Image Classifiers via Model Knockoffs
Kaiwen Xu, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 519-526.
https://doi.org/10.24963/ijcai.2023/58
A concept-based classifier can explain the decision process of a deep learning model through human-understandable concepts in image classification problems. However, concept-based explanations can produce false positives, mistakenly regarding unrelated concepts as important for the prediction task. Our goal is to find the concepts that are statistically significant for classification, thereby preventing such misinterpretation. In this study, we propose a method that uses a deep learning model to learn image concepts and then uses knockoff samples to select the concepts important for prediction while controlling the false discovery rate (FDR) below a prescribed level. We evaluate the proposed method in experiments on both synthetic and real data, showing that it controls the FDR properly while selecting highly interpretable concepts, thereby improving the trustworthiness of the model.
Keywords:
AI Ethics, Trust, Fairness: ETF: Trustworthy AI
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Machine Learning: ML: Explainable/Interpretable machine learning
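
The abstract gives only a high-level description of the selection step. As a rough illustration of the kind of knockoff filter it refers to, the sketch below applies a generic model-X Gaussian knockoff filter (Candès et al., 2018) with the knockoff+ threshold to a matrix of pre-computed concept activations. Everything here is an assumption for illustration: the function names, the equicorrelated knockoff construction, and the L1-penalized logistic-regression statistic are not the paper's exact procedure, which learns the concepts themselves with a deep model and builds knockoffs accordingly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV


def gaussian_knockoffs(X, rng):
    """Sample model-X Gaussian knockoffs via the equicorrelated construction.

    Assumes rows of X are approximately jointly Gaussian; this is an
    illustrative stand-in for the paper's model knockoffs."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(p)  # ridge for stability
    lam_min = np.linalg.eigvalsh(Sigma).min()
    # constant s with s <= 2 * lambda_min keeps the joint covariance PSD
    s = np.full(p, min(2.0 * lam_min, Sigma.diagonal().min()))
    Sigma_inv_S = np.linalg.solve(Sigma, np.diag(s))  # Sigma^{-1} diag(s)
    cond_mean = X - (X - mu) @ Sigma_inv_S
    cond_cov = 2.0 * np.diag(s) - np.diag(s) @ Sigma_inv_S
    # symmetrize and clip tiny negative eigenvalues before sampling
    cond_cov = (cond_cov + cond_cov.T) / 2.0
    w, V = np.linalg.eigh(cond_cov)
    L = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    return cond_mean + rng.standard_normal((n, p)) @ L.T


def knockoff_select(X, y, fdr=0.1, seed=0):
    """Select concept columns of X that survive the knockoff+ filter at level `fdr`."""
    rng = np.random.default_rng(seed)
    X_tilde = gaussian_knockoffs(X, rng)
    XX = np.hstack([X, X_tilde])
    # L1-penalized logistic regression as the importance model (an assumption)
    clf = LogisticRegressionCV(
        penalty="l1", solver="saga", Cs=10, max_iter=5000
    ).fit(XX, y)
    beta = clf.coef_.ravel()
    p = X.shape[1]
    W = np.abs(beta[:p]) - np.abs(beta[p:])  # coefficient-difference statistics
    # knockoff+ threshold: smallest t whose estimated FDP is <= fdr
    tau = np.inf
    for t in np.sort(np.abs(W[W != 0])):
        fdp = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp <= fdr:
            tau = t
            break
    return np.where(W >= tau)[0]  # indices of selected concepts


if __name__ == "__main__":
    # toy demo: only the first three concepts drive the label
    rng = np.random.default_rng(1)
    n, p = 500, 20
    X = rng.standard_normal((n, p))
    y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.standard_normal(n) > 0).astype(int)
    print("selected concepts:", knockoff_select(X, y, fdr=0.2))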