Good Explanations in Explainable Artificial Intelligence (XAI): Evidence from Human Explanatory Reasoning
Ruth M.J. Byrne
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Survey Track. Pages 6536-6544.
https://doi.org/10.24963/ijcai.2023/733
Insights from cognitive science about how people understand explanations can be instructive for the development of robust, user-centred explanations in eXplainable Artificial Intelligence (XAI). I survey key tendencies that people exhibit when they construct explanations and make inferences from them, of relevance to the provision of automated explanations for decisions by AI systems. I first review experimental discoveries of some tendencies people exhibit when they construct explanations, including evidence on the illusion of explanatory depth, intuitive versus reflective explanations, and explanatory stances. I then consider discoveries of how people reason about causal explanations, including evidence on inference suppression, causal discounting, and explanation simplicity. I argue that central to the XAI endeavour is the requirement that automated explanations provided by an AI system should make sense to human users.
Keywords:
Survey: Humans and AI
Survey: AI Ethics, Trust, Fairness
Survey: Machine Learning