Explanation Perspectives from the Cognitive Sciences---A Survey

Ramya Srinivasan, Ajay Chander

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Survey track. Pages 4812-4818. https://doi.org/10.24963/ijcai.2020/670

With the growing adoption of AI across fields such as healthcare, finance, and the justice system, explaining an AI decision has become more important than ever before. Developing human-centric explainable AI (XAI) systems requires an understanding of the requirements of the human-in-the-loop seeking the explanation. This includes the cognitive and behavioral purpose that the explanation serves for its recipients, and the structure that the explanation uses to reach those ends. An understanding of the psychological foundations of explanations is thus vital for the development of effective human-centric XAI systems. Towards this end, we survey papers from the cognitive science literature that address the following broad questions: (1) what is an explanation, (2) what are explanations for, and (3) what are the characteristics of good and bad explanations. We organize the resulting insights by highlighting the advantages and shortcomings of various explanation structures and theories, discuss their applicability across different domains, and analyze their utility to various types of humans-in-the-loop. We summarize the key takeaways for the human-centric design of XAI systems and recommend strategies to bridge the existing gap between XAI research and practical needs. We hope this work will spark the development of novel human-centric XAI systems.
Keywords:
Human aspects in AI: general
Safe, Explainable, and Trustworthy AI: general