Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant

Kacper Sokol, Peter Flach

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence

The prevalence of automated decision making, which influences important aspects of our lives -- e.g., school admissions, the job market, insurance and banking -- has resulted in increasing pressure from society and regulators to make this process more transparent and to ensure its explainability, accountability and fairness. We demonstrate a prototype voice-enabled device, called Glass-Box, which users can question to understand automated decisions and to identify the underlying model's biases and errors. Our system explains algorithmic predictions with class-contrastive counterfactual statements (e.g., ``Had a number of conditions been different..., the prediction would change...''), which show what difference in a particular scenario causes the algorithm to ``change its mind''. Such explanations require no prior technical knowledge to understand, hence they are suitable for a lay audience, who interact with the system in a natural way -- through an interactive dialogue. We demonstrate the capabilities of the device by allowing users to impersonate a loan applicant who questions the system to understand the automated decision that they received.
Keywords:
Machine Learning: Classification
Machine Learning: Machine Learning
Natural Language Processing: Dialogue
Humans and AI: Human-Computer Interaction
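
The following Python sketch is only an illustration of the general idea behind class-contrastive counterfactual statements, not the Glass-Box implementation described in the paper: it searches over a small, hypothetical grid of alternative feature values for a toy loan model (the model rules, feature names and candidate values are all assumptions made for this example) and phrases the smallest prediction-flipping change as a ``Had ... been different, the prediction would change'' statement.

```python
# Illustrative sketch only -- not the authors' system. The loan model,
# feature names and candidate values below are hypothetical toy choices.
from itertools import product

def loan_model(applicant):
    """Toy stand-in for the underlying predictive model (hypothetical rules)."""
    approved = applicant["income"] >= 40_000 and applicant["debt"] <= 10_000
    return "approved" if approved else "rejected"

# Hypothetical grid of alternative feature values to search over.
CANDIDATES = {
    "income": [30_000, 40_000, 50_000],
    "debt": [0, 5_000, 10_000, 20_000],
}

def counterfactual_statement(applicant, model=loan_model):
    """Return a class-contrastive counterfactual statement for one applicant."""
    original = model(applicant)
    best = None  # smallest set of changed features that flips the prediction
    features = list(CANDIDATES)
    for values in product(*(CANDIDATES[f] for f in features)):
        changed = {f: v for f, v in zip(features, values) if applicant[f] != v}
        if not changed:
            continue  # identical to the original applicant
        flips = model({**applicant, **changed}) != original
        if flips and (best is None or len(changed) < len(best)):
            best = changed
    if best is None:
        return "No counterfactual found within the candidate values."
    conditions = " and ".join(f"{f} been {v}" for f, v in best.items())
    return f"Had {conditions}, the prediction would change from '{original}'."

if __name__ == "__main__":
    applicant = {"income": 30_000, "debt": 15_000}
    print(counterfactual_statement(applicant))
    # e.g. "Had income been 40000 and debt been 0, the prediction would
    # change from 'rejected'."
```

A brute-force search like this is only feasible for a handful of features and candidate values; the point of the sketch is the form of the output -- a minimal, scenario-specific change phrased in plain language -- which is what makes such explanations accessible to a lay audience in a spoken dialogue.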