Identifying Drivers of Predictive Aleatoric Uncertainty

Pascal Iversen, Simon Witzke, Katharina Baum, Bernhard Y. Renard

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5453-5462. https://doi.org/10.24963/ijcai.2025/607

Explainability and uncertainty quantification are key to trustworthy artificial intelligence. However, the reasoning behind uncertainty estimates is generally left unexplained. Identifying the drivers of uncertainty complements explanations of point predictions, helping to recognize model limitations and to support transparent decision-making. So far, explanations of uncertainties have rarely been studied. The few exceptions rely on Bayesian neural networks or technically intricate approaches, such as auxiliary generative models, which hinders their broad adoption. We propose a straightforward approach to explain predictive aleatoric uncertainties. We estimate uncertainty in regression as predictive variance by adapting a neural network with a Gaussian output distribution. Subsequently, we apply out-of-the-box explainers to the model's variance output. This approach can explain uncertainty influences more reliably than more complex published approaches, which we demonstrate in a synthetic setting with a known data-generating process. We substantiate our findings with a nuanced, quantitative benchmark that includes synthetic and real-world tabular and image datasets. For this, we adapt metrics from conventional XAI research to uncertainty explanations. Overall, the proposed method explains uncertainty estimates with few modifications to the model architecture and outperforms more intricate methods in most settings.
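The following is a minimal illustrative sketch, not the authors' implementation: it assumes a PyTorch regression network with separate mean and variance heads trained with the Gaussian negative log-likelihood, after which a standard attribution method (here, plain input gradients on the variance output; any off-the-shelf explainer could be substituted) identifies feature-level drivers of the predicted aleatoric uncertainty. All names, dimensions, and the toy data are hypothetical.

```python
# Hypothetical sketch of the recipe described in the abstract (not the authors' code).
import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    """Network with a Gaussian output distribution: predicts mean and variance."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        mean = self.mean_head(h)
        var = nn.functional.softplus(self.var_head(h)) + 1e-6  # keep variance positive
        return mean, var

# Toy heteroscedastic data: noise level depends on the second feature.
x = torch.randn(256, 10)
y = x[:, :1] + torch.randn(256, 1) * x[:, 1:2].abs()

model = GaussianRegressor(in_dim=10)
nll = nn.GaussianNLLLoss()  # jointly fits predictive mean and variance
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    mean, var = model(x)
    loss = nll(mean, y, var)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Apply an out-of-the-box explainer to the *variance* output.
# Here: gradient saliency of the predicted variance w.r.t. the inputs.
x_expl = x[:5].clone().requires_grad_(True)
_, var = model(x_expl)
var.sum().backward()
uncertainty_attributions = x_expl.grad  # per-feature drivers of aleatoric uncertainty
print(uncertainty_attributions)
```

Because the variance is just another model output, any attribution method that explains a scalar output (e.g., integrated gradients or SHAP-style explainers) can be pointed at it without further changes to the architecture.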
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Machine Learning: ML: Trustworthy machine learning
Uncertainty in AI: UAI: Uncertainty representations
AI Ethics, Trust, Fairness: ETF: Trustworthy AI