Predictive Uncertainty Estimation for Tractable Deep Probabilistic Models
Julissa Villanueva Llerena
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), Doctoral Consortium, pages 5210-5211.
https://doi.org/10.24963/ijcai.2020/745
Tractable Deep Probabilistic Models (TPMs) are generative models based on arithmetic circuits that allow exact marginal inference in linear time. These models have obtained promising results in several machine learning tasks. Like many other models, however, TPMs can produce over-confident, incorrect inferences, especially in regions with little statistical support. In this work, we will develop efficient estimators of predictive uncertainty that are robust to data scarcity and outliers. We investigate two approaches. The first measures the variability of the output under perturbations of the model weights. The second captures the variability of the prediction under changes in the model architecture. We will evaluate both approaches on challenging tasks such as image completion and multi-label classification.
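To make the first approach concrete, the sketch below (Python, assuming only NumPy) perturbs the sum-node weights of a toy arithmetic circuit, a single sum node over two Gaussian leaves, and uses the relative spread of the resulting marginals as an uncertainty proxy. The toy circuit, its parameters, and the Dirichlet perturbation scheme are illustrative assumptions, not the estimators developed in the thesis.

import numpy as np

rng = np.random.default_rng(0)

# "Learned" parameters of a toy circuit: one sum node over two Gaussian leaves.
weights = np.array([0.7, 0.3])            # sum-node (mixture) weights
means, stds = np.array([-2.0, 2.0]), np.array([1.0, 1.0])

def density(x, w):
    # Exact marginal p(x) of the circuit under sum-node weights w.
    leaf = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return float(w @ leaf)

def perturbation_spread(x, n_samples=500, concentration=100.0):
    # Resample weights from a Dirichlet centred on the learned weights and
    # re-evaluate p(x); the relative spread of the outputs is the uncertainty proxy.
    ws = rng.dirichlet(concentration * weights, size=n_samples)
    preds = np.array([density(x, w) for w in ws])
    return preds.std() / preds.mean()

for x in [0.0, -2.0, 10.0]:               # x = 10.0 lies far from the training support
    print(f"x = {x:5.1f}   p(x) = {density(x, weights):.4f}   "
          f"relative spread = {perturbation_spread(x):.3f}")

In this toy setting the relative spread stays small where the two leaves agree and grows for queries dominated by a single, weakly supported component, which is the kind of behaviour a weight-perturbation estimator is meant to surface.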
Keywords:
Trust, Fairness, Bias: General
Uncertainty in AI: Graphical Models
Machine Learning: Probabilistic Machine Learning