Likelihood-free Out-of-Distribution Detection with Invertible Generative Models

Amirhossein Ahmadian, Fredrik Lindsten

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2119-2125. https://doi.org/10.24963/ijcai.2021/292

The likelihood of generative models has traditionally been used as a score to detect atypical (out-of-distribution, OOD) inputs. However, several recent studies have found this approach to be highly unreliable, even with invertible generative models, where computing the likelihood is feasible. In this paper, we present a different framework for generative-model-based OOD detection that uses the model to construct a new representation space rather than to compute typicality scores directly; in this framework, the score function should be interpretable as the similarity between the input and the training data in the new space. In practice, focusing on invertible models, we propose to extract low-dimensional features (statistics) based on the model's encoder and the complexity of the input images, and then score the data with a One-Class SVM. Unlike recently proposed OOD detection methods for generative models, our method does not require computing likelihood values. It is therefore much faster when using invertible models whose likelihood must be approximated iteratively (e.g. iResNet), while remaining competitive in performance with related methods.
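The scoring pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the simulated 2-D feature vectors stand in for the low-dimensional statistics that would in practice be derived from an invertible model's encoder and an image-complexity estimate, and the One-Class SVM hyperparameters are illustrative choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical stand-in features: in the actual method these would be
# low-dimensional statistics computed from the invertible model's encoder
# and the complexity of the input images. Here we simulate 2-D features,
# with "OOD" test points drawn from a shifted distribution.
rng = np.random.default_rng(0)
train_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # in-distribution
test_in = rng.normal(loc=0.0, scale=1.0, size=(50, 2))       # in-distribution
test_ood = rng.normal(loc=4.0, scale=1.0, size=(50, 2))      # shifted / OOD

# Fit a One-Class SVM on the training features. Its decision_function
# acts as a similarity-to-training-data score: higher means more typical,
# so low scores flag OOD inputs. No likelihood computation is involved.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train_feats)
scores_in = ocsvm.decision_function(test_in)
scores_ood = ocsvm.decision_function(test_ood)

print("mean in-dist score :", scores_in.mean())
print("mean OOD score     :", scores_ood.mean())
```

Thresholding the decision-function score then yields an OOD detector; note that the expensive part of the real method is a single forward pass through the model to obtain the features, with no iterative likelihood approximation.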
Keywords:
Machine Learning: Deep Learning
Uncertainty in AI: Uncertainty Representations
Data Mining: Anomaly/Outlier Detection