Understanding the Effect of Bias in Deep Anomaly Detection

Ziyu Ye, Yuxin Chen, Haitao Zheng

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 3314-3320. https://doi.org/10.24963/ijcai.2021/456

Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data. Recent work attempts to mitigate this problem by augmenting the training of deep anomaly detection models with additional labeled anomaly samples. However, the labeled data often does not align with the target distribution and introduces harmful bias into the trained model. In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection. Concretely, we view anomaly detection as a supervised learning task whose objective is to optimize the recall at a given false positive rate. We formally study the relative scoring bias of an anomaly detector, defined as the difference in performance with respect to a baseline anomaly detector. We establish the first finite-sample rates for estimating the relative scoring bias in deep anomaly detection, and empirically validate our theoretical results on both synthetic and real-world datasets. We also provide an extensive empirical study of how a biased training anomaly set affects the anomaly score function and, in turn, the detection performance on different anomaly classes. Our study demonstrates scenarios in which a biased anomaly set can be useful or problematic, and provides a solid benchmark for future research.
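To give intuition for the evaluation setup named in the abstract, the following is a minimal synthetic sketch of recall measured at a fixed false positive rate, with the relative scoring bias computed as the performance difference against a baseline detector. The score distributions, the 5% FPR, and the helper recall_at_fpr are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def recall_at_fpr(normal_scores, anomaly_scores, target_fpr=0.05):
    # Calibrate the detection threshold on normal-sample scores so that
    # at most `target_fpr` of normal points are flagged; higher scores
    # are taken to mean "more anomalous".
    threshold = np.quantile(normal_scores, 1.0 - target_fpr)
    return float(np.mean(anomaly_scores > threshold))

rng = np.random.default_rng(0)

# Synthetic score distributions standing in for two trained detectors:
# a baseline model, and one trained with an additional (possibly biased)
# labeled anomaly set. Each detector is thresholded on its own normals.
base_normal, base_anomaly = rng.normal(0, 1, 10_000), rng.normal(2.0, 1, 1_000)
aug_normal, aug_anomaly = rng.normal(0, 1, 10_000), rng.normal(2.5, 1, 1_000)

recall_base = recall_at_fpr(base_normal, base_anomaly)
recall_aug = recall_at_fpr(aug_normal, aug_anomaly)

# Relative scoring bias: difference in performance (recall at the
# fixed false positive rate) with respect to the baseline detector.
print(f"baseline recall@5%FPR  = {recall_base:.3f}")
print(f"augmented recall@5%FPR = {recall_aug:.3f}")
print(f"relative scoring bias  = {recall_aug - recall_base:+.3f}")
```

In this toy setup the augmented detector separates anomalies more strongly, so the relative scoring bias is positive; with a mismatched labeled anomaly set it could just as well be negative, which is the regime the paper studies.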
Keywords:
Machine Learning: Deep Learning
Data Mining: Anomaly/Outlier Detection
Machine Learning: Semi-Supervised Learning