Experimental Comparison and Survey of Twelve Time Series Anomaly Detection Algorithms (Extended Abstract)

Cynthia Freeman, Jonathan Merriman, Ian Beaver, Abdullah Mueen

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Journal Track. Pages 5737-5741. https://doi.org/10.24963/ijcai.2022/801

The existence of an anomaly detection method that is optimal for all domains is a myth; consequently, the library of anomaly detection methods grows every year across a wide variety of domains. But this strength is also a weakness: given such a massive library of methods, how can one select the best method for a given application? Current literature focuses on creating new anomaly detection methods or on large frameworks for experimenting with multiple methods at once. However, especially as the literature continues to expand, exhaustively evaluating every anomaly detection method is simply not feasible. To reduce this evaluation burden, we present guidelines for intelligently choosing the most suitable anomaly detection methods based on the characteristics the time series displays, such as seasonality, trend, level change, concept drift, and missing time steps. We provide a comprehensive experimental validation and survey of twelve anomaly detection methods over different time series characteristics, forming guidelines based on several metrics: AUC (Area Under the Curve), windowed F-score, and the Numenta Anomaly Benchmark (NAB) scoring model. Applying our methodologies can save time and effort by surfacing the most promising anomaly detection methods, rather than requiring extensive experimentation with a rapidly expanding library of methods, especially in an online setting.
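To make one of these evaluation metrics concrete, the minimal Python sketch below computes a windowed F-score. The function name windowed_f_score, the matching convention (a detection counts as a true positive if it falls within +/- window steps of a labeled anomaly, with each label matched at most once), and the defaults window=5 and beta=1 are illustrative assumptions, not the paper's exact definitions.

    # Minimal sketch of a windowed F-score for time series anomaly detection.
    # Assumptions (illustrative, not the paper's exact definition): a detection
    # is a true positive if it lands within +/- `window` time steps of a labeled
    # anomaly, each label can be matched at most once, and beta = 1.

    def windowed_f_score(labels, detections, window=5, beta=1.0):
        """labels, detections: iterables of integer time-step indices."""
        labels = list(labels)
        detections = list(detections)
        matched = [False] * len(labels)
        tp = 0
        for d in detections:
            for i, t in enumerate(labels):
                if not matched[i] and abs(d - t) <= window:
                    matched[i] = True
                    tp += 1
                    break
        fp = len(detections) - tp  # detections far from any unmatched label
        fn = len(labels) - tp      # labeled anomalies never detected
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0.0:
            return 0.0
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Example: a true anomaly at t=100 detected at t=103 scores a perfect 1.0,
    # while a second anomaly at t=200 missed by a detection at t=300 halves it.
    print(windowed_f_score([100], [103]))         # 1.0
    print(windowed_f_score([100, 200], [103, 300]))  # 0.5

Windowing the match tolerates small timing offsets that point-wise F-scores would penalize, which is why windowed scores (and the NAB scoring model, which additionally rewards early detection) are common in streaming anomaly detection evaluation.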
Keywords:
Data Mining: Anomaly/Outlier Detection
Machine Learning: Applications
Machine Learning: Time-series; Data Streams