Contextual Outlier Interpretation

Ninghao Liu, Donghwa Shin, Xia Hu

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2461-2467. https://doi.org/10.24963/ijcai.2018/341

While outlier detection has been intensively studied in many applications, interpretation is becoming increasingly important for helping people trust and evaluate detection models by providing intrinsic reasons why the given outliers are identified. Interpreting the abnormality of outliers is a nontrivial task due to the distinct characteristics of different detection models, the complicated structure of data in certain applications, and the imbalanced distribution of outliers and normal instances. In addition, the contexts in which outliers are located, as well as the relations between outliers and their contexts, are usually overlooked by existing interpretation frameworks. To tackle these issues, in this paper we propose a Contextual Outlier INterpretation (COIN) framework to explain the abnormality of outliers spotted by detectors. The interpretability of an outlier is achieved through three aspects: its outlierness score, the attributes that contribute to its abnormality, and a contextual description of its neighborhood. Experimental results on various types of datasets demonstrate the flexibility and effectiveness of the proposed framework.
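As a rough illustration of the three interpretation aspects named above, the sketch below packages an outlierness score, the top contributing attributes, and a brief context summary for a single detected outlier. It relies on a simple standardized-deviation heuristic against the outlier's local context; this is an illustrative stand-in, not the COIN algorithm from the paper, and the function and attribute names are hypothetical.

# Hypothetical sketch of the three interpretation aspects (score, contributing
# attributes, context description). A simple deviation heuristic is used here
# in place of the paper's actual COIN procedure.
import numpy as np

def interpret_outlier(outlier, context, attr_names, top_k=2):
    """Summarize why `outlier` deviates from its local `context` (n x d array)."""
    mu = context.mean(axis=0)                  # centroid of the contextual neighborhood
    sigma = context.std(axis=0) + 1e-8         # per-attribute spread (avoid divide-by-zero)
    deviation = np.abs(outlier - mu) / sigma   # standardized deviation per attribute
    score = float(deviation.mean())            # crude outlierness score
    top = np.argsort(deviation)[::-1][:top_k]  # attributes contributing most to abnormality
    return {
        "outlierness_score": score,
        "contributing_attributes": [(attr_names[i], float(deviation[i])) for i in top],
        "context_description": {
            "size": int(len(context)),
            "centroid": mu.round(2).tolist(),
        },
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    context = rng.normal(loc=[50.0, 3.0], scale=[5.0, 0.5], size=(100, 2))
    outlier = np.array([95.0, 3.1])  # abnormal mainly in the first attribute
    print(interpret_outlier(outlier, context, ["income_k", "household_size"]))

In this toy example, the first attribute receives a much larger standardized deviation than the second, so the interpretation reports it as the main contributor to the abnormality, alongside the overall score and a summary of the neighborhood the outlier is judged against.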
Keywords:
Machine Learning: Interpretability
Machine Learning Applications: Applications of Unsupervised Learning