Positive-Unlabeled Learning from Imbalanced Data

Guangxin Su, Weitong Chen, Miao Xu

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2995-3001. https://doi.org/10.24963/ijcai.2021/412

Positive-unlabeled (PU) learning addresses the binary classification problem when only positive (P) and unlabeled (U) data are available, with no negative (N) data. Existing PU methods perform well on balanced datasets. However, in real applications such as financial fraud detection or medical diagnosis, data are often imbalanced. It remains unclear whether existing PU methods can perform well on imbalanced data. In this paper, we explore this problem and propose a general learning objective for PU learning that specifically targets imbalanced data. With this general learning objective, state-of-the-art PU methods based on optimizing a consistent risk can be adapted to handle the imbalance. We theoretically show that, in expectation, optimizing our learning objective is equivalent to learning a classifier on oversampled balanced data with both P and N data available, and we further provide an estimation error bound. Finally, experimental results validate the effectiveness of our proposal compared to state-of-the-art PU methods.
Keywords:
Machine Learning: Semi-Supervised Learning
Machine Learning: Weakly Supervised Learning
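
To make the idea in the abstract concrete, the sketch below shows one common way a consistent PU risk estimator can be reweighted toward a balanced class prior, so that optimizing it behaves like training on oversampled, class-balanced data. This is a minimal illustration under assumed choices (a sigmoid surrogate loss, a known class prior, and the hypothetical function names `sigmoid_loss` and `pu_balanced_risk`); it is not the paper's exact learning objective.

```python
# Hypothetical sketch: a cost-sensitive variant of the unbiased PU risk estimator,
# with the positive and negative risk terms reweighted toward a balanced (1/2, 1/2)
# class prior. The loss, function names, and toy data are illustrative assumptions.
import numpy as np

def sigmoid_loss(z, y):
    """Sigmoid surrogate loss: l(z, y) = 1 / (1 + exp(y * z))."""
    return 1.0 / (1.0 + np.exp(y * z))

def pu_balanced_risk(scores_p, scores_u, prior, non_negative=True):
    """Cost-sensitive PU risk reweighted toward a balanced class prior.

    scores_p : classifier outputs g(x) on labeled positive data
    scores_u : classifier outputs g(x) on unlabeled data
    prior    : class prior pi = P(y = +1), assumed known or estimated
    """
    # Unbiased decomposition: the negative risk on N data is estimated from
    # U data minus the prior-weighted negative risk on P data.
    risk_p_pos = np.mean(sigmoid_loss(scores_p, +1))   # E_P[l(g(x), +1)]
    risk_p_neg = np.mean(sigmoid_loss(scores_p, -1))   # E_P[l(g(x), -1)]
    risk_u_neg = np.mean(sigmoid_loss(scores_u, -1))   # E_U[l(g(x), -1)]

    neg_risk = risk_u_neg - prior * risk_p_neg          # estimate of (1 - pi) * E_N[l(g(x), -1)]
    if non_negative:
        neg_risk = max(neg_risk, 0.0)                   # non-negative correction, nnPU-style

    # Give each class total weight 1/2, mimicking training on
    # oversampled, class-balanced data.
    return 0.5 * risk_p_pos + 0.5 * neg_risk / (1.0 - prior)

# Toy usage with random scores and a small (imbalanced) class prior.
rng = np.random.default_rng(0)
risk = pu_balanced_risk(rng.normal(1.0, 1.0, 100),
                        rng.normal(-0.5, 1.0, 1000),
                        prior=0.05)
print(f"balanced PU risk estimate: {risk:.4f}")
```

In this sketch the reweighting is what counteracts the imbalance: without the 1/2 weights, a small prior lets the negative term dominate and the learned classifier tends to ignore the minority positive class.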