Learning from Few Positives: a Provably Accurate Metric Learning Algorithm to Deal with Imbalanced Data

Rémi Viola, Rémi Emonet, Amaury Habrard, Guillaume Metzler, Marc Sebban

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2155-2161. https://doi.org/10.24963/ijcai.2020/298

Learning from imbalanced data, where positive examples are very scarce, remains a challenging task from both a theoretical and an algorithmic perspective. In this paper, we address this problem using a metric learning strategy. Unlike state-of-the-art methods, our algorithm MLFP, for Metric Learning from Few Positives, learns a new representation that is used only when a test query is compared to a minority (positive) training example. From a geometric perspective, it artificially brings positive examples closer to the query without changing the distances to the negative (majority class) data. This strategy allows us to expand the decision boundaries around the positives, yielding a better F-Measure, a criterion well suited to imbalanced scenarios. Beyond the algorithmic contribution of MLFP, our paper provides generalization guarantees on the false positive and false negative rates. Extensive experiments conducted on several imbalanced datasets show the effectiveness of our method.
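To make the asymmetric use of the learned metric concrete, here is a minimal Python sketch; it is not the authors' implementation. It assumes a positive semi-definite matrix M has already been learned (the scaled identity below is only a placeholder, not how MLFP obtains M) and applies it inside a k-NN rule only when the candidate neighbour is a positive training example, while negatives keep the standard Euclidean distance.

```python
import numpy as np

def asymmetric_distances(query, X_train, y_train, M):
    # Learned metric only for positive (minority) training points;
    # negatives keep the plain Euclidean distance, following the MLFP idea.
    diffs = X_train - query                                      # shape (n, d)
    d_euclid = np.sqrt(np.sum(diffs ** 2, axis=1))
    d_learned = np.sqrt(np.einsum("nd,de,ne->n", diffs, M, diffs))
    return np.where(y_train == 1, d_learned, d_euclid)

def knn_predict(query, X_train, y_train, M, k=3):
    # Majority vote among the k nearest neighbours under the asymmetric distance.
    d = asymmetric_distances(query, X_train, y_train, M)
    neighbours = np.argsort(d)[:k]
    return int(2 * y_train[neighbours].sum() > k)

# Toy data: roughly 10% positives.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = (rng.random(200) < 0.1).astype(int)

# Placeholder for the learned matrix: a scaled identity simply shrinks all
# distances to positives, mimicking the "bring positives closer" effect.
M = 0.25 * np.eye(2)

query = rng.normal(size=2)
print(knn_predict(query, X_train, y_train, M, k=5))
```

Because M shrinks the distances to positive examples only, positives are ranked closer to the query in the neighbourhood search, which enlarges the decision region around the minority class; this is the geometric intuition stated in the abstract.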
Keywords:
Machine Learning: Classification
Machine Learning Applications: Applications of Supervised Learning