Exploring Binary Classification Hidden within Partial Label Learning

Hengheng Luo, Yabin Zhang, Suyun Zhao, Hong Chen, Cuiping Li

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3285-3291. https://doi.org/10.24963/ijcai.2022/456

Partial label learning (PLL) aims to learn a discriminative model under incomplete supervision, where each instance is annotated with a candidate label set. The basic principle of PLL is that the unknown correct label y of an instance x resides in its candidate label set s, i.e., P(y ∈ s | x) = 1. On this basis, existing research either directly models P(x | y) under different data generation assumptions or proposes various surrogate multiclass losses, all of which implicitly encourage the model-based Pθ(y ∈ s | x) → 1. In this work, we instead explicitly construct a binary classification task toward P(y ∈ s | x) based on the discriminative model: predicting whether the model-output label of x is one of its candidate labels. We formulate a novel risk estimator, with an estimation error bound, for the proposed PLL binary classification risk. By applying logit adjustment based on a disambiguation strategy, the practical approach directly maximizes Pθ(y ∈ s | x) while simultaneously disambiguating the correct label from the candidate set. Thorough experiments validate that the proposed approach achieves competitive performance against state-of-the-art PLL methods.
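The core quantity in the abstract, the model-based Pθ(y ∈ s | x), can be computed from a multiclass classifier by summing softmax probabilities over the candidate set. The sketch below is a minimal illustration of a binary-classification-style loss that pushes Pθ(y ∈ s | x) toward 1; the function name `pll_binary_loss` and the NumPy formulation are my own assumptions, not the paper's actual risk estimator or its logit-adjustment step.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pll_binary_loss(logits, candidate_mask):
    """Negative log of P_theta(y in s | x): the model's probability
    that its predicted label falls inside the candidate set s.

    logits:         (n, k) classifier scores for k classes
    candidate_mask: (n, k) 0/1 indicator of each instance's candidate set
    """
    probs = softmax(logits)
    # P_theta(y in s | x) = sum of class probabilities over candidates.
    p_in_s = (probs * candidate_mask).sum(axis=-1)
    # Minimizing -log P_theta(y in s | x) drives it toward 1.
    return -np.log(p_in_s).mean()
```

When the classifier concentrates its mass on a candidate label, the loss approaches 0; mass on non-candidates is penalized, which is the binary "inside vs. outside the candidate set" signal the paper builds on.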
Keywords:
Machine Learning: Weakly Supervised Learning
Machine Learning: Classification