Unbiased Risk Estimator to Multi-Labeled Complementary Label Learning

Yi Gao, Miao Xu, Min-Ling Zhang

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 3732-3740. https://doi.org/10.24963/ijcai.2023/415

Multi-label learning (MLL) requires assigning multiple relevant labels to each instance. Since a fully supervised MLL dataset requires substantial labeling effort, complementary labels can help alleviate this burden. However, existing approaches to learning from complementary labels are mainly designed for multi-class learning and assume that each instance has exactly one relevant label. Consequently, these approaches cannot be directly applied to MLL when only complementary labels are provided, since the number of relevant labels is then unknown and can vary across instances. In this paper, we first propose an unbiased risk estimator for the multi-labeled complementary label learning (MLCLL) problem. We also provide an estimation error bound to ensure the convergence of the empirical risk estimator. For certain loss functions, however, the unbiased estimator can produce unbounded gradients and lead to overfitting. To mitigate this problem, we improve the risk estimator by minimizing a proper loss function, which has been shown to yield better gradient updates. Experimental results on various datasets demonstrate the effectiveness of the proposed approach.
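The MLCLL estimator itself is given in the paper; as background, the following is a sketch of the classical unbiased risk rewriting for the single-label (multi-class) complementary-label setting that the abstract refers to, which the proposed estimator generalizes (this is the standard result from prior complementary-label work, not the paper's multi-label formula). Assuming a complementary label \bar{y} is drawn uniformly from the K-1 non-relevant classes, i.e. \bar{p}(\bar{y} \mid x) = \frac{1}{K-1} \sum_{y \neq \bar{y}} p(y \mid x), the classification risk under a loss \ell can be rewritten over complementarily labeled data alone:

R(f) = \mathbb{E}_{(x,\bar{y})}\!\left[ \sum_{j=1}^{K} \ell(f(x), j) \; - \; (K-1)\,\ell(f(x), \bar{y}) \right].

The negative term -(K-1)\,\ell(f(x), \bar{y}) is unbounded below for unbounded losses such as cross-entropy, which illustrates the unbounded-gradient and overfitting issue the abstract mentions.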
Keywords:
Machine Learning: ML: Classification
Machine Learning: ML: Multi-label
Machine Learning: ML: Weakly supervised learning