Boosting Multi-Label Image Classification with Complementary Parallel Self-Distillation

Jiazhi Xu, Sheng Huang, Fengtao Zhou, Luwen Huangfu, Daniel Zeng, Bo Liu

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1495-1501. https://doi.org/10.24963/ijcai.2022/208

Multi-Label Image Classification (MLIC) approaches usually exploit label correlations to achieve good performance. However, emphasizing correlations such as co-occurrence may overlook discriminative features and lead to model overfitting. In this study, we propose a generic framework named Parallel Self-Distillation (PSD) for boosting MLIC models. PSD decomposes the original MLIC task into several simpler MLIC sub-tasks via two elaborated complementary task decomposition strategies named Co-occurrence Graph Partition (CGP) and Dis-occurrence Graph Partition (DGP). Then, MLIC models over fewer categories are trained on these sub-tasks in parallel to learn the joint patterns and the category-specific patterns of labels, respectively. Finally, knowledge distillation is leveraged to learn a compact global model over the full category set from these learned patterns, reconciling label-correlation exploitation with model overfitting. Extensive results on the MS-COCO and NUS-WIDE datasets demonstrate that our framework can be easily plugged into many MLIC approaches and improves the performance of recent state-of-the-art methods. The source code is released at https://github.com/Robbie-Xu/CPSD.
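The pipeline described in the abstract — partition the label set, predict each subset with a parallel sub-model, then distill the assembled ensemble into a full-category student — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the partition lists, the `assemble_teacher` helper, and the mixed BCE distillation objective (`alpha` weighting) are all assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, q, eps=1e-7):
    # Mean binary cross-entropy of predicted probabilities p against (soft) targets q.
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(q * np.log(p) + (1 - q) * np.log(1 - p)).mean())

def assemble_teacher(sub_probs, partitions, num_labels):
    # Concatenate each sub-model's predictions (over its label subset, e.g. a
    # CGP or DGP partition) back into a full-label teacher prediction vector.
    teacher = np.zeros(num_labels)
    for probs, idx in zip(sub_probs, partitions):
        teacher[np.asarray(idx)] = probs
    return teacher

def psd_loss(student_logits, teacher_probs, targets, alpha=0.5):
    # Hypothetical distillation objective: mix the ground-truth BCE with a BCE
    # against the parallel-teacher ensemble (alpha balances the two terms).
    p = sigmoid(student_logits)
    return alpha * bce(p, targets) + (1 - alpha) * bce(p, teacher_probs)

# Toy usage: 4 labels, split into two disjoint sub-tasks.
partitions = [[0, 2], [1, 3]]
sub_probs = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
teacher = assemble_teacher(sub_probs, partitions, num_labels=4)
loss = psd_loss(np.array([2.0, 1.0, -2.0, -1.0]), teacher,
                targets=np.array([1.0, 1.0, 0.0, 0.0]))
```

In this sketch each label index belongs to exactly one partition, so the teacher vector is fully covered; the student sees all categories at once and is pulled toward both the ground truth and the ensemble's soft predictions.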
Keywords:
Computer Vision: Recognition (object detection, categorization)
Computer Vision: Machine Learning for Vision
Machine Learning: Multi-label