Identifying and Reusing Learnwares Across Different Label Spaces

Jian-Dong Liu, Zhi-Hao Tan, Zhi-Hua Zhou

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5734-5742. https://doi.org/10.24963/ijcai.2025/638

The learnware paradigm focuses on leveraging numerous established high-performing models to solve machine learning tasks instead of starting from scratch. As the key concept of this paradigm, a learnware consists of a well-trained model of any structure and a specification that characterizes the model's capabilities, allowing it to be identified and reused for future tasks. Given the existence of numerous real-world models trained on diverse label spaces, effectively identifying and combining these models to address tasks involving previously unseen label spaces represents a critical challenge in this paradigm. In this paper, we make the first attempt to identify and reuse effective learnware combinations for tackling learning tasks across different label spaces, extending their applicability beyond the original purposes of individual learnwares. To this end, we introduce a statistical class-wise specification for establishing similarity relations between various label spaces. Leveraging these relations, we model the utility of a learnware combination as a minimum-cost maximum-flow problem, and further develop fine-grained learnware identification and assembly methods. Extensive experiments with thousands of heterogeneous models validate our approach, demonstrating that reusing identified learnware combinations can outperform both training from scratch and fine-tuning a generic pre-trained model.
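The paper's actual graph construction for scoring learnware combinations is not given in this abstract. As an illustration of the general technique it names, the sketch below models class matching between a task's label space and learnware classes as a min-cost max-flow problem: unit-capacity edges connect task classes to learnware classes, with edge costs derived from (hypothetical) class-wise similarity scores, and a successive-shortest-path solver finds the cheapest full matching. All node layouts, similarity values, and costs here are invented for illustration and are not the paper's formulation.

```python
from collections import deque

def min_cost_max_flow(n, edges, s, t):
    """Successive shortest paths (SPFA) min-cost max-flow.
    edges: list of (u, v, capacity, cost). Returns (flow, total_cost)."""
    graph = [[] for _ in range(n)]  # adjacency: [to, cap, cost, rev_index]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])  # residual edge
    flow = total_cost = 0
    while True:
        # Bellman-Ford/SPFA shortest path on residual graph by cost
        dist = [float("inf")] * n
        prev = [None] * n          # (node, edge index) back-pointers
        in_queue = [False] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            in_queue[u] = False
            for i, (v, cap, cost, _) in enumerate(graph[u]):
                if cap > 0 and dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    prev[v] = (u, i)
                    if not in_queue[v]:
                        in_queue[v] = True
                        queue.append(v)
        if dist[t] == float("inf"):
            break                  # no augmenting path left
        # Find bottleneck capacity along the path, then push flow
        push, v = float("inf"), t
        while v != s:
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost

# Hypothetical example: task classes (nodes 1-3) matched to learnware
# classes (nodes 4-6); cost = scaled (1 - similarity), all values invented.
S, T = 0, 7
edges = [
    (S, 1, 1, 0), (S, 2, 1, 0), (S, 3, 1, 0),   # source -> task classes
    (1, 4, 1, 1), (1, 5, 1, 8),                 # similarity-derived costs
    (2, 4, 1, 7), (2, 5, 1, 2),
    (3, 5, 1, 6), (3, 6, 1, 3),
    (4, T, 1, 0), (5, T, 1, 0), (6, T, 1, 0),   # learnware classes -> sink
]
flow, cost = min_cost_max_flow(8, edges, S, T)
# flow == 3: every task class is covered; cost == 6: cheapest matching
```

In this toy instance the solver covers all three task classes (flow 3) at minimum total cost 6, picking the matching 1→4, 2→5, 3→6; a real system would derive edge costs from the paper's statistical class-wise specifications rather than hand-set values.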
Keywords:
Machine Learning: ML: Learnware/model reuse/transfer learning
Machine Learning: ML: Classification
Machine Learning: ML: Ensemble methods
Machine Learning: ML: Kernel methods