Modal Consistency based Pre-Trained Multi-Model Reuse
Yang Yang, De-Chuan Zhan, Xiang-Yu Guo, Yuan Jiang
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3287-3293.
https://doi.org/10.24963/ijcai.2017/459
Multi-Model Reuse is one of the prominent problems in the Learnware framework, and its central difficulty lies in obtaining a final prediction from the responses of multiple pre-trained models. Different from multi-classifier ensembles, the Multi-Model Reuse configuration provides only pre-trained models rather than the full training sets. This configuration is closer to real applications, where the reliability of each model cannot be evaluated properly. In this paper, to address this lack of reliability evaluation, we exploit the potential consistency across different modalities. Based on the consistency of pre-trained models on different modalities, we propose a Pre-trained Multi-Model Reuse approach, PM2R, for multi-modal data, which realizes the reusability of multiple models. PM2R can combine pre-trained models efficiently without re-training, so no additional training data storage is required. We describe this more realistic Multi-Model Reuse setting comprehensively and point out the differences among this setting, classifier ensemble, and late fusion in multi-modal learning. Experiments on synthetic and real-world datasets validate the effectiveness of PM2R compared with state-of-the-art ensemble/multi-modal learning methods under this more realistic setting.
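For intuition, the sketch below (Python/NumPy) illustrates one way to combine per-modality predictions by weighting each pre-trained model according to its agreement with the models on other modalities, using no training data. The function name pm2r_style_combine and the specific agreement-based weighting are illustrative assumptions for exposition, not the exact PM2R procedure from the paper.

import numpy as np

def pm2r_style_combine(pred_lists):
    # pred_lists: list of length M (one entry per modality); each entry is an
    # (n_samples,) array of class predictions from that modality's pre-trained
    # model on the same unlabeled test instances.
    # Returns a fused (n_samples,) array of predictions.
    # NOTE: hypothetical consistency-weighted voting, not the paper's algorithm.
    preds = np.stack(pred_lists)          # shape (M, n_samples)
    M, n = preds.shape

    # Estimate each model's reliability as its average agreement with the
    # models on the other modalities (no training data or re-training needed).
    weights = np.zeros(M)
    for i in range(M):
        agree = [np.mean(preds[i] == preds[j]) for j in range(M) if j != i]
        weights[i] = np.mean(agree) if agree else 1.0
    weights /= weights.sum()

    # Weighted plurality vote per test instance.
    classes = np.unique(preds)
    fused = np.empty(n, dtype=preds.dtype)
    for t in range(n):
        scores = {c: weights[preds[:, t] == c].sum() for c in classes}
        fused[t] = max(scores, key=scores.get)
    return fused

Usage: given predictions from, say, an image-modality model and a text-modality model on the same test set, pm2r_style_combine([img_preds, txt_preds]) returns a single fused label vector without touching any training data.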
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Multi-instance/Multi-label/Multi-view learning