Towards Robust Model Reuse in the Presence of Latent Domains

Jie-Jing Shao, Zhanzhan Cheng, Yu-Feng Li, Shiliang Pu

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2957-2963. https://doi.org/10.24963/ijcai.2021/407

Model reuse aims to adapt well pre-trained models to a new target task without access to the raw training data. It has attracted much attention because it reduces the resources required for learning. Previous model reuse studies typically operate in a single-domain scenario, i.e., the target samples arise from a single domain. In practice, however, the target samples often arise from multiple latent (unknown) domains; for example, images of cars may come from latent domains such as photographs, line drawings, and cartoons. Methods designed for a single domain may no longer be feasible under multiple latent domains and can even degrade performance. To address this issue, in this paper we propose the MRL (Model Reuse for multiple Latent domains) method, which considers both domain characteristics and the pre-trained models when exploiting the instances of the target task. Theoretically, these considerations are packed into a bi-level optimization framework with a reliable generalization guarantee. Moreover, through an ensemble of multiple models, the robustness of the method is improved with a theoretical guarantee. Empirical results on diverse real-world datasets clearly validate the effectiveness of the proposed algorithms.
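To make the ensemble idea concrete, below is a minimal conceptual sketch, not the paper's MRL algorithm: given several frozen pre-trained models and target instances that may come from different latent domains, a small gating network produces per-instance weights over the models, and predictions are combined as a convex combination. All names here (GatingNet, ensemble_predict, the toy dimensions) are illustrative assumptions.

```python
# Conceptual sketch of per-instance model ensembling under latent domains.
# Assumption: this is NOT the MRL method from the paper, only an illustration
# of weighting frozen pre-trained models per target instance.
import torch
import torch.nn as nn

class GatingNet(nn.Module):
    """Maps a target instance to a weight vector over the pre-trained models."""
    def __init__(self, in_dim: int, num_models: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_models)

    def forward(self, x):
        # Softmax so the weights form a convex combination over models,
        # letting each instance lean on the models suited to its latent domain.
        return torch.softmax(self.fc(x), dim=-1)

def ensemble_predict(models, gate, x):
    """Combine frozen model logits using the gate's per-instance weights."""
    with torch.no_grad():
        logits = torch.stack([m(x) for m in models], dim=1)  # (B, M, C)
    w = gate(x).unsqueeze(-1)                                # (B, M, 1)
    return (w * logits).sum(dim=1)                           # (B, C)

# Toy usage: three linear "pre-trained" models, 16-d inputs, 4 classes.
models = [nn.Linear(16, 4) for _ in range(3)]
for m in models:
    m.requires_grad_(False)  # reuse models as-is; only the gate is trainable
gate = GatingNet(in_dim=16, num_models=3)
x = torch.randn(8, 16)
print(ensemble_predict(models, gate, x).shape)  # torch.Size([8, 4])
```

In the paper's framing, learning such instance-level combination weights alongside the target predictor would be posed as a bi-level optimization; the sketch above only shows the forward combination step.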
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Semi-Supervised Learning