Consistent MetaReg: Alleviating Intra-task Discrepancy for Better Meta-knowledge

Pinzhuo Tian, Lei Qi, Shaokang Dong, Yinghuan Shi, Yang Gao

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2718-2724. https://doi.org/10.24963/ijcai.2020/377

In the few-shot learning scenario, a data-distribution discrepancy between a task's training data and test data usually exists because the data are limited. However, most existing meta-learning approaches seldom consider this intra-task discrepancy during the meta-training phase, which can degrade performance. To overcome this limitation, we develop a new consistent meta-regularization method to reduce the intra-task data-distribution discrepancy. Moreover, the proposed meta-regularization can be readily inserted into existing optimization-based meta-learning models to learn better meta-knowledge. In particular, we provide a theoretical analysis proving that, with the proposed meta-regularization, the conventional gradient-based meta-learning method achieves a lower regret bound. Extensive experiments also demonstrate the effectiveness of our method, which improves the performance of state-of-the-art gradient-based meta-learning models on the few-shot classification task.
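To illustrate how such a consistency regularizer might be inserted into an optimization-based meta-learner, the sketch below adds a penalty on the gap between support (training) and query (test) feature statistics inside a MAML-style inner loop. This is only a minimal sketch under stated assumptions: the mean-feature penalty, the encoder/head split of the model, and the hyperparameter names are illustrative choices, since the abstract does not specify the paper's exact regularizer.

    import copy
    import torch
    import torch.nn.functional as F

    def feature_discrepancy(support_feats, query_feats):
        # Hypothetical intra-task penalty: squared distance between the
        # mean feature embeddings of the support and query sets, a simple
        # moment-matching proxy for the intra-task distribution gap.
        return (support_feats.mean(dim=0) - query_feats.mean(dim=0)).pow(2).sum()

    def adapt_with_consistency(model, support_x, support_y, query_x,
                               lam=0.1, inner_lr=0.01, inner_steps=5):
        # First-order sketch of a MAML-style inner loop with the added
        # regularizer; a full gradient-based meta-learner would also
        # backpropagate through these updates in the outer loop.
        # Assumes `model` exposes a feature extractor (`encoder`) and a
        # classifier (`head`).
        learner = copy.deepcopy(model)
        opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            s_feats = learner.encoder(support_x)
            q_feats = learner.encoder(query_x)        # query labels are not used
            task_loss = F.cross_entropy(learner.head(s_feats), support_y)
            loss = task_loss + lam * feature_discrepancy(s_feats, q_feats)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return learner

The key design point the abstract emphasizes is that the penalty is task-internal: it aligns the support and query distributions within each sampled task during meta-training, rather than aligning distributions across tasks.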
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning