Self-Paced Multitask Learning with Shared Knowledge

Keerthiram Murugesan, Jaime Carbonell

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2522-2528. https://doi.org/10.24963/ijcai.2017/351

This paper introduces self-paced task selection to multitask learning, where instances from more closely related tasks are selected in a progression from easier to harder tasks, emulating an effective human education strategy applied to multitask machine learning. We develop the mathematical foundation for the approach based on iterative selection of the most appropriate task, learning the task parameters, and updating the shared knowledge, optimizing a new bi-convex loss function. The proposed method applies quite generally, including to multitask feature learning and multitask learning with alternating structure optimization. Results show that in each of these formulations, self-paced (easier-to-harder) task selection outperforms the baseline version of the method in all experiments.
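The abstract describes an iterative scheme: pick the currently "easiest" tasks, update their parameters, then refresh a shared-knowledge component. The sketch below is an illustrative toy version of that loop (not the paper's actual algorithm or loss): each task is a logistic-regression problem, a growing pace threshold admits tasks by increasing difficulty, and the shared knowledge is modeled, for simplicity, as the mean of the per-task weight vectors. The regularization strength `mu`, the pace schedule, and the choice of mean-as-shared-knowledge are all assumptions made for illustration.

```python
import numpy as np

def self_paced_multitask(tasks, n_rounds=20, lr=0.1, mu=0.1,
                         init_pace=0.5, pace_growth=1.5):
    """Toy self-paced multitask learner (illustrative, not the paper's method).

    tasks: list of (X, y) pairs with labels in {-1, +1}.
    Tasks whose current logistic loss is below the pace threshold are
    trained this round; the threshold grows so harder tasks enter later.
    A shared weight vector (mean of task weights) acts as shared knowledge.
    """
    d = tasks[0][0].shape[1]
    W = [np.zeros(d) for _ in tasks]          # per-task parameters
    shared = np.zeros(d)                      # shared knowledge
    pace = init_pace
    for _ in range(n_rounds):
        # current per-task logistic losses
        losses = [np.mean(np.log1p(np.exp(-y * (X @ w))))
                  for (X, y), w in zip(tasks, W)]
        for t, ((X, y), w) in enumerate(zip(tasks, W)):
            if losses[t] <= pace:             # self-paced selection: easy tasks first
                margin = y * (X @ w)
                # gradient of mean logistic loss
                grad = -(X * (y / (1.0 + np.exp(margin)))[:, None]).mean(axis=0)
                grad += mu * (w - shared)     # pull task weights toward shared knowledge
                W[t] = w - lr * grad
        shared = np.mean(W, axis=0)           # update shared knowledge
        pace *= pace_growth                   # admit harder tasks next round
    return W, shared
```

With related tasks (e.g. labels determined by the same feature), the per-task losses drop below the initial value of log 2 as training proceeds, while unrelated or noisy tasks enter the pool only once the threshold has grown.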
Keywords:
Machine Learning: Feature Selection/Construction
Machine Learning: Transfer, Adaptation, Multi-task Learning