AccGenSVM: Selectively Transferring from Previous Hypotheses

Diana Benavides-Prado, Yun Sing Koh, Patricia Riddle

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 1440-1446. https://doi.org/10.24963/ijcai.2017/199

In our research, we consider transfer learning scenarios where a target learner does not have access to the source data, but instead to hypotheses or models induced from it. This is called the Hypothesis Transfer Learning (HTL) problem. Previous approaches have concentrated on transferring source hypotheses as a whole. We introduce a novel method for selectively transferring elements from previous hypotheses learned with Support Vector Machines. The representation of an SVM hypothesis as a set of support vectors allows us to treat this information as privileged knowledge that aids learning during a new task. Given a possibly large number of source hypotheses, our approach selects the source support vectors that most closely resemble the target data, and transfers their learned coefficients as constraints on the coefficients to be learned. This strategy increases the importance of relevant target data points based on their similarity to source support vectors, while still learning from the target data. Our method shows significant improvements in convergence rate on three classification datasets of varying sizes, decreasing the number of iterations by up to 56% on average compared to learning with no transfer and up to 92% compared to regular HTL, while maintaining similar accuracy levels.
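The selection step described above can be loosely sketched in code. The snippet below is an illustrative approximation, not the authors' AccGenSVM implementation: it trains a hypothetical source SVM, keeps the source support vectors most similar to the target data (similarity threshold and RBF similarity measure are assumptions), and upweights target points by their kernel similarity to those retained support vectors, scaled by the source dual coefficients, via scikit-learn's `sample_weight`. The actual method instead imposes the transferred coefficients as constraints in the SVM optimization.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Hypothetical source task: train an SVM and keep only its hypothesis,
# i.e. the support vectors and their learned dual coefficients.
X_src = rng.normal(size=(200, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
src_svm = SVC(kernel="rbf").fit(X_src, y_src)
sv = src_svm.support_vectors_
sv_coef = np.abs(src_svm.dual_coef_).ravel()

# Small target task; the source data itself is assumed unavailable.
X_tgt = rng.normal(size=(40, 2))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(int)

# Select the source support vectors that most resemble the target data
# (median similarity as cut-off is an assumption for illustration).
sim_to_target = rbf_kernel(sv, X_tgt).max(axis=1)
keep = sim_to_target > np.median(sim_to_target)

# Increase the importance of target points similar to retained source
# support vectors, weighted by the transferred coefficients.
w = 1.0 + rbf_kernel(X_tgt, sv[keep]) @ sv_coef[keep]
tgt_svm = SVC(kernel="rbf").fit(X_tgt, y_tgt, sample_weight=w)
```

Here the transferred coefficients only reweight target examples; encoding them as hard constraints on the target dual variables would require a custom SVM solver.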
Keywords:
Machine Learning: Classification
Machine Learning: Machine Learning
Machine Learning: Transfer, Adaptation, Multi-task Learning