Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation

Cheng Feng, Chaoliang Zhong, Jie Wang, Ying Zhang, Jun Sun, Yasuto Yokota

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2958-2965. https://doi.org/10.24963/ijcai.2022/410

Existing unsupervised domain adaptation (UDA) studies focus on transferring knowledge in an offline manner. However, many tasks involve online requirements, especially in real-time systems. In this paper, we discuss Online UDA (OUDA), which assumes that target samples arrive sequentially in small batches. OUDA tasks are challenging for prior UDA methods, since online training suffers from catastrophic forgetting, which leads to poor generalization. Intuitively, a good memory is a crucial factor in the success of OUDA. We formalize this intuition theoretically with a generalization bound in which the OUDA target error is bounded by the source error, the domain discrepancy distance, and a novel metric on forgetting in continuous online learning. Our theory illustrates the tradeoffs inherent in learning and remembering representations for OUDA. To minimize the proposed forgetting metric, we propose a novel source feature distillation (SFD) method, which uses the source-only model as a teacher to guide the online training. In experiments, we adapt three UDA algorithms, i.e., DANN, CDAN, and MCC, to the OUDA setting and evaluate their performance on real-world datasets. By applying SFD, the performance of all baselines is significantly improved.
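
The abstract only describes SFD at a high level. As a rough illustration, the sketch below shows one possible online update step in PyTorch, where a frozen copy of the source-only model serves as the teacher and a feature-matching (MSE) term regularizes the online model against forgetting. The names (`student`, `teacher`, `uda_loss_fn`, `lam`) and the choice of MSE are our assumptions, not the paper's exact formulation.

```python
import copy
import torch
import torch.nn.functional as F

def sfd_loss(student_feats, teacher_feats):
    # Penalize drift of the online model's features away from the
    # frozen source-only teacher (assumed distillation objective).
    return F.mse_loss(student_feats, teacher_feats.detach())

def online_adaptation_step(student, teacher, optimizer, target_batch,
                           uda_loss_fn, lam=1.0):
    """One online update on a small batch of unlabeled target samples."""
    teacher.eval()
    with torch.no_grad():
        t_feats = teacher(target_batch)   # source-only features (teacher)
    s_feats = student(target_batch)       # current online features (student)

    # UDA adaptation objective (e.g., from DANN/CDAN/MCC) plus the
    # source feature distillation term weighted by a hypothetical lam.
    loss = uda_loss_fn(s_feats) + lam * sfd_loss(s_feats, t_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Typical setup (hypothetical): the teacher is a frozen copy of the
# source-trained feature extractor, while the student keeps training online.
# teacher = copy.deepcopy(source_model).requires_grad_(False)
```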
Keywords:
Machine Learning: Online Learning
Machine Learning: Learning Theory
Machine Learning: Theory of Deep Learning
Machine Learning: Unsupervised Learning