Counterfactual Knowledge Maintenance for Unsupervised Domain Adaptation
Yao Li, Yong Zhou, Jiaqi Zhao, Wen-liang Du, Rui Yao, Bing Liu
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 1476-1484.
https://doi.org/10.24963/ijcai.2025/165
Traditional unsupervised domain adaptation (UDA) methods struggle to extract rich semantics due to backbone limitations. Recent large-scale pre-trained visual-language models (VLMs) have shown strong zero-shot learning capabilities in UDA tasks. However, directly using VLMs yields representations in which semantic and domain-specific information are entangled, complicating knowledge transfer. Complex scenes with subtle semantic differences are prone to misclassification, which in turn can cause the loss of features that are crucial for distinguishing between classes. To address these challenges, we propose a novel counterfactual knowledge maintenance UDA framework. Specifically, we employ counterfactual disentanglement to separate the representation of semantic information from domain features, thereby reducing domain bias. Furthermore, to clarify ambiguous class-specific visual information, we maintain the discriminative knowledge of both the visual and textual modalities. This approach synergistically leverages multimodal information to preserve modality-specific distinguishable features. We conducted extensive experimental evaluations on several public datasets to demonstrate the effectiveness of our method. The source code is available at https://github.com/LiYaolab/CMKUDA.
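The abstract only outlines the idea at a high level; the following is a minimal, hedged sketch of what counterfactual disentanglement on frozen VLM features *could* look like, not the authors' implementation. The module names (Disentangler, semantic/domain heads), the additive feature recombination, and the CLIP-style temperature are assumptions introduced purely for illustration; see the repository above for the actual method.

```python
# Illustrative sketch only (NOT the CMKUDA implementation).
# Assumptions: image features `img_src`/`img_tgt` and per-class text
# prototypes `txt_proto` are pre-extracted from a frozen VLM (e.g. CLIP);
# the disentangler and the counterfactual swap loss below are hypothetical
# stand-ins for the paper's counterfactual disentanglement idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Disentangler(nn.Module):
    """Splits a VLM feature into a semantic part and a domain part."""

    def __init__(self, dim: int):
        super().__init__()
        self.semantic_head = nn.Linear(dim, dim)  # class-relevant content
        self.domain_head = nn.Linear(dim, dim)    # style / domain-specific cues


    def forward(self, feat: torch.Tensor):
        return self.semantic_head(feat), self.domain_head(feat)


def counterfactual_swap_loss(sem_src, dom_tgt, txt_proto, labels_src, tau=0.07):
    """Pair source semantics with *target* domain factors ("what if this
    content appeared in the other domain?") and require the prediction to
    stay consistent with the source label, pushing the semantic branch to
    be domain-invariant."""
    counterfactual = sem_src + dom_tgt  # hypothetical recombination of factors
    logits = F.normalize(counterfactual, dim=-1) @ F.normalize(txt_proto, dim=-1).t()
    return F.cross_entropy(logits / tau, labels_src)


if __name__ == "__main__":
    B, D, C = 8, 512, 31                                       # batch, feature dim, classes
    img_src, img_tgt = torch.randn(B, D), torch.randn(B, D)    # placeholder VLM image features
    txt_proto = torch.randn(C, D)                               # one text embedding per class prompt
    labels_src = torch.randint(0, C, (B,))

    disentangler = Disentangler(D)
    sem_s, _ = disentangler(img_src)
    _, dom_t = disentangler(img_tgt)
    loss = counterfactual_swap_loss(sem_s, dom_t, txt_proto, labels_src)
    loss.backward()
    print(f"counterfactual consistency loss: {loss.item():.4f}")
```

In this sketch the text prototypes act as the classifier, so keeping them (and the semantic image features) discriminative corresponds loosely to the "knowledge maintenance" described above; the real framework's losses and architecture are given in the paper.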
Keywords:
Computer Vision: CV: Low-level Vision
Computer Vision: CV: Multimodal learning
Computer Vision: CV: Representation learning
Computer Vision: CV: Scene analysis and understanding
