Noise Optimized Conditional Diffusion for Domain Adaptation
Lingkun Luo, Shiqiang Hu, Liming Chen
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 1729-1737.
https://doi.org/10.24963/ijcai.2025/193
Pseudo-labeling is a cornerstone of Unsupervised Domain Adaptation (UDA), yet the scarcity of High-Confidence Pseudo-Labeled Target Domain Samples (hcpl-tds) often leads to inaccurate cross-domain statistical alignment, causing DA failures. To address this challenge, we propose Noise Optimized Conditional Diffusion for Domain Adaptation (NOCDDA), which seamlessly integrates the generative capabilities of conditional diffusion models with the decision-making requirements of DA to achieve task-coupled optimization for efficient adaptation. For robust cross-domain consistency, we modify the DA classifier to align with the conditional diffusion classifier within a unified optimization framework, enabling forward training on noise-varying cross-domain samples. Furthermore, we argue that the conventional N(0,I) initialization in diffusion models often generates class-confused hcpl-tds, compromising discriminative DA. To resolve this, we introduce a class-aware noise optimization strategy that refines sampling regions for reverse class-specific hcpl-tds generation, effectively enhancing cross-domain alignment. Extensive experiments across 5 benchmark datasets and 29 DA tasks demonstrate significant performance gains of NOCDDA over 31 state-of-the-art methods, validating its robustness and effectiveness.
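The class-aware noise optimization described above can be illustrated with a minimal sketch (all names, shapes, and statistics here are hypothetical illustrations, not the paper's implementation): instead of starting reverse diffusion from the conventional N(0, I), each class draws its starting noise from a class-specific Gaussian region, so that samples intended for different classes begin in separated parts of the noise space.

```python
import numpy as np

def standard_init(shape, rng):
    """Conventional diffusion start: x_T ~ N(0, I)."""
    return rng.standard_normal(shape)

def class_aware_init(class_stats, label, shape, rng):
    """Class-aware start: draw x_T from a per-class Gaussian
    N(mu_c, sigma_c^2 I). In NOCDDA the per-class sampling regions
    would be refined during optimization; here they are fixed
    illustrative statistics."""
    mu, sigma = class_stats[label]
    return mu + sigma * rng.standard_normal(shape)

# Hypothetical per-class noise statistics (mean vector, std scalar).
class_stats = {
    0: (np.full(4, -1.5), 0.5),
    1: (np.full(4, 1.5), 0.5),
}

rng = np.random.default_rng(0)
x_class0 = class_aware_init(class_stats, 0, (4,), rng)
x_class1 = class_aware_init(class_stats, 1, (4,), rng)
# The two starting points lie in separated noise regions, which is the
# property the abstract argues N(0, I) initialization lacks.
```

This only demonstrates the separation of class-specific sampling regions; the actual method couples this initialization with the conditional diffusion classifier and the DA objective in a unified framework.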
Keywords:
Computer Vision: CV: Transfer, low-shot, semi- and un- supervised learning
Computer Vision: CV: Machine learning for vision
Machine Learning: ML: Multi-task and transfer learning
Machine Learning: ML: Unsupervised learning
