Robust Domain Adaptation: Representations, Weights and Inductive Bias (Extended Abstract)

Victor Bouvier, Philippe Very, Clément Chastagnol, Myriam Tami, Céline Hudelot

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 4750-4754. https://doi.org/10.24963/ijcai.2021/644

Domain Invariant Representations (IR) have drastically improved the transferability of representations from a labelled source domain to a new, unlabelled target domain. However, Unsupervised Domain Adaptation (UDA) in the presence of label shift remains an open problem. To address it, we present a bound on the target risk that incorporates both weights and invariant representations. Our theoretical analysis highlights the role of inductive bias in aligning distributions across domains. We illustrate this on standard benchmarks by proposing a new learning procedure for UDA. We observe empirically that a weak inductive bias makes adaptation robust to label shift. The elaboration of stronger inductive biases is a promising direction for new UDA algorithms.
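To make the interplay between weights and invariant representations concrete, here is a minimal PyTorch sketch of importance-weighted adversarial feature alignment, the general family of procedures the bound speaks to. The network sizes, the `class_weights` argument (per-class weights w(y), assumed estimated elsewhere, e.g. from predicted target label proportions), and the non-saturating adversarial loss are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch: weighted invariant-representation learning for UDA (assumed setup,
# not the paper's exact procedure). Source features are reweighted by w(y)
# before being aligned with target features via a domain discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
clf = nn.Linear(64, 10)  # label predictor on top of the representation
disc = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))  # domain critic

opt_main = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

def step(x_src, y_src, x_tgt, class_weights, lam=1.0):
    """One update: weighted source/target alignment plus source classification."""
    w = class_weights[y_src]                 # per-example weights w(y), assumed given
    z_src, z_tgt = feat(x_src), feat(x_tgt)

    # Discriminator step: separate *weighted* source features from target features.
    d_src = disc(z_src.detach()).squeeze(1)
    d_tgt = disc(z_tgt.detach()).squeeze(1)
    loss_d = (w * F.softplus(-d_src)).mean() + F.softplus(d_tgt).mean()
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # Main step: classify source examples and fool the discriminator,
    # pushing the weighted source and target feature distributions together.
    loss_cls = (w * F.cross_entropy(clf(z_src), y_src, reduction="none")).mean()
    loss_adv = (w * F.softplus(disc(z_src).squeeze(1))).mean() \
               + F.softplus(-disc(z_tgt).squeeze(1)).mean()
    loss = loss_cls + lam * loss_adv
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss_cls.item(), loss_d.item()
```

Setting `class_weights` to all ones recovers plain invariant-representation alignment; under label shift, the weights are what allow the weighted source feature distribution, rather than the raw one, to be matched to the target, which is the failure mode the paper's bound makes explicit. The weight-estimation step itself is omitted here.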
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Learning Theory