Stabilizing Adversarial Invariance Induction from Divergence Minimization Perspective

Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 1955-1962. https://doi.org/10.24963/ijcai.2020/271

Adversarial invariance induction (AII) is a generic and powerful framework for enforcing invariance to nuisance attributes in neural network representations. However, its optimization is often unstable, and little is known about its practical behavior. This paper analyzes the reasons for these optimization difficulties and provides a better optimization procedure by rethinking AII from a divergence minimization perspective. Interestingly, this perspective reveals a cause of the difficulties: the AII objective does not ensure proper divergence minimization, which is required for learning invariant representations. We then propose a simple variant of AII, called invariance induction by discriminator matching, which takes the divergence minimization interpretation of invariant representations into account. Our method consistently achieves near-optimal invariance on toy datasets across various configurations in which the original AII is catastrophically unstable. Extensive experiments on four real-world datasets also support the superior performance of the proposed method, leading to improved user anonymization and domain generalization.
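For readers unfamiliar with AII, the sketch below illustrates the standard adversarial min-max training it refers to: an encoder and task head minimize the task loss while fooling a discriminator that tries to predict the nuisance attribute from the representation. This is a minimal PyTorch sketch of the generic framework, not this paper's implementation or its proposed variant; the network sizes, the weight `lam`, and the helper `aii_step` are illustrative assumptions.

```python
# Minimal sketch of the generic AII min-max objective (illustrative, not the
# paper's code): encoder E and task head C minimize the task loss while
# maximizing the loss of a discriminator D that predicts the nuisance
# attribute a from z = E(x). Sizes and lam are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, n_classes, n_nuisance, lam = 32, 16, 10, 4, 1.0

E = nn.Sequential(nn.Linear(x_dim, z_dim), nn.ReLU())  # encoder
C = nn.Linear(z_dim, n_classes)                        # task classifier
D = nn.Linear(z_dim, n_nuisance)                       # nuisance discriminator

opt_main = torch.optim.Adam(list(E.parameters()) + list(C.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(D.parameters(), lr=1e-3)

def aii_step(x, y, a):
    # 1) Discriminator step: fit D to predict the nuisance attribute a
    #    from a detached (fixed) representation.
    z = E(x).detach()
    d_loss = F.cross_entropy(D(z), a)
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) Main step: minimize the task loss while *maximizing* D's loss,
    #    i.e., the negated adversarial term -lam * CE(D(z), a).
    z = E(x)
    main_loss = F.cross_entropy(C(z), y) - lam * F.cross_entropy(D(z), a)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
    return main_loss.item(), d_loss.item()

# Toy usage with random data.
x = torch.randn(8, x_dim)
y = torch.randint(0, n_classes, (8,))
a = torch.randint(0, n_nuisance, (8,))
print(aii_step(x, y, a))
```

As the abstract notes, this negated-cross-entropy formulation is the source of the instability the paper analyzes: maximizing the discriminator's loss does not by itself guarantee that the divergence between attribute-conditioned representation distributions is minimized, which motivates the proposed discriminator-matching variant.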
Keywords:
Machine Learning: Deep Learning
Machine Learning: Transfer, Adaptation, Multi-task Learning