Robust Regularization with Adversarial Labelling of Perturbed Samples

Xiaohui Guo, Richong Zhang, Yaowei Zheng, Yongyi Mao

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2490-2496. https://doi.org/10.24963/ijcai.2021/343

Recent research suggests that the predictive accuracy of a neural network may be at odds with its adversarial robustness. This presents a challenge in designing effective regularization schemes that also provide strong adversarial robustness. Revisiting Vicinal Risk Minimization (VRM) as a unifying regularization principle, we propose Adversarial Labelling of Perturbed Samples (ALPS), a regularization scheme that aims to improve both the generalization ability and the adversarial robustness of the trained model. ALPS trains neural networks on synthetic samples, each formed by perturbing an authentic input sample towards another one and paired with an adversarially assigned label. The ALPS objective is formulated as a min-max problem, in which the outer problem minimizes an upper bound of the VRM loss, and the inner problem performs L1-ball-constrained adversarial labelling of the perturbed samples. We derive the analytic solution to the inner maximization problem, which makes ALPS computationally efficient. Experiments on the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves state-of-the-art regularization performance while also serving as an effective adversarial training scheme.
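To make the min-max structure concrete, the sketch below shows what one ALPS-style training step could look like in PyTorch. It is an illustration of the idea in the abstract, not the paper's exact formulation: the perturbation is assumed to be mixup-style interpolation with a fixed coefficient lam, the base label is assumed to interpolate the two one-hot labels, the inner adversarial labelling exploits the fact that cross-entropy is linear in the label vector (so its maximum over an eps-L1 ball around the base label admits a closed form, simplified here to a single mass transfer between two classes), and the function name alps_style_step is hypothetical.

```python
# A minimal, hypothetical sketch of an ALPS-style training step in PyTorch.
# The perturbation, base label, and closed-form inner solution below are
# illustrative assumptions, not the paper's exact derivation.

import torch
import torch.nn.functional as F

def alps_style_step(model, optimizer, x, y, num_classes, lam=0.2, eps=0.1):
    """One training step on perturbed samples with adversarial soft labels."""
    idx = torch.randperm(x.size(0), device=x.device)
    x_tilde = (1.0 - lam) * x + lam * x[idx]               # perturb towards a partner

    y_onehot = F.one_hot(y, num_classes).float()
    y_base = (1.0 - lam) * y_onehot + lam * y_onehot[idx]  # interpolated base label

    logits = model(x_tilde)
    logp = F.log_softmax(logits, dim=1)

    with torch.no_grad():
        # Inner maximization (illustrative closed form): cross-entropy
        # -sum(y * logp) is linear in y, so its maximum over the eps-L1 ball
        # around y_base (intersected with the simplex) moves label mass from
        # the coordinate with the smallest -logp to the one with the largest.
        # Here we simplify to a single source class and a single target class.
        rows = torch.arange(x.size(0), device=x.device)
        easiest = logp.argmax(dim=1)   # model's most confident class (cheap for the loss)
        hardest = logp.argmin(dim=1)   # model's least confident class (expensive)
        y_adv = y_base.clone()
        delta = torch.clamp(y_adv[rows, easiest], max=eps / 2)
        y_adv[rows, easiest] -= delta
        y_adv[rows, hardest] += delta

    # Outer minimization: soft cross-entropy against the adversarial label.
    loss = -(y_adv * logp).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the inner problem is solved analytically rather than by iterative label search, each step costs only one extra argmax/argmin pass over the logits, which is the computational-efficiency point the abstract highlights.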
Keywords:
Machine Learning: Adversarial Machine Learning
Machine Learning: Classification