CAT: Customized Adversarial Training for Improved Robustness

Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 673-679. https://doi.org/10.24963/ijcai.2022/95

Adversarial training has become one of the most effective methods for improving the robustness of neural networks. However, it often suffers from poor generalization on both clean and perturbed data. Current robust training methods typically use a uniform perturbation strength for every sample when generating adversarial examples during training. However, we show that this leads to larger training and generalization error and forces the prediction to match the one-hot label. In this paper, we therefore propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training. We first show theoretically that the CAT scheme improves generalization. Then, through extensive experiments, we show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods. The full version of this paper is available at https://arxiv.org/abs/2002.06789.
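To make the idea of per-sample customization concrete, below is a minimal PyTorch sketch of one training step that combines a per-sample perturbation budget with adaptively smoothed labels. The specific schedules, hyper-parameters (eps_step, smooth_c, eps_max), and helper name cat_step are illustrative assumptions, not the paper's exact algorithm; the abstract only states that the perturbation level and label are customized per sample.

```python
import torch
import torch.nn.functional as F

def cat_step(model, x, y, eps, num_classes, eps_max=8/255, eps_step=2/255,
             pgd_alpha=2/255, pgd_iters=10, smooth_c=0.5):
    """One customized adversarial training step (illustrative sketch).

    `eps` holds a per-sample perturbation budget; here it is grown only while
    the model still classifies the perturbed sample correctly, and the label is
    smoothed in proportion to the assigned budget. These update rules are
    assumptions for illustration. `x` is assumed to be an image batch (N,C,H,W).
    """
    model.eval()
    # Candidate budgets: try to enlarge each sample's budget by eps_step.
    eps_try = torch.clamp(eps + eps_step, max=eps_max)
    radius = eps_try.view(-1, 1, 1, 1)

    # PGD attack with a per-sample radius.
    delta = (torch.rand_like(x) * 2 - 1) * radius
    delta.requires_grad_(True)
    for _ in range(pgd_iters):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = delta.detach() + pgd_alpha * grad.sign()
        delta = torch.min(torch.max(delta, -radius), radius)
        delta.requires_grad_(True)
    x_adv = torch.clamp(x + delta.detach(), 0, 1)

    # Keep the enlarged budget only for samples the model still gets right.
    still_correct = model(x_adv).argmax(dim=1) == y
    eps = torch.where(still_correct, eps_try, eps)

    # Customized labels: more smoothing for samples with larger budgets.
    alpha = (smooth_c * eps / eps_max).view(-1, 1)
    y_onehot = F.one_hot(y, num_classes).float()
    y_soft = (1 - alpha) * y_onehot + alpha / num_classes

    # Train on the adversarial examples with the customized labels.
    model.train()
    logits = model(x_adv)
    loss = torch.sum(-y_soft * F.log_softmax(logits, dim=1), dim=1).mean()
    return loss, eps
```

In a training loop, `eps` would start at zero for every sample and be carried across epochs, so each example's budget and label smoothing evolve with how robustly the model already handles it.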
Keywords:
AI Ethics, Trust, Fairness: Safety & Robustness