Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation


Jun Xia, Ting Wang, Jiepin Ding, Xian Wei, Mingsong Chen

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1481-1487. https://doi.org/10.24963/ijcai.2022/206

With the growing prosperity of Artificial Intelligence (AI) techniques, adversaries are designing an increasing number of backdoor attacks against Deep Neural Networks (DNNs). Although the state-of-the-art defense, Neural Attention Distillation (NAD), can effectively erase backdoor triggers from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) and lowered classification ACCuracy (ACC), since it performs backdoor defense using only attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully exploits the correlations among attention features of different orders through our proposed Attention Relation Graphs (ARGs). By aligning the ARGs of the teacher and student models during knowledge distillation, ARGD eradicates backdoors more effectively than NAD. Comprehensive experimental results show that, against six of the latest backdoor attacks, ARGD outperforms NAD with up to a 94.85% reduction in ASR, while ACC is improved by up to 3.23%.
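To make the idea behind ARG alignment concrete, the following is a minimal, hypothetical sketch of how one might build relation graphs over attention features of different orders and align them between a teacher and a student during distillation. The function names (attention_map, relation_graph, arg_alignment_loss), the cosine-similarity edges, and the MSE alignment loss are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Hypothetical sketch: aligning attention relation graphs (ARGs) between a
# teacher and a student model during knowledge distillation.
import torch
import torch.nn.functional as F


def attention_map(feature: torch.Tensor, p: int = 2) -> torch.Tensor:
    """Collapse a feature map (N, C, H, W) into a normalized spatial attention map (N, H*W)."""
    a = feature.abs().pow(p).sum(dim=1)   # (N, H, W): channel-wise aggregation
    return F.normalize(a.flatten(1), dim=1)


def relation_graph(features: list, size: int = 8) -> torch.Tensor:
    """Build a fully connected graph whose nodes are attention maps from layers of
    different orders and whose edges are cosine similarities between those nodes."""
    nodes = []
    for f in features:
        n, _, h, w = f.shape
        a = attention_map(f).view(n, 1, h, w)                     # back to spatial form
        a = F.interpolate(a, size=(size, size), mode="bilinear",  # common resolution
                          align_corners=False)
        nodes.append(F.normalize(a.flatten(1), dim=1))            # (N, size*size)
    nodes = torch.stack(nodes, dim=1)                             # (N, L, D)
    return torch.bmm(nodes, nodes.transpose(1, 2))                # (N, L, L) edge weights


def arg_alignment_loss(student_feats, teacher_feats) -> torch.Tensor:
    """Penalize the discrepancy between student and teacher relation graphs."""
    return F.mse_loss(relation_graph(student_feats), relation_graph(teacher_feats))


# Usage with dummy intermediate features from two hypothetical backbones:
student_feats = [torch.randn(4, 64, 32, 32), torch.randn(4, 128, 16, 16), torch.randn(4, 256, 8, 8)]
teacher_feats = [torch.randn(4, 64, 32, 32), torch.randn(4, 128, 16, 16), torch.randn(4, 256, 8, 8)]
print(arg_alignment_loss(student_feats, teacher_feats).item())
```

In practice such a term would be combined with the usual classification and feature-level distillation losses; the key point illustrated here is that the loss constrains the relations among attention features of different orders, not just each attention map in isolation.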
Keywords:
Computer Vision: Adversarial learning, adversarial attack and defense methods
Machine Learning: Adversarial Machine Learning
Multidisciplinary Topics and Applications: Security and Privacy