A Few Seconds Can Change Everything: Fast Decision-based Attacks against DNNs

Ningping Mou, Baolin Zheng, Qian Wang, Yunjie Ge, Binqing Guo

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3342-3350. https://doi.org/10.24963/ijcai.2022/464

Previous research has demonstrated deep learning models' vulnerability to decision-based adversarial attacks, which craft adversarial examples based solely on information from output decisions (top-1 labels). However, existing decision-based attacks have two major limitations: expensive query cost and easy detectability. To bridge this gap and demonstrate realistic threats to commercial applications, we propose a novel and efficient decision-based attack against black-box models, dubbed FastDrop, which requires only a few queries and works well under strong defenses. The key innovation is that, unlike existing adversarial attacks that rely on gradient estimation and additive noise, FastDrop generates adversarial examples by dropping information in the frequency domain. Extensive experiments on three datasets demonstrate that FastDrop can escape detection by state-of-the-art (SOTA) black-box defenses and reduce the number of queries by 13~133× compared with SOTA attacks under the same level of perturbation. FastDrop needs only 10~20 queries to conduct an attack against various black-box models within 1s. Moreover, on commercial vision APIs provided by Baidu and Tencent, FastDrop achieves an attack success rate (ASR) of 100% with 10 queries on average, posing a real and severe threat to real-world applications.
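The abstract's core idea is generating adversarial candidates by discarding frequency-domain information rather than adding noise. A minimal sketch of one plausible form of such an operation is shown below: transform an image with a 2-D FFT, zero out frequency components outside a given radius, and transform back. This is an illustrative reconstruction, not the authors' implementation; the function name, the radial drop pattern, and the `drop_radius` parameter are assumptions.

```python
import numpy as np

def frequency_drop(image, drop_radius):
    """Drop frequency-domain information from a 2-D grayscale image.

    Components farther than `drop_radius` from the centre of the
    shifted spectrum are zeroed out; the rest are kept. (Illustrative
    sketch only -- FastDrop's actual drop strategy may differ.)
    """
    # Move to the frequency domain, low frequencies at the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    # Drop (zero) everything outside the radius.
    spectrum[dist > drop_radius] = 0
    # Back to the pixel domain; keep the real part and valid range.
    perturbed = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    return np.clip(perturbed, 0, 255)
```

In a decision-based setting, an attacker would query the black-box model with candidates produced at progressively more aggressive drop levels and stop as soon as the top-1 label flips, so each candidate costs exactly one query.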
Keywords:
Machine Learning: Adversarial Machine Learning
Computer Vision: Adversarial learning, adversarial attack and defense methods