MiniMal: Hard-Label Adversarial Attack Against Static Malware Detection with Minimal Perturbation
Chengyi Li, Zhiyuan Jiang, Yongjun Wang, Tian Xia, Yayuan Zhang, Yuhang Mao
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5589-5597.
https://doi.org/10.24963/ijcai.2025/622
Static malware detectors based on machine learning are integral to contemporary antivirus systems, but they are vulnerable to adversarial attacks. While existing research has demonstrated success with adversarial attacks in black-box hard-label scenarios, challenges such as high perturbation rates and incomplete retention of functional integrity remain. To address these issues, we propose a novel black-box hard-label attack method, MiniMal. MiniMal begins with initialized adversarial examples and utilizes binary search and particle swarm optimization algorithms to streamline the perturbation content, significantly reducing the perturbation rate of the adversarial examples. Furthermore, we propose a functionality verification method grounded in file format parsing and control flow graph comparisons to ensure the functional integrity of the adversarial examples. Experimental results indicate that MiniMal achieves an attack success rate of over 98% against three leading machine learning detectors, improving performance by approximately 4.8% to 7.1% compared to state-of-the-art methods. MiniMal reduces perturbation rates to below 40%, making them 9 to 11 times lower than those of previous methods. Additionally, functional verification via Cuckoo Sandbox revealed that the adversarial examples generated by MiniMal retained 100% functional integrity, even with various modifications applied.
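The binary-search step described in the abstract, shrinking an already-evasive perturbation using only the detector's hard label, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `hard_label_detector` is a toy stand-in for a real black-box model, and the byte-padding perturbation is a simplifying assumption.

```python
def hard_label_detector(sample: bytes) -> bool:
    """Toy stand-in for a black-box detector: flags the sample as
    malicious (True) unless it carries at least 64 space bytes of
    benign-looking padding. A real attack would instead query an
    actual model and observe only its hard label."""
    return sample.count(0x20) < 64

def evades(base: bytes, payload: bytes) -> bool:
    """One hard-label query: is base + payload classified as benign?"""
    return not hard_label_detector(base + payload)

def minimize_payload(base: bytes, payload: bytes) -> bytes:
    """Binary-search the shortest payload prefix that still evades,
    mirroring the idea of streamlining an initialized adversarial
    example to cut its perturbation rate."""
    assert evades(base, payload), "start from a working adversarial example"
    lo, hi = 0, len(payload)          # invariant: payload[:hi] evades
    while lo < hi:
        mid = (lo + hi) // 2
        if evades(base, payload[:mid]):
            hi = mid                  # a shorter prefix still evades
        else:
            lo = mid + 1              # mid bytes are not enough
    return payload[:hi]

# Deterministic toy "malware": bytes 0..255 contain exactly one 0x20,
# so 63 more padding bytes are needed to flip the toy detector's label.
base = bytes(range(256))
payload = b" " * 200
minimal = minimize_payload(base, payload)
```

The search needs only O(log n) hard-label queries, which matters in black-box settings where each query hits a deployed detector. The paper's full method additionally uses particle swarm optimization to refine which perturbation content is kept, not just how much.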
Keywords:
Machine Learning: ML: Adversarial machine learning
Multidisciplinary Topics and Applications: MTA: Security and privacy
