Towards Adversarially Robust Deep Image Denoising

Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1516-1522. https://doi.org/10.24963/ijcai.2022/211

This work systematically investigates the adversarial robustness of deep image denoisers (DIDs), i.e., how well DIDs can recover the ground truth from noisy observations degraded by adversarial perturbations. First, to evaluate the robustness of DIDs, we propose a novel adversarial attack, the Observation-based Zero-mean Attack (OBSATK), which crafts adversarial zero-mean perturbations on given noisy images. We find that existing DIDs are vulnerable to the adversarial noise generated by OBSATK. Second, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training (HAT), which jointly trains DIDs on adversarial and non-adversarial noisy data so that reconstruction quality remains high and the denoisers are locally smooth around non-adversarial data. The resultant DIDs can effectively remove various types of synthetic and adversarial noise. We also find that the robustness of DIDs improves their generalization to unseen real-world noise: HAT-trained DIDs can recover high-quality clean images from real-world noisy observations even without training on real noisy data. Extensive experiments on benchmark datasets, including Set68, PolyU, and SIDD, corroborate the effectiveness of OBSATK and HAT.
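To make the attack concrete, below is a minimal PGD-style sketch of a zero-mean perturbation attack in PyTorch. The abstract does not specify the optimization procedure, so the helper name `obs_atk`, the budget `eps`, the step size `alpha`, and the iteration count are all illustrative assumptions; the sketch only shows the core idea of maximizing reconstruction error while keeping the crafted perturbation zero-mean and bounded.

```python
import torch
import torch.nn.functional as F

def obs_atk(denoiser, noisy, clean, eps=2.0 / 255, alpha=0.5 / 255, steps=10):
    """Sketch of a zero-mean attack on a noisy observation (hypothetical helper).

    `eps`, `alpha`, and `steps` are illustrative assumptions, not values from
    the paper. Alternating the clamp and the zero-mean re-centering only
    approximates projecting onto the intersection of the two constraints.
    """
    delta = torch.zeros_like(noisy, requires_grad=True)
    for _ in range(steps):
        # Ascend the denoiser's reconstruction error w.r.t. the perturbation.
        loss = F.mse_loss(denoiser(noisy + delta), clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()               # gradient-sign ascent step
            delta.clamp_(-eps, eps)                          # bound the perturbation
            delta -= delta.mean(dim=(-2, -1), keepdim=True)  # re-center to zero mean
        delta.grad.zero_()
    return (noisy + delta).detach()
```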
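The hybrid training objective can likewise be sketched as a two-term loss: a reconstruction term on ordinary noisy data plus a local-smoothness term that ties the output on an adversarial observation to the benign output. The helper name `hat_loss` and the weight `lam` are assumptions for illustration; the exact formulation is given in the paper.

```python
def hat_loss(denoiser, noisy, clean, adv_noisy, lam=0.5):
    """Sketch of a hybrid adversarial training objective (hypothetical helper).

    `adv_noisy` would be produced by an attack such as the obs_atk sketch
    above; the weight `lam` is an illustrative assumption.
    """
    benign_out = denoiser(noisy)
    recon = F.mse_loss(benign_out, clean)                          # reconstruction quality
    smooth = F.mse_loss(denoiser(adv_noisy), benign_out.detach())  # local smoothness
    return recon + lam * smooth
```

In such a scheme, the reconstruction term preserves denoising quality on non-adversarial noise, while the smoothness term penalizes large output changes under small zero-mean perturbations of the observation.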
Keywords:
Computer Vision: Adversarial learning, adversarial attack and defense methods
AI Ethics, Trust, Fairness: Safety & Robustness
AI Ethics, Trust, Fairness: Trustworthy AI
Computer Vision: Applications