On Adversarial Robustness of Demographic Fairness in Face Attribute Recognition
Huimin Zeng, Zhenrui Yue, Lanyu Shang, Yang Zhang, Dong Wang
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 527-535.
https://doi.org/10.24963/ijcai.2023/59
Demographic fairness has become a critical objective in developing modern visual models for identity-sensitive applications such as face attribute recognition (FAR). While great efforts have been made to improve the fairness of these models, the adversarial robustness of that fairness (e.g., whether the fairness of the models can still be maintained under malicious fairness attacks) has been largely overlooked. This paper therefore explores the adversarial robustness of demographic fairness in FAR applications from both the attack and defense perspectives. In particular, we first present a novel fairness attack that aims to corrupt the demographic fairness of face attribute classifiers. Next, to mitigate the effect of the fairness attack, we design an efficient defense algorithm called robust-fair training. With this defense, face attribute classifiers learn to counteract the bias introduced by the fairness attack, so that they are not only trained to be fair, but their fairness is also robust. Our extensive experimental results show the effectiveness of both the proposed attack and defense methods across various model architectures and FAR applications. We believe our work can serve as a strong baseline for future work on robust-fair AI models.
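To make the notion of a fairness attack concrete, the sketch below shows one way such an attack could be instantiated: a PGD-style perturbation that widens the per-group loss gap of a binary face attribute classifier. This is a minimal illustration under stated assumptions, not the algorithm from the paper; the function `fairness_attack`, the group-gap objective, and all hyperparameters are hypothetical.

```python
# Illustrative sketch only (assumed PyTorch setup); NOT the paper's fairness attack.
# It perturbs inputs to enlarge the loss disparity between two demographic groups,
# which tends to degrade demographic fairness of the attacked classifier.
import torch
import torch.nn.functional as F


def fairness_attack(model, x, y, group, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-inf-bounded perturbations that widen the per-group loss gap.

    x: images (N, C, H, W) in [0, 1]; y: attribute labels (N,);
    group: demographic group ids in {0, 1} (N,); batch is assumed to contain both groups.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
        # Objective: raise the loss on group 1 relative to group 0,
        # so accuracy between the two groups drifts apart.
        disparity = per_sample[group == 1].mean() - per_sample[group == 0].mean()
        grad = torch.autograd.grad(disparity, x_adv)[0]
        # Signed-gradient ascent step, projected back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```

A defense in the spirit of robust-fair training could, for example, generate such perturbations during training and penalize the resulting group disparity, though the paper's actual training objective may differ.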
Keywords:
AI Ethics, Trust, Fairness: ETF: Bias
Computer Vision: CV: Bias, fairness and privacy