A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges

Usman Gohar, Lu Cheng

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Survey Track. Pages 6619-6627. https://doi.org/10.24963/ijcai.2023/742

The widespread adoption of machine learning systems, especially in decision-critical applications such as criminal sentencing and bank lending, has raised concerns about their fairness implications. Algorithms and metrics have been developed to measure and mitigate this discrimination. More recently, works have identified a more challenging form of bias, called intersectional bias, which involves multiple sensitive attributes, such as race and gender, considered together. In this survey, we review the state of the art in intersectional fairness. We present a taxonomy of intersectional notions of fairness and of mitigation methods. Finally, we identify key challenges and provide researchers with guidelines for future directions.
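To make the idea of intersectional bias concrete, the sketch below (not from the paper; data and attribute names are hypothetical) measures demographic parity across intersectional subgroups defined jointly by race and gender, rather than by each attribute alone:

```python
from itertools import product

# Illustrative synthetic data: each record is
# (race, gender, model_prediction), where 1 = favorable outcome.
records = [
    ("A", "F", 1), ("A", "F", 0), ("A", "M", 1), ("A", "M", 1),
    ("B", "F", 0), ("B", "F", 0), ("B", "M", 1), ("B", "M", 0),
]

def subgroup_rates(records):
    """Positive-prediction rate for every intersectional subgroup."""
    races = {r for r, _, _ in records}
    genders = {g for _, g, _ in records}
    rates = {}
    for race, gender in product(races, genders):
        preds = [p for r, g, p in records if r == race and g == gender]
        if preds:  # skip empty subgroups
            rates[(race, gender)] = sum(preds) / len(preds)
    return rates

rates = subgroup_rates(records)
# Worst-case disparity: gap between the best- and worst-treated
# intersectional subgroups.
disparity = max(rates.values()) - min(rates.values())
print(rates)
print(f"max-min disparity: {disparity:.2f}")
```

A model can look fair when race and gender are audited separately yet still disadvantage a specific intersection (here, subgroup ("B", "F") receives no favorable outcomes), which is exactly the phenomenon the survey examines.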
Keywords:
Survey: AI Ethics, Trust, Fairness
Survey: Machine Learning
Survey: Multidisciplinary Topics and Applications
Survey: Natural Language Processing