Towards Gender Fairness for Mental Health Prediction

Jiaee Cheong, Selim Kuzucu, Sinan Kalkan, Hatice Gunes

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), AI for Good track, pages 5932-5940. https://doi.org/10.24963/ijcai.2023/658

Mental health is becoming an increasingly prominent public health challenge. Despite a plethora of studies analysing and mitigating bias for a variety of tasks such as face recognition and credit scoring, research on machine learning (ML) fairness for mental health has been sparse to date. In this work, we focus on gender bias in mental health and make the following contributions. First, we examine whether bias exists in existing mental health datasets and algorithms. We conduct our experiments on the Depresjon, Psykose and D-Vlog datasets, and identify that both data and algorithmic bias exist. Second, we analyse strategies that can be deployed at the pre-processing, in-processing and post-processing stages to mitigate bias, and evaluate their effectiveness. Third, we investigate factors that impact the efficacy of existing bias mitigation strategies and outline recommendations to achieve greater gender fairness for mental health. Upon obtaining counter-intuitive results on the D-Vlog dataset, we undertake further experiments and analyses, and provide practical suggestions to avoid hampering bias mitigation efforts in ML for mental health.
Keywords:
AI for Good: AI Ethics, Trust, Fairness
AI for Good: Humans and AI
AI for Good: Multidisciplinary Topics and Applications