Quantifying Health Inequalities Induced by Data and AI Models

Honghan Wu, Aneeta Sylolypavan, Minhong Wang, Sarah Wild

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
AI for Good. Pages 5192-5198. https://doi.org/10.24963/ijcai.2022/721

AI technologies are being increasingly tested and applied in critical environments, including healthcare. Without an effective way to detect and mitigate AI-induced inequalities, AI might do more harm than good, potentially widening underlying inequalities. This paper proposes a generic allocation-deterioration framework for detecting and quantifying AI-induced inequality. Specifically, AI-induced inequalities are quantified as the area between two allocation-deterioration curves. To assess the framework's performance, experiments were conducted on ten synthetic datasets (N>33,000) generated from HiRID, a real-world Intensive Care Unit (ICU) dataset, showing the framework's ability to detect and quantify inequality in proportion to controlled inequalities. Extensive analyses were carried out to quantify health inequalities (a) embedded in two real-world ICU datasets and (b) induced by AI models trained for two resource allocation scenarios. Results showed that, compared to men, women had up to 33% poorer deterioration in markers of prognosis when admitted to HiRID ICUs. All four AI models assessed were shown to induce significant inequalities (2.45% to 43.2%) for non-White compared to White patients. The models significantly exacerbated data-embedded inequalities in 3 out of 8 assessments, one of which was more than 9 times worse.
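The core quantity, the area between two allocation-deterioration curves, can be sketched numerically. The sketch below is illustrative only: the curve shapes, the choice of allocation quantiles as the x-axis, the deterioration index as the y-axis, and the function name `inequality_area` are all assumptions, not the paper's exact construction.

```python
import numpy as np

def inequality_area(allocation, deterioration_a, deterioration_b):
    """Signed area between two allocation-deterioration curves.

    allocation: shared x-axis, e.g. resource-allocation quantiles in [0, 1].
    deterioration_a, deterioration_b: deterioration index of each subgroup
    evaluated at those allocation points.

    A positive result means subgroup A deteriorates more than subgroup B
    overall, i.e. an inequality disadvantaging A under this (assumed) sign
    convention.
    """
    x = np.asarray(allocation, dtype=float)
    gap = np.asarray(deterioration_a, dtype=float) - np.asarray(deterioration_b, dtype=float)
    # Trapezoidal rule over the gap between the two curves.
    return float(np.sum((gap[:-1] + gap[1:]) / 2.0 * np.diff(x)))

# Toy usage with made-up curves: subgroup A deteriorates faster than B.
x = np.linspace(0.0, 1.0, 11)
curve_a = x ** 2
curve_b = 0.5 * x ** 2
area = inequality_area(x, curve_a, curve_b)  # positive: A is worse off
```

Identical curves yield zero area, so the measure vanishes exactly when the two subgroups experience the same deterioration profile across allocation levels.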
Keywords:
AI Ethics, Trust, Fairness: Bias
AI Ethics, Trust, Fairness: Fairness & Diversity
Multidisciplinary Topics and Applications: Health and Medicine
AI Ethics, Trust, Fairness: Societal Impact of AI