Achieving Outcome Fairness in Machine Learning Models for Social Decision Problems

Boli Fang, Miao Jiang, Pei-yi Cheng, Jerry Shen, Yi Fang

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 444-450. https://doi.org/10.24963/ijcai.2020/62

As effective complements to human judgment, artificial intelligence techniques have begun to aid human decisions in complicated social decision problems across the world. Automated machine learning/deep learning (ML/DL) classification models, through quantitative modeling, have the potential to improve upon human decisions in a wide range of social resource allocation problems, such as Medicaid and the Supplemental Nutrition Assistance Program (SNAP, commonly referred to as Food Stamps). However, given the limitations of ML/DL model design, these algorithms may fail to leverage various factors for decision making, resulting in improper decisions that allocate resources to individuals who may not be in the greatest need. In view of this issue, we propose in this paper the strategy of fairgroups, based on the legal doctrine of disparate impact, to improve fairness in prediction outcomes. Experiments on various datasets demonstrate that our fairgroup construction method effectively improves fairness in automated decision making while maintaining high prediction accuracy.
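The abstract does not spell out the fairgroup construction itself, but the disparate-impact doctrine it builds on has a standard quantitative form: the ratio of favorable-outcome rates between the unprivileged and privileged groups, with a ratio below 0.8 (the "four-fifths rule") taken as evidence of disparate impact. Below is a minimal sketch of that metric, not the paper's method; the function name and the toy data are illustrative.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Disparate impact ratio: P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged).

    y_pred: binary predictions (1 = favorable outcome, e.g. resource granted)
    group:  binary protected attribute (1 = privileged group, 0 = unprivileged)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # favorable-outcome rate, unprivileged group
    rate_priv = y_pred[group == 1].mean()    # favorable-outcome rate, privileged group
    return rate_unpriv / rate_priv

# Under the four-fifths rule, a ratio below 0.8 signals disparate impact.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(f"disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")  # 0.33
```

In this toy example the unprivileged group receives the favorable outcome at 0.25/0.75 ≈ 0.33 of the privileged group's rate, well below the 0.8 threshold.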
Keywords:
AI Ethics: Fairness
AI Ethics: Societal Impact of AI
AI Ethics: Moral Decision Making
Machine Learning: Classification