Delayed Impact of Fair Machine Learning


Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 6196-6200. https://doi.org/10.24963/ijcai.2019/862

Static classification has been the predominant focus of the study of fairness in machine learning. While most models do not consider how decisions change the underlying population over time, it is conventional wisdom that fairness criteria promote the long-term well-being of the groups they aim to protect. This work studies the interaction of static fairness criteria with temporal indicators of well-being. We introduce a simple one-step feedback model and show that common criteria do not generally promote improvement over time, and may in fact cause harm. Our results highlight the importance of temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
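To make the abstract's claim concrete, the following is a minimal numerical sketch, not the authors' model or data: the score distributions, repayment probabilities, break-even point, and score updates are hypothetical. It illustrates a one-step lending setting in which forcing equal acceptance rates across groups (demographic parity) lowers the disadvantaged group's expected mean score relative to an unconstrained profit-maximizing threshold policy.

```python
import numpy as np

# Hypothetical discrete credit scores and repayment probabilities.
scores = np.array([300, 400, 500, 600, 700, 800])
repay_prob = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.95])

# Hypothetical score update after a loan: +75 on repayment, -150 on default.
GAIN, LOSS = 75.0, 150.0
expected_change = repay_prob * GAIN - (1 - repay_prob) * LOSS

# Hypothetical score distributions for two groups (each sums to 1).
pi_A = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.25])  # advantaged group
pi_B = np.array([0.25, 0.25, 0.20, 0.15, 0.10, 0.05])  # disadvantaged group

def delta_mu(pi, accept):
    """Expected one-step change in a group's mean score under a
    per-score acceptance-probability vector `accept`."""
    return float(np.sum(pi * accept * expected_change))

def threshold_for_rate(pi, rate):
    """Threshold policy accepting the top `rate` fraction of the group,
    randomizing at the boundary score."""
    accept = np.zeros_like(pi)
    remaining = rate
    for i in reversed(range(len(pi))):  # highest scores first
        take = min(pi[i], remaining)
        accept[i] = take / pi[i] if pi[i] > 0 else 0.0
        remaining -= take
        if remaining <= 0:
            break
    return accept

# Unconstrained lender: accept any score clearing a hypothetical
# break-even repayment probability of 0.55.
max_util_accept = (repay_prob >= 0.55).astype(float)
rate_A = float(np.sum(pi_A * max_util_accept))

# Demographic parity: group B's acceptance rate must match group A's.
parity_accept_B = threshold_for_rate(pi_B, rate_A)

print("Group B, unconstrained lender:", delta_mu(pi_B, max_util_accept))
print("Group B, demographic parity:  ", delta_mu(pi_B, parity_accept_B))
```

Under these illustrative numbers the unconstrained policy yields a positive expected score change for group B, while matching group A's higher acceptance rate forces loans to low-score applicants who are likely to default, producing a negative expected change, i.e., the kind of harm the abstract refers to.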
Keywords:
Humans and AI: Ethical Issues in AI
Machine Learning: Learning Theory
Machine Learning: Classification
Machine Learning: Interpretability