Addressing the Long-term Impact of ML Decisions via Policy Regret
David Lindner, Hoda Heidari, Andreas Krause
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 537-544.
https://doi.org/10.24963/ijcai.2021/75
Machine Learning (ML) increasingly informs the allocation of opportunities to individuals and communities in areas such as lending, education, and employment. Such decisions often affect their subjects' future characteristics and capabilities in ways that are a priori unknown. The decision-maker therefore faces exploration-exploitation dilemmas akin to those in multi-armed bandits.
Following prior work, we model communities as arms. To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm. We focus on reward functions that are initially increasing in the number of pulls but may become (and remain) decreasing after a certain point. We argue that an acceptable sequential allocation of opportunities must take each arm's potential for growth into account. We capture these considerations through policy regret, a much stronger notion than the often-studied external regret, and present an algorithm with provably sub-linear policy regret for sufficiently long time horizons. We empirically compare our algorithm with several baselines and find that it consistently outperforms them, in particular over long time horizons.
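To make the setting concrete, below is a minimal Python sketch of the reward model and of policy regret. The piecewise-linear reward shape, peak locations, and growth rates are illustrative assumptions rather than quantities from the paper; the sketch only reflects the qualitative structure the abstract describes, namely rewards that first increase and then decrease in an arm's pull count, and a regret benchmark whose rewards also evolve with its own pulls.

```python
# Illustrative reward model: each arm's reward is a piecewise-linear,
# unimodal function of how often that arm has been pulled so far.
PEAKS = [5, 12]      # pull count at which each arm's reward peaks (assumed)
SCALES = [1.0, 0.6]  # per-pull growth rate of each arm (assumed)

def reward(arm, n_pulls):
    """Reward of `arm` on a pull, given it was pulled `n_pulls` times before.

    Increasing in the pull count up to PEAKS[arm], then decreasing,
    matching the qualitative shape described in the abstract.
    """
    peak, scale = PEAKS[arm], SCALES[arm]
    return scale * min(n_pulls, peak) - max(0, n_pulls - peak)

def cumulative_reward(pull_sequence):
    """Total reward of an action sequence under the evolving rewards."""
    counts = [0] * len(PEAKS)
    total = 0.0
    for arm in pull_sequence:
        total += reward(arm, counts[arm])
        counts[arm] += 1
    return total

def policy_regret(pull_sequence):
    """Policy regret: gap to the best action sequence in hindsight.

    Unlike external regret, the benchmark sequence's rewards also evolve
    with its own pulls. Since rewards here depend only on per-arm pull
    counts, the order of pulls within a sequence is irrelevant, and the
    hindsight optimum reduces to a search over pull allocations.
    """
    T = len(pull_sequence)
    best = max(
        cumulative_reward([0] * k + [1] * (T - k)) for k in range(T + 1)
    )
    return best - cumulative_reward(pull_sequence)

# Example: round-robin over the two arms for T = 30 rounds.
T = 30
round_robin = [t % 2 for t in range(T)]
print("policy regret of round-robin:", policy_regret(round_robin))
```

The benchmark above illustrates why policy regret is stronger than external regret: an external-regret benchmark would replay a fixed arm against the reward sequence the learner induced, whereas here the comparator's own pulls determine the rewards it receives.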
Keywords:
AI Ethics, Trust, Fairness: Societal Impact of AI
Uncertainty in AI: Sequential Decision Making
Machine Learning: Online Learning
AI Ethics, Trust, Fairness: Fairness