Mitigating Disparity while Maximizing Reward: Tight Anytime Guarantee for Improving Bandits
Vishakha Patil, Vineet Nair, Ganesh Ghalme, Arindam Khan
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 4100-4108.
https://doi.org/10.24963/ijcai.2023/456
We study the Improving Multi-Armed Bandit (IMAB) problem, where the reward obtained from an arm increases with the number of pulls it receives. This model provides an elegant abstraction for many real-world problems in domains such as education and employment, where decisions about the distribution of opportunities can affect the future capabilities of communities and the disparity between them. A decision-maker in such settings must consider the impact of her decisions on future rewards in addition to the standard objective of maximizing her cumulative reward at any time. We study the tension between two seemingly conflicting objectives in the horizon-unaware setting: a) maximizing the cumulative reward at any time, and b) ensuring that arms with better long-term rewards get sufficient pulls even if they initially have low rewards. We show that, surprisingly, the two objectives are aligned with each other. Our main contribution is an anytime algorithm for the IMAB problem that achieves the best possible cumulative reward while ensuring that the arms reach their true potential given sufficient time. Our algorithm mitigates the initial disparity due to lack of opportunity and continues pulling an arm until it stops improving. We prove the optimality of our algorithm by showing that a) any algorithm for the IMAB problem, no matter how utilitarian, must suffer Ω(T) policy regret and an Ω(k) competitive ratio with respect to the optimal offline policy, and b) the competitive ratio of our algorithm is O(k).
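To make the IMAB setting concrete, the sketch below simulates arms whose expected reward grows with the number of pulls they receive, and runs a naive greedy baseline on them. The concave reward curves, their parameters, and the greedy rule are all illustrative assumptions, not the paper's model specifics or its algorithm; the point is only to show how an arm that improves slowly but has a higher long-term reward can be starved of pulls, which is the disparity the abstract discusses.

```python
import numpy as np

# Illustrative sketch of the Improving MAB (IMAB) setting: each arm's expected
# reward is a non-decreasing function of how many times it has been pulled.
# The specific curves below are hypothetical and chosen only for illustration.

rng = np.random.default_rng(0)

def reward_curve(arm, n_pulls):
    """Hypothetical expected reward of `arm` on its (n_pulls + 1)-th pull."""
    caps = [0.9, 0.6, 0.4]       # long-term reward ceilings (assumed)
    rates = [0.02, 0.10, 0.30]   # how quickly each arm improves (assumed)
    return caps[arm] * (1 - np.exp(-rates[arm] * n_pulls))

def pull(arm, counts):
    """Sample a noisy reward for `arm` and record the pull."""
    mean = reward_curve(arm, counts[arm])
    counts[arm] += 1
    return mean + rng.normal(scale=0.05)

# Naive greedy baseline (NOT the paper's algorithm): always pull the arm with
# the highest last observed reward. It tends to lock onto arms that look good
# early, leaving slowly-improving but ultimately better arms under-pulled.
k, T = 3, 2000
counts = np.zeros(k, dtype=int)
last_reward = np.array([pull(a, counts) for a in range(k)])  # one pull per arm
total = last_reward.sum()
for _ in range(T - k):
    arm = int(np.argmax(last_reward))
    r = pull(arm, counts)
    last_reward[arm] = r
    total += r

print("pull counts per arm:", counts, "cumulative reward:", round(total, 1))
```

Running this typically shows the greedy baseline concentrating pulls on the fast-improving, low-ceiling arm, while the slowly-improving arm with the highest long-term reward never gets the opportunity to reach its potential; an anytime algorithm of the kind the paper proposes is designed to avoid exactly this failure mode.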
Keywords:
Machine Learning: ML: Online learning
AI Ethics, Trust, Fairness: ETF: Fairness and diversity
Uncertainty in AI: UAI: Sequential decision making