Learn to Intervene: An Adaptive Learning Policy for Restless Bandits in Application to Preventive Healthcare
Arpita Biswas, Gaurav Aggarwal, Pradeep Varakantham, Milind Tambe
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 4039-4046.
https://doi.org/10.24963/ijcai.2021/556
In many public health settings, it is important for patients to adhere to health programs, such as taking medications and attending periodic health checks. Unfortunately, beneficiaries may gradually disengage from such programs, which is detrimental to their health. A concrete example of gradual disengagement has been observed by an organization that runs a free automated call-based program for spreading preventive care information among pregnant women. Many women stop picking up calls after being enrolled for a few months. To avoid such disengagements, it is important to provide timely interventions. Such interventions are often expensive and can be provided to only a small fraction of the beneficiaries. We model this scenario as a restless multi-armed bandit (RMAB) problem, where each beneficiary is assumed to transition from one state to another depending on the intervention. Moreover, since the transition probabilities are unknown a priori, we propose a Whittle index based Q-Learning mechanism and show that it converges to the optimal solution. Our method improves over existing learning-based methods for RMABs on multiple benchmarks from the literature, as well as on the maternal healthcare dataset.
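To make the proposed idea concrete, below is a minimal, hypothetical sketch of Whittle-index-based Q-learning on a single two-state arm. The transition probabilities, reward structure, and hyperparameters are illustrative assumptions, not the paper's actual setup: the arm models an "engaged" (1) vs. "disengaged" (0) beneficiary, a reward of 1 is collected while engaged, and the Whittle index of a state is estimated as the advantage of intervening, Q(s, active) - Q(s, passive).

```python
import random

# Hypothetical 2-state, 2-action arm (NOT the paper's dataset):
# P[action][state] = probability that the next state is 1 (engaged).
P = {
    0: {0: 0.1, 1: 0.7},  # passive: engaged beneficiaries slowly drop off
    1: {0: 0.5, 1: 0.9},  # active (intervention): retention improves
}

def step(state, action, rng):
    """Sample the next state; reward is 1 while the beneficiary is engaged."""
    next_state = 1 if rng.random() < P[action][state] else 0
    return next_state, float(state)

def whittle_index_qlearning(steps=20000, eps=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning on one arm with epsilon-greedy exploration.
    Returns the estimated Whittle index Q(s, 1) - Q(s, 0) per state."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    visits = {(s, a): 0 for s in (0, 1) for a in (0, 1)}
    state = 1
    for _ in range(steps):
        if rng.random() < eps:
            action = rng.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action, rng)
        visits[(state, action)] += 1
        alpha = 1.0 / visits[(state, action)]  # decaying learning rate
        target = reward + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt
    return {s: Q[(s, 1)] - Q[(s, 0)] for s in (0, 1)}
```

In a full RMAB planner, one such index estimate would be maintained per beneficiary, and at each round the limited intervention budget would go to the arms with the highest current indices; with these toy transition probabilities, both indices come out positive, since intervening raises the chance of staying engaged in either state.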
Keywords:
Planning and Scheduling: Applications of Planning
Planning and Scheduling: Planning under Uncertainty
Uncertainty in AI: Sequential Decision Making