Mean Field Equilibrium in Multi-Armed Bandit Game with Continuous Reward

Xiong Wang, Riheng Jia

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 3118-3124. https://doi.org/10.24963/ijcai.2021/429

Mean field games facilitate the analysis of multi-armed bandit (MAB) problems with a large number of agents by approximating their interactions with an average effect. Existing mean field models for multi-agent MAB mostly assume a binary reward function, which leads to tractable analysis but is often inapplicable in practical scenarios. In this paper, we study the mean field bandit game with a continuous reward function. Specifically, we focus on deriving the existence and uniqueness of the mean field equilibrium (MFE), thereby guaranteeing the asymptotic stability of the multi-agent system. To accommodate the continuous reward function, we encode the learned reward into an agent state, which is in turn mapped to the agent's stochastic arm-playing policy and updated using realized observations. We show that the state evolution is upper semi-continuous, from which the existence of an MFE follows. Since Markov analysis mainly applies to discrete states, we transform the stochastic continuous state evolution into a deterministic ordinary differential equation (ODE). On this basis, we characterize a contraction mapping for the ODE that ensures a unique MFE for the bandit game. Extensive evaluations validate our MFE characterization and exhibit tight empirical regret for the MAB problem.
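To make the pipeline in the abstract concrete, the following Python sketch simulates one plausible instantiation of the mean-field bandit dynamics: each agent keeps a per-arm reward estimate as its state, maps that state to a stochastic arm-playing policy, and updates the state with realized continuous rewards whose mean depends on the population's arm-play fractions (the mean field). The reward model, softmax policy map, and all constants are illustrative assumptions, not the paper's specification; the small-step state update is the standard stochastic-approximation scheme whose limit is a deterministic ODE of the kind the abstract invokes.

import numpy as np

# Hypothetical congestion-style reward: an arm's mean reward decreases
# with the fraction of agents currently playing it (the mean field).
def reward_mean(arms, mean_field):
    base = np.array([0.9, 0.7])            # assumed per-arm base rewards
    return base[arms] - 0.5 * mean_field[arms]

def softmax(x, temp=0.1):
    z = (x - x.max()) / temp
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
n_agents, n_arms, horizon = 2000, 2, 200
step = 0.05                                 # state-update step size

state = np.zeros((n_agents, n_arms))        # per-agent reward estimates
mean_field = np.full(n_arms, 1.0 / n_arms)  # arm-play fractions

for t in range(horizon):
    # Map each agent's state to a stochastic arm-playing policy and sample.
    policies = np.apply_along_axis(softmax, 1, state)
    arms = np.array([rng.choice(n_arms, p=p) for p in policies])

    # Continuous rewards drawn around a mean-field-dependent mean.
    rewards = reward_mean(arms, mean_field) + 0.1 * rng.standard_normal(n_agents)

    # Encode the realized observation into the agent state; with a small
    # step size this stochastic update tracks a deterministic ODE limit.
    idx = np.arange(n_agents)
    state[idx, arms] += step * (rewards - state[idx, arms])

    # The mean field is the empirical fraction of agents on each arm.
    mean_field = np.bincount(arms, minlength=n_arms) / n_agents

print("empirical arm-play fractions:", mean_field)

If the arm-play fractions stabilize across rounds, the stationary profile serves as an empirical stand-in for the MFE whose existence and uniqueness the paper establishes; this toy loop is only a fixed-point-style illustration, not the authors' algorithm.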
Keywords:
Machine Learning: Online Learning
Agent-based and Multi-agent Systems: Algorithmic Game Theory
Agent-based and Multi-agent Systems: Multi-agent Learning