Coordinated Versus Decentralized Exploration In Multi-Agent Multi-Armed Bandits

Mithun Chakraborty, Kai Yee Phoebe Chua, Sanmay Das, Brendan Juba

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 164-170. https://doi.org/10.24963/ijcai.2017/24

Abstract:
In this paper, we introduce a multi-agent multi-armed bandit-based model for ad hoc teamwork with expensive communication. The goal of the team is to maximize the total reward gained from pulling arms of a bandit over a number of epochs. In each epoch, each agent decides whether to pull an arm, or to broadcast the reward it obtained in the previous epoch to the team and forgo pulling an arm. These decisions must be made only on the basis of the agent’s private information and the public information broadcast prior to that epoch. We first benchmark the achievable utility by analyzing an idealized version of this problem where a central authority has complete knowledge of rewards acquired from all arms in all epochs and uses a multiplicative weights update algorithm for allocating arms to agents. We then introduce an algorithm for the decentralized setting that uses a value-of-information based communication strategy and an exploration-exploitation strategy based on the centralized algorithm, and show experimentally that it converges rapidly to the performance of the centralized method.
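The abstract names a multiplicative weights benchmark but not its exact form. The following is a minimal sketch of one plausible instantiation, assuming rewards in [0, 1], a learning rate `eta`, and an allocation rule that samples each agent an arm in proportion to the current weights; the full-information assumption (the authority observes every arm's reward each epoch) comes from the abstract, but the specific update and allocation choices here are assumptions, not the paper's.

```python
import math
import random

def centralized_mw_benchmark(reward_fn, n_arms, n_agents, n_epochs, eta=0.1):
    """Idealized centralized benchmark: an authority that observes every
    arm's reward in every epoch reweights arms multiplicatively.
    Sketch only; the paper's exact update and allocation rule may differ."""
    weights = [1.0] * n_arms
    team_reward = 0.0
    for t in range(n_epochs):
        # Assign each agent an arm by sampling in proportion to the current
        # weights (an assumed allocation rule, not taken from the paper).
        pulls = random.choices(range(n_arms), weights=weights, k=n_agents)
        rewards = [reward_fn(arm, t) for arm in range(n_arms)]  # full information
        team_reward += sum(rewards[arm] for arm in pulls)
        # Multiplicative-weights update; rewards assumed to lie in [0, 1].
        weights = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
        # Normalize for numerical stability (does not change sampling ratios).
        total = sum(weights)
        weights = [w / total for w in weights]
    return team_reward
```

For instance, with three Bernoulli arms of means 0.2, 0.5, and 0.8, calling `centralized_mw_benchmark(lambda arm, t: float(random.random() < [0.2, 0.5, 0.8][arm]), 3, 5, 1000)` concentrates the weights, and hence the pulls, on the best arm as epochs pass.

For the decentralized setting, the skeleton below shows only the epoch structure implied by the abstract: each agent decides, from its private history and the broadcasts made before the current epoch, whether to pull or to broadcast its previous reward, and new broadcasts become public only from the next epoch on. The "surprise" threshold standing in for the paper's value-of-information criterion, and the uniform-random pull rule standing in for its exploration-exploitation strategy, are both placeholder assumptions.

```python
import random

class Agent:
    """Toy agent: broadcasts when its last private observation deviates
    enough from public knowledge (an assumed heuristic, not the paper's
    value-of-information rule)."""
    def __init__(self, n_arms, threshold=0.3):
        self.n_arms, self.threshold = n_arms, threshold
        self.last = None  # (arm, reward) observed in the previous epoch

    def decide(self, public_log):
        """Return a message to broadcast, or None to pull an arm instead."""
        if self.last is None:
            return None
        arm, r = self.last
        seen = [x for a, x in public_log if a == arm]
        public_mean = sum(seen) / len(seen) if seen else 0.5
        # Broadcast only when sharing would move public knowledge enough
        # to be worth a forgone pull (placeholder criterion).
        return self.last if abs(r - public_mean) > self.threshold else None

def run_epoch(agents, public_log, reward_fn):
    """One epoch: all decisions use public_log as it stood before the epoch;
    broadcasts become visible to the team only from the next epoch on."""
    new_msgs = []
    for agent in agents:
        msg = agent.decide(public_log)
        if msg is not None:
            new_msgs.append(msg)      # forgo the pull, share last reward
            agent.last = None
        else:
            arm = random.randrange(agent.n_arms)  # placeholder pull rule
            agent.last = (arm, reward_fn(arm))
    public_log.extend(new_msgs)
```

A driver would simply initialize `public_log = []` and call `run_epoch` once per epoch; the deferred `extend` at the end of each epoch is what enforces the abstract's constraint that agents act only on information broadcast prior to the current epoch.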
Keywords:
Agent-based and Multi-agent Systems: Coordination and cooperation
Agent-based and Multi-agent Systems: Multi-agent Learning