Temporal Induced Self-Play for Stochastic Bayesian Games
Weizhe Chen, Zihan Zhou, Yi Wu, Fei Fang
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 96-103.
https://doi.org/10.24963/ijcai.2021/14
One practical requirement in solving dynamic games is to ensure that the players play well from any decision point onward. To satisfy this requirement, existing efforts focus on equilibrium refinement, but the scalability and applicability of existing techniques are limited. In this paper, we propose Temporal-Induced Self-Play (TISP), a novel reinforcement learning-based framework to find strategies with decent performance from any decision point onward. TISP uses belief-space representation, backward induction, policy learning, and non-parametric approximation. Building upon TISP, we design a policy-gradient-based algorithm, TISP-PG. We prove that TISP-based algorithms can find approximate Perfect Bayesian Equilibrium in zero-sum one-sided stochastic Bayesian games with finite horizon. We test TISP-based algorithms in various games, including finitely repeated security games and a grid-world game. The results show that TISP-PG is more scalable than existing mathematical-programming-based methods and significantly outperforms other learning-based methods.
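To make the high-level framework in the abstract concrete, the following is a minimal sketch of a backward-induction training loop over sampled belief points, not the authors' implementation. All names and parameters (HORIZON, NUM_TYPES, BELIEF_SAMPLES, train_step_policy, nearest_value) are assumptions for illustration; the real TISP-PG replaces the placeholder policy update with policy-gradient learning in self-play.

```python
# Sketch of a TISP-style loop: train one belief-conditioned policy per time
# step, in reverse temporal order, approximating next-step values
# non-parametrically over sampled belief points. Illustrative only.
import numpy as np

HORIZON = 4          # finite horizon T (assumption)
NUM_TYPES = 2        # opponent types in the one-sided Bayesian game (assumption)
BELIEF_SAMPLES = 16  # sampled belief points per time step (assumption)


def sample_beliefs(n, k):
    """Sample n belief points (probability vectors over k types)."""
    return np.random.dirichlet(np.ones(k), size=n)


def nearest_value(belief, beliefs, values):
    """Non-parametric approximation: value of the nearest sampled belief."""
    idx = np.argmin(np.linalg.norm(beliefs - belief, axis=1))
    return values[idx]


def train_step_policy(t, belief, future_value_fn):
    """Placeholder for policy learning (e.g., policy gradient) at step t,
    conditioned on the belief and bootstrapping from next-step values.
    Returns a random policy and a dummy value estimate."""
    policy = np.random.dirichlet(np.ones(3))   # 3 actions (assumption)
    value = future_value_fn(belief)
    return policy, value


def tisp_sketch():
    # Backward induction: solve step T-1 first, then T-2, and so on.
    next_beliefs = sample_beliefs(BELIEF_SAMPLES, NUM_TYPES)
    next_values = np.zeros(BELIEF_SAMPLES)     # terminal values are zero
    policies = {}
    for t in reversed(range(HORIZON)):
        beliefs = sample_beliefs(BELIEF_SAMPLES, NUM_TYPES)
        values = np.zeros(BELIEF_SAMPLES)
        future = lambda b: nearest_value(b, next_beliefs, next_values)
        for i, b in enumerate(beliefs):
            policies[(t, i)], values[i] = train_step_policy(t, b, future)
        next_beliefs, next_values = beliefs, values
    return policies


if __name__ == "__main__":
    pi = tisp_sketch()
    print(f"trained {len(pi)} belief-conditioned policies")
```

The design choice the sketch highlights is that each time step is solved against the already-trained value estimates of later steps, so the resulting strategy is meaningful from any decision point onward rather than only along the on-path play of standard self-play.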
Keywords:
Agent-based and Multi-agent Systems: Multi-agent Learning
Machine Learning Applications: Applications of Reinforcement Learning