Anticipatory Fictitious Play

Alex Cloud, Albert Wang, Wesley Kerr

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence

Fictitious play is an algorithm for computing Nash equilibria of matrix games. Recently, machine learning variants of fictitious play have been successfully applied to complicated real-world games. This paper presents a simple modification of fictitious play which is a strict improvement over the original: it has the same theoretical worst-case convergence rate, is equally applicable in a machine learning context, and enjoys superior empirical performance. We conduct an extensive comparison of our algorithm with fictitious play, proving an optimal O(1/t) convergence rate for certain classes of games, demonstrating superior performance numerically across a variety of games, and concluding with experiments that extend these algorithms to the setting of deep multiagent reinforcement learning.
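For context, the following is a minimal sketch of the baseline algorithm the abstract refers to: classic fictitious play on a two-player zero-sum matrix game (rock-paper-scissors). The payoff matrix, iteration count, and all names are illustrative choices for this sketch, not code from the paper; the anticipatory modification the paper introduces is not shown here.

```python
# Classic fictitious play on rock-paper-scissors (illustrative sketch only).
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum game).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def fictitious_play(A, steps=10_000):
    """Return both players' empirical average strategies after `steps` iterations."""
    n_rows, n_cols = A.shape
    row_counts = np.zeros(n_rows)  # how often the row player chose each action
    col_counts = np.zeros(n_cols)  # how often the column player chose each action
    # Arbitrary initial pure strategies.
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(steps):
        # Each player best-responds to the opponent's empirical average strategy.
        col_avg = col_counts / col_counts.sum()
        row_avg = row_counts / row_counts.sum()
        row_br = np.argmax(A @ col_avg)   # row player maximizes expected payoff
        col_br = np.argmin(row_avg @ A)   # column player minimizes it (zero-sum)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_strategy, col_strategy = fictitious_play(A)
print(row_strategy, col_strategy)  # both approach the uniform equilibrium (1/3, 1/3, 1/3)
```

In this sketch the empirical averages of both players' play converge to the game's mixed Nash equilibrium; the paper's contribution is a modification of the best-response step with the same worst-case convergence guarantee and better empirical behavior.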
Keywords:
Agent-based and Multi-agent Systems: MAS: Multi-agent learning
Game Theory and Economic Paradigms: GTEP: Noncooperative games
Machine Learning: ML: Reinforcement learning