Abstract

Proceedings Abstracts of the Twenty-Fourth International Joint Conference on Artificial Intelligence

Simultaneous Abstraction and Equilibrium Finding in Games
Noam Brown, Tuomas Sandholm

A key challenge in solving extensive-form games is dealing with large, or even infinite, action spaces. In games of imperfect information, the leading approach is to find a Nash equilibrium in a smaller abstract version of the game that includes only a few actions at each decision point, and then map the solution back to the original game. However, it is difficult to know which actions should be included in the abstraction without first solving the game, and it is infeasible to solve the game without first abstracting it. We introduce a method that combines abstraction with equilibrium finding by enabling actions to be added to the abstraction at run time. This allows an agent to begin learning with a coarse abstraction, and then to strategically insert actions at points that the strategy computed in the current abstraction deems important. The algorithm can quickly add actions to the abstraction while provably not having to restart the equilibrium finding. It enables anytime convergence to a Nash equilibrium of the full game even in infinite games. Experiments show it can outperform fixed abstractions at every stage of the run: early on it improves as quickly as equilibrium finding in coarse abstractions, and later it converges to a better solution than does equilibrium finding in fine-grained abstractions.
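To make the core idea concrete, the sketch below shows a plain regret-matching loop over a growable action set: learning starts with a coarse set of actions, and a new action can be inserted mid-run without restarting, since it simply enters with zero cumulative regret while the average strategy keeps accumulating. This is only an illustrative simplification, not the authors' algorithm, which operates on extensive-form games via counterfactual regret minimization and initializes newly added actions so that convergence guarantees are preserved; the action names and payoff function here are hypothetical.

```python
import random
from collections import defaultdict

class RegretMatcher:
    """Regret matching over an action set that can grow at run time.

    Illustrative sketch only (hypothetical actions and payoffs); the paper's
    method works on extensive-form games and handles added actions so that
    equilibrium finding provably need not restart.
    """

    def __init__(self, actions):
        self.cum_regret = defaultdict(float)    # cumulative regret per action
        self.cum_strategy = defaultdict(float)  # accumulator for the average strategy
        self.actions = list(actions)

    def add_action(self, action):
        # A newly inserted action starts with zero cumulative regret; no restart.
        if action not in self.actions:
            self.actions.append(action)

    def strategy(self):
        # Play each action in proportion to its positive cumulative regret.
        pos = {a: max(self.cum_regret[a], 0.0) for a in self.actions}
        total = sum(pos.values())
        if total <= 0.0:
            return {a: 1.0 / len(self.actions) for a in self.actions}
        return {a: pos[a] / total for a in self.actions}

    def update(self, utility):
        # `utility` maps each currently available action to its payoff this iteration.
        sigma = self.strategy()
        expected = sum(sigma[a] * utility[a] for a in self.actions)
        for a in self.actions:
            self.cum_regret[a] += utility[a] - expected
            self.cum_strategy[a] += sigma[a]

    def average_strategy(self):
        # The time-averaged strategy is the quantity that converges in regret-matching schemes.
        total = sum(self.cum_strategy[a] for a in self.actions)
        if total <= 0.0:
            return {a: 1.0 / len(self.actions) for a in self.actions}
        return {a: self.cum_strategy[a] / total for a in self.actions}


def payoff(action):
    # Hypothetical noisy payoffs for three poker-style actions.
    table = {"fold": 0.0, "all_in": 0.3, "half_pot": 0.5}
    return table[action] + random.uniform(-0.1, 0.1)


# Begin with a coarse action set, then insert an action the agent deems important.
rm = RegretMatcher(["fold", "all_in"])
for t in range(10_000):
    if t == 2_000:
        rm.add_action("half_pot")  # added at run time, learning continues uninterrupted
    rm.update({a: payoff(a) for a in rm.actions})

print(rm.average_strategy())
```

In this toy setting the average strategy shifts most of its weight onto the late-added "half_pot" action without discarding the regrets already accumulated, which mirrors the abstract's claim of anytime improvement: early progress matches a coarse abstraction, while the final solution benefits from the finer action set.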