Abstract


Learning Graphical Game Models

Graphical games provide a compact representation of a multiagent interaction when each agent's payoff depends only on the actions of agents in its local neighborhood. We formally describe the problem of learning a graphical game model from limited observation of the payoff function, define three performance metrics for evaluating learned games, and investigate several learning algorithms based on minimizing empirical loss. Our first algorithm is a branch-and-bound search, which takes advantage of the structure of the empirical loss function to derive upper and lower bounds on loss at every node of the search tree. We also examine a greedy heuristic and local search algorithms. Our experiments with directed graphical games show that (i) when only a small sample of profile payoffs is available, branch-and-bound significantly outperforms the other methods and has competitive running time, but (ii) when many profiles are observed, the greedy heuristic is nearly optimal and considerably better than the other methods, at a fraction of branch-and-bound's running time. The results are comparable for undirected graphical games and when payoffs are sampled with noise.

Quang Duong, Yevgeniy Vorobeychik, Satinder Singh, Michael Wellman
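
The branch-and-bound idea summarized above, searching over candidate neighbor sets while bounding the empirical loss achievable below each search node, can be illustrated with a small example. The Python sketch below is an assumption for illustration only, not the authors' implementation: the function names, the squared-error loss, the cell-mean payoff fit, and the degree cap are all hypothetical choices. It learns a single agent's in-neighborhood from sampled profile payoffs and prunes using the fact that empirical fitting loss never increases as neighbors are added.

import random
from statistics import mean

# Minimal illustrative sketch (an assumption, not the authors' code):
# learn one agent's neighborhood in a directed graphical game by minimizing
# empirical squared loss over observed (profile, payoff) samples, using a
# simple branch-and-bound over candidate neighbor sets with a degree cap.

def empirical_loss(samples, neighbors):
    # Fit a local payoff table indexed by the actions of `neighbors`
    # (cell means) and return the squared error of that fit on the sample.
    cells = {}
    for profile, payoff in samples:
        key = tuple(profile[j] for j in neighbors)
        cells.setdefault(key, []).append(payoff)
    return sum(sum((p - mean(ps)) ** 2 for p in ps) for ps in cells.values())

def branch_and_bound(samples, agent, candidates, max_degree):
    # Each search node fixes a partial include/exclude decision over the
    # candidate neighbors. Empirical loss never increases when neighbors
    # are added, so loss(included + undecided) lower-bounds every
    # completion of the node and lets us prune against the best set so far.
    best_set = (agent,)
    best_loss = empirical_loss(samples, best_set)
    stack = [((agent,), tuple(candidates))]        # (included, undecided)
    while stack:
        included, undecided = stack.pop()
        if empirical_loss(samples, included + undecided) >= best_loss:
            continue                               # prune: no completion can improve
        current = empirical_loss(samples, included)
        if current < best_loss:
            best_set, best_loss = included, current
        if not undecided or len(included) - 1 == max_degree:
            continue                               # leaf node or degree cap reached
        nxt, rest = undecided[0], undecided[1:]
        stack.append((included + (nxt,), rest))    # branch: include nxt
        stack.append((included, rest))             # branch: exclude nxt
    return best_set, best_loss

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical ground truth: agent 0's payoff depends only on agents 0 and 2.
    def payoff(profile):
        return 2.0 * profile[0] - profile[2] + random.gauss(0, 0.1)
    samples = []
    for _ in range(60):
        profile = tuple(random.randint(0, 1) for _ in range(4))
        samples.append((profile, payoff(profile)))
    print(branch_and_bound(samples, agent=0, candidates=(1, 2, 3), max_degree=1))

The all-undecided-included lower bound is valid here because refining the local payoff table's partition can only reduce the within-cell squared error; the bounds actually derived in the paper may differ from this simplified version.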