Inducing Stackelberg Equilibrium through Spatio-Temporal Sequential Decision-Making in Multi-Agent Reinforcement Learning

Bin Zhang, Lijuan Li, Zhiwei Xu, Dapeng Li, Guoliang Fan

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 353-361. https://doi.org/10.24963/ijcai.2023/40

In multi-agent reinforcement learning (MARL), self-interested agents attempt to establish an equilibrium and achieve coordination that depends on the game structure. However, existing MARL approaches are mostly bound by the simultaneous-action assumption of the Markov game (MG) framework, and few works consider forming equilibrium strategies via asynchronous action coordination. Given the advantages of the Stackelberg equilibrium (SE) over the Nash equilibrium, we construct a spatio-temporal sequential decision-making structure derived from the MG and propose an N-level policy model based on a conditional hypernetwork shared by all agents. This approach allows for asymmetric training with symmetric execution, with each agent responding optimally conditioned on the decisions made by superior agents. Agents can learn heterogeneous SE policies while still sharing parameters, which reduces learning and storage costs and improves scalability as the number of agents increases. Experiments demonstrate that our method converges to the SE policies in repeated matrix game scenarios and performs well in highly complex settings, including cooperative and mixed tasks.
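
To make the mechanism concrete, the following is a minimal sketch (not the authors' code) of how a single conditional hypernetwork shared by all agents can realize level-dependent policies: the hypernetwork maps the one-hot actions of superior agents to the weights of each agent's policy head, so one parameter set yields heterogeneous best responses. All names (ConditionalHyperPolicy, hyper_w, obs_dim, etc.), the discrete-action setup, and the PyTorch framing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalHyperPolicy(nn.Module):
    """Shared policy whose final layer is generated by a hypernetwork
    conditioned on the actions already taken by superior agents."""
    def __init__(self, obs_dim, act_dim, n_agents, hidden=64):
        super().__init__()
        # Condition vector: one-hot actions of up to n_agents - 1 superiors,
        # zero-padded for agents high in the decision order.
        cond_dim = (n_agents - 1) * act_dim
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Hypernetwork: emits the weights and bias of the policy head.
        self.hyper_w = nn.Linear(cond_dim, hidden * act_dim)
        self.hyper_b = nn.Linear(cond_dim, act_dim)
        self.hidden, self.act_dim = hidden, act_dim

    def forward(self, obs, superior_actions):
        h = self.obs_encoder(obs)                       # (B, hidden)
        w = self.hyper_w(superior_actions)              # (B, hidden * act_dim)
        w = w.view(-1, self.hidden, self.act_dim)       # (B, hidden, act_dim)
        b = self.hyper_b(superior_actions)              # (B, act_dim)
        logits = torch.bmm(h.unsqueeze(1), w).squeeze(1) + b
        return torch.distributions.Categorical(logits=logits)

# Sequential execution: agent k responds conditioned on agents 1..k-1.
n_agents, obs_dim, act_dim = 3, 8, 4
policy = ConditionalHyperPolicy(obs_dim, act_dim, n_agents)
obs = torch.randn(n_agents, 1, obs_dim)
cond = torch.zeros(1, (n_agents - 1) * act_dim)        # leader sees no superiors
actions = []
for k in range(n_agents):
    dist = policy(obs[k], cond)
    a = dist.sample()
    actions.append(a.item())
    if k < n_agents - 1:                               # append action to condition
        one_hot = torch.nn.functional.one_hot(a, act_dim).float()
        cond[:, k * act_dim:(k + 1) * act_dim] = one_hot
```

Because every agent queries the same hypernetwork, the per-level heterogeneity comes entirely from the conditioning input rather than from separate networks, which is what keeps learning and storage costs flat as the number of agents grows.
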
Keywords:
Agent-based and Multi-agent Systems: MAS: Coordination and cooperation
Agent-based and Multi-agent Systems: MAS: Multi-agent learning
Game Theory and Economic Paradigms: GTEP: Noncooperative games
Machine Learning: ML: Reinforcement learning