Multi-Agent Concentrative Coordination with Decentralized Task Representation

Lei Yuan, Chenghe Wang, Jianhao Wang, Fuxiang Zhang, Feng Chen, Cong Guan, Zongzhang Zhang, Chongjie Zhang, Yang Yu

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 599-605. https://doi.org/10.24963/ijcai.2022/85

Value-based multi-agent reinforcement learning (MARL) methods hold the promise of promoting coordination in cooperative settings. Popular MARL methods focus mainly on the scalability or the representational capacity of value functions; such a learning paradigm can reduce agents' uncertainty and promote coordination. However, these methods fail to exploit the decomposability of task structure that generally exists in real-world multi-agent systems (MASs), and consequently spend substantial time searching for the optimal policy in complex scenarios. To address this limitation, we propose Multi-Agent Concentrative Coordination (MACC), a novel framework based on task decomposition with which an agent can implicitly form local groups, reducing the learning space and facilitating coordination. In MACC, agents first learn representations of subtasks from their local information and then apply an attention mechanism to concentrate on the most relevant ones. Agents can thereby pay targeted attention to specific subtasks and improve coordination. Extensive experiments on various complex multi-agent benchmarks demonstrate that MACC achieves remarkable performance compared to existing methods.
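The core mechanism the abstract describes is an agent scoring learned subtask representations against its own local information and concentrating on the most relevant ones. The sketch below illustrates that idea with a minimal scaled dot-product attention over subtask embeddings; the function name, projection matrices, and dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def subtask_attention(agent_obs, subtask_reprs, W_q, W_k):
    """Score each subtask representation against the agent's local
    observation; return attention weights and the attended summary.
    (Hypothetical helper, not the paper's implementation.)"""
    q = agent_obs @ W_q                    # query from local observation, (d_k,)
    k = subtask_reprs @ W_k                # keys from subtask reprs, (n_subtasks, d_k)
    scores = k @ q / np.sqrt(k.shape[-1])  # scaled dot-product scores
    weights = softmax(scores)              # concentrate on relevant subtasks
    summary = weights @ subtask_reprs      # weighted mix of subtask reprs
    return weights, summary

rng = np.random.default_rng(0)
obs = rng.normal(size=8)                   # an agent's local observation
subtasks = rng.normal(size=(4, 8))         # 4 learned subtask representations
W_q = rng.normal(size=(8, 16))             # illustrative projection matrices
W_k = rng.normal(size=(8, 16))
weights, summary = subtask_attention(obs, subtasks, W_q, W_k)
```

The attention weights form a distribution over subtasks, so an agent can downweight subtasks that are irrelevant to its current observation, which is one plausible reading of how MACC shrinks the effective learning space.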
Keywords:
Agent-based and Multi-agent Systems: Coordination and Cooperation
Agent-based and Multi-agent Systems: Agreement Technologies: Argumentation
Agent-based and Multi-agent Systems: Agreement Technologies: Negotiation and Contract-Based Systems
Agent-based and Multi-agent Systems: Mechanism Design
Agent-based and Multi-agent Systems: Multi-agent Learning