Learning with Generated Teammates to Achieve Type-Free Ad-Hoc Teamwork
Dong Xing, Qianhui Liu, Qian Zheng, Gang Pan
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 472-478.
https://doi.org/10.24963/ijcai.2021/66
In ad-hoc teamwork, an agent is required to cooperate with unknown teammates without prior coordination. To adapt swiftly to an unknown teammate, most works adopt a type-based approach: the agent is pre-trained with a set of pre-prepared teammate types, and the unknown teammate is then associated with a particular type. Typically, these types are collected manually, which limits previous works in both the availability and the diversity of the types they can obtain. To remove these limitations, this work achieves ad-hoc teamwork with a type-free approach. Specifically, we propose the Entropy-regularized Deep Recurrent Q-Network (EDRQN), which generates teammates automatically and uses them to pre-train our agent. These teammates are obtained from scratch and are designed to perform the task with diverse behaviors, so both their availability and their diversity are ensured. We evaluate our model on several benchmark domains of ad-hoc teamwork. The results show that even though our model has no access to any pre-prepared teammate types, it still achieves strong performance.
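To give a flavor of the entropy-regularized idea behind EDRQN, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of the tabular soft Q-learning backup that entropy regularization induces: the hard max over next-state action values is replaced by a soft maximum, alpha * logsumexp(Q/alpha), so the resulting softmax policy stays stochastic and behaviorally diverse. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def soft_q_update(Q, s, a, r, s_next, alpha=1.0, gamma=0.99, lr=0.1):
    """One tabular entropy-regularized (soft) Q-learning update.

    Q has shape (n_states, n_actions). The soft state value
    alpha * logsumexp(Q[s_next] / alpha) replaces max_a Q[s_next, a],
    which rewards policies that keep entropy over actions.
    """
    soft_value = alpha * np.log(np.sum(np.exp(Q[s_next] / alpha)))
    target = r + gamma * soft_value
    Q[s, a] += lr * (target - Q[s, a])
    return Q

def soft_policy(Q, s, alpha=1.0):
    """Softmax (maximum-entropy) policy induced by Q at state s."""
    logits = Q[s] / alpha
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

In the paper's setting, drawing teammates from such entropy-regularized policies (realized with a recurrent Q-network rather than a table) is what yields a diverse pool of generated partners without any manually collected types.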
Keywords:
Agent-based and Multi-agent Systems: Cooperative Games
Agent-based and Multi-agent Systems: Coordination and Cooperation
Uncertainty in AI: Sequential Decision Making