A Hierarchical Approach to Population Training for Human-AI Collaboration

Yi Loo, Chen Gong, Malika Meghjani

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 3011-3019. https://doi.org/10.24963/ijcai.2023/336

A major challenge for deep reinforcement learning (DRL) agents is to collaborate with novel partners that they did not encounter during training. This challenge is exacerbated when DRL agents collaborate with human partners, whose inconsistent behavior increases the variance of action responses. Recent work has shown that training a single agent as the best response to a diverse population of training partners significantly increases its robustness to novel partners. We further enhance this population-based training approach by introducing a Hierarchical Reinforcement Learning (HRL) based method for Human-AI Collaboration. Our agent learns multiple best-response policies as its low-level policies while simultaneously learning a high-level policy that acts as a manager, dynamically switching between the low-level best-response policies based on the current partner. We demonstrate that our method adapts dynamically to novel partners of different play styles and skill levels in the two-player collaborative Overcooked game environment. We also conducted a human study in the same environment to evaluate the effectiveness of our method when partnering with real human subjects. Code is available at https://gitlab.com/marvl-hipt/hipt.
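To make the manager-over-policies structure concrete, the following is a minimal PyTorch sketch of a high-level policy selecting among low-level best-response policies. Everything here (the HierarchicalAgent class, its dimensions, and the feed-forward networks) is an illustrative assumption, not the authors' implementation; the actual method is in the linked repository.

    import torch
    import torch.nn as nn

    class HierarchicalAgent(nn.Module):
        """Illustrative two-level agent: a high-level manager selects which
        low-level best-response policy acts, given the current observation."""

        def __init__(self, obs_dim: int, action_dim: int, num_policies: int, hidden: int = 64):
            super().__init__()
            # One low-level best-response policy per training-partner style
            # (a hypothetical design choice for this sketch).
            self.low_level = nn.ModuleList([
                nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                              nn.Linear(hidden, action_dim))
                for _ in range(num_policies)
            ])
            # The manager scores the low-level policies; a full implementation
            # would condition on a history of the partner's behavior rather
            # than a single observation.
            self.manager = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, num_policies))

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            # Pick the low-level policy the manager currently prefers,
            # then return that policy's action logits.
            choice = self.manager(obs).argmax(dim=-1)                       # (batch,)
            logits = torch.stack([p(obs) for p in self.low_level], dim=1)   # (batch, K, actions)
            return logits[torch.arange(obs.shape[0]), choice]               # (batch, actions)

    # Example: a batch of 4 observations, 3 candidate best-response policies.
    agent = HierarchicalAgent(obs_dim=96, action_dim=6, num_policies=3)
    action_logits = agent(torch.randn(4, 96))  # -> shape (4, 6)

In this sketch the manager's choice is recomputed every step; switching at a coarser timescale, or conditioning the manager on partner-behavior history, are natural variants of the same hierarchical structure.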
Keywords:
Humans and AI: HAI: Human-AI collaboration
Machine Learning: ML: Deep reinforcement learning
Agent-based and Multi-agent Systems: MAS: Human-agent interaction