MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning

Haolin Song, Mingxiao Feng, Wengang Zhou, Houqiang Li

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 4226-4234. https://doi.org/10.24963/ijcai.2023/470

Recent approaches have utilized self-supervised auxiliary tasks as representation learning to improve the performance and sample efficiency of vision-based reinforcement learning algorithms in single-agent settings. However, in multi-agent reinforcement learning (MARL), these techniques face challenges because each agent receives only a partial observation of an environment influenced by the other agents, resulting in observations that are correlated along the agent dimension. It is therefore necessary to incorporate agent-level information into representation learning for MARL. In this paper, we propose an effective framework called Multi-Agent Masked Attentive Contrastive Learning (MA2CL), which encourages the learned representations to be both temporally and agent-level predictive by reconstructing masked agent observations in latent space. Specifically, we use an attention-based model to recover the masked observations, and the model is trained via contrastive learning. MA2CL makes better use of contextual information at the agent level, facilitating the training of MARL agents in cooperative tasks. Extensive experiments demonstrate that our method significantly improves the performance and sample efficiency of different MARL algorithms and outperforms other methods in various vision-based and state-based scenarios.
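
To make the idea concrete, below is a minimal, self-contained PyTorch sketch of the mechanism the abstract describes: one agent's observation embedding is masked, an attention module reconstructs it from the other agents' embeddings, and the reconstruction is trained with an InfoNCE-style contrastive loss. The module names, network sizes, masking scheme, and loss details here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of masked attentive contrastive learning over agent observations.
# Assumptions (not from the paper): a linear per-agent encoder, a single
# multi-head attention layer as the reconstruction model, in-batch negatives,
# and a stop-gradient target embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedAttentiveContrastive(nn.Module):
    def __init__(self, obs_dim, embed_dim=64, n_heads=4, temperature=0.1):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)             # per-agent observation encoder
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))   # learnable token for masked agents
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.proj = nn.Linear(embed_dim, embed_dim)              # projection head for the contrastive loss
        self.temperature = temperature

    def forward(self, obs, masked_idx):
        # obs: (batch, n_agents, obs_dim); masked_idx: index of the agent to mask
        z = self.encoder(obs)                                    # (batch, n_agents, embed_dim)
        target = z[:, masked_idx].detach()                       # true embedding of the masked agent
        z_masked = z.clone()
        z_masked[:, masked_idx] = self.mask_token                # replace the masked agent's embedding
        recon, _ = self.attn(z_masked, z_masked, z_masked)       # attend over the remaining agents
        pred = self.proj(recon[:, masked_idx])                   # predicted embedding of the masked agent

        # InfoNCE: the true embedding is the positive; other samples in the batch are negatives
        pred = F.normalize(pred, dim=-1)
        target = F.normalize(target, dim=-1)
        logits = pred @ target.t() / self.temperature            # (batch, batch) similarity matrix
        labels = torch.arange(obs.size(0), device=obs.device)
        return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage on random data: 8 samples, 3 agents, 16-dim observations
    model = MaskedAttentiveContrastive(obs_dim=16)
    obs = torch.randn(8, 3, 16)
    loss = model(obs, masked_idx=1)
    loss.backward()
    print(loss.item())
```

Using other samples in the batch as negatives and stopping gradients through the target embedding are common contrastive-learning choices; the paper's actual negative sampling, target encoder, and masking strategy may differ.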
Keywords:
Machine Learning: ML: Deep reinforcement learning
Agent-based and Multi-agent Systems: MAS: Coordination and cooperation
Machine Learning: ML: Representation learning