Balancing Individual Preferences and Shared Objectives in Multiagent Reinforcement Learning
Ishan Durugkar, Elad Liebman, Peter Stone
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2505–2511.
https://doi.org/10.24963/ijcai.2020/347
In multiagent reinforcement learning scenarios, it is often the case that independent agents must jointly learn to perform a cooperative task. This paper focuses on such a scenario in which agents have individual preferences regarding how to accomplish the shared task. We consider a framework for this setting that balances individual preferences against task rewards using a linear mixing scheme. Our theoretical analysis establishes that agents can reach an equilibrium yielding optimal shared task reward even when they take individual preferences into account that are not fully aligned with this task. We then show empirically, somewhat counter-intuitively, that there exist mixing schemes that outperform a purely task-oriented baseline. Finally, we empirically investigate how to optimize the mixing scheme.
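For concreteness, a linear mixing scheme of the kind the abstract describes might take the following form (the notation here is illustrative and not necessarily the paper's own): each agent $i$ optimizes a mixed reward

\[
r_i = (1 - \alpha_i)\, r^{\text{task}} + \alpha_i\, r_i^{\text{pref}}, \qquad \alpha_i \in [0, 1],
\]

where $r^{\text{task}}$ is the shared task reward, $r_i^{\text{pref}}$ is a reward encoding agent $i$'s individual preference, and $\alpha_i$ is a mixing coefficient. Setting $\alpha_i = 0$ would recover the purely task-oriented baseline, so optimizing the mixing scheme can be read as choosing the coefficients $\alpha_i$.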
Keywords:
Machine Learning: Reinforcement Learning
Agent-based and Multi-agent Systems: Coordination and Cooperation
Agent-based and Multi-agent Systems: Multi-agent Learning