Emergent Prosociality in Multi-Agent Games Through Gifting

Woodrow Z. Wang, Mark Beliaev, Erdem Bıyık, Daniel A. Lazar, Ramtin Pedarsani, Dorsa Sadigh

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 434-442. https://doi.org/10.24963/ijcai.2021/61

Coordination is often critical to forming prosocial behaviors, i.e., behaviors that increase the overall sum of rewards received by all agents in a multi-agent game. However, state-of-the-art reinforcement learning algorithms often suffer from converging to socially less desirable equilibria when multiple equilibria exist. Previous works address this challenge with explicit reward shaping, which requires the strong assumption that agents can be forced to be prosocial. We propose using a less restrictive peer-rewarding mechanism, gifting, that guides the agents toward more socially desirable equilibria while allowing agents to remain selfish and decentralized. Gifting allows each agent to give some of their reward to other agents. We employ a theoretical framework that captures the benefit of gifting in converging to the prosocial equilibrium by characterizing the equilibria's basins of attraction in a dynamical system. With gifting, we demonstrate increased convergence of high-risk, general-sum coordination games to the prosocial equilibrium via both numerical analysis and experiments.
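
To make the gifting mechanism concrete, below is a minimal sketch based on our own illustrative assumptions rather than the paper's exact formulation: a two-player Stag Hunt, a canonical high-risk coordination game, in which each agent's action is augmented with a binary gifting decision that transfers a fixed amount of reward to the other agent. The payoff values and the gift size are hypothetical.

```python
import numpy as np

# Illustrative Stag Hunt payoffs for the row player (not taken from the paper).
# Base actions: 0 = Stag (risky, prosocial), 1 = Hare (safe, selfish).
BASE = np.array([[4.0, 0.0],
                 [3.0, 3.0]])

GIFT = 1.0  # hypothetical amount an agent may transfer to the other agent


def payoffs(a1, a2, g1, g2):
    """Rewards for both players when each picks a base action a_i
    and a binary gifting decision g_i (1 = give GIFT to the other)."""
    r1 = BASE[a1, a2] - GIFT * g1 + GIFT * g2
    r2 = BASE[a2, a1] - GIFT * g2 + GIFT * g1
    return r1, r2


# Example: both hunt stag, and player 1 gifts part of its reward.
print(payoffs(0, 0, 1, 0))  # -> (3.0, 5.0)
```

Note that gifting only redistributes reward between the agents; the sum of rewards is unchanged, so agents remain selfish and decentralized while the augmented action space can enlarge the basin of attraction of the prosocial (Stag, Stag) equilibrium.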
Keywords:
Agent-based and Multi-agent Systems: Coordination and Cooperation
Agent-based and Multi-agent Systems: Multi-agent Learning
Agent-based and Multi-agent Systems: Noncooperative Games