Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training

Peide Huang, Mengdi Xu, Fei Fang, Ding Zhao

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3099-3106. https://doi.org/10.24963/ijcai.2022/430

Robust Reinforcement Learning (RL) focuses on improving performance under model errors or adversarial attacks, which facilitates the real-life deployment of RL agents. Robust Adversarial Reinforcement Learning (RARL) is one of the most popular frameworks for robust RL. However, most of the existing literature models RARL as a zero-sum simultaneous game with Nash equilibrium as the solution concept, which can overlook the sequential nature of RL deployments, produce overly conservative agents, and induce training instability. In this paper, we introduce a novel hierarchical formulation of robust RL -- a general-sum Stackelberg game model called RRL-Stack -- to formalize the sequential nature and provide extra flexibility for robust training. We develop the Stackelberg Policy Gradient algorithm to solve RRL-Stack, leveraging Stackelberg learning dynamics that account for the adversary's response. Our method generates challenging yet solvable adversarial environments that benefit the RL agent's robust learning. Our algorithm demonstrates better training stability and robustness against different testing conditions in single-agent robotics control and multi-agent highway merging tasks.
Keywords:
Machine Learning: Deep Reinforcement Learning
Agent-based and Multi-agent Systems: Multi-agent Learning
Machine Learning: Robustness
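
The core Stackelberg ingredient mentioned in the abstract -- a leader that updates only after anticipating the follower's response -- can be illustrated with a minimal sketch. The quadratic leader/follower utilities, step sizes, and inner best-response loop below are illustrative assumptions chosen to keep the example self-contained; they are not the paper's Stackelberg Policy Gradient algorithm or its RRL-Stack objectives.

```python
# Toy sketch of Stackelberg learning dynamics with a protagonist (leader)
# and an adversary (follower). All utilities and hyperparameters here are
# assumptions for illustration, not the method from the paper.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2)   # leader (protagonist) parameters
y = rng.normal(size=2)   # follower (adversary) parameters
A = np.array([[1.0, 0.5], [0.5, 2.0]])

eta_leader, eta_follower = 0.05, 0.1

def leader_utility(x, y):
    # General-sum toy objective: the leader seeks high payoff under the
    # adversary's perturbation, with a quadratic penalty on its own parameters.
    return -x @ A @ x + x @ y

def follower_utility(x, y):
    # The adversary degrades the leader's payoff but pays a cost for large attacks.
    return -(x @ y) - 0.5 * y @ y

for step in range(200):
    # Follower approximately best-responds with a few inner gradient ascent steps.
    y_resp = y.copy()
    for _ in range(5):
        grad_y = -x - y_resp              # d follower_utility / d y
        y_resp = y_resp + eta_follower * grad_y

    # Leader updates against the follower's *response*, not the current y;
    # this anticipation is what distinguishes Stackelberg from simultaneous play.
    grad_x = -2.0 * A @ x + y_resp        # d leader_utility / d x, evaluated at y_resp
    x = x + eta_leader * grad_x
    y = y_resp                            # follower adopts its response

print("leader params:", x, "follower params:", y)
```

In a simultaneous (Nash-style) update, the leader would differentiate its utility at the current y; here it differentiates at the follower's anticipated response y_resp, which is the hierarchical structure the RRL-Stack formulation formalizes.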