Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning

Anshuka Rangi, Haifeng Xu, Long Tran-Thanh, Massimo Franceschetti

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3394-3400. https://doi.org/10.24963/ijcai.2022/471

To understand security threats to reinforcement learning (RL) algorithms, this paper studies poisoning attacks that aim to manipulate any order-optimal learning algorithm toward a targeted policy in episodic RL, and examines the potential damage of two natural types of poisoning: reward manipulation and action manipulation. We find that the effect of an attack depends crucially on whether the rewards are bounded or unbounded. In the bounded-reward setting, we show that neither reward manipulation alone nor action manipulation alone can guarantee a successful attack. By combining the two, however, the adversary can force any order-optimal learning algorithm to follow any targeted policy with \Theta(\sqrt{T}) total attack cost, which is order-optimal, without any knowledge of the underlying MDP. In contrast, in the unbounded-reward setting, reward manipulation alone suffices: the adversary can force any order-optimal learning algorithm to follow any targeted policy using \tilde{O}(\sqrt{T}) contamination. Our results reveal useful insights into what poisoning attacks can and cannot achieve, and should spur further work on the design of robust RL algorithms.
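To give intuition for the \sqrt{T} attack-cost scaling, the following is a minimal toy sketch (not the paper's construction): a two-armed bandit stands in for the episodic MDP, the learner is \epsilon-greedy with decaying exploration, and the adversary poisons only the rewards of non-target pulls. All names, parameters, and constants here are illustrative assumptions. Since the learner's greedy choice is steered to the target arm, the adversary pays only on exploratory non-target pulls, whose expected count grows on the order of \sqrt{T}, so the total contamination stays sublinear in T.

```python
import random


def run_poisoned_bandit(T=10000, target=0, seed=0):
    """Toy 2-armed bandit: arm 1 is truly better, but the adversary
    poisons rewards so an epsilon-greedy learner converges to the
    target arm 0. Returns (fraction of target pulls, total attack cost).
    Purely illustrative; not the attack construction from the paper."""
    rng = random.Random(seed)
    true_means = [0.3, 0.7]       # without the attack, arm 1 is optimal
    counts = [0, 0]
    means = [0.0, 0.0]            # learner's empirical mean estimates
    attack_cost = 0.0
    target_pulls = 0
    for t in range(1, T + 1):
        eps = min(1.0, 2.0 / t ** 0.5)    # decaying exploration rate
        if rng.random() < eps:
            arm = rng.randrange(2)
        else:
            arm = max(range(2), key=lambda a: means[a])
        reward = true_means[arm] + rng.uniform(-0.1, 0.1)
        if arm != target:
            # Poison so the non-target arm looks worse than the target;
            # the adversary's cost is the size of the perturbation.
            poisoned = min(reward, means[target] - 0.2)
            attack_cost += abs(reward - poisoned)
            reward = poisoned
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        if arm == target:
            target_pulls += 1
    return target_pulls / T, attack_cost
```

Under these assumptions the learner ends up pulling the target arm in the vast majority of rounds, while the adversary's total contamination remains a vanishing fraction of T, mirroring the sublinear attack-cost regime discussed in the abstract.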
Keywords:
Machine Learning: Online Learning
Machine Learning: Reinforcement Learning