Decentralized Anomaly Detection in Cooperative Multi-Agent Reinforcement Learning

Kiarash Kazari, Ezzeldin Shereen, György Dán

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 162-170. https://doi.org/10.24963/ijcai.2023/19

We consider the problem of detecting adversarial attacks against cooperative multi-agent reinforcement learning. We propose a decentralized scheme that allows agents to detect the abnormal behavior of a single compromised agent. Our approach is based on a recurrent neural network (RNN), trained during cooperative learning, that predicts the action distribution of other agents based on local observations. The predicted distribution is used to compute a normality score for each agent, which enables the detection of misbehaving peers. To explore the robustness of the proposed detection scheme, we formulate the worst-case attack against it as a constrained reinforcement learning problem, and we propose to compute an attack policy by optimizing the corresponding dual function using reinforcement learning. Extensive simulations on various multi-agent benchmarks show the effectiveness of the proposed scheme in detecting state-of-the-art attacks and in limiting the impact of undetectable attacks.
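To illustrate the detection side of the abstract, the sketch below shows one plausible realization in PyTorch: a GRU maps an agent's local observation history to a predicted distribution over a peer's discrete actions, and the normality score is taken as the average predicted log-likelihood of the peer's observed actions. The class name, tensor shapes, and the threshold `tau` are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PeerActionPredictor(nn.Module):
    """GRU mapping a local observation sequence to a predicted distribution
    over a peer agent's discrete actions (shapes are illustrative)."""
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, h=None):
        # obs_seq: (batch, time, obs_dim)
        out, h = self.gru(obs_seq, h)
        logits = self.head(out)                       # (batch, time, n_actions)
        return torch.log_softmax(logits, dim=-1), h

def normality_score(log_probs, peer_actions):
    """Average predicted log-likelihood of the peer's observed actions;
    low scores indicate abnormal behavior."""
    ll = log_probs.gather(-1, peer_actions.unsqueeze(-1)).squeeze(-1)
    return ll.mean(dim=1)                             # one score per trajectory

# Usage: flag the peer when its score over a window drops below a threshold.
model = PeerActionPredictor(obs_dim=16, n_actions=5)
obs = torch.randn(1, 20, 16)                          # 20 local observations
acts = torch.randint(0, 5, (1, 20))                   # peer's observed actions
logp, _ = model(obs)
score = normality_score(logp, acts)
anomaly = (score < -2.5).item()                       # tau = -2.5 is illustrative
```

For the robustness analysis, the abstract formulates the worst-case attack as a constrained RL problem solved through its dual function. A minimal sketch of the corresponding Lagrangian dual-ascent loop, assuming hypothetical helpers `train_policy` and `rollout_cost` supplied by the caller, might look as follows:

```python
def train_worst_case_attack(env, train_policy, rollout_cost,
                            delta=0.05, lr_lambda=0.01, iters=100):
    """Dual ascent for a constrained attack: maximize adversarial return
    subject to E[detection cost] <= delta (all names are illustrative).

    train_policy(env, lam) -> policy trained by RL on the shaped reward
        r_adv(s, a) - lam * c(s, a), where c is a per-step detection cost
        (e.g., the drop it induces in the defenders' normality scores).
    rollout_cost(env, policy) -> Monte Carlo estimate of E[sum of c].
    """
    lam = 0.0
    policy = None
    for _ in range(iters):
        policy = train_policy(env, lam)        # inner RL problem on the Lagrangian
        avg_cost = rollout_cost(env, policy)   # estimate the constraint value
        lam = max(0.0, lam + lr_lambda * (avg_cost - delta))  # dual ascent step
    return policy, lam
```

The dual variable `lam` trades off attack impact against detectability: as the estimated detection cost exceeds the budget `delta`, `lam` grows and the inner RL problem is pushed toward stealthier policies.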
Keywords:
Agent-based and Multi-agent Systems: MAS: Multi-agent learning
AI Ethics, Trust, Fairness: ETF: Trustworthy AI