Imitative Attacker Deception in Stackelberg Security Games

Thanh Nguyen, Haifeng Xu

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 528-534. https://doi.org/10.24963/ijcai.2019/75

To address the challenge of uncertainty regarding the attacker’s payoffs, capabilities, and other characteristics, recent work in security games has focused on learning the optimal defense strategy from observed attack data. This raises a natural concern that the strategic attacker may mislead the defender by deceptively reacting to the learning algorithms. This paper focuses on understanding how such attacker deception affects the game equilibrium. We examine a basic deception strategy termed imitative deception, in which the attacker simply pretends to have a different payoff, assuming his true payoff is unknown to the defender. We provide a clean characterization of the game equilibrium as well as optimal algorithms to compute the equilibrium. Our experiments illustrate significant defender loss due to imitative attacker deception, suggesting a potential side effect of learning from the attacker.
Keywords:
Agent-based and Multi-agent Systems: Noncooperative Games
Agent-based and Multi-agent Systems: Multi-agent Planning
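
To make the setting in the abstract concrete, the following is a minimal toy sketch, not the paper's model or algorithm: a two-target Stackelberg security game with one defender resource, in Python. Every payoff number, the FAKE_ATTACKER table, and the grid-search routine commitment() are hypothetical choices made here for illustration; the fake payoff is hand-picked rather than optimally computed, and the actual equilibrium characterization and algorithms are given in the paper.

# Toy sketch of imitative attacker deception in a two-target security game.
# All numbers are hypothetical; the fake payoffs are hand-picked, not optimized.

# Payoffs per target, stored as (utility if covered, utility if uncovered).
TRUE_ATTACKER = {0: (-1.0, 5.0), 1: (-1.0, 4.0)}    # attacker's real payoffs
FAKE_ATTACKER = {0: (-1.0, 5.0), 1: (-1.0, 17.0)}   # payoffs the attacker imitates
DEFENDER      = {0: (1.0, -1.0), 1: (2.0, -10.0)}   # defender's own payoffs

def expected(payoffs, target, c):
    """Expected utility of an attack on `target` when it is covered with prob. c."""
    covered, uncovered = payoffs[target]
    return c * covered + (1 - c) * uncovered

def best_response(coverage, attacker_payoffs):
    """Target maximizing the attacker's expected utility (ties -> lower index)."""
    return max(range(len(coverage)),
               key=lambda t: expected(attacker_payoffs, t, coverage[t]))

def commitment(attacker_payoffs, grid=10001):
    """Crude grid approximation of the defender's optimal commitment against
    the attacker payoffs she believes in (one resource split over two targets)."""
    best_cov, best_u = None, float("-inf")
    for i in range(grid):
        c0 = i / (grid - 1)
        cov = (c0, 1.0 - c0)
        t = best_response(cov, attacker_payoffs)
        u = expected(DEFENDER, t, cov[t])
        if u > best_u:
            best_cov, best_u = cov, u
    return best_cov

def realized(coverage, played_payoffs):
    """Defender and (true) attacker utilities when the attacker responds
    according to `played_payoffs` (true payoffs if honest, fake if imitating)."""
    t = best_response(coverage, played_payoffs)
    return expected(DEFENDER, t, coverage[t]), expected(TRUE_ATTACKER, t, coverage[t])

# Honest benchmark: the defender learns the attacker's true payoffs.
d_honest, a_honest = realized(commitment(TRUE_ATTACKER), TRUE_ATTACKER)
# Imitative deception: the defender learns the fake payoffs, which the attacker keeps imitating.
d_deceived, a_deceived = realized(commitment(FAKE_ATTACKER), FAKE_ATTACKER)

print(f"honest attacker:    defender {d_honest:.2f}, attacker (true) {a_honest:.2f}")
print(f"imitative attacker: defender {d_deceived:.2f}, attacker (true) {a_deceived:.2f}")

With these hypothetical numbers the printout shows the defender doing worse, and the attacker doing better, under imitative play than under honest play, which is the qualitative effect the abstract reports.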