Partial Adversarial Behavior Deception in Security Games

Thanh H. Nguyen, Arunesh Sinha, He He

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 283-289. https://doi.org/10.24963/ijcai.2020/40

Learning attacker behavior is an important research topic in security games, as security agencies are often uncertain about attackers' decision making. Previous work has focused on developing various behavioral models of attackers based on historical attack data. However, a clever attacker can manipulate its attacks to cause such attack-driven learning to fail, leading to ineffective defense strategies. We study attacker behavior deception with three main contributions. First, we propose a new model, named the partial behavior deception model, in which a deceptive attacker (one among multiple attackers) controls a portion of the attacks. Our model captures real-world security scenarios such as wildlife protection, in which multiple poachers are present. Second, we introduce a new scalable algorithm, GAMBO, to compute an optimal deception strategy for the deceptive attacker. Our algorithm employs projected gradient descent and uses the implicit function theorem to compute the gradient. Third, we conduct a comprehensive set of experiments, showing that attacker deception yields a significant benefit for the attacker and a corresponding loss for the defender.
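
The following is a minimal sketch, not the authors' GAMBO implementation, of the two algorithmic ingredients the abstract names: differentiating through the defender's learning problem via the implicit function theorem, and a projected gradient step for the deceptive attacker. The single-parameter quantal-response model, the deception objective, and all numeric values are illustrative assumptions, not taken from the paper; the projection keeps the deceptive attack data on the probability simplex.

```python
import jax
import jax.numpy as jnp

def fit_loss(w, z, values):
    # Defender's fitting loss: negative log-likelihood of the observed
    # attack distribution z under a toy quantal-response model in which
    # P(attack target t) is proportional to exp(w * values[t]).
    log_p = jax.nn.log_softmax(w * values)
    return -jnp.dot(z, log_p)

def fit_w(z, values, steps=500, lr=0.1):
    # Inner problem: the defender learns the behavior parameter w from
    # the (partially deceptive) attack data z by gradient descent.
    w, g = 0.0, jax.grad(fit_loss)
    for _ in range(steps):
        w = w - lr * g(w, z, values)
    return w

def implicit_dw_dz(w_star, z, values):
    # Implicit function theorem at the stationarity condition
    # dL/dw(w*, z) = 0:  dw*/dz = -(d2L/dw2)^(-1) * d2L/(dw dz).
    d2_ww = jax.hessian(fit_loss, argnums=0)(w_star, z, values)
    d2_wz = jax.jacobian(jax.grad(fit_loss, argnums=0), argnums=1)(
        w_star, z, values)
    return -d2_wz / d2_ww

def deception_objective(w):
    # Assumed surrogate for the deceptive attacker's utility: push the
    # defender's learned parameter toward a misleading value (here -1.0).
    return -(w + 1.0) ** 2

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based),
    # so z remains a valid attack distribution after each step.
    u = jnp.sort(v)[::-1]
    css = jnp.cumsum(u)
    k = jnp.arange(1, v.size + 1)
    rho = jnp.max(jnp.where(u + (1.0 - css) / k > 0, k, 0))
    theta = (css[rho - 1] - 1.0) / rho
    return jnp.maximum(v - theta, 0.0)

values = jnp.array([1.0, 2.0, 3.0, 4.0, 5.0])  # illustrative target values
z = jnp.ones_like(values) / values.size        # deceptive attack distribution
for _ in range(50):
    # Outer loop: projected gradient ascent over z (equivalently, projected
    # gradient descent on the negated objective). Chain rule through the
    # learned parameter: dJ/dz = dJ/dw * dw*/dz.
    w_star = fit_w(z, values)
    grad_z = jax.grad(deception_objective)(w_star) * implicit_dw_dz(
        w_star, z, values)
    z = project_simplex(z + 0.1 * grad_z)

print("deceptive attack distribution:", z)
print("defender's learned parameter:", fit_w(z, values))
```

In this sketch, the learned parameter w* is treated as an implicit function of the attack data z through the first-order optimality condition of the fitting loss, which is the same device the abstract describes for gradient computation; the toy example drives the defender toward a misleading behavior estimate while keeping the manipulated data a valid distribution.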
Keywords:
Agent-based and Multi-agent Systems: Algorithmic Game Theory
Agent-based and Multi-agent Systems: Noncooperative Games
Machine Learning: Adversarial Machine Learning