Generating Behavior-Diverse Game AIs with Evolutionary Multi-Objective Deep Reinforcement Learning
Ruimin Shen, Yan Zheng, Jianye Hao, Zhaopeng Meng, Yingfeng Chen, Changjie Fan, Yang Liu
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3371-3377.
https://doi.org/10.24963/ijcai.2020/466
Generating diverse behaviors for game artificial intelligence (Game AI) has long been recognized as a challenging task in the game industry. Designing a Game AI with a satisfying behavioral characteristic (style) depends heavily on domain knowledge and is hard to achieve manually. Deep reinforcement learning sheds light on advancing automatic Game AI design. However, most existing approaches focus on creating a superhuman Game AI, ignoring the importance of behavioral diversity in games. To bridge this gap, we introduce a new framework, named EMOGI, which can automatically generate desirable styles with almost no domain knowledge. More importantly, EMOGI succeeds in creating a range of diverse styles, providing behavior-diverse Game AIs. Evaluations on Atari games and real commercial games indicate that, compared to existing algorithms, EMOGI performs better at generating diverse behaviors and significantly improves the efficiency of Game AI design.
Keywords:
Machine Learning Applications: Applications of Reinforcement Learning
Machine Learning Applications: Game Playing
Heuristic Search and Game Playing: Game Playing and Machine Learning