Obstacle Tower: A Generalization Challenge in Vision, Control, and Planning

Arthur Juliani, Ahmed Khalifa, Vincent-Pierre Berges, Jonathan Harper, Ervin Teng, Hunter Henry, Adam Crespi, Julian Togelius, Danny Lange

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2684-2691. https://doi.org/10.24963/ijcai.2019/373

The rapid pace of recent research in AI has been driven in part by the presence of fast and challenging simulation environments. These environments often take the form of games, with tasks ranging from simple board games to competitive video games. We propose a new benchmark, Obstacle Tower: a high-fidelity, 3D, third-person, procedurally generated environment. An agent in Obstacle Tower must learn to solve both low-level control and high-level planning problems in tandem while learning from pixels and a sparse reward signal. Unlike other benchmarks such as the Arcade Learning Environment, evaluation of agent performance in Obstacle Tower is based on the agent's ability to perform well on previously unseen instances of the environment. In this paper we outline the environment and provide a set of baseline results produced by current state-of-the-art deep RL methods as well as human players. These methods fail to produce agents capable of performing near human level.
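
The generalization-focused evaluation described above amounts to measuring episodic return on procedurally generated towers whose seeds were held out from training. Below is a minimal sketch of such an evaluation loop, assuming the Gym-style ObstacleTowerEnv wrapper from the environment's public release; the constructor arguments, seed method, and policy callable shown here are illustrative assumptions rather than details specified in this abstract.

    # Hypothetical sketch: scoring an agent on unseen procedural instances.
    # The obstacle_tower_env package and ObstacleTowerEnv arguments are
    # assumptions based on the environment's public release, not this paper.
    from obstacle_tower_env import ObstacleTowerEnv

    def evaluate(policy, binary_path, held_out_seeds):
        """Average episodic return over tower seeds excluded from training."""
        env = ObstacleTowerEnv(binary_path, retro=True, realtime_mode=False)
        returns = []
        for seed in held_out_seeds:
            env.seed(seed)                  # select one unseen generated tower
            obs = env.reset()
            done, total = False, 0.0
            while not done:
                action = policy(obs)        # agent acts from pixels alone
                obs, reward, done, info = env.step(action)
                total += reward             # sparse reward, mainly floor progress
            returns.append(total)
        env.close()
        return sum(returns) / len(returns)

Reporting the mean return over a set of held-out seeds, rather than over the training seeds, is what distinguishes this evaluation protocol from the single fixed instances used in benchmarks such as the Arcade Learning Environment.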
Keywords:
Machine Learning: Reinforcement Learning
Agent-based and Multi-agent Systems: Agent-Based Simulation and Emergence
Heuristic Search and Game Playing: General Game Playing and General Video Game Playing
Heuristic Search and Game Playing: Game Playing and Machine Learning