Search-Based Testing of Reinforcement Learning

Martin Tappler, Filip Cano Córdoba, Bernhard K. Aichernig, Bettina Könighofer

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 503-510. https://doi.org/10.24963/ijcai.2022/72

Evaluating deep reinforcement learning (RL) agents is inherently challenging: the opaqueness of learned policies and the stochastic nature of both agents and environments make the behavior of deep RL agents difficult to test. We present a search-based testing framework that enables a wide range of novel analysis capabilities for evaluating the safety and performance of deep RL agents. For safety testing, our framework uses a search algorithm to find a reference trace that solves the RL task. The states at which the search backtracks, called boundary states, represent safety-critical situations. We create safety test suites that evaluate how well the RL agent escapes safety-critical situations near these boundary states. For robust performance testing, we create a diverse set of traces via fuzz testing. These fuzz traces bring the agent into a wide variety of potentially unknown states, from which the agent's average performance is compared to the average performance achieved along the fuzz traces. We apply our search-based testing approach to RL agents for Nintendo's Super Mario Bros.
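The boundary-state idea from the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual algorithm: a toy deterministic gridworld stands in for the game, and a depth-first search stands in for the paper's search procedure. The states where the search must backtrack are collected as "boundary states", i.e. candidate safety-critical starting points for a safety test suite.

```python
# Illustrative sketch only (not the paper's implementation): depth-first
# search for a reference trace through a hypothetical gridworld task.
from typing import List, Optional, Set, Tuple

State = Tuple[int, int]

# Hypothetical task: reach G from S without entering unsafe cells X.
GRID = [
    "S.G",
    ".XX",
    "..X",
]

def neighbors(s: State, rows: int, cols: int):
    """Yield the in-bounds grid cells adjacent to s."""
    r, c = s
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield (nr, nc)

def dfs_reference_trace(grid: List[str]) -> Tuple[Optional[List[State]], List[State]]:
    """Return (reference trace from S to G, boundary states of the search)."""
    rows, cols = len(grid), len(grid[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    start = next(s for s in cells if grid[s[0]][s[1]] == "S")
    goal = next(s for s in cells if grid[s[0]][s[1]] == "G")
    boundary: List[State] = []
    visited: Set[State] = set()

    def dfs(s: State, trace: List[State]) -> Optional[List[State]]:
        if s == goal:
            return trace
        visited.add(s)
        for n in neighbors(s, rows, cols):
            if n in visited or grid[n[0]][n[1]] == "X":
                continue  # never step onto unsafe cells
            result = dfs(n, trace + [n])
            if result is not None:
                return result
        boundary.append(s)  # dead end: the search backtracks here
        return None

    return dfs(start, [start]), boundary

trace, boundary = dfs_reference_trace(GRID)
# A safety test suite would now reset the agent into (states near) each
# boundary state and check whether it still escapes the unsafe region.
```

Here the search first explores the dead-end corridor along the unsafe cells, backtracks out of it (recording those cells as boundary states), and then finds the reference trace along the top row. In the paper's setting, boundary states would instead be concrete game states from which the agent must avoid failure.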
Keywords:
Agent-based and Multi-agent Systems: Formal Verification, Validation and Synthesis
AI Ethics, Trust, Fairness: Safety & Robustness
Machine Learning: Deep Reinforcement Learning
Search: Search and Machine Learning