Robustly Learning Composable Options in Deep Reinforcement Learning

Akhil Bagaria, Jason Senthil, Matthew Slivinski, George Konidaris

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2161-2169. https://doi.org/10.24963/ijcai.2021/298

Hierarchical reinforcement learning (HRL) is only effective for long-horizon problems when high-level skills can be reliably executed in sequence. Unfortunately, learning reliably composable skills is difficult, because all the components of every skill are constantly changing during learning. We propose three methods for improving the composability of learned skills: representing skill initiation regions using a combination of pessimistic and optimistic classifiers; learning re-targetable policies that are robust to non-stationary subgoal regions; and learning robust option policies using model-based RL. We test these improvements on four sparse-reward maze navigation tasks involving a simulated quadrupedal robot. Each method successively improves the robustness of a baseline skill discovery method, substantially outperforming state-of-the-art flat and hierarchical methods.
Keywords:
Machine Learning: Deep Reinforcement Learning
Robotics: Learning in Robotics
Machine Learning: Reinforcement Learning
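
The first improvement named in the abstract, representing a skill's initiation region with both a pessimistic and an optimistic classifier, can be illustrated with a minimal sketch. The class name `InitiationSet`, the choice of SVM / One-Class SVM models, and all method names below are illustrative assumptions, not the authors' implementation; the paper's actual classifiers and training criteria may differ.

```python
# Hypothetical sketch: an option's initiation region represented by two classifiers.
# The optimistic classifier marks states where execution may be attempted (useful
# while the option is still learning); the pessimistic classifier marks states from
# which execution has actually succeeded (useful when chaining skills reliably).
import numpy as np
from sklearn.svm import SVC, OneClassSVM


class InitiationSet:
    def __init__(self):
        self.optimistic = SVC(gamma="scale")                    # attempted-success vs attempted-failure
        self.pessimistic = OneClassSVM(nu=0.1, gamma="scale")   # fit only to verified successes
        self.attempt_states, self.attempt_labels = [], []
        self.success_states = []

    def record(self, start_state, succeeded):
        """Log the state an execution started from and whether it reached its subgoal."""
        self.attempt_states.append(start_state)
        self.attempt_labels.append(1 if succeeded else 0)
        if succeeded:
            self.success_states.append(start_state)

    def fit(self):
        X = np.array(self.attempt_states)
        y = np.array(self.attempt_labels)
        if len(set(y)) > 1:
            # Optimistic: trained on every attempted start state, labeled by outcome.
            self.optimistic.fit(X, y)
        if len(self.success_states) >= 2:
            # Pessimistic: one-class fit restricted to states that led to success.
            self.pessimistic.fit(np.array(self.success_states))

    def can_attempt(self, state):
        return self.optimistic.predict([state])[0] == 1

    def can_chain_from(self, state):
        return self.pessimistic.predict([state])[0] == 1        # OneClassSVM returns +1 for inliers


if __name__ == "__main__":
    # Toy usage with 2-D states and a synthetic success criterion.
    rng = np.random.default_rng(0)
    init = InitiationSet()
    for _ in range(100):
        s = rng.uniform(-1, 1, size=2)
        init.record(s, succeeded=bool(s[0] > 0))
    init.fit()
    print(init.can_attempt([0.5, 0.0]), init.can_chain_from([0.5, 0.0]))
```

The split matters for composability: the optimistic region keeps generating training experience for the option, while the pessimistic region is the conservative footprint that downstream skills can safely target when executing skills in sequence.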