Option Transfer and SMDP Abstraction with Successor Features
Dongge Han, Sebastian Tschiatschek
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3036-3042.
https://doi.org/10.24963/ijcai.2022/421
Abstraction plays an important role in the generalisation of knowledge and skills and is key to sample-efficient learning. In this work, we study joint temporal and state abstraction in reinforcement learning, where temporally-extended actions in the form of options induce temporal abstractions, while aggregation of similar states with respect to abstract options induces state abstractions. Many existing abstraction schemes ignore the interplay of state and temporal abstraction. Consequently, the considered option policies often cannot be directly transferred to new environments due to changes in the state space and transition dynamics. To address this issue, we propose a novel abstraction scheme building on successor features. This includes an algorithm for transferring abstract options across different environments and a state abstraction mechanism that allows us to perform efficient planning with the transferred options.
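To make the key building block concrete, the following is a minimal tabular sketch of successor features (SFs), the representation the abstract refers to. The 4-state chain MDP, the feature map, and the fixed policy here are illustrative assumptions, not taken from the paper; the sketch only shows the core SF property that enables transfer, namely that the value of a fixed policy under a new linear reward is a dot product with new reward weights.

```python
import numpy as np

# Illustrative setup (assumptions, not from the paper):
# a 4-state chain with a deterministic "move right" policy.
n_states, n_features, gamma = 4, 2, 0.9

# P[s] = successor state under the fixed policy; state 3 self-loops.
P = np.array([1, 2, 3, 3])

# State features phi(s); rewards are assumed linear: r(s) = phi(s) @ w.
phi = np.array([[1.0, 0.0],
                [0.0, 0.0],
                [0.0, 0.0],
                [0.0, 1.0]])

# Successor features satisfy psi(s) = phi(s) + gamma * psi(P[s]);
# compute them by fixed-point iteration.
psi = np.zeros((n_states, n_features))
for _ in range(200):
    psi = phi + gamma * psi[P]

# Transfer: evaluating the same policy under a *new* reward requires
# no re-learning of psi, only new weights w.
w_new = np.array([0.0, 1.0])  # reward only in the final state
values = psi @ w_new
```

Here `values` recovers the discounted return of the fixed policy under the new reward (e.g. 1/(1-gamma) = 10 in the absorbing final state), which is the mechanism that lets option policies be re-evaluated cheaply when the reward changes.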
Keywords:
Machine Learning: Reinforcement Learning
Planning and Scheduling: Hierarchical Planning