Self-Predictive Dynamics for Generalization of Vision-based Reinforcement Learning

Kyungsoo Kim, Jeongsoo Ha, Yusung Kim

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3150-3156. https://doi.org/10.24963/ijcai.2022/437

Vision-based reinforcement learning requires efficient and robust representations of image-based observations, especially when the images contain distracting (task-irrelevant) elements such as shadows, clouds, and light. This becomes even more important when such distractions are not present during training. We design a Self-Predictive Dynamics (SPD) method that extracts task-relevant features efficiently, even from observations unseen during training. SPD applies weak and strong augmentations in parallel and learns representations by predicting inverse and forward transitions across the two augmented views. On a set of MuJoCo visual control tasks and an autonomous driving task (CARLA), SPD outperforms prior methods on complex observations and significantly improves generalization to unseen observations. Our code is available at https://github.com/unigary/SPD.
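The core idea — encoding weakly and strongly augmented views and training the encoder with forward and inverse transition prediction — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy dimensions, linear encoder and heads, noise-based stand-ins for the augmentations, and the specific pairing of views in each loss are all assumptions; a real implementation would use a CNN encoder, image augmentations such as random shift and cutout, and MLP prediction heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustrative only).
obs_dim, act_dim, feat_dim = 16, 4, 8

# Linear stand-ins for the encoder and the forward/inverse heads.
W_enc = rng.normal(scale=0.1, size=(obs_dim, feat_dim))
W_fwd = rng.normal(scale=0.1, size=(feat_dim + act_dim, feat_dim))
W_inv = rng.normal(scale=0.1, size=(2 * feat_dim, act_dim))

def encode(obs):
    return obs @ W_enc

def weak_aug(obs):
    # Stand-in for a mild image augmentation (e.g. small random shift).
    return obs + rng.normal(scale=0.01, size=obs.shape)

def strong_aug(obs):
    # Stand-in for a heavy augmentation (e.g. cutout, color jitter).
    return obs + rng.normal(scale=0.3, size=obs.shape)

def spd_losses(obs, act, next_obs):
    # Encode both augmented views of the current and next observation.
    z_s = encode(strong_aug(obs))
    zn_w = encode(weak_aug(next_obs))

    # Forward prediction: (strong view, action) -> next-step weak view.
    pred_next = np.concatenate([z_s, act], axis=-1) @ W_fwd
    fwd_loss = np.mean((pred_next - zn_w) ** 2)

    # Inverse prediction: (strong view, next-step weak view) -> action.
    pred_act = np.concatenate([z_s, zn_w], axis=-1) @ W_inv
    inv_loss = np.mean((pred_act - act) ** 2)

    return fwd_loss, inv_loss

# A random toy batch of transitions.
obs = rng.normal(size=(32, obs_dim))
act = rng.normal(size=(32, act_dim))
next_obs = rng.normal(size=(32, obs_dim))
fwd, inv = spd_losses(obs, act, next_obs)
print(fwd >= 0.0, inv >= 0.0)
```

In training, the two losses would be summed and minimized jointly with the RL objective so that the shared encoder keeps only features that are predictive of the dynamics, discarding task-irrelevant distractions.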
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Representation learning
Machine Learning: Self-supervised Learning