Multi-policy Grounding and Ensemble Policy Learning for Transfer Learning with Dynamics Mismatch

Hyun-Rok Lee, Ram Ananth Sreenivasan, Yeonjeong Jeong, Jongseong Jang, Dongsub Shim, Chi-Guhn Lee

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3171-3177. https://doi.org/10.24963/ijcai.2022/440

We propose a new transfer learning algorithm for tasks with different dynamics. The algorithm solves an Imitation from Observation (IfO) problem to ground the source environment to the target task before learning an optimal policy in the grounded environment. The learned policy is then deployed in the target task without additional training. A distinctive feature of our algorithm is the use of multiple rollout policies during training, with the goal of grounding the environment more globally; hence it is named Multi-Policy Grounding (MPG). The quality of the final policy is further enhanced via ensemble policy learning. We demonstrate the superiority of the proposed algorithm analytically and numerically. Numerical studies show that the multi-policy approach achieves grounding comparable to a single-policy approach using only a fraction of the target samples, so the algorithm maintains the quality of the obtained policy even when the number of interactions with the target environment is extremely small.
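The grounding step described above can be illustrated with a toy sketch. Everything here is hypothetical and not from the paper: the 1-D dynamics, the constant rollout policies, and the linear action transformation are stand-ins chosen only to show how transitions collected under multiple rollout policies can be used to fit a correction that makes the source dynamics mimic the target dynamics.

```python
import random

def target_step(s, a):
    # Hypothetical target dynamics with a mismatch factor of 0.5
    return s + 0.5 * a

def source_step(s, a):
    # Hypothetical source (simulator) dynamics
    return s + a

# Collect target transitions under several rollout policies; using more than
# one policy covers the state-action space more globally (the multi-policy idea).
random.seed(0)
policies = [lambda s, k=k: k for k in (-1.0, 0.5, 1.0)]  # toy constant policies
data = []
for pi in policies:
    s = 0.0
    for _ in range(20):
        a = pi(s) + random.gauss(0.0, 0.1)  # exploration noise
        s_next = target_step(s, a)
        data.append((s, a, s_next))
        s = s_next

# Ground the source with a linear action transformation g(a) = w * a, fitting w
# by least squares so that source_step(s, w * a) matches the observed target
# transitions: s + w*a ≈ s_next.
num = sum(a * (s_next - s) for s, a, s_next in data)
den = sum(a * a for _, a, _ in data)
w = num / den  # recovers the mismatch factor (~0.5 in this toy setup)
print(w)
```

A policy trained in the grounded simulator (i.e., against `source_step(s, w * a)`) would then transfer to the target without further training; the actual algorithm learns this correction adversarially from observations rather than by least squares.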
Keywords:
Machine Learning: Multi-task and Transfer Learning
Machine Learning: Deep Reinforcement Learning
Machine Learning: Ensemble Methods
Machine Learning: Generative Adversarial Networks
Machine Learning: Reinforcement Learning