Inter-Task Similarity for Lifelong Reinforcement Learning in Heterogeneous Tasks

Sergio A. Serrano

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 4915-4916. https://doi.org/10.24963/ijcai.2021/689

Reinforcement learning (RL) is a learning paradigm in which an agent interacts with the environment it inhabits and learns through trial and error. By letting the agent acquire knowledge from its own experience, RL has been successfully applied to complex domains such as robotics. However, for non-trivial problems, training an RL agent can require very long training times. Lifelong machine learning (LML) is a learning setting in which the agent solves tasks sequentially, leveraging knowledge accumulated from previously solved tasks to learn a new one better or faster. Most LML works rely heavily on the assumption that tasks are similar to each other. However, this may not hold in domains with a high degree of task diversity that could nonetheless benefit from a lifelong learning approach, e.g., service robotics. Therefore, in this research we will address the problem of learning to solve a sequence of heterogeneous RL tasks (i.e., tasks that differ in their state-action spaces).
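To make the notion of heterogeneous tasks concrete, the following is a minimal sketch (all names and the `TaskSpec` structure are illustrative, not from the paper) of how two RL tasks with differing state-action spaces might be described and compared:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskSpec:
    """Hypothetical summary of an RL task's interface."""
    name: str
    state_dim: int        # dimensionality of the state space
    actions: tuple        # labels of the discrete actions

def heterogeneous(a: TaskSpec, b: TaskSpec) -> bool:
    """Two tasks are heterogeneous if their state or action spaces differ."""
    return a.state_dim != b.state_dim or set(a.actions) != set(b.actions)

# Two example service-robotics tasks with different interfaces:
navigate = TaskSpec("navigate", state_dim=4,
                    actions=("left", "right", "forward"))
grasp = TaskSpec("grasp", state_dim=7,
                 actions=("open", "close", "lift"))

print(heterogeneous(navigate, grasp))  # True: spaces differ
```

Under this view, transferring knowledge between `navigate` and `grasp` requires more than reusing a value function directly, since their inputs and outputs are not interchangeable, which is precisely the setting the research targets.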
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Reinforcement Learning
Machine Learning: Incremental Learning
Robotics: Learning in Robotics