Building Portable Options: Skill Transfer in Reinforcement Learning

George Konidaris, Andrew G. Barto

The options framework provides methods for reinforcement learning agents to build new high-level skills. However, since options are usually learned in the same state space as the problem the agent is solving, they cannot be used in other tasks that are similar but have different state spaces. We introduce the notion of learning options in agent-space, the space generated by a feature set that is present and retains the same semantics across successive problem instances, rather than in problem-space. Agent-space options can be reused in later tasks that share the same agent-space but have different problem-spaces. We present experimental results demonstrating the use of agent-space options in building transferable skills, and show that they perform best when used in conjunction with problem-space options.
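The distinction between agent-space and problem-space can be made concrete with a small sketch. All names below are hypothetical illustrations, not the paper's implementation: an option is defined entirely over agent-space features (here, a single agent-centric distance reading), so the same option object applies in two tasks whose problem-space state representations differ.

```python
class AgentSpaceOption:
    """An option whose components are functions of agent-space
    features rather than of a task-specific state space."""

    def __init__(self, initiation, policy, termination):
        self.initiation = initiation    # agent features -> bool
        self.policy = policy            # agent features -> action
        self.termination = termination  # agent features -> bool

    def can_start(self, features):
        return self.initiation(features)

    def act(self, features):
        return self.policy(features)

    def done(self, features):
        return self.termination(features)


# Two tasks with different problem spaces but a shared agent-space
# feature: a distance-to-beacon reading (hypothetical sensor).
def task_a_features(state):
    # Problem space A: (x, y) grid position; beacon at (3, 3).
    x, y = state
    return abs(x - 3) + abs(y - 3)

def task_b_features(state):
    # Problem space B: scalar corridor position; beacon at cell 7.
    return abs(state - 7)


# One option, usable unchanged in both tasks: approach the beacon
# until it is at most one step away.
approach_beacon = AgentSpaceOption(
    initiation=lambda d: d > 0,
    policy=lambda d: "move_toward_beacon",
    termination=lambda d: d <= 1,
)

print(approach_beacon.can_start(task_a_features((0, 0))))  # True
print(approach_beacon.done(task_b_features(6)))            # True
```

Because the option never touches the problem-space state directly, it transfers across the two tasks; a problem-space option, by contrast, would be tied to one task's state representation.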