Abstract

Imitation Learning in Relational Domains: A Functional-Gradient Boosting Approach
Sriraam Natarajan, Saket Joshi, Prasad Tadepalli, Kristian Kersting, Jude Shavlik
Imitation learning refers to the problem of learning how to behave by observing a teacher in action. We consider imitation learning in relational domains, in which there is a varying number of objects and relations among them. In prior work, simple relational policies were learned by viewing imitation learning as supervised learning of a function from states to actions. For propositional worlds, functional-gradient methods have proven beneficial: they are simpler to implement than most existing methods, more efficient, more naturally satisfy common constraints on the cost function, and better represent our prior beliefs about the form of the function. Building on recent generalizations of functional-gradient boosting to relational representations, we implement a functional-gradient boosting approach to imitation learning in relational domains. In particular, given a set of traces from the human teacher, our system learns a policy in the form of a set of relational regression trees that additively approximate the functional gradients. The use of multiple additive trees combined with a relational representation allows for learning more expressive policies than was previously possible. We demonstrate the usefulness of our approach in several different domains.
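To make the core idea concrete, the following is a minimal propositional sketch of functional-gradient boosting for imitation (not the paper's relational implementation): a binary "take action vs. not" policy is represented as a sum of depth-1 regression trees, each fit to the pointwise gradients of the log-likelihood (teacher label minus current policy probability). All names here (`fit_stump`, `boost_policy`, etc.) are illustrative, and stumps stand in for the relational regression trees of the actual system.

```python
import numpy as np

def fit_stump(X, r):
    """Depth-1 regression tree: least-squares fit to pointwise gradients r."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue  # skip degenerate splits
            vl, vr = r[left].mean(), r[~left].mean()
            err = ((r[left] - vl) ** 2).sum() + ((r[~left] - vr) ** 2).sum()
            if err < best_err:
                best, best_err = (j, t, vl, vr), err
    return best

def stump_predict(stump, X):
    j, t, vl, vr = stump
    return np.where(X[:, j] <= t, vl, vr)

def boost_policy(X, y, rounds=20, lr=0.5):
    """X[i]: state features; y[i] = 1 iff the teacher took the action there.
    Learns F(s), the log-odds of taking the action, as a sum of stumps."""
    F = np.zeros(len(y))
    stumps = []
    for _ in range(rounds):
        p = 1.0 / (1.0 + np.exp(-F))  # current policy probability
        grad = y - p                  # functional gradient of the log-likelihood
        stump = fit_stump(X, grad)    # regression tree approximating the gradient
        stumps.append(stump)
        F += lr * stump_predict(stump, X)
    return stumps

def policy_prob(stumps, X, lr=0.5):
    F = sum(lr * stump_predict(s, X) for s in stumps)
    return 1.0 / (1.0 + np.exp(-F))
```

In the relational setting described in the abstract, each stump would instead be a relational regression tree whose inner nodes test first-order conditions over objects and relations, but the additive gradient-fitting loop is the same.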