Abstract

Proceedings Abstracts of the Twenty-Fifth International Joint Conference on Artificial Intelligence

Learning Social Affordance for Human-Robot Interaction
Tianmin Shu, M. S. Ryoo, Song-Chun Zhu

In this paper, we present an approach for robot learning of social affordances from human activity videos. We consider the problem in the context of human-robot interaction: our approach learns structural representations of human-human (and human-object-human) interactions, describing how the body parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human's body motion, allowing it to naturally replicate such interactions.
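To make the idea of a structural representation concrete, the following is a minimal illustrative sketch, not the authors' actual model: it assumes a hypothetical SubEvent structure that stores learned relative offsets between robot and human joints for one sub-goal, and a hypothetical infer_robot_targets function that maps an observed human pose to target robot joint positions by applying those offsets. All names and the offset-based formulation are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SubEvent:
    """One sub-goal of an interaction: the relative spatial relations
    the robot's body parts should maintain w.r.t. the human's.
    (Hypothetical structure; the paper's model may differ.)"""
    name: str
    # Learned offset of each robot joint relative to a human joint,
    # keyed as (robot_joint, human_joint).
    relative_offsets: Dict[Tuple[str, str], Vec3]

def infer_robot_targets(human_joints: Dict[str, Vec3],
                        sub_event: SubEvent) -> Dict[str, Vec3]:
    """Map the observed human pose to target robot joint positions
    by applying the sub-event's learned relative spatial relations."""
    targets: Dict[str, Vec3] = {}
    for (robot_joint, human_joint), offset in sub_event.relative_offsets.items():
        hx, hy, hz = human_joints[human_joint]
        ox, oy, oz = offset
        targets[robot_joint] = (hx + ox, hy + oy, hz + oz)
    return targets

# Example: during a hypothetical "shake_hands" sub-event, keep the
# robot's right hand slightly in front of the human's right hand.
shake = SubEvent(
    name="shake_hands",
    relative_offsets={("r_hand", "r_hand"): (0.05, 0.0, 0.0)},
)
human_pose = {"r_hand": (0.4, 0.1, 1.0)}
print(infer_robot_targets(human_pose, shake))
```

In this toy formulation, a full interaction would be a sequence of such sub-events, with the robot's motion at each step driven by the human's observed joint trajectories.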
