Abstract

Mediating Between Qualitative and Quantitative Representations for Task-Orientated Human-Robot Interaction

Michael Brenner, Nick Hawes, John Kelleher, Jeremy Wyatt

In human-robot interaction (HRI) it is essential that the robot interprets and reacts to a human's utterances in a manner that reflects their intended meaning. In this paper we present a collection of novel techniques that allow a robot to interpret and execute spoken commands describing manipulation goals involving qualitative spatial constraints (e.g. "put the red ball near the blue cube"). The resulting implemented system integrates computer vision, potential field models of spatial relationships, and action planning to mediate between the continuous real world and the discrete, qualitative representations used for symbolic reasoning.
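
The abstract does not give implementation details, but a potential field model of a relation such as "near" can be pictured as a continuous applicability score over positions that is thresholded into a symbolic predicate for the planner. The sketch below is a minimal illustration under that assumption only; the function names, the Gaussian falloff, and the parameters sigma and threshold are placeholders for illustration, not the authors' model.

import math

def near_applicability(target, landmark, sigma=0.5):
    """Potential-field style score in [0, 1] for how well 'near'
    describes the target's position relative to the landmark.
    The score decays with Euclidean distance (Gaussian falloff)."""
    dx = target[0] - landmark[0]
    dy = target[1] - landmark[1]
    dist = math.hypot(dx, dy)
    return math.exp(-(dist ** 2) / (2 * sigma ** 2))

def to_qualitative(target, landmark, threshold=0.5):
    """Discretise the continuous score into a symbolic predicate
    a planner can use: near(target, landmark) either holds or it does not."""
    return near_applicability(target, landmark) >= threshold

# Example: object positions (in metres) as estimated by vision;
# the planner only ever sees the resulting qualitative predicate.
red_ball, blue_cube = (0.9, 0.4), (1.1, 0.6)
print(near_applicability(red_ball, blue_cube))  # continuous degree of 'near'
print(to_qualitative(red_ball, blue_cube))      # True -> assert near(red_ball, blue_cube)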