Grounding the Meaning of Words through Vision and Interactive Gameplay
Natalie Parde, Adam Hair, Michalis Papakostas, Konstantinos Tsiakas, Maria Dagioglou, Vangelis Karkaletsis, Rodney D. Nielsen
Currently, there is a need for simple, easily accessible methods with which individuals lacking advanced technical training can expand and customize their robot's knowledge. This work presents a means of satisfying that need by abstracting the task of training robots to learn about the world around them as a vision- and dialogue-based game, I Spy. In our implementation of I Spy, robots gradually learn about objects and the concepts that describe those objects through repeated gameplay. We show that I Spy is an effective approach for teaching robots to model new concepts using representations composed of visual attributes. The results from 255 test games show that the system correctly determined which object the human had in mind 67% of the time. Furthermore, a model evaluation showed that the system correctly understood the visual representations of its learned concepts with an average accuracy of 65%. Human accuracy against the same evaluation standard was just 88% on average.