Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue

Rahul R. Divekar, Xiangyang Mou, Lisha Chen, Maíra Gatti de Bayser, Melina Alberio Guerra, Hui Su

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence

In a setting where two AI agents, embodied as animated humanoid avatars, converse with one human and with each other, we see two challenges: first, each AI agent must determine which of them is being addressed; second, each agent must determine whether it may, could, or should speak at the end of a turn. In this work we bring these two challenges together and explore the participation of AI agents in multi-party conversations. In particular, we present two embodied AI shopkeeper agents who sell similar items and aim to win the business of a user by competing with each other on price. In this scenario, we address the first challenge by using head pose (estimated with deep learning techniques) to determine whom the user is talking to. For the second challenge, we use deontic logic to model the rules of a negotiation conversation.
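As a minimal illustration of the addressee-determination idea described above, the sketch below reduces an estimated head pose to a single yaw angle and picks the agent whose position is angularly closest. The agent names, positions, and threshold-free nearest-angle rule are illustrative assumptions, not the paper's actual parameters or method.

```python
# Hypothetical sketch: which of two embodied agents is the user addressing,
# given a head-pose yaw estimate in degrees? Agent positions are assumed,
# not taken from the paper.
AGENT_YAW = {"shopkeeper_left": -30.0, "shopkeeper_right": 30.0}

def addressee(yaw_estimate: float) -> str:
    """Return the agent whose assumed angular position is closest to the
    user's estimated head yaw (in degrees)."""
    return min(AGENT_YAW, key=lambda agent: abs(AGENT_YAW[agent] - yaw_estimate))

print(addressee(-25.0))  # yaw points toward the left agent
```

A real system would work with the full head-pose estimate (yaw, pitch, roll) from a vision model and would need to handle ambiguous gaze directions between the two avatars.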
Keywords:
AI: Human-Computer Interactive Systems
AI: Multiagent Systems
AI: Others