Language-Conditioned Open-Vocabulary Mobile Manipulation with Pretrained Models
Shen Tan, Dong Zhou, Xiangyu Shao, Junqiao Wang, Guanghui Sun
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 8778-8786.
https://doi.org/10.24963/ijcai.2025/976
Open-vocabulary mobile manipulation (OVMM), which involves handling novel and unseen objects across different workspaces, remains a significant challenge for real-world robotic applications. In this paper, we propose a novel Language-conditioned Open-Vocabulary Mobile Manipulation framework, named LOVMM, which incorporates a large language model (LLM) and a vision-language model (VLM) to tackle various mobile manipulation tasks in household environments. Our approach can solve diverse OVMM tasks from free-form natural language instructions (e.g., "toss the food boxes on the office room desk to the trash bin in the corner" or "pack the bottles from the bed into the box in the guestroom"). Extensive experiments in simulated complex household environments demonstrate the strong zero-shot generalization and multi-task learning abilities of LOVMM. Moreover, our approach also generalizes to multiple tabletop manipulation tasks and achieves higher success rates than other state-of-the-art methods.
Keywords:
Robotics: ROB: Manipulation
Robotics: ROB: Learning in robotics
Robotics: ROB: Robotics and vision
