Semantic Linking Maps for Active Visual Object Search (Extended Abstract)
Zhen Zeng, Adrian Röfer, Odest Chadwicke Jenkins
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 4864-4868.
https://doi.org/10.24963/ijcai.2021/667
We aim for mobile robots to function in a variety of common human environments, which requires them to efficiently search for previously unseen target objects. We can exploit background knowledge about common spatial relations between landmark objects and target objects to narrow down the search space. In this paper, we propose an active visual object search strategy through our introduction of the Semantic Linking Maps (SLiM) model. SLiM simultaneously maintains the belief over a target object's location as well as landmark objects' locations, while accounting for probabilistic inter-object spatial relations. Based on SLiM, we describe a hybrid search strategy that selects the next best view pose for searching for the target object based on the maintained belief. We demonstrate the efficiency of our SLiM-based search strategy through comparative experiments in simulated environments. We further demonstrate the real-world applicability of SLiM-based search in scenarios with a Fetch mobile manipulation robot.
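The core idea in the abstract, maintaining a landmark belief, inducing a target belief through a probabilistic spatial relation, and greedily choosing the view covering the most target-probability mass, can be illustrated with a minimal grid-world sketch. All names, grid sizes, and the Gaussian "near" relation below are illustrative assumptions, not the authors' actual SLiM model or search policy.

```python
import numpy as np

GRID = 10  # coarse 10x10 occupancy grid (illustrative)

def normalize(b):
    return b / b.sum()

def near_relation(sigma=1.5):
    # P(target offset | landmark): an assumed Gaussian "near" relation.
    ax = np.arange(-GRID + 1, GRID)
    k = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return k / k.sum()

def induced_target_belief(landmark_belief, kernel):
    # b_t(x) = sum_l b_l(l) * P(x - l): marginalize over landmark cells.
    bt = np.zeros((GRID, GRID))
    c = GRID - 1  # kernel center index
    for li in range(GRID):
        for lj in range(GRID):
            w = landmark_belief[li, lj]
            if w == 0.0:
                continue
            for ti in range(GRID):
                for tj in range(GRID):
                    bt[ti, tj] += w * kernel[c + ti - li, c + tj - lj]
    return normalize(bt)

def next_best_view(belief, view_radius=2):
    # Greedy next-best-view: pick the view center whose square field of
    # view covers the most target-probability mass.
    best, best_mass = None, -1.0
    for ci in range(GRID):
        for cj in range(GRID):
            lo_i, hi_i = max(0, ci - view_radius), min(GRID, ci + view_radius + 1)
            lo_j, hi_j = max(0, cj - view_radius), min(GRID, cj + view_radius + 1)
            mass = belief[lo_i:hi_i, lo_j:hi_j].sum()
            if mass > best_mass:
                best, best_mass = (ci, cj), mass
    return best, best_mass

# A landmark (e.g. "table") believed to be at cell (3, 3); the target
# belief concentrates nearby, so the chosen view centers there too.
bl = np.zeros((GRID, GRID))
bl[3, 3] = 1.0
bt = induced_target_belief(bl, near_relation())
view, mass = next_best_view(bt)
```

This omits what makes SLiM itself interesting, joint filtering of all landmark and target beliefs from observations, and the hybrid aspect of the strategy; it only conveys the relation-conditioned belief and next-best-view selection described in the abstract.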
Keywords:
Robotics: Cognitive Robotics
Robotics: Robotics and Vision
Robotics: Vision and Perception
Uncertainty in AI: Graphical Models