External Memory Matters: Generalizable Object-Action Memory for Retrieval-Augmented Long-Term Video Understanding
Jisheng Dang, Huicheng Zheng, Xudong Wu, Jingmei Jiao, Bimei Wang, Jun Yang, Bin Hu, Jianhuang Lai, Tat Seng Chua
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 864-872.
https://doi.org/10.24963/ijcai.2025/97
Long-term video understanding with Large Language Models (LLMs) enables the description of objects that are not explicitly present in the training data. However, continuous changes in known objects and the emergence of new ones require up-to-date knowledge of objects and their dynamics for effective open-world understanding. To address this, we propose an efficient Retrieval-Enhanced Video Understanding method, dubbed REVU, which leverages external knowledge to improve open-world learning. First, REVU introduces an extensible external text-object memory with minimal text-visual mapping, which provides static and dynamic multimodal information to help LLM-based models align text and vision features. Second, REVU retrieves object information from external databases and dynamically integrates it with frame-specific data from videos, enabling effective knowledge aggregation for open-world comprehension. Experiments on multiple benchmark video understanding datasets show that our model achieves state-of-the-art performance and generalizes robustly to out-of-domain data without additional fine-tuning or re-training.
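The abstract describes retrieving object knowledge from an external text-object memory and fusing it with frame-specific features. The sketch below illustrates one plausible reading of that retrieval-and-aggregation step; it is not the paper's implementation, and all names, dimensions, and the similarity-weighted fusion rule are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of retrieval-augmented aggregation over an external
# text-object memory; names and sizes are illustrative, not from the paper.
rng = np.random.default_rng(0)
MEMORY_DIM = 256  # assumed shared text-visual embedding dimension
memory_keys = rng.standard_normal((1000, MEMORY_DIM))  # object text embeddings
memory_vals = rng.standard_normal((1000, MEMORY_DIM))  # paired visual descriptors


def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)


def retrieve_objects(frame_feat, k=5):
    """Return the top-k memory entries most similar to a frame feature."""
    sims = l2_normalize(memory_keys) @ l2_normalize(frame_feat)
    top = np.argsort(-sims)[:k]
    return memory_vals[top], sims[top]


def aggregate(frame_feat, k=5):
    """Fuse a frame-specific feature with retrieved object knowledge via
    similarity-weighted averaging (one possible integration rule)."""
    vals, sims = retrieve_objects(frame_feat, k)
    weights = np.exp(sims) / np.exp(sims).sum()
    return frame_feat + weights @ vals


frame_feature = rng.standard_normal(MEMORY_DIM)  # stand-in for a frame embedding
fused = aggregate(frame_feature)
print(fused.shape)  # (256,)
```

Because the memory is an external key-value store, new object entries can be appended without re-training the model, which is consistent with the paper's claim of adaptability to out-of-domain data.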
Keywords:
Computer Vision: CV: Multimodal learning
Computer Vision: CV: Video analysis and understanding
Computer Vision: CV: Scene analysis and understanding
Computer Vision: CV: Recognition (object detection, categorization)
