Multilevel Hierarchical Network with Multiscale Sampling for Video Question Answering


Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1276-1282. https://doi.org/10.24963/ijcai.2022/178

Video question answering (VideoQA) is challenging given its multimodal combination of visual understanding and natural language processing. While most existing approaches ignore visual appearance-motion information at different temporal scales, it remains unclear how to combine the multilevel processing capacity of a deep learning model with such multiscale information. Targeting these issues, this paper proposes a novel Multilevel Hierarchical Network (MHN) with multiscale sampling for VideoQA. MHN comprises two modules: Recurrent Multimodal Interaction (RMI) and Parallel Visual Reasoning (PVR). Using multiscale sampling, RMI iterates the interaction between the appearance-motion information at each scale and the question embeddings to build multilevel question-guided visual representations. Building on these, PVR uses a shared transformer encoder to infer the visual cues at each level in parallel, accommodating question types that rely on visual information at different levels. Through extensive experiments on three VideoQA datasets, we demonstrate improved performance over previous state-of-the-art methods and justify the effectiveness of each component of our method.
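As a rough illustration of the multiscale sampling idea, the following is a minimal, hypothetical sketch (not the authors' code): a video's frame sequence is subsampled at progressively coarser temporal strides, yielding one view per level for the multilevel pipeline. The function name, stride schedule, and number of levels are illustrative assumptions.

```python
def multiscale_sample(frames, num_levels=3):
    """Hypothetical multiscale temporal sampling (illustrative only).

    At level l, keep every 2**l-th frame, producing num_levels views of
    the same video, from fine (all frames) to coarse (sparse frames).
    """
    return [frames[:: 2 ** level] for level in range(num_levels)]


# Example: a video with 8 frame indices sampled at 3 temporal scales.
levels = multiscale_sample(list(range(8)))
# levels[0] -> [0, 1, 2, 3, 4, 5, 6, 7]  (finest scale)
# levels[1] -> [0, 2, 4, 6]
# levels[2] -> [0, 4]                    (coarsest scale)
```

In the paper's design, each such scale would feed the RMI module, which fuses the appearance-motion features of that scale with the question embedding before the PVR module reasons over all levels in parallel.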
Keywords:
Computer Vision: Vision and language 
Computer Vision: Scene analysis and understanding   
Computer Vision: Video analysis and understanding   
Machine Learning: Multi-modal learning
Natural Language Processing: Question Answering