Entity-aware and Motion-aware Transformers for Language-driven Action Localization

Shuo Yang, Xinxiao Wu

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1552-1558. https://doi.org/10.24963/ijcai.2022/216

Language-driven action localization in videos is a challenging task that involves not only visual-linguistic matching but also action boundary prediction. Recent progress has been achieved by aligning language queries to video segments, but estimating precise boundaries remains under-explored. In this paper, we propose entity-aware and motion-aware Transformers that progressively localize actions in videos, first coarsely locating clips with entity queries and then finely predicting exact boundaries within a narrowed temporal region with motion queries. The entity-aware Transformer incorporates textual entities into visual representation learning via cross-modal and cross-frame attentions to facilitate attending to action-related video clips. The motion-aware Transformer captures fine-grained motion changes at multiple temporal scales by integrating long short-term memory (LSTM) into the self-attention module, further improving the precision of action boundary prediction. Extensive experiments on the Charades-STA and TACoS datasets demonstrate that our method outperforms existing methods.
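
To illustrate the coarse stage, the sketch below shows one plausible form of the entity-aware attention described in the abstract: textual entity embeddings are injected into frame features via cross-modal attention, and the enhanced features are then propagated across frames via self-attention. This is a minimal PyTorch sketch under our own assumptions (module names, feature dimensions, and head counts are illustrative), not the authors' released implementation.

```python
# A minimal sketch (not the authors' code) of entity-conditioned cross-modal
# and cross-frame attention. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class EntityAwareAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-modal attention: frames (queries) attend to entities (keys/values).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-frame self-attention over the entity-enhanced frame features.
        self.frame_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frames: torch.Tensor, entities: torch.Tensor) -> torch.Tensor:
        # frames:   (B, T, dim) frame-level visual features
        # entities: (B, E, dim) embeddings of textual entities from the query
        # Each frame gathers the entity context most relevant to it.
        ctx, _ = self.cross_attn(query=frames, key=entities, value=entities)
        fused = self.norm(frames + ctx)                # residual fusion of entity cues
        out, _ = self.frame_attn(fused, fused, fused)  # propagate cues across frames
        return self.norm(fused + out)
```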
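For the fine stage, the following sketch gives one way to integrate LSTMs into self-attention at multiple temporal scales, as the abstract describes: an LSTM models frame-to-frame dynamics on subsampled copies of the sequence, and its hidden states are fused with a standard self-attention output. The scale set, subsampling scheme, and fusion are our assumptions, intended only to make the idea concrete.

```python
# A minimal sketch (not the authors' implementation) of motion-aware
# self-attention: per-scale LSTMs capture motion dynamics, fused with
# standard self-attention. Scales and fusion are illustrative assumptions.
import torch
import torch.nn as nn

class MotionAwareSelfAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # One LSTM per temporal scale to model frame-to-frame motion changes.
        self.lstms = nn.ModuleList(nn.LSTM(dim, dim, batch_first=True) for _ in scales)
        self.proj = nn.Linear(dim * (len(scales) + 1), dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, dim) clip-level features from the coarse stage
        attn_out, _ = self.self_attn(x, x, x)
        feats = [attn_out]
        for stride, lstm in zip(self.scales, self.lstms):
            # Subsample to a coarser temporal scale, run the LSTM over it,
            # then linearly upsample the hidden states back to length T.
            h, _ = lstm(x[:, ::stride, :])
            h = nn.functional.interpolate(
                h.transpose(1, 2), size=x.size(1),
                mode="linear", align_corners=False,
            ).transpose(1, 2)
            feats.append(h)
        fused = self.proj(torch.cat(feats, dim=-1))
        return self.norm(x + fused)
```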
Keywords:
Computer Vision: Vision and language 
Computer Vision: Image and Video retrieval 
Computer Vision: Video analysis and understanding