SBAT: Video Captioning with Sparse Boundary-Aware Transformer

Tao Jin, Siyu Huang, Ming Chen, Yingming Li, Zhongfei Zhang

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 630-636. https://doi.org/10.24963/ijcai.2020/88

In this paper, we focus on effectively applying the transformer architecture to video captioning. The vanilla transformer was proposed for uni-modal language generation tasks such as machine translation. Video captioning, however, is a multimodal learning problem, and video features contain significant redundancy across time steps. Motivated by these concerns, we propose a novel method called sparse boundary-aware transformer (SBAT) to reduce the redundancy in video representation. SBAT employs a boundary-aware pooling operation on the multihead attention scores and selects diverse features from different scenarios. SBAT also includes a local correlation scheme to compensate for the local information loss introduced by the sparse operation. Based on SBAT, we further propose an aligned cross-modal encoding scheme to boost the multimodal interaction. Experimental results on two benchmark datasets show that SBAT outperforms the state-of-the-art methods on most metrics.
Keywords:
Computer Vision: Language and Vision
Computer Vision: Video: Events, Activities and Surveillance
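The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of what sparse boundary-aware attention with a local correlation window might look like. The boundary score (feature change between consecutive time steps), the top-k boundary selection, and the num_boundaries and local_window parameters are all assumptions made here for illustration; they are not the paper's exact formulation.

import torch
import torch.nn.functional as F

def sparse_boundary_aware_attention(q, k, v, num_boundaries=8, local_window=2):
    """q, k, v: (batch, heads, T, d). Returns attended values, same shape."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5           # (B, H, T, T)

    # Boundary score per key step: magnitude of change between consecutive
    # key features. Large changes are treated as scenario boundaries
    # (an assumption in this sketch).
    diff = (k[..., 1:, :] - k[..., :-1, :]).norm(dim=-1)  # (B, H, T-1)
    boundary = F.pad(diff, (1, 0))                        # (B, H, T)

    # Sparse selection: keep only the top-k boundary positions as keys.
    topk = boundary.topk(min(num_boundaries, boundary.size(-1)), dim=-1).indices
    keep = torch.zeros_like(boundary, dtype=torch.bool).scatter(-1, topk, True)
    keep = keep.unsqueeze(-2).expand_as(logits).clone()   # (B, H, T, T)

    # Local correlation: also keep a small window around each query step,
    # compensating for the local information the sparse selection drops.
    T = logits.size(-1)
    idx = torch.arange(T, device=logits.device)
    keep |= (idx[:, None] - idx[None, :]).abs() <= local_window

    # Mask the discarded positions before softmax, then attend as usual.
    attn = torch.softmax(logits.masked_fill(~keep, float("-inf")), dim=-1)
    return attn @ v

# Toy usage: 2 clips, 4 heads, 30 time steps, 64-dim features.
q = k = v = torch.randn(2, 4, 30, 64)
out = sparse_boundary_aware_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 30, 64])

In a full encoder, a block like this would replace the standard attention inside each transformer layer, with separate streams for the different video modalities; the aligned cross-modal encoding the abstract mentions would then fuse those streams at matched time steps.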