From Pixels to Objects: Cubic Visual Attention for Visual Question Answering

Jingkuan Song, Pengpeng Zeng, Lianli Gao, Heng Tao Shen

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 906-912. https://doi.org/10.24963/ijcai.2018/126

Recently, attention-based Visual Question Answering (VQA) has achieved great success by utilizing the question to selectively target the visual regions that are relevant to the answer. Existing visual attention models are generally planar, i.e., different channels of the last conv-layer feature map of an image share the same attention weight. This conflicts with the attention mechanism, because CNN features are naturally both spatial and channel-wise. Moreover, visual attention is usually computed at the pixel level, which can produce discontinuous attended regions. In this paper, we propose a Cubic Visual Attention (CVA) model that improves VQA by applying novel channel and spatial attention to object regions. Specifically, instead of attending to pixels, we first use an object proposal network to generate a set of object candidates and extract their associated conv features. Then, we use the question to guide the computation of channel attention and spatial attention over these conv-layer feature maps. Finally, the attended visual features and the question are combined to infer the answer. We assess the performance of our proposed CVA on three public image QA datasets: COCO-QA, VQA, and Visual7W. Experimental results show that our method significantly outperforms the state-of-the-art.
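
As a rough illustration of the pipeline the abstract describes (question-guided channel attention followed by spatial attention over object-region features), the following PyTorch sketch shows one plausible realization. All dimensions, layer names, and the tanh/softmax scoring form are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class CubicAttentionSketch(nn.Module):
    # Assumed shapes: v is (batch, num_regions, num_channels) object-region
    # conv features, q is a (batch, q_dim) question embedding.
    def __init__(self, num_channels=2048, num_regions=36, q_dim=1024, hidden=512):
        super().__init__()
        self.q_to_channel = nn.Linear(q_dim, hidden)
        self.q_to_spatial = nn.Linear(q_dim, hidden)
        self.c_proj = nn.Linear(num_regions, hidden)   # summarize each channel over regions
        self.s_proj = nn.Linear(num_channels, hidden)  # summarize each region over channels
        self.c_score = nn.Linear(hidden, 1)
        self.s_score = nn.Linear(hidden, 1)

    def forward(self, v, q):
        # Channel attention: weight each feature channel by the question.
        ch = torch.tanh(self.c_proj(v.transpose(1, 2)) + self.q_to_channel(q).unsqueeze(1))
        ch_weights = torch.softmax(self.c_score(ch).squeeze(-1), dim=1)   # (batch, num_channels)
        v = v * ch_weights.unsqueeze(1)                                   # reweight channels
        # Spatial attention: weight each object region by the question.
        sp = torch.tanh(self.s_proj(v) + self.q_to_spatial(q).unsqueeze(1))
        sp_weights = torch.softmax(self.s_score(sp).squeeze(-1), dim=1)   # (batch, num_regions)
        return (v * sp_weights.unsqueeze(-1)).sum(dim=1)                  # (batch, num_channels)

The attended (batch, num_channels) visual feature would then be fused with the question embedding to predict the answer; the fusion and classifier are omitted here since the abstract does not specify them.
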
Keywords:
Machine Learning: Deep Learning
Computer Vision: Language and Vision