Densely Connected Attention Flow for Visual Question Answering

Fei Liu, Jing Liu, Zhiwei Fang, Richang Hong, Hanqing Lu

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 869-875. https://doi.org/10.24963/ijcai.2019/122

Learning effective interactions between multi-modal features is at the heart of visual question answering (VQA). A common shortcoming of existing VQA approaches is that they consider only a very limited number of interactions, which may not be enough to model the latent, complex image-question relations needed to answer questions accurately. Therefore, in this paper, we propose a novel DCAF (Densely Connected Attention Flow) framework for modeling dense interactions. It densely connects all pairwise layers of the network via Attention Connectors, capturing fine-grained interplay between image and question across all hierarchical levels. The proposed Attention Connector efficiently connects the multi-modal features at any two layers with symmetric co-attention and produces interaction-aware attention features. Experimental results on three publicly available datasets show that the proposed method achieves state-of-the-art performance.
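The abstract does not spell out the internals of an Attention Connector, but the core idea of symmetric co-attention between an image feature layer and a question feature layer can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the module name, projection dimensions, and the residual fusion are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionConnector(nn.Module):
    """Hypothetical sketch of a symmetric co-attention connector between
    one image feature layer and one question feature layer."""

    def __init__(self, img_dim, txt_dim, hidden_dim):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.txt_proj = nn.Linear(txt_dim, hidden_dim)

    def forward(self, img_feats, txt_feats):
        # img_feats: (B, R, img_dim) image regions; txt_feats: (B, T, txt_dim) question tokens
        q_i = self.img_proj(img_feats)                       # (B, R, H)
        q_t = self.txt_proj(txt_feats)                       # (B, T, H)
        # Affinity between every region and every token (shared for both directions)
        affinity = torch.bmm(q_i, q_t.transpose(1, 2))       # (B, R, T)
        # Symmetric attention: over tokens for each region, and over regions for each token
        att_img2txt = F.softmax(affinity, dim=2)              # (B, R, T)
        att_txt2img = F.softmax(affinity, dim=1)              # (B, R, T)
        txt_ctx = torch.bmm(att_img2txt, q_t)                 # (B, R, H) text context per region
        img_ctx = torch.bmm(att_txt2img.transpose(1, 2), q_i) # (B, T, H) image context per token
        # Interaction-aware features: fuse each stream with its cross-modal context
        # (residual addition is an assumed fusion choice, not the paper's)
        return q_i + txt_ctx, q_t + img_ctx
```

In a densely connected arrangement, one such connector would be applied to every pair of image/question layers, so that features at each level carry interaction-aware signals from all other levels.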
Keywords:
Computer Vision: Language and Vision
Computer Vision: Computer Vision