Multi-View Visual Semantic Embedding

Zheng Li, Caili Guo, Zerun Feng, Jenq-Neng Hwang, Xijun Xue

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1130-1136. https://doi.org/10.24963/ijcai.2022/158

Visual Semantic Embedding (VSE) is a dominant method for cross-modal vision-language retrieval. Its purpose is to learn an embedding space in which visual data lie close to their corresponding text descriptions. However, vision-language data exhibit large intra-class variations: multiple texts describing the same image may describe it from different views, and descriptions from different views are often dissimilar. Mainstream VSE methods embed samples from the same class in similar positions, which suppresses intra-class variations and leads to inferior generalization performance. This paper proposes a Multi-View Visual Semantic Embedding (MV-VSE) framework, which learns multiple embeddings for each visual input and explicitly models intra-class variations. To optimize MV-VSE, a multi-view upper bound loss is proposed, under which the multi-view embeddings are jointly optimized while intra-class variations are retained. MV-VSE is plug-and-play and can be applied to various VSE models and loss functions without excessively increasing model complexity. Experimental results on the Flickr30K and MS-COCO datasets demonstrate the superior performance of our framework.
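To make the multi-view idea concrete, here is a minimal sketch of how scoring and training could look when an image is represented by several view embeddings. All names and the hinge-based formulation are illustrative assumptions, not the authors' implementation: we assume cosine similarity, take the best-matching view as the image-text score (an upper bound over views), and apply a triplet-style margin loss so that only the closest view is pulled toward each text, leaving the other views free to model different descriptions.

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two embedding vectors (assumed non-zero).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def multi_view_similarity(views, text):
    # Score an image against a text by its best-matching view embedding.
    # Taking the max over views is what lets distinct views specialize
    # to distinct descriptions instead of collapsing to one point.
    return max(cos_sim(v, text) for v in views)

def multi_view_hinge_loss(views, pos_text, neg_text, margin=0.2):
    # Hypothetical upper-bound-style hinge loss: the positive pair's
    # best-view similarity should exceed the negative's by a margin.
    s_pos = multi_view_similarity(views, pos_text)
    s_neg = multi_view_similarity(views, neg_text)
    return max(0.0, margin + s_neg - s_pos)

# Toy usage: two view embeddings of one image, one matching and one
# mismatching caption embedding (all vectors are made up for illustration).
views = [[1.0, 0.0], [0.0, 1.0]]
pos_text = [0.9, 0.1]   # close to the first view
neg_text = [-1.0, 0.0]  # far from both views
loss = multi_view_hinge_loss(views, pos_text, neg_text)
```

Because the gradient of a max flows only through the winning view, each text description effectively updates the single view embedding that best matches it, which is one simple way to retain intra-class variation during joint optimization.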
Keywords:
Computer Vision: Vision and language
Computer Vision: Image and video retrieval
Machine Learning: Multi-modal learning