A Survey of Vision-Language Pre-Trained Models

Yifan Du, Zikang Liu, Junyi Li, Wayne Xin Zhao

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Survey Track. Pages 5436-5443. https://doi.org/10.24963/ijcai.2022/762

As transformer architectures have evolved, pre-trained models have advanced at a breakneck pace in recent years and have come to dominate mainstream techniques in natural language processing (NLP) and computer vision (CV). How to adapt pre-training to the field of Vision-and-Language (V-L) learning and improve downstream task performance has become a focus of multimodal learning. In this paper, we review the recent progress in Vision-Language Pre-Trained Models (VL-PTMs). As the core content, we first briefly introduce several ways to encode raw images and texts into single-modal embeddings before pre-training. We then dive into the mainstream architectures of VL-PTMs for modeling the interaction between text and image representations. We further present widely used pre-training tasks and introduce some common downstream tasks. We finally conclude the paper and discuss some promising research directions. Our survey aims to provide researchers with a synthesis of, and pointers to, related research.
Keywords:
Survey Track: Natural Language Processing
Survey Track: Computer Vision
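
As an illustration of the single-modal encoding stage mentioned in the abstract, the sketch below shows one common (but not the only) way to map raw text and images into embedding sequences before pre-training: BERT-style token embeddings for text and ViT-style patch embeddings for images. It is a minimal, hypothetical example; the module names, vocabulary size, hidden size, and patch size are illustrative assumptions rather than choices prescribed by the paper.

```python
# Minimal sketch (illustrative, not from the paper) of producing single-modal
# embeddings: text via token + position embeddings, images via patch projection.
import torch
import torch.nn as nn

class SingleModalEncoders(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, patch=16, img=224):
        super().__init__()
        # Text side: token embeddings plus learned positions (BERT-style input layer).
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.txt_pos = nn.Embedding(512, hidden)
        # Image side: non-overlapping patches projected to the hidden size (ViT-style).
        self.patch_proj = nn.Conv2d(3, hidden, kernel_size=patch, stride=patch)
        self.img_pos = nn.Embedding((img // patch) ** 2, hidden)

    def encode_text(self, token_ids):            # (B, L) -> (B, L, hidden)
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.tok_emb(token_ids) + self.txt_pos(pos)

    def encode_image(self, pixels):              # (B, 3, H, W) -> (B, N, hidden)
        patches = self.patch_proj(pixels).flatten(2).transpose(1, 2)
        pos = torch.arange(patches.size(1), device=pixels.device)
        return patches + self.img_pos(pos)

enc = SingleModalEncoders()
txt = enc.encode_text(torch.randint(0, 30522, (2, 16)))    # (2, 16, 768)
img = enc.encode_image(torch.randn(2, 3, 224, 224))        # (2, 196, 768)
```

The resulting text and image embedding sequences share a common hidden size, so they can be fed into whichever cross-modal interaction architecture a given VL-PTM adopts.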