From Association to Generation: Text-only Captioning by Unsupervised Cross-modal Mapping

Junyang Wang, Ming Yan, Yi Zhang, Jitao Sang

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 4326-4334. https://doi.org/10.24963/ijcai.2023/481

With the development of Vision-Language Pre-training Models (VLPMs) such as CLIP and ALIGN, the zero-shot capability of CLIP has enabled significant breakthroughs in association-based visual tasks such as image classification and image-text retrieval without fine-tuning. However, CLIP is difficult to apply to generation-based tasks because it lacks a decoder architecture and generation-oriented pre-training objectives. Although previous works have equipped CLIP with generation capability through additional language models, a modality gap remains between the CLIP representations of different modalities, and CLIP cannot model the offset of this gap, so concepts fail to transfer across modalities. To solve this problem, we map images/videos into the language modality and generate captions from the language modality. In this paper, we propose K-nearest-neighbor Cross-modality Mapping (Knight), a zero-shot method that moves from association to generation. With vision-free unsupervised training, Knight achieves state-of-the-art performance among zero-shot methods for both image captioning and video captioning.
Keywords:
Machine Learning: ML: Multi-modal learning
Computer Vision: CV: Vision and language
Natural Language Processing: NLP: Language generation
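
The abstract describes projecting an image embedding into the text-embedding space via its nearest text neighbors, then decoding a caption from that projected representation. The sketch below illustrates only this k-nearest-neighbor mapping step under simplifying assumptions: the softmax weighting, the temperature value, the function names, and the random vectors standing in for real CLIP features are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of a k-nearest-neighbor cross-modal mapping: an image
# embedding is projected into the text-embedding space as a similarity-weighted
# average of its k nearest text embeddings from a support corpus of captions.
# All names and the weighting scheme are assumptions for illustration.
import numpy as np


def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Normalize vectors to unit length so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)


def knn_cross_modal_map(image_emb: np.ndarray,
                        text_embs: np.ndarray,
                        k: int = 5,
                        temperature: float = 0.01) -> np.ndarray:
    """Map one image embedding into the text-embedding space.

    image_emb : (d,)   image feature (assumed already computed by a VLPM encoder)
    text_embs : (n, d) text features of a support corpus of captions
    Returns a (d,) vector lying in the text modality, intended to be fed to a
    decoder trained on text embeddings only (vision-free training).
    """
    image_emb = l2_normalize(image_emb)
    text_embs = l2_normalize(text_embs)

    sims = text_embs @ image_emb                 # cosine similarities, shape (n,)
    top_idx = np.argsort(sims)[-k:]              # indices of the k nearest captions
    top_sims = sims[top_idx]

    # Softmax weighting over the k neighbors (temperature is an assumption).
    weights = np.exp(top_sims / temperature)
    weights /= weights.sum()

    mapped = weights @ text_embs[top_idx]        # weighted average in text space
    return l2_normalize(mapped)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 512, 1000                             # toy embedding size and corpus size
    corpus_text_embs = rng.normal(size=(n, d))   # stand-in for encoded captions
    query_image_emb = rng.normal(size=d)         # stand-in for an encoded image

    projected = knn_cross_modal_map(query_image_emb, corpus_text_embs, k=5)
    print(projected.shape)                       # (512,) -- decode with the text-trained decoder
```

In a real pipeline the stand-in vectors would be replaced by actual CLIP image and text features, and the projected vector would condition a language-model decoder that was trained only on text embeddings.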