Learning Prototype via Placeholder for Zero-shot Recognition

Zaiquan Yang, Yang Liu, Wenjia Xu, Chong Huang, Lei Zhou, Chao Tong

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1559-1565. https://doi.org/10.24963/ijcai.2022/217

Zero-shot learning (ZSL) aims to recognize unseen classes by exploiting semantic descriptions shared between seen and unseen classes. Current methods show that it is effective to learn visual-semantic alignment by projecting semantic embeddings into the visual space as class prototypes. However, such a projection function is learned only on seen classes; when applied to unseen classes, the resulting prototypes often perform suboptimally due to domain shift. In this paper, we propose to learn prototypes via placeholders, termed LPL, to eliminate the domain shift between seen and unseen classes. Specifically, we combine seen classes to hallucinate new classes that serve as placeholders for the unseen classes in both the visual and semantic spaces. Placed between seen classes, the placeholders encourage the prototypes of seen classes to be highly dispersed, sparing more space for the insertion of well-separated unseen ones. Empirically, well-separated prototypes help counteract the visual-semantic misalignment caused by domain shift. Furthermore, we introduce a novel semantic-oriented fine-tuning method to guarantee the semantic reliability of the placeholders. Extensive experiments on five benchmark datasets demonstrate a significant performance gain of LPL over state-of-the-art methods.
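The abstract describes the hallucination step only at a high level. As a minimal sketch of how placeholders could be formed, assuming a PyTorch setup, attribute-vector semantics, and a Beta-distributed pairwise mixing scheme (the names PrototypeProjector, hallucinate_placeholders, and alpha, and all architectural details, are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch of the placeholder idea from the abstract: hallucinate
# "placeholder" classes by convexly combining pairs of seen-class semantic
# embeddings, then project all classes into visual space with one shared
# mapping. Shapes and the mixing scheme are assumptions for illustration.
import torch
import torch.nn as nn

class PrototypeProjector(nn.Module):
    """Projects semantic embeddings (e.g., attribute vectors) into the
    visual feature space, yielding one class prototype per class."""
    def __init__(self, sem_dim: int, vis_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(sem_dim, vis_dim),
            nn.ReLU(),
            nn.Linear(vis_dim, vis_dim),
        )

    def forward(self, sem: torch.Tensor) -> torch.Tensor:
        return self.proj(sem)

def hallucinate_placeholders(sem_seen: torch.Tensor, num_placeholders: int,
                             alpha: float = 0.5) -> torch.Tensor:
    """Combine random pairs of seen-class embeddings into placeholder
    classes that sit between seen classes in semantic space."""
    n = sem_seen.size(0)
    i = torch.randint(n, (num_placeholders,))
    j = torch.randint(n, (num_placeholders,))
    lam = torch.distributions.Beta(alpha, alpha).sample((num_placeholders, 1))
    return lam * sem_seen[i] + (1 - lam) * sem_seen[j]

# Usage: seen classes and placeholders share one projector, so a dispersion
# objective over the combined prototype set spreads seen prototypes apart.
sem_seen = torch.randn(40, 85)            # e.g., 40 seen classes, 85 attributes
projector = PrototypeProjector(sem_dim=85, vis_dim=2048)
sem_ph = hallucinate_placeholders(sem_seen, num_placeholders=40)
prototypes = projector(torch.cat([sem_seen, sem_ph], dim=0))  # (80, 2048)
```

Sharing a single projector across real and hallucinated classes is what lets an objective over the placeholder set push seen prototypes apart, which is the dispersion property the abstract attributes to LPL.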
Keywords:
Computer Vision: Transfer, low-shot, semi- and un-supervised learning
Computer Vision: Recognition (object detection, categorization)
Computer Vision: Representation Learning
Computer Vision: Vision and language