Norm-guided Adaptive Visual Embedding for Zero-Shot Sketch-Based Image Retrieval

Wenjie Wang, Yufeng Shi, Shiming Chen, Qinmu Peng, Feng Zheng, Xinge You

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 1106-1112. https://doi.org/10.24963/ijcai.2021/153

Zero-shot sketch-based image retrieval (ZS-SBIR), which aims to retrieve photos with sketches under the zero-shot scenario, has shown great promise in real-world applications. Most existing methods leverage language models to generate class prototypes and use them to arrange the locations of all categories in the common space shared by photos and sketches. Although great progress has been made, few of these methods consider whether such pre-defined prototypes are necessary for ZS-SBIR, where the locations of unseen-class samples in the embedding space are in fact determined by visual appearance, so a purely visual embedding performs better. To this end, we propose a novel Norm-guided Adaptive Visual Embedding (NAVE) model that adaptively builds the common space from visual similarity instead of language-based pre-defined prototypes. To further enhance the representation quality of unseen classes in both the photo and sketch modalities, a modality norm discrepancy measure and a noisy-label regularizer are jointly employed to measure and repair the modality bias of the learned common embedding. Experiments on two challenging datasets demonstrate the superiority of NAVE over state-of-the-art competitors.
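To make the norm-discrepancy idea concrete, here is a minimal sketch, assuming PyTorch; the paper's exact formulation is not given on this page. The hypothetical norm_discrepancy penalty compares the mean L2 norms of photo and sketch embeddings in the shared space, one plausible way to measure the modality bias the abstract describes.

```python
import torch

def norm_discrepancy(photo_emb: torch.Tensor, sketch_emb: torch.Tensor) -> torch.Tensor:
    """Hypothetical penalty: gap between the mean L2 norms of photo and
    sketch embeddings in the common space (an assumed reading of the
    'modality norm discrepancy' described in the abstract)."""
    photo_norm = photo_emb.norm(dim=1).mean()    # mean ||f(photo)||_2 over the batch
    sketch_norm = sketch_emb.norm(dim=1).mean()  # mean ||f(sketch)||_2 over the batch
    return (photo_norm - sketch_norm).abs()

# Toy usage: two batches of 128-d embeddings, with sketches embedded at
# systematically smaller norms to mimic a modality bias.
photos = torch.randn(32, 128)
sketches = 0.5 * torch.randn(32, 128)
loss = norm_discrepancy(photos, sketches)
print(loss.item())
```

In training, such a term would be added to the retrieval objective so that minimizing it shrinks the norm gap between the two modalities; the weighting and the repair mechanism used by NAVE are detailed in the full paper.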
Keywords:
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation
Machine Learning: Deep Learning
Machine Learning: Multi-instance; Multi-label; Multi-view learning