Equally-Guided Discriminative Hashing for Cross-modal Retrieval

Yufeng Shi, Xinge You, Feng Zheng, Shuo Wang, Qinmu Peng

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 4767-4773. https://doi.org/10.24963/ijcai.2019/662

Cross-modal hashing aims to project data from two modalities into a common Hamming space so that cross-modal retrieval can be performed efficiently. Despite the satisfactory performance achieved in real applications, existing methods cannot simultaneously preserve semantic structure, which maintains inter-class relationships, and improve discriminability, which keeps intra-class samples aggregated; this limits their retrieval performance. To address this problem, we propose Equally-Guided Discriminative Hashing (EGDH), which jointly considers semantic structure and discriminability. Specifically, we identify the connection between semantic-structure-preserving and discriminative methods. Based on it, we directly encode multi-label annotations, which act as high-level semantic features, to build a common semantic-structure-preserving classifier. With this common classifier guiding the learning of the hash functions of different modalities equally, the hash codes of samples are intra-class aggregated and inter-class relationship preserving. Experimental results on two benchmark datasets demonstrate the superiority of EGDH over state-of-the-art methods.
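To make the retrieval setting concrete, the sketch below illustrates the generic cross-modal hashing retrieval step (not the authors' EGDH code): once both modalities are mapped to binary codes in a shared Hamming space, a query from one modality (e.g. an image code) is ranked against a database from the other modality (e.g. text codes) by Hamming distance. The {-1, +1} code convention and the toy 4-bit codes are illustrative assumptions.

```python
import numpy as np

def hamming_distance(query_code, db_codes):
    # For codes in {-1, +1}, the Hamming distance equals
    # (n_bits - inner_product) / 2.
    n_bits = db_codes.shape[1]
    return (n_bits - db_codes @ query_code) // 2

def cross_modal_retrieve(query_code, db_codes, top_k=3):
    # Rank database items (other modality) by Hamming distance
    # to the query code; stable sort breaks ties by index.
    d = hamming_distance(query_code, db_codes)
    return np.argsort(d, kind="stable")[:top_k]

# Toy example: 4-bit text-modality database codes (hypothetical values).
db = np.array([
    [ 1,  1,  1,  1],
    [ 1,  1,  1, -1],
    [-1, -1, -1, -1],
    [ 1, -1,  1, -1],
])
# Image-modality query code; closest database item is index 2 (distance 1).
query = np.array([-1, -1, -1, 1])
ranking = cross_modal_retrieve(query, db)
```

Because both modalities share one Hamming space, this single distance computation serves both image-to-text and text-to-image retrieval; EGDH's common classifier is what ensures the two modal hash functions produce compatible codes.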
Keywords:
Multidisciplinary Topics and Applications: Information Retrieval
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation
Machine Learning Applications: Applications of Supervised Learning
Machine Learning: Deep Learning