Unsupervised Deep Hashing via Binary Latent Factor Models for Large-scale Cross-modal Retrieval
Gengshen Wu, Zijia Lin, Jungong Han, Li Liu, Guiguang Ding, Baochang Zhang, Jialie Shen
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2854-2860.
https://doi.org/10.24963/ijcai.2018/396
Despite its great success, matrix factorization based cross-modal hashing suffers from two problems: 1) there is no interaction between feature learning and binarization; and 2) most existing methods adopt a relaxation strategy that discards the discrete constraints when learning the hash function, which usually yields suboptimal solutions. In this paper, we propose a novel multimodal hashing framework, referred to as Unsupervised Deep Cross-Modal Hashing (UDCMH), for multimodal data search in a self-taught manner by integrating deep learning and matrix factorization with binary latent factor models. On one hand, our unsupervised deep learning framework enables the feature learning to be jointly optimized with the binarization. On the other hand, the hashing system based on the binary latent factor models can generate unified binary codes by solving a discrete-constrained objective function directly, with no need for a relaxation step. Moreover, novel Laplacian constraints are incorporated into the objective function, which make it possible to preserve not only the nearest neighbors that are commonly considered in the literature but also the farthest neighbors of data, even when semantic labels are unavailable. Extensive experiments on multiple datasets highlight the superiority of the proposed framework over several state-of-the-art baselines.
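To illustrate the neighbor-preservation idea behind the Laplacian constraints, the following minimal sketch builds a similarity graph that rewards agreement with the k nearest neighbors and penalizes agreement with the k farthest neighbors, then forms the corresponding graph Laplacian. This is only an assumed, simplified construction for illustration; the function name `similarity_with_far_neighbors` and the +1/-1 weighting scheme are hypothetical and not taken from the paper's exact formulation.

```python
import numpy as np

def similarity_with_far_neighbors(X, k=3):
    """Build a signed similarity matrix S and Laplacian L = D - S,
    where the k nearest neighbors of each sample get weight +1 and
    the k farthest neighbors get weight -1 (an illustrative choice,
    not the paper's exact weighting)."""
    n = X.shape[0]
    # Pairwise Euclidean distances between samples (rows of X).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    S = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d[i])          # ascending distance; order[0] is i itself
        S[i, order[1:k + 1]] = 1.0        # k nearest neighbors (skip self)
        S[i, order[-k:]] = -1.0           # k farthest neighbors
    S = (S + S.T) / 2                     # symmetrize the graph
    L = np.diag(S.sum(axis=1)) - S        # graph Laplacian L = D - S
    return S, L
```

A spectral term of the form trace(B L B^T) over binary codes B would then encourage nearby points to share codes while pushing the codes of farthest neighbors apart, which is the intuition the abstract describes.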
Keywords:
Machine Learning: Classification
Machine Learning: Data Mining
Machine Learning: Feature Selection; Learning Sparse Models
Machine Learning: Deep Learning