Abstract

Proceedings Abstracts of the Twenty-Third International Joint Conference on Artificial Intelligence

Online Hashing / 1422
Long-Kai Huang, Qiang Yang, Wei-Shi Zheng

Hash function learning has recently received increasing attention for fast search over large-scale data. However, existing popular learning-based hashing methods are batch learning models: they incur a large computational cost when learning an optimal model on a large amount of labelled data and cannot handle data that arrive sequentially. In this paper, we address this problem by developing an online hashing learning algorithm that updates the hashing model with each new pair of data. At the same time, the newly updated hash model is penalized for deviating from the last learned model, so that important information learned in previous rounds is retained. We also derive a tight bound on the cumulative loss of the proposed online learning algorithm. Experimental results demonstrate the superiority of the proposed online hashing model in searching for both metric distance neighbors and semantically similar neighbors.
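
As a rough illustration of the per-pair online update described above, the sketch below assumes a linear hash function h(x) = sign(Wx), a hinge-style pairwise loss, and a penalty term that keeps the new model close to the previously learned one. The function names, the specific loss, and the step rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch of an online hashing update (illustrative only).
# Assumptions not taken from the abstract: a linear hash function
# h(x) = sign(W x), a hinge-style pairwise loss on the relaxed codes,
# and a penalty that discourages deviation from the previous model.

def hash_codes(W, x):
    """Map a feature vector x to binary codes via a linear projection."""
    return np.sign(W @ x)

def online_update(W_prev, x_i, x_j, similar, lr=0.1, penalty=1.0):
    """Update the hash model on one new pair (x_i, x_j).

    similar: True if the pair should receive close codes, False otherwise.
    The step descends a surrogate pairwise loss, while `penalty` keeps the
    new model close to W_prev, retaining information from earlier rounds.
    """
    # Surrogate code similarity using the relaxed (real-valued) projections.
    s = (W_prev @ x_i) @ (W_prev @ x_j)
    target = 1.0 if similar else -1.0
    loss = max(0.0, 1.0 - target * s)      # hinge-style pairwise loss
    if loss == 0.0:
        return W_prev                      # pair already encoded consistently
    # Gradient of the relaxed loss term -target * x_i^T W^T W x_j w.r.t. W.
    grad = -target * (np.outer(W_prev @ x_j, x_i) + np.outer(W_prev @ x_i, x_j))
    # Penalized step: larger `penalty` means a more conservative update.
    return W_prev - lr / (1.0 + penalty) * grad

# Usage: stream pairs one at a time and update the model sequentially.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))          # 16-bit codes, 64-d features
for _ in range(100):
    xi, xj = rng.standard_normal(64), rng.standard_normal(64)
    W = online_update(W, xi, xj, similar=rng.random() < 0.5)
codes = hash_codes(W, rng.standard_normal(64))
```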