Approximate Large-scale Multiple Kernel k-means Using Deep Neural Network
Yueqing Wang, Xinwang Liu, Yong Dou, Rongchun Li
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3006-3012.
https://doi.org/10.24963/ijcai.2017/419
Multiple kernel clustering (MKC) algorithms have been extensively studied and applied in various settings. Although they demonstrate great success both theoretically and in applications, existing MKC algorithms cannot be applied to large-scale clustering tasks due to: i) the heavy computational cost of calculating the base kernels; and ii) insufficient memory to load the kernel matrices. In this paper, we propose an approximate algorithm to overcome these issues and make MKC applicable to large-scale applications. Specifically, our algorithm trains a deep neural network to regress the indicating matrix generated by an MKC algorithm on a small subset, then obtains the approximate indicating matrix of the whole data set using the trained network, and finally performs $k$-means on the output of our network. By mapping features directly to the indicating matrix, our algorithm avoids computing the full kernel matrices, which dramatically decreases the memory requirement. Extensive experiments show that our algorithm consumes less time than most of the compared algorithms, while achieving performance comparable to MKC algorithms.
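The three-step pipeline in the abstract (regress the subset's indicating matrix with a network, predict it for the full data set, then cluster) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the deep network is stood in for by scikit-learn's `MLPRegressor`, and `H_sub` is a random placeholder for the indicating matrix that a real MKC algorithm would produce on the subset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy setup: full data X, a small subset X_sub, and H_sub, a stand-in for the
# indicating matrix an MKC algorithm would output on the subset (hypothetical
# values here; the paper computes this with a real MKC step).
n_full, n_sub, d, k = 1000, 100, 20, 3
X = rng.normal(size=(n_full, d))
X_sub = X[:n_sub]
H_sub = rng.normal(size=(n_sub, k))

# Step 1: train a network to regress features -> indicating matrix on the subset.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_sub, H_sub)

# Step 2: approximate the indicating matrix for the whole data set,
# without ever forming the full base kernel matrices.
H_full = net.predict(X)

# Step 3: run k-means on the network output to get the final clustering.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(H_full)
print(H_full.shape, labels.shape)
```

The memory saving comes from step 2: the network maps each sample's features to a $k$-dimensional row of the indicating matrix directly, so the $n \times n$ kernel matrices are only ever needed on the small subset.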
Keywords:
Machine Learning: Kernel Methods
Machine Learning: Unsupervised Learning
Machine Learning: Multi-instance/Multi-label/Multi-view learning