Stacked Similarity-Aware Autoencoders

Wenqing Chu, Deng Cai

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 1561-1567. https://doi.org/10.24963/ijcai.2017/216

As one of the most popular unsupervised learning approaches, the autoencoder aims to reconstruct its input at the output with minimal discrepancy. The conventional autoencoder and most of its variants consider only one-to-one reconstruction, which ignores the intrinsic structure of the data and may lead to overfitting. To preserve the latent geometric information in the data, we propose stacked similarity-aware autoencoders. To train each individual autoencoder, we first obtain a pseudo class label for each sample by clustering the input features. The hidden codes of samples sharing the same pseudo label are then required to satisfy an additional similarity constraint, implemented as an extension of the recently proposed center loss. Under the joint supervision of the autoencoder reconstruction error and the center loss, the learned feature representations not only reconstruct the original data but also preserve its geometric structure. Furthermore, a stacked framework is introduced to boost the representation capacity. Experimental results on several benchmark datasets show a remarkable performance improvement of the proposed algorithm over other autoencoder-based approaches.
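The per-layer objective described above combines a reconstruction term with a weighted center-loss term, L = L_rec + lambda * L_c, where the center loss L_c = 1/2 * sum_i ||h_i - c_{y_i}||^2 pulls each hidden code h_i toward the center c_{y_i} of its pseudo class y_i. Below is a minimal sketch of one such layer in Python/PyTorch; the names (SimilarityAwareAE, train_layer, lambda_c, hid_dim), the network sizes, the use of k-means for pseudo-labeling, and the choice of gradient-updated learnable centers are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of one similarity-aware autoencoder layer (assumed setup).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class SimilarityAwareAE(nn.Module):
    def __init__(self, in_dim, hid_dim, n_clusters):
        super().__init__()
        # Sigmoid activations assume inputs normalized to [0, 1].
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())
        # One learnable center per pseudo class in the hidden space.
        self.centers = nn.Parameter(torch.randn(n_clusters, hid_dim))

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

def train_layer(x, n_clusters=10, hid_dim=64, lambda_c=0.1, epochs=50):
    # Step 1: pseudo class labels from clustering the input features.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(x.numpy())
    labels = torch.as_tensor(labels, dtype=torch.long)
    model = SimilarityAwareAE(x.shape[1], hid_dim, n_clusters)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        h, x_hat = model(x)
        # Joint supervision: reconstruction error plus a center loss that
        # pulls hidden codes toward the center of their pseudo class.
        rec_loss = ((x_hat - x) ** 2).mean()
        center_loss = 0.5 * ((h - model.centers[labels]) ** 2).sum(dim=1).mean()
        loss = rec_loss + lambda_c * center_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The hidden codes become the input features of the next stacked layer.
    return model.encoder(x).detach(), model
```

Under these assumptions, stacking amounts to calling train_layer repeatedly, feeding each layer's returned hidden codes in as the next layer's input.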
Keywords:
Machine Learning: Neural Networks
Machine Learning: Unsupervised Learning