Triplet Enhanced AutoEncoder: Model-free Discriminative Network Embedding

Yao Yang, Haoran Chen, Junming Shao

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5363-5369. https://doi.org/10.24963/ijcai.2019/745

Deep autoencoders are widely used in dimensionality reduction because of the expressive power of neural networks, which makes them naturally suited to embedding tasks that compress high-dimensional information into a low-dimensional latent space. For network representation, autoencoder-based methods such as SDNE and DNGR have achieved results comparable to the state of the art. However, none of them leverages label information, so the resulting embeddings lack discriminative power. In this paper, we present Triplet Enhanced AutoEncoder (TEA), a new deep network embedding approach from the perspective of metric learning. Equipped with a triplet-loss constraint, the proposed approach not only captures the topological structure but also preserves discriminative information. Moreover, unlike existing discriminative embedding techniques, TEA is independent of any specific classifier; we call this the model-free property. Extensive empirical results on three public datasets (i.e., Cora, Citeseer and BlogCatalog) show that TEA is stable and achieves state-of-the-art performance compared with both supervised and unsupervised network embedding approaches across various percentages of labeled data. The source code can be obtained from https://github.com/yybeta/TEA.
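The abstract describes combining an autoencoder's reconstruction loss with a triplet-loss constraint. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch of that kind of objective, assuming squared-Euclidean distances, a standard hinge-style triplet loss, and an illustrative weighting coefficient `alpha` (the function names and weights are hypothetical, not taken from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull same-label embeddings together,
    push different-label embeddings at least `margin` farther away."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # distance to same-label node
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # distance to different-label node
    return np.maximum(0.0, d_pos - d_neg + margin)

def combined_objective(x, x_hat, anchor, positive, negative,
                       alpha=1.0, margin=1.0):
    """Illustrative TEA-style objective: autoencoder reconstruction error
    plus an alpha-weighted triplet term over labeled node embeddings."""
    recon = np.mean((x - x_hat) ** 2)                  # structure-preserving term
    trip = np.mean(triplet_loss(anchor, positive, negative, margin))
    return recon + alpha * trip
```

Because the triplet term depends only on distances between latent vectors, it constrains the embedding space directly rather than through any particular downstream classifier, which is consistent with the model-free property the abstract claims.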
Keywords:
Natural Language Processing: Embeddings
Machine Learning Applications: Networks