MEGAN: A Generative Adversarial Network for Multi-View Network Embedding

Yiwei Sun, Suhang Wang, Tsung-Yu Hsieh, Xianfeng Tang, Vasant Honavar

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3527-3533. https://doi.org/10.24963/ijcai.2019/489

Data from many real-world applications can be naturally represented by multi-view networks, where the different views encode different types of relationships (e.g., friendship, shared interests in music, etc.) between real-world individuals or entities. There is an urgent need for methods that obtain low-dimensional, information-preserving, and typically nonlinear embeddings of such multi-view networks. However, most work on multi-view learning focuses on data that lack a network structure, and most work on network embedding has focused primarily on single-view networks. Against this background, we consider the multi-view network representation learning problem, i.e., the problem of constructing low-dimensional, information-preserving embeddings of multi-view networks. Specifically, we investigate a novel Generative Adversarial Network (GAN) framework for Multi-View Network Embedding, namely MEGAN, aimed at preserving the information from the individual network views while accounting for connectivity across, and hence the complementarity of and correlations between, the different views. The results of our experiments on two real-world multi-view data sets show that the embeddings obtained using MEGAN outperform state-of-the-art methods on node classification, link prediction, and visualization tasks.
Keywords:
Machine Learning: Data Mining
Machine Learning: Multi-instance; Multi-label; Multi-view learning
Machine Learning Applications: Networks
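
To make the general setup described in the abstract concrete, below is a minimal sketch (in PyTorch) of adversarial training for multi-view node embeddings: a generator produces view-specific "fake" neighbor representations from a shared node embedding, and a per-view discriminator learns to separate observed neighbors from generated ones. All names and design choices here (Generator, Discriminator, EMBED_DIM, the bilinear scorer, the per-view heads, etc.) are illustrative assumptions for exposition only; they are not the architecture or objective of MEGAN as reported in the paper.

```python
# Illustrative sketch only; not the authors' architecture or loss.
import torch
import torch.nn as nn

NUM_NODES, EMBED_DIM, NUM_VIEWS = 1000, 64, 3


class Generator(nn.Module):
    """Maps a node's shared embedding plus noise to a synthetic ("fake")
    neighbor representation for a given view."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_NODES, EMBED_DIM)   # shared across views
        self.view_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * EMBED_DIM, EMBED_DIM), nn.Tanh())
            for _ in range(NUM_VIEWS)
        ])

    def forward(self, nodes, view):
        noise = torch.randn(nodes.size(0), EMBED_DIM)
        h = torch.cat([self.embed(nodes), noise], dim=-1)
        return self.view_heads[view](h)


class Discriminator(nn.Module):
    """Scores how plausible a (node, neighbor-representation) pair is in a view."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_NODES, EMBED_DIM)   # the learned node embeddings
        self.view_scorers = nn.ModuleList([
            nn.Bilinear(EMBED_DIM, EMBED_DIM, 1) for _ in range(NUM_VIEWS)
        ])

    def forward(self, nodes, neighbor_vecs, view):
        return self.view_scorers[view](self.embed(nodes), neighbor_vecs)


G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()


def train_step(nodes, real_neighbors, view):
    """One adversarial step on a batch of edges (nodes -> real_neighbors)
    observed in a single view."""
    ones = torch.ones(nodes.size(0), 1)
    zeros = torch.zeros(nodes.size(0), 1)

    # Discriminator update: real neighbors -> 1, generated neighbors -> 0.
    fake = G(nodes, view).detach()
    real_vecs = D.embed(real_neighbors)
    d_loss = bce(D(nodes, real_vecs, view), ones) + bce(D(nodes, fake, view), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: make generated neighbors look real to the discriminator.
    g_loss = bce(D(nodes, G(nodes, view), view), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()


# Example usage on a toy batch of edges from view 0 (node ids are arbitrary).
nodes = torch.randint(0, NUM_NODES, (32,))
neighbors = torch.randint(0, NUM_NODES, (32,))
print(train_step(nodes, neighbors, view=0))
```

In this sketch the discriminator's embedding table plays the role of the learned node representations, and sharing it across the per-view scorers is one simple way to couple the views; how the views are actually coupled and regularized is specific to MEGAN and is described in the paper itself.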