Abstract

Proceedings Abstracts of the Twenty-Fourth International Joint Conference on Artificial Intelligence

Word Embedding Revisited: A New Representation Learning and Explicit Matrix Factorization Perspective
Yitan Li, Linli Xu, Fei Tian, Liang Jiang, Xiaowei Zhong, Enhong Chen

Significant advances have recently been made in neural-network-based distributed word representations, also known as word embeddings. Among the new word embedding models, skip-gram with negative sampling (SGNS) in the word2vec toolbox has attracted much attention due to its simplicity and effectiveness. However, the principles underlying SGNS remain poorly understood, aside from recent work that explains SGNS as an implicit factorization of the pointwise mutual information (PMI) matrix. In this paper, we provide a new perspective for further understanding SGNS. We point out that SGNS is essentially a representation learning method, which learns to represent each word's co-occurrence vector. From this representation learning view, SGNS is in fact an explicit matrix factorization (EMF) of the words' co-occurrence matrix. Furthermore, the proposed representation learning view makes it possible to extend SGNS to supervised word embedding.
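For reference, the implicit-factorization result alluded to above (due to Levy and Goldberg) can be stated as follows; this restates a known result from the literature rather than a formula from this paper's body. Here #(w, c) counts co-occurrences of word w with context c in a corpus D, #(w) and #(c) are the marginal counts, and k is the number of negative samples; the word and context embeddings are trained so that their dot product approximates the shifted PMI:

\[
\vec{w} \cdot \vec{c} \;=\; \mathrm{PMI}(w, c) - \log k,
\qquad
\mathrm{PMI}(w, c) \;=\; \log \frac{\#(w, c)\,|D|}{\#(w)\,\#(c)}.
\]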
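To make the matrix-factorization view concrete, below is a minimal, self-contained sketch of count-based embedding obtained by factorizing a positive-PMI co-occurrence matrix with a truncated SVD. This is a generic illustration of the idea only, not the authors' EMF algorithm; the toy corpus, window size, and dimensionality are arbitrary assumptions.

```python
import numpy as np

def cooccurrence_matrix(corpus, window=2):
    """Count word-context co-occurrences within a symmetric window."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    C[index[w], index[sent[j]]] += 1.0
    return C, vocab

def ppmi(C):
    """Positive pointwise mutual information of a co-occurrence matrix."""
    total = C.sum()
    row = C.sum(axis=1, keepdims=True)
    col = C.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(C * total / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0  # zero counts contribute nothing
    return np.maximum(pmi, 0.0)

# Toy corpus; in practice this would be a large text collection.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
C, vocab = cooccurrence_matrix(corpus)
M = ppmi(C)

# Rank-d truncated SVD: rows of U * sqrt(S) serve as d-dimensional embeddings.
U, S, _ = np.linalg.svd(M)
d = 2
embeddings = U[:, :d] * np.sqrt(S[:d])
print({w: embeddings[i].round(3) for i, w in enumerate(vocab)})
```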