Hierarchical Linear Disentanglement of Data-Driven Conceptual Spaces

Rana Alshaikh, Zied Bouraoui, Steven Schockaert

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3573-3579. https://doi.org/10.24963/ijcai.2020/494

Conceptual spaces are geometric meaning representations in which similar entities are represented by similar vectors. They are widely used in cognitive science, but there has been relatively little work on learning such representations from data. In particular, while standard representation learning methods can be used to induce vector space embeddings from text corpora, these differ from conceptual spaces in two crucial ways. First, the dimensions of a conceptual space correspond to salient semantic features, known as quality dimensions, whereas the dimensions of learned vector space embeddings typically lack any clear interpretation. This has been partially addressed in previous work, which has shown that it is possible to identify directions in learned vector spaces that capture semantic features. Second, conceptual spaces are normally organised into a set of domains, each of which is associated with a separate vector space. In contrast, learned embeddings represent all entities in a single vector space. Our hypothesis in this paper is that such single-space representations are sub-optimal for learning quality dimensions, because semantic features are often relevant only to a subset of the entities. We show that this issue can be mitigated by identifying features in a hierarchical fashion. Intuitively, the top-level features split the vector space into different domains, making it possible to subsequently identify domain-specific quality dimensions.
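To make the hierarchical idea sketched in the abstract more concrete, the snippet below gives a minimal illustration, not the authors' actual method: top-level directions are obtained as linear classifiers over the full embedding space, entities are grouped into domains by clustering their projections onto those directions, and domain-specific quality dimensions are then learned as further linear directions within each domain. All data, the use of logistic regression for direction finding, and the use of k-means for domain assignment are placeholder assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's implementation) of hierarchical
# identification of interpretable directions in an embedding space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic entity embeddings standing in for vectors learned from a corpus.
n_entities, dim = 500, 50
X = rng.normal(size=(n_entities, dim))

# Synthetic binary semantic features; in practice such supervision might come
# from co-occurrence with adjectives or other textual evidence.
n_top_features, n_fine_features = 3, 5
top_labels = (rng.normal(size=(n_entities, n_top_features)) > 0).astype(int)
fine_labels = (rng.normal(size=(n_entities, n_fine_features)) > 0).astype(int)

# Step 1: learn top-level feature directions as linear classifiers
# over the full space.
top_directions = []
for j in range(n_top_features):
    clf = LogisticRegression(max_iter=1000).fit(X, top_labels[:, j])
    top_directions.append(clf.coef_.ravel())
top_directions = np.vstack(top_directions)

# Step 2: project entities onto the top-level directions and cluster the
# projections; the clusters play the role of domains in a conceptual space.
projections = X @ top_directions.T
n_domains = 3
domains = KMeans(n_clusters=n_domains, n_init=10, random_state=0).fit_predict(projections)

# Step 3: within each domain, learn domain-specific quality dimensions as
# directions fitted only on the entities belonging to that domain.
domain_directions = {}
for d in range(n_domains):
    mask = domains == d
    dirs = []
    for j in range(n_fine_features):
        y = fine_labels[mask, j]
        if y.min() == y.max():  # skip features that do not vary in this domain
            continue
        clf = LogisticRegression(max_iter=1000).fit(X[mask], y)
        dirs.append(clf.coef_.ravel())
    domain_directions[d] = np.vstack(dirs) if dirs else np.empty((0, dim))
    print(f"domain {d}: {mask.sum()} entities, {len(dirs)} quality dimensions")
```

The key design point the sketch tries to convey is that the fine-grained directions in Step 3 are fitted per domain rather than once over the whole space, so a feature that is only meaningful for a subset of entities is not forced to fit the entire embedding.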
Keywords:
Natural Language Processing: Natural Language Processing
Machine Learning: Interpretability
Humans and AI: Cognitive Modeling