Compositionality Decomposed: How do Neural Networks Generalise? (Extended Abstract)
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Journal Track, pages 5065–5069. https://doi.org/10.24963/ijcai.2020/708
Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about the compositionality of language and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests for models, formulated at a task-independent level. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set, which we dub PCFG SET, apply the resulting tests to three popular sequence-to-sequence models, and provide an in-depth analysis of the results.
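To make the task concrete, the sketch below implements a toy interpreter for PCFG SET-style inputs: string-edit functions written in prefix notation and applied to sequences of character tokens, with the target output obtained by evaluating the functions. It is an illustrative sketch only; the function names are a plausible subset in the spirit of the paper, and the separator convention and greedy literal parsing are simplifying assumptions, not the paper's exact grammar or data-generation procedure.

```python
# Toy interpreter for PCFG SET-style inputs (illustrative sketch, not the
# paper's generator). An input such as "append echo A B , reverse C D E"
# maps to the output "A B B E D C".

UNARY = {
    "copy":    lambda xs: xs,            # identity
    "reverse": lambda xs: xs[::-1],      # mirror the sequence
    "echo":    lambda xs: xs + xs[-1:],  # repeat the final element
    "repeat":  lambda xs: xs + xs,       # concatenate with itself
}
BINARY = {
    "append":  lambda xs, ys: xs + ys,   # first argument, then second
    "prepend": lambda xs, ys: ys + xs,   # second argument, then first
}
SEP = ","  # assumed separator between the two arguments of a binary function


def evaluate(tokens):
    """Evaluate one prefix-notation expression; return (output, remaining tokens)."""
    head, rest = tokens[0], tokens[1:]
    if head in UNARY:
        arg, rest = evaluate(rest)
        return UNARY[head](arg), rest
    if head in BINARY:
        first, rest = evaluate(rest)
        if rest and rest[0] == SEP:      # skip the argument separator
            rest = rest[1:]
        second, rest = evaluate(rest)
        return BINARY[head](first, second), rest
    # Otherwise the head starts a literal string argument: consume tokens
    # until the next function symbol or separator.
    literal = [head]
    while rest and rest[0] not in UNARY and rest[0] not in BINARY and rest[0] != SEP:
        literal.append(rest[0])
        rest = rest[1:]
    return literal, rest


if __name__ == "__main__":
    src = "append echo A B , reverse C D E".split()
    output, _ = evaluate(src)
    print(" ".join(output))  # -> A B B E D C
```

Because the target of every input is fully determined by such a compositional evaluation, a model's behaviour on held-out combinations of functions and arguments can be checked exactly, which is what makes a data set of this kind a suitable substrate for the five tests.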
Natural Language Processing: Natural Language Semantics
Machine Learning: Deep Learning
Natural Language Processing: Natural Language Processing
Machine Learning: Interpretability