Submodularity-Inspired Data Selection for Goal-Oriented Chatbot Training Based on Sentence Embeddings
Mladen Dimovski, Claudiu Musat, Vladimir Ilievski, Andreea Hossman, Michael Baeriswyl
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 4019-4025.
https://doi.org/10.24963/ijcai.2018/559
Spoken language understanding (SLU) systems, such as goal-oriented chatbots or personal assistants, rely on an initial natural language understanding (NLU) module to determine the intent and to extract the relevant information from the user queries they take as input. SLU systems usually help users to solve problems in relatively narrow domains and require a large amount of in-domain training data. This leads to significant data availability issues that inhibit the development of successful systems.
To alleviate this problem, we propose a data selection technique for the low-data regime that enables training with fewer labelled sentences, and thus at a lower labelling cost.
We propose a submodularity-inspired data ranking function, the ratio-penalty marginal gain, for selecting data points to label based only on information extracted from the textual embedding space. We show that the distances in the embedding space are a viable source of information for data selection. Our method outperforms two known active learning techniques and enables cost-efficient training of the NLU unit. Moreover, our selection technique does not require the model to be retrained between selection steps, making it time-efficient as well.
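To make the idea of embedding-based, submodularity-inspired selection concrete, the sketch below shows a greedy selection loop over precomputed sentence embeddings. The scoring used here is a generic facility-location-style marginal gain with a redundancy penalty; it is only an illustration of the selection pattern, not the paper's exact ratio-penalty marginal gain, which is defined in the full text. The embeddings, pool size, and budget are placeholder assumptions.

```python
# Illustrative sketch: greedy data selection over sentence embeddings.
# The gain below is a facility-location-style coverage gain with a
# redundancy penalty, standing in for (not reproducing) the paper's
# ratio-penalty marginal gain. Embeddings are assumed precomputed
# (e.g., averaged word vectors or any sentence encoder).

import numpy as np

def cosine_similarity_matrix(embeddings):
    """Pairwise cosine similarities between all sentence embeddings."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / np.clip(norms, 1e-12, None)
    return normalized @ normalized.T

def greedy_select(embeddings, budget):
    """Greedily pick `budget` sentences to send for labelling.

    Each step adds the sentence with the largest marginal gain: how much
    it improves coverage of the unlabeled pool, penalized by its
    similarity to already selected sentences. No model retraining is
    needed between steps; only embedding-space distances are used.
    """
    sim = cosine_similarity_matrix(embeddings)
    n = sim.shape[0]
    selected = []
    # coverage[i] = best similarity of pool sentence i to the selected set
    coverage = np.zeros(n)

    for _ in range(budget):
        best_idx, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Coverage improvement if sentence i were added...
            gain = np.maximum(coverage, sim[i]).sum() - coverage.sum()
            # ...penalized by redundancy with the selected set.
            if selected:
                gain -= sim[i, selected].max()
            if gain > best_gain:
                best_idx, best_gain = i, gain
        selected.append(best_idx)
        coverage = np.maximum(coverage, sim[best_idx])

    return selected

# Example: pick 10 sentences to label from a pool of 500
# 100-dimensional sentence embeddings (random placeholders here).
pool = np.random.randn(500, 100)
to_label = greedy_select(pool, budget=10)
```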
Keywords:
Machine Learning: Active Learning
Natural Language Processing: Natural Language Processing
Natural Language Processing: NLP Applications and Tools
Natural Language Processing: Tagging, chunking, and parsing
Natural Language Processing: Embeddings