The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning

Bonggun Shin, Hao Yang, Jinho D. Choi

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3439-3445. https://doi.org/10.24963/ijcai.2019/477

Recent advances in deep learning have increased the demand for neural models in real-world applications. In practice, these applications often need to be deployed with limited resources while maintaining high accuracy. This paper addresses the core of neural models in NLP, word embeddings, and presents an embedding distillation framework that remarkably reduces the dimension of word embeddings without compromising accuracy. A new distillation ensemble approach is also proposed that trains a highly efficient student model using multiple teacher models. In our approach, the teacher models play a role only during training, so the student model operates on its own without support from the teacher models during decoding, which makes it run as fast and light as any single model. All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models in most cases. Our analysis depicts an insightful transformation of word embeddings through distillation and suggests a future direction for ensemble approaches using neural models.
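To make the teacher-student setup concrete, below is a minimal PyTorch sketch of ensemble distillation for document classification. It assumes a generic logit-matching formulation: a student with small word embeddings is trained on the averaged soft labels of several large-embedding teachers, and the teachers are never consulted at decoding time. The classifier architecture, dimensions, temperature, and uniform teacher averaging are all illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextClassifier(nn.Module):
    """Bag-of-embeddings classifier; stands in for any teacher or student net."""
    def __init__(self, vocab_size, emb_dim, num_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.fc = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> average word embeddings, then classify
        return self.fc(self.emb(token_ids).mean(dim=1))

def distill_step(student, teachers, token_ids, optimizer, temperature=2.0):
    """One training step: the student mimics the averaged soft labels of the
    teacher ensemble. Teachers are used only here, never during decoding."""
    with torch.no_grad():
        # Uniform averaging of teacher distributions is one simple ensemble
        # choice; the paper's combination scheme may differ.
        teacher_probs = torch.stack(
            [F.softmax(t(token_ids) / temperature, dim=-1) for t in teachers]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student(token_ids) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical dimensions: teachers use 400-d embeddings, the student only 50-d.
vocab_size, num_classes = 10000, 5
teachers = [TextClassifier(vocab_size, 400, num_classes).eval() for _ in range(3)]
student = TextClassifier(vocab_size, 50, num_classes)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

batch = torch.randint(0, vocab_size, (8, 20))  # toy batch of token ids
print(distill_step(student, teachers, batch, optimizer))
```

After training, only the compact student is deployed, so inference cost matches that of a single small model regardless of how many teachers supervised it.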
Keywords:
Machine Learning: Ensemble Methods
Natural Language Processing: NLP Applications and Tools
Machine Learning: Deep Learning
Natural Language Processing: Embeddings