IMF: Integrating Matched Features Using Attentive Logit in Knowledge Distillation

Jeongho Kim, Hanbeen Lee, Simon S. Woo

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 974-982. https://doi.org/10.24963/ijcai.2023/108

Knowledge distillation (KD) is an effective method for transferring the knowledge of a teacher model to a student model, with the aim of improving the latter's performance efficiently. Although generic knowledge distillation methods such as softmax representation distillation and intermediate feature matching have demonstrated improvements on various tasks, student networks show only marginal improvements due to their limited model capacity. In this work, to address this limitation of the student model, we propose a novel flexible KD framework, Integrating Matched Features using Attentive Logit in Knowledge Distillation (IMF). Our approach introduces an intermediate feature distiller (IFD) to improve the overall performance of the student model by directly distilling the teacher's knowledge into branches of the student model. The outputs of the IFD, which is trained by the teacher model, are effectively combined by the attentive logit. During inference, we use only a few blocks of the student together with the trained IFD, requiring an equal or smaller number of parameters. Through extensive experiments, we demonstrate that IMF consistently outperforms other state-of-the-art methods by a large margin on various datasets and tasks without extra computation.
Keywords:
Computer Vision: CV: Structural and model-based approaches, knowledge representation and reasoning
Computer Vision: CV: Representation learning
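
For readers unfamiliar with the general recipe, the following PyTorch sketch illustrates the kind of pipeline the abstract describes: an intermediate feature distiller branch that maps a student feature into the teacher's feature space, branch logits fused by attention weights ("attentive logit"), and the standard feature-matching, softened-logit, and cross-entropy losses. The module names (IFDBranch, AttentiveLogit), dimensions, and loss terms are illustrative assumptions for exposition only, not the paper's exact architecture or objective.

```python
# Minimal sketch of intermediate-feature distillation with attention-fused logits.
# All names and dimensions here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IFDBranch(nn.Module):
    """Maps an intermediate student feature to the teacher's feature space
    and produces a branch logit from the matched feature."""
    def __init__(self, s_dim, t_dim, num_classes):
        super().__init__()
        self.match = nn.Sequential(nn.Linear(s_dim, t_dim), nn.ReLU())
        self.head = nn.Linear(t_dim, num_classes)

    def forward(self, s_feat):
        matched = self.match(s_feat)          # feature aligned to teacher space
        return matched, self.head(matched)    # (matched feature, branch logit)

class AttentiveLogit(nn.Module):
    """Fuses multiple branch logits with learned attention weights."""
    def __init__(self, num_branches):
        super().__init__()
        self.attn = nn.Parameter(torch.zeros(num_branches))

    def forward(self, logits):                # logits: list of [B, C] tensors
        w = torch.softmax(self.attn, dim=0)   # one weight per branch
        return sum(wi * li for wi, li in zip(w, logits))

def distillation_losses(matched, t_feat, fused_logit, t_logit, labels, T=4.0):
    """Standard KD terms: feature matching, softened-logit KL, cross-entropy."""
    feat_loss = F.mse_loss(matched, t_feat.detach())
    kd_loss = F.kl_div(F.log_softmax(fused_logit / T, dim=1),
                       F.softmax(t_logit.detach() / T, dim=1),
                       reduction="batchmean") * T * T
    ce_loss = F.cross_entropy(fused_logit, labels)
    return feat_loss, kd_loss, ce_loss
```

In this sketch, only the student blocks and the trained branch modules would be kept at inference time, mirroring the abstract's claim of equal or fewer parameters; how the branches and losses are weighted in IMF itself is detailed in the full paper.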