DL-KDD: Dual-Lightness Knowledge Distillation for Action Recognition in the Dark

Chi-Jui Chang, Oscar Tai-Yuan Chen, Vincent S. Tseng

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
AI4Tech: AI Enabling Technologies. Pages 9140-9148. https://doi.org/10.24963/ijcai.2025/1016

Human action recognition in dark videos is a challenging computer vision task due to the low quality of videos filmed in the dark. Recent studies have focused on applying dark enhancement methods to improve the visibility of the video. However, such processing discards critical information present in the original (un-enhanced) video. Conversely, traditional two-stream methods can learn from both the original and the enhanced video, but at a significant increase in computational cost. To address these challenges, we propose a novel knowledge-distillation-based framework, named Dual-Lightness KnowleDge Distillation (DL-KDD), which resolves both issues at once: a student model acquires both original features and light-enhanced knowledge without additional model complexity, improving performance while avoiding extra computational cost. Through comprehensive evaluations, the proposed DL-KDD, which requires only the original video as input during inference, significantly outperforms state-of-the-art methods on widely used dark-video datasets. The results highlight the effectiveness of our knowledge-distillation-based framework for human action recognition in the dark.
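The abstract does not specify the exact architectures or losses, so the following is only a minimal sketch of the general dual-lightness distillation idea it describes: a teacher sees a light-enhanced clip, a student sees the original dark clip, and at inference only the student (on the original video) is needed. The gamma-correction `enhance` function, the Hinton-style KL logit distillation, and the names `student`, `teacher`, `tau`, and `alpha` are illustrative assumptions, not the paper's actual method.

```python
# Sketch of a dual-stream teacher-student distillation step, assuming
# (not from the paper) KL-based logit distillation and gamma correction
# as a stand-in for the light-enhancement module.
import torch
import torch.nn as nn
import torch.nn.functional as F


def enhance(video: torch.Tensor, gamma: float = 0.5) -> torch.Tensor:
    """Hypothetical light enhancement: simple gamma correction on [0, 1] frames."""
    return video.clamp(min=0.0, max=1.0) ** gamma


def distillation_step(student: nn.Module,
                      teacher: nn.Module,
                      video: torch.Tensor,    # original dark clip, shape (B, C, T, H, W)
                      labels: torch.Tensor,   # action labels, shape (B,)
                      tau: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """One training step: the teacher sees the enhanced clip, the student sees
    the original clip, and the student is supervised by both the ground-truth
    labels and the teacher's softened predictions."""
    with torch.no_grad():
        teacher_logits = teacher(enhance(video))
    student_logits = student(video)

    # Standard cross-entropy on the original (un-enhanced) input.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label distillation from the light-enhanced teacher.
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                  F.softmax(teacher_logits / tau, dim=1),
                  reduction="batchmean") * tau * tau
    return alpha * ce + (1.0 - alpha) * kd
```

In this sketch the enhanced stream exists only at training time; at inference the student runs alone on the original dark video, which is consistent with the abstract's claim of no extra computational cost.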
Keywords:
Advanced AI4Tech: Deep AI4Tech
Domain-specific AI4Tech: AI4Safety
Domain-specific AI4Tech: AI4Security