Base-Detail Feature Learning Framework for Visible-Infrared Person Re-Identification

Zhihao Gong, Lian Wu, Yong Xu

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 1035-1043. https://doi.org/10.24963/ijcai.2025/116

Visible-infrared person re-identification (VIReID) enables ReID in 24-hour surveillance scenarios; however, achieving satisfactory performance remains challenging due to the substantial discrepancy between the visible (VIS) and infrared (IR) modalities. Existing methods inadequately exploit the information available in each modality, focusing primarily on mining discriminative features from modality-shared information while neglecting modality-specific details. To fully utilize these fine-grained cues, we propose a Base-Detail Feature Learning Framework (BDLF) that enhances the learning of both base and detail knowledge, thereby capitalizing on both modality-shared and modality-specific information. Specifically, BDLF mines detail and base features through a lossless detail feature extraction module and a complementary base embedding generation mechanism, respectively, supported by a novel correlation restriction method that ensures the learned features enrich both detail and base knowledge across the VIS and IR modalities. Comprehensive experiments on the SYSU-MM01, RegDB, and LLCM datasets validate the effectiveness of BDLF.
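The abstract does not specify how the correlation restriction is implemented; one common way to discourage two feature branches (e.g., base and detail embeddings) from encoding redundant information is to penalize their batch-wise cross-correlation. The sketch below is a hypothetical illustration of such a penalty, not the paper's actual loss: `decorrelation_penalty` standardizes each feature dimension over the batch, forms the D×D cross-correlation matrix between the two branches, and returns its mean squared entry.

```python
import numpy as np

def decorrelation_penalty(base: np.ndarray, detail: np.ndarray, eps: float = 1e-8) -> float:
    """Hypothetical correlation restriction between two N x D feature batches.

    Standardizes each feature dimension across the batch, computes the
    D x D cross-correlation matrix, and penalizes its squared entries.
    A value near zero means the two branches are (linearly) decorrelated.
    """
    # Standardize per feature dimension (zero mean, unit variance over the batch).
    b = (base - base.mean(axis=0)) / (base.std(axis=0) + eps)
    d = (detail - detail.mean(axis=0)) / (detail.std(axis=0) + eps)
    n = base.shape[0]
    corr = b.T @ d / n  # D x D cross-correlation matrix
    return float((corr ** 2).mean())
```

For independently drawn batches the penalty is close to zero, while feeding the same features to both branches yields a large value, so minimizing it during training would push the two embeddings toward complementary content.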
Keywords:
Computer Vision: CV: Image and video retrieval 
Computer Vision: CV: Biometrics, face, gesture and pose recognition
Computer Vision: CV: Machine learning for vision
Computer Vision: CV: Multimodal learning