Modality-aware Style Adaptation for RGB-Infrared Person Re-Identification

Ziling Miao, Hong Liu, Wei Shi, Wanlu Xu, Hanrong Ye

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 916-922. https://doi.org/10.24963/ijcai.2021/127

RGB-infrared (IR) person re-identification is a challenging task due to the large modality gap between RGB and IR images. Many existing methods bridge the modality gap through style conversion, which requires high-similarity cross-modality images generated by complex CNN structures such as GANs. In this paper, we propose a highly compact modality-aware style adaptation (MSA) framework, which explores more potential relations between the RGB and IR modalities by introducing new related modalities. The focus thus shifts from bridging the modality gap to filling it, with no requirement for high-quality generated images. To this end, we first propose a concise, feature-free image generation structure that adapts the original modalities to two new styles compatible with both inputs via patch-based pixel redistribution. Second, we devise two image style quantification metrics that discriminate styles in image space using luminance and contrast. Third, we design two image-level losses based on the quantified results to guide style adaptation during an end-to-end four-modality collaborative learning process. Experimental results on two datasets, SYSU-MM01 and RegDB, show that MSA achieves significant improvements with little extra computation cost and outperforms state-of-the-art methods.
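To make the abstract's three ingredients concrete, the sketch below shows one plausible reading of the two style statistics (luminance as mean intensity, contrast as intensity standard deviation) and a toy feature-free, patch-based pixel redistribution between two same-sized single-channel images. The function names, patch size, mixing ratio, and within-patch redistribution rule are illustrative assumptions, not the authors' exact formulation; the paper's image-level losses are likewise not reproduced here.

    import numpy as np

    def luminance(img):
        # Style statistic 1 (assumed form): mean pixel intensity.
        return float(np.mean(img))

    def contrast(img):
        # Style statistic 2 (assumed form): standard deviation of intensities.
        return float(np.std(img))

    def mix_patch_pixels(img_a, img_b, patch=8, ratio=0.5, rng=None):
        # Toy patch-based pixel redistribution between two same-sized
        # single-channel images (e.g., a grayscaled RGB image and an IR
        # image): inside each non-overlapping patch, a random subset of
        # pixel positions is copied from img_b, the rest kept from img_a.
        # No features or learned parameters are involved, and the output
        # blends the pixel statistics (luminance/contrast) of both inputs.
        rng = np.random.default_rng(0) if rng is None else rng
        out = img_a.copy()
        h, w = img_a.shape[:2]
        for y in range(0, h - h % patch, patch):
            for x in range(0, w - w % patch, patch):
                mask = rng.random((patch, patch)) < ratio
                out[y:y + patch, x:x + patch][mask] = \
                    img_b[y:y + patch, x:x + patch][mask]
        return out

Under this reading, an image-level loss could, for example, penalize the gap between luminance(mix_patch_pixels(rgb, ir)) and a target style statistic; how MSA actually couples the quantified styles into its two losses is defined in the paper itself.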
Keywords:
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation
Machine Learning: Learning Generative Models
Machine Learning: Transfer, Adaptation, Multi-task Learning