Modality Unifying Network for Visible-Infrared Person Re-Identification

Visible-infrared person re-identification (VI-ReID) is a challenging task due to large cross-modality discrepancies and intra-class variations. Existing methods mainly focus on learning modality-shared representations by embedding different modalities into the same feature space. As a result, the learned features emphasize common patterns across modalities while suppressing modality-specific and identity-aware information that is valuable for Re-ID. To address these issues, we propose a novel Modality Unifying Network (MUN) to explore a robust auxiliary modality for VI-ReID. First, the auxiliary modality is generated by combining the proposed cross-modality learner and intra-modality learner, which can dynamically model the modality-specific and modality-shared representations to alleviate both cross-modality and intra-modality variations. Second, by aligning identity centres across the three modalities, an identity alignment loss function is proposed to learn discriminative feature representations. Third, a modality alignment loss is introduced to consistently reduce the distribution distance between visible and infrared images by modality prototype modeling. Extensive experiments on multiple public datasets demonstrate that the proposed method surpasses the current state-of-the-art methods by a significant margin.
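The identity alignment idea described above can be illustrated with a minimal sketch: for each identity, compute its feature centre within each of the three modalities (visible, infrared, and the generated auxiliary modality) and penalise the pairwise distances between those centres. This is an illustrative reconstruction from the abstract, not the paper's exact formulation; the function name and the mean-squared-distance form are assumptions.

```python
import numpy as np

def identity_alignment_loss(feats, labels, modalities):
    """Hedged sketch of an identity-centre alignment loss (not the
    paper's exact loss). For each identity, compute its feature centre
    per modality and sum squared distances between centre pairs.

    feats:      (N, D) array of feature vectors
    labels:     (N,)   identity labels
    modalities: (N,)   modality labels (e.g. 0=visible, 1=infrared, 2=auxiliary)
    """
    loss, pairs = 0.0, 0
    for pid in np.unique(labels):
        # Collect this identity's centre in every modality where it appears.
        centres = []
        for m in np.unique(modalities):
            mask = (labels == pid) & (modalities == m)
            if mask.any():
                centres.append(feats[mask].mean(axis=0))
        # Penalise every pair of modality centres for this identity.
        for i in range(len(centres)):
            for j in range(i + 1, len(centres)):
                loss += np.sum((centres[i] - centres[j]) ** 2)
                pairs += 1
    return loss / max(pairs, 1)
```

When the per-modality centres of an identity coincide, the loss is zero; it grows as the modality-specific centres drift apart, which is the behaviour the abstract attributes to the identity alignment term.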

Hao Yu, Xu Cheng, Wei Peng, Weihao Liu, Guoying Zhao

A4 Article in conference proceedings (peer-reviewed)

H. Yu, X. Cheng, W. Peng, W. Liu and G. Zhao, "Modality Unifying Network for Visible-Infrared Person Re-Identification," 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2023, pp. 11151-11161, doi: 10.1109/ICCV51070.2023.01027.