MDANet: Modality-Aware Domain Alignment Network for Visible-Infrared Person Re-Identification

Visible-infrared person re-identification is a challenging task in video surveillance. Most existing works achieve performance gains by aligning feature distributions or image styles across modalities, while multi-granularity information and domain knowledge are usually neglected. Motivated by these limitations, we propose a novel modality-aware domain alignment network (MDANet) for visible-infrared person re-identification (VI-ReID), which exploits global-local context cues and a generalized domain alignment strategy to mitigate modality discrepancy and poor generalization. First, a modality-aware global-local context attention (MGLCA) module is proposed to capture multi-granularity context features and identity-aware patterns. Second, we present a generalized domain alignment learning head (GDALH), whose core idea is to enrich feature diversity during domain alignment, to relieve the modality discrepancy and enhance the generalization of MDANet. Finally, the entire network is trained end-to-end with the proposed cross-modality circle, classification, and domain alignment losses. We conduct comprehensive experiments on two standard VI-ReID datasets and their corrupted counterparts to validate the robustness and generalization of our approach. MDANet clearly outperforms state-of-the-art methods. Specifically, the proposed method gains 8.86% and 2.50% in Rank-1 accuracy on the SYSU-MM01 (all-search, single-shot mode) and RegDB (infrared-to-visible mode) datasets, respectively. The source code will be made available soon.
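To make the training objective concrete, the sketch below illustrates how a classification loss, a pairwise circle loss, and a simple domain (modality) alignment term could be combined end-to-end. It is a minimal PyTorch sketch under our own assumptions: `circle_loss` follows the standard pairwise formulation of Sun et al. (2020) rather than the paper's cross-modality variant, and `alignment_loss` is a first-moment feature-matching stand-in for GDALH; all function names and weights (`lam_circle`, `lam_align`) are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def circle_loss(features, labels, m=0.25, gamma=64.0):
    """Pairwise circle loss on L2-normalized features.
    Assumes each identity appears at least twice in the batch."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                                   # cosine similarity matrix
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask, neg_mask = same & ~eye, ~same

    sp, sn = sim[pos_mask], sim[neg_mask]                     # positive / negative pairs
    ap = torch.clamp_min(1 + m - sp.detach(), 0.0)            # adaptive positive weights
    an = torch.clamp_min(sn.detach() + m, 0.0)                # adaptive negative weights
    logit_p = -gamma * ap * (sp - (1 - m))
    logit_n = gamma * an * (sn - m)
    # log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
    return F.softplus(torch.logsumexp(logit_n, 0) + torch.logsumexp(logit_p, 0))

def alignment_loss(feat_vis, feat_ir):
    """Toy modality alignment: match the mean feature of each modality."""
    return (feat_vis.mean(0) - feat_ir.mean(0)).pow(2).sum()

def total_loss(logits, labels, feat_vis, feat_ir, lam_circle=1.0, lam_align=0.1):
    """Combined objective; logits/labels cover the concatenated visible+infrared batch."""
    feats = torch.cat([feat_vis, feat_ir], dim=0)
    return (F.cross_entropy(logits, labels)
            + lam_circle * circle_loss(feats, labels)
            + lam_align * alignment_loss(feat_vis, feat_ir))
```

In practice the two modality branches would share or partially share backbone weights, and the alignment term would operate on the features produced by the alignment head; the weighting of the three terms is a tunable design choice.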