Recently, at the invitation of Professor Salvador Garcia, Editor-in-Chief of Information Fusion, Professor ZHANG Qiang's group at the School of Mechano-Electronic Engineering, Xidian University, together with Professor Jungong Han's group at the Department of Computer Science, Aberystwyth University, published a review paper, 'Deep Learning for Visible-Infrared Cross-modality Person Re-Identification: A Comprehensive Review', in Information Fusion.
Cross-modality person re-identification aims to match query images of one modality against gallery images of another modality, e.g. visible-to-infrared and infrared-to-visible image matching, and has wide applications in smart surveillance and intelligent security systems. With the ongoing construction of smart cities, demand for surveillance and security continues to grow. As one of the key techniques in this field, cross-modality person re-identification has therefore gained considerable attention in recent years and has advanced rapidly in computer vision. The paper provides a comprehensive and systematic review of cross-modality person re-identification, offering relevant researchers a detailed introduction to the task and supporting its further development.
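To make the matching task above concrete, the sketch below frames cross-modality retrieval as nearest-neighbour search in a shared embedding space: a query feature from one modality is ranked against gallery features from the other by cosine similarity. The random 128-D embeddings stand in for the outputs of hypothetical visible and infrared encoders; this is an illustrative assumption, not the method proposed in the review.

```python
import numpy as np

def l2_normalize(x):
    # Scale each row to unit length so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def rank_gallery(query_feat, gallery_feats):
    # Return gallery indices sorted by descending cosine similarity to the query.
    sims = gallery_feats @ query_feat
    return np.argsort(-sims)

rng = np.random.default_rng(0)
# Hypothetical embeddings: pretend a visible-image encoder and an
# infrared-image encoder map images into a shared 128-D feature space.
query_vis = l2_normalize(rng.normal(size=(1, 128)))[0]  # one visible query
gallery_ir = l2_normalize(rng.normal(size=(5, 128)))    # five infrared gallery images

ranking = rank_gallery(query_vis, gallery_ir)
print(ranking)  # infrared gallery indices, best match first
```

In practice the two encoders are trained (jointly or with modality-specific branches) so that images of the same person land close together across modalities; the retrieval step itself stays this simple.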
Figure 1. Illustration of the basic framework of cross-modality person re-identification models.
Figure 2. Taxonomy of cross-modality person re-identification models.
Original Article Link: https://www.sciencedirect.com/science/article/pii/S1566253522002007