Youchen Fan, Mingyu Qin, Huichao Guo, Laixian Zhang
Department of Electronic and Optical Engineering, Space Engineering University
Range-gated laser imaging instruments can capture face images in dark environments, offering a new approach to long-distance face recognition at night. However, laser images have low contrast, a low signal-to-noise ratio (SNR) and no color information, which hinders observation and recognition. It is therefore important to translate laser images into visible images before identification. For image translation, we propose a laser-visible face image translation model with spectral normalization (SN-CycleGAN). Spectral normalization layers are added to the discriminator to address the low translation quality caused by the difficulty of training the generative adversarial network, and a content reconstruction loss based on the Y channel is added to reduce erroneous mappings. On a self-built laser-visible face image dataset, the faces generated by the improved model show better visual quality than those of other models, with fewer erroneous mappings and largely preserved structural features of the target. The model achieves a Fréchet Inception Distance (FID) of 36.845, which is 16.902, 13.781, 10.056, 57.722, 62.598 and 0.761 lower than the CycleGAN, Pix2Pix, UNIT, UGATIT, StarGAN and DCLGAN models, respectively. For recognition of the translated images, we propose a laser-visible face recognition model based on feature retention: shallow feature maps carrying identity information are connected directly to the decoder to counter the loss of identity information during network propagation, and a domain loss based on triplet loss is added to constrain the style gap between domains. A pre-trained FaceNet is used to recognize the generated visible face images and report Rank-1 recognition accuracy. The images generated by the improved model reach a recognition accuracy of 76.9%, a substantial improvement over the above models and 19.2% higher than recognition on the laser images themselves.
Range-gated laser imaging can capture face images in low-light conditions, but these images often suffer from low contrast, low signal-to-noise ratio (SNR), and lack color information, which can hinder observation and recognition.
The study proposes converting laser images into visible images using a laser-visible face image translation model combined with spectral normalization (SN-CycleGAN). This approach aims to enhance image quality and facilitate better recognition.
By adding spectral normalization layers to the discriminator, the SN-CycleGAN model addresses training difficulties in generative adversarial networks, leading to improved image translation quality.
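For reference, a minimal sketch of applying spectral normalization to a PatchGAN-style discriminator in PyTorch. The layer counts, channel widths and the use of `torch.nn.utils.spectral_norm` are illustrative assumptions, not the authors' exact architecture.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNDiscriminator(nn.Module):
    """PatchGAN-style discriminator with spectral normalization on every conv layer.

    Spectral normalization bounds the Lipschitz constant of the discriminator,
    which stabilizes adversarial training (the motivation given for SN-CycleGAN).
    """
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        def sn_conv(cin, cout, stride):
            # spectral_norm rescales the conv weight by its largest singular
            # value at every forward pass.
            return spectral_norm(nn.Conv2d(cin, cout, kernel_size=4,
                                           stride=stride, padding=1))
        self.net = nn.Sequential(
            sn_conv(in_channels, base, 2),  nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base, base * 2, 2),     nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base * 2, base * 4, 2), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base * 4, base * 8, 1), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base * 8, 1, 1),  # 1-channel patch-wise real/fake map
        )

    def forward(self, x):
        return self.net(x)
```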
A content reconstruction loss based on the Y channel is added to reduce erroneous mappings, helping to preserve the structural features of the target during image translation.
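As a rough illustration, a Y-channel content loss can be computed as an L1 distance between the luminance of the source image and the translated output; the BT.601 weights and the choice of L1 here are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def rgb_to_y(img):
    """Luminance (Y) channel of an RGB batch (N, 3, H, W) using BT.601 weights."""
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def y_content_loss(source, translated):
    # Penalize luminance differences only, so structural content is preserved
    # while color is free to change between domains.
    return F.l1_loss(rgb_to_y(translated), rgb_to_y(source))
```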
The proposed model generates faces with better visual quality, reducing error mapping and retaining structural features compared to other models. It achieves a Fréchet Inception Distance (FID) value of 36.845, which is lower than that of CycleGAN, Pix2Pix, UNIT, UGATIT, StarGAN, and DCLGAN models, indicating superior performance.
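For context, FID measures the Fréchet distance between Gaussian fits to Inception features of real and generated images (lower is better), with mean and covariance $(\mu_r, \Sigma_r)$ for real and $(\mu_g, \Sigma_g)$ for generated images:

```latex
\mathrm{FID} = \left\lVert \mu_r - \mu_g \right\rVert_2^2
             + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```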
The study proposes a laser-visible face recognition model based on feature retention, where shallow feature maps containing identity information are directly connected to the decoder to mitigate identity information loss during network transmission.
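A minimal sketch of routing shallow, identity-bearing feature maps straight to the decoder via a U-Net-style skip connection. The two-stage encoder/decoder and channel widths are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class FeatureRetainingTranslator(nn.Module):
    """Encoder-decoder in which the shallow encoder feature map is concatenated
    onto the decoder input, so fine-grained identity cues bypass the bottleneck."""
    def __init__(self, ch=3, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(ch, base, 4, 2, 1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(inplace=True))
        # Final stage receives base (from dec2) + base (skip from enc1) channels.
        self.dec1 = nn.ConvTranspose2d(base * 2, ch, 4, 2, 1)

    def forward(self, x):
        f1 = self.enc1(x)           # shallow features: edges, local identity detail
        f2 = self.enc2(f1)          # deeper, more abstract features
        d2 = self.dec2(f2)
        d1 = self.dec1(torch.cat([d2, f1], dim=1))  # skip connection retains identity
        return torch.tanh(d1)
```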
A domain loss function based on triplet loss is added to the model to constrain style differences between domains, enhancing recognition accuracy.
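A hedged sketch of a triplet-based domain loss, assuming the anchor is the embedding of a generated image, the positive a real visible-domain face, and the negative a laser-domain face; the margin and sampling scheme are assumptions for illustration.

```python
import torch.nn as nn

# Standard triplet margin loss: pull the generated image's embedding toward the
# visible domain and push it away from the laser domain.
triplet = nn.TripletMarginLoss(margin=1.0, p=2)

def domain_loss(embed_fn, generated, visible_ref, laser_ref):
    anchor   = embed_fn(generated)    # embedding of the translated face
    positive = embed_fn(visible_ref)  # real visible-domain face
    negative = embed_fn(laser_ref)    # laser-domain face
    return triplet(anchor, positive, negative)
```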
Using a pre-trained FaceNet to recognize generated visible face images, the proposed model achieves a Rank-1 recognition accuracy of 76.9%, which is a significant improvement over other models and 19.2% higher than laser face recognition alone.
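For illustration, Rank-1 accuracy can be computed from FaceNet embeddings by checking whether each probe's nearest gallery embedding shares its identity label; the use of cosine similarity here is an assumption.

```python
import torch
import torch.nn.functional as F

def rank1_accuracy(probe_emb, probe_ids, gallery_emb, gallery_ids):
    """probe_emb: (P, D), gallery_emb: (G, D) FaceNet embeddings;
    probe_ids / gallery_ids: integer identity labels as tensors."""
    sims = F.normalize(probe_emb, dim=1) @ F.normalize(gallery_emb, dim=1).T  # (P, G)
    nearest = sims.argmax(dim=1)                 # index of best gallery match per probe
    hits = (gallery_ids[nearest] == probe_ids)   # is the top-1 identity correct?
    return hits.float().mean().item()
```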