Current Optics and Photonics
Curr. Opt. Photon. 2022; 6(6): 608-618
Published online December 25, 2022 https://doi.org/10.3807/COPP.2022.6.6.608
Copyright © Optical Society of Korea.
Yunfeng Jiang, Dongsheng Wu, Jie Liu, Kuo Tian, Dan Wang
Department of Electronics and Optical Engineering, Army Engineering University of PLA, Shijiazhuang 050000, China
Corresponding author: *jyf1optics@163.com, ORCID 0000-0002-6848-238X
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response, λ1 and λ2. Therefore, we built a double-channel four-band true color night vision system that expands the system response to improve the output image SNR. We also proposed an image fusion method based on principal component analysis (PCA) and nonsubsampled shearlet transform (NSST) to obtain the true color night vision images. Through experiments, a method based on edge extraction of the targets and spatial dimension decorrelation was proposed to calculate the SNR of the obtained images, and we calculated the correlation coefficient (CC) between the edge graphs of the obtained and reference images. The results showed that the SNR of the images of four scenes obtained by our system was 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than that of the conventional tri-band system, and the CC was also higher, which demonstrates that our system can obtain true color images of better quality.
Keywords: Decorrelation, Nonsubsampled shearlet transform (NSST), Principal component analysis (PCA), Signal-to-noise ratio (SNR), True color night vision
OCIS codes: (110.2960) Image analysis; (110.2970) Image detection systems; (110.3000) Image quality assessment
The needs of patrolling and driving at night led to the birth of night vision technology. It has extended the hours over which wars are fought, and has even caused some conflicts to take place more at night than in the daytime. Night vision technology uses photoelectric imaging devices to detect the radiation or reflection from targets at night, converting target information that human eyes cannot distinguish into a visible image through the acquisition, processing and display stages of a photodetector and imaging equipment. Many studies have shown that human eyes can distinguish only dozens of gray levels but thousands of colors, so color imaging greatly improves the recognition probability of human observers [1]. Experiments have shown that the target recognition rate in a color image is 30% higher than that in a gray image, and the false recognition rate can be reduced by 60% [2]. Color information stored in the brain or a computer database can be used to better recognize targets and understand a scene [3]. Therefore, for human observers, targets are understood faster and recognized more accurately in color images than in gray images.
At present, low-light-level (LLL) and infrared night vision technology products take a larger share of the night vision market. Although they are widely used, their output images are monochromatic and contain limited detail information. The complex background interferes with the target recognition of human eyes, which is their technical defect [4, 5]. In addition, most of the current color night vision technology products work by means of false color fusion and color transfer technology [6–8], but their output images have poor natural color, which is quite different from the true color information observed by human eyes in the daytime. Therefore, the pursuit of true color information of scenes at night has become one of the main study directions of night vision technology.
In recent years, true color night vision technology has developed rapidly and is widely applied in military, traffic, reconnaissance and other observation fields. In 2006, the company OKSI produced a color night vision product [8] that placed a liquid crystal filter, whose transmission band can be tuned by changing the applied voltage, in front of a third-generation image intensifier CCD (ICCD), and fused the collected images of the different bands to obtain a true color night vision image. However, it was not suitable for dynamic scenes. In 2010, OKSI also produced a product that obtained the color information of the scene by placing Bayer filters in front of the detector [7]. The principle is that each pixel of the detector samples part of the spectrum, and the three primary RGB color values are then obtained by interpolation; there are many interpolation methods (nearest-neighbor, linear and 3 × 3 interpolation, etc.) [9, 10]. Since then, true color night vision products have been developed based on the principle of the Bayer array, which uses conventional tri-band image fusion methods. However, due to the low illumination at night, the signal-to-noise ratio (SNR) of the true color images obtained by conventional technology is very low, which seriously degrades image quality and human visual perception [11]. To increase the detail and information content of the output images, some studies have attempted to improve system performance with four-band systems. For example, [12] used a portable four-band sensor system to classify oil palm fresh fruit bunches by maturity, making full use of the four-band information. Four-band (RYYB and RGGB) true color systems have also been proposed to increase the signal value under weak light, but at the expense of color accuracy, especially for green and yellow [13, 14].
For this purpose, we proposed a new double-channel four-band system to output night vision images with higher SNR and true color information.
In our work, we first analyzed the SNR theory of the conventional tri-band true color night vision system and found that the SNR of the output images is limited by the wavelength range of the system response, λ1 and λ2.
The principle of true color night vision technology is to divide the visible light band (380–760 nm) at night into three sub-bands (R, G and B), and use the LLL night vision system and image acquisition device to capture the three sub-band images for registration, fusion and other processing, so as to obtain a true color night vision image consistent with the human visual system [15]. The following formula expresses this principle:
$$C(x, y) = \left[R(x, y),\; G(x, y),\; B(x, y)\right], \quad (1)$$

where $C$ is the synthesized true color image and R, G and B represent the R-, G- and B-band images, respectively.
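As a minimal sketch of this channel-mapping principle, the three registered sub-band images can be stacked into the channels of one color image (the array values here are arbitrary):

```python
import numpy as np

# Toy 2x2 sub-band "images" (8-bit gray values); in the real system these are
# the registered R-, G- and B-band frames captured by the sub-band channel.
R = np.array([[200, 10], [30, 40]], dtype=np.uint8)
G = np.array([[20, 220], [30, 40]], dtype=np.uint8)
B = np.array([[10, 15], [240, 40]], dtype=np.uint8)

# Mapping each sub-band image to the corresponding RGB channel yields the
# true color image.
C = np.dstack([R, G, B])
print(C.shape)  # (2, 2, 3)
```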
The SNR of an LLL imaging system is defined as the ratio of the number of signal electrons $S$ to the number of noise electrons $N$:

$$\mathrm{SNR} = \frac{S}{N}, \quad (2)$$

where the number of signal electrons collected over the system response band $[\lambda_1, \lambda_2]$ is

$$S = \frac{\pi \tau_o A_d t_{int}}{4 F^2 h c} \int_{\lambda_1}^{\lambda_2} E(\lambda)\, \eta(\lambda)\, \lambda\, \mathrm{d}\lambda, \quad (3)$$

where $\tau_o$ is the transmittance of the optical system, $A_d$ is the pixel area, $t_{int}$ is the integration time, $F$ is the f-number of the lens, $h$ is Planck's constant, $c$ is the speed of light, $E(\lambda)$ is the spectral irradiance at the entrance pupil, and $\eta(\lambda)$ is the quantum efficiency of the detector.
The total output noise of the system includes temporal noise and spatial noise, but spatial noise can be corrected by the algorithms. Therefore, the noise analyzed in this paper only relates to temporal noise. The temporal noise of the LLL imaging system is composed of photon noise, dark current noise and readout circuit noise. The above noise components are independent and uncorrelated, so the total noise power of the system is the superposition value of their power, that is, the total number of noise electrons is
$$N = \sqrt{S + N_d + N_r^2}, \quad (4)$$

where $\sqrt{S}$ is the photon (shot) noise, $N_d$ is the number of dark current electrons accumulated during the integration time, and $N_r$ is the readout circuit noise in electrons.
The formula for calculating the SNR of the LLL imaging system can be obtained from formulas (2) and (4):

$$\mathrm{SNR} = \frac{S}{\sqrt{S + N_d + N_r^2}}. \quad (5)$$
Therefore, the rate of change of the system SNR with the signal value can be obtained as the partial derivative of the SNR with respect to the number of signal electrons $S$:

$$\frac{\partial\, \mathrm{SNR}}{\partial S} = \frac{S + 2\left(N_d + N_r^2\right)}{2\left(S + N_d + N_r^2\right)^{3/2}}. \quad (6)$$
For a concrete working LLL night vision system, the dark current electrons $N_d$ and readout noise $N_r$ are fixed by the detector, so the change rate in formula (6) depends only on the signal value $S$; the resulting curve is shown in Fig. 1.
It can be seen that when the signal value is low, the change rate of the SNR is much greater than when the signal value is high; that is, under low illumination conditions, increasing the number of signal electrons will quickly improve the SNR. According to formula (3), the parameters of the optical system and detector of the LLL imaging system are constant, so the signal intensity can only be changed by adjusting the wavelength range of the system response, $\lambda_1$ and $\lambda_2$.
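A quick numerical sketch of this behavior, assuming the shot/dark/readout noise model described above; the values of $N_d$ and $N_r$ below are illustrative, not from the paper:

```python
import numpy as np

# Assumed noise parameters in electrons (illustrative, not from the paper).
N_d = 100.0   # dark current electrons per integration time
N_r = 10.0    # readout circuit noise (rms electrons)

def snr(S):
    # SNR = signal electrons / total noise electrons
    return S / np.sqrt(S + N_d + N_r ** 2)

def dsnr_dS(S):
    # Partial derivative of the SNR with respect to the signal value S
    C = N_d + N_r ** 2
    return (S + 2 * C) / (2 * (S + C) ** 1.5)

# At a low signal level the SNR grows far faster per added signal electron
# than at a high signal level:
print(dsnr_dS(50.0) > 5 * dsnr_dS(5000.0))  # True
```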
At present, the three primary color image fusion methods based on liquid crystal tunable filter or optical thin-film filter technology mainly operate in the visible light band to obtain true color night vision images [17]. The true color night vision system captures the sub-band (B: 380–490 nm, G: 490–580 nm, R: 580–760 nm) images, which greatly reduces the wavelength range of the system response. The SNR of the sub-band (R, G and B) images is

$$\mathrm{SNR}_i = \frac{S_i}{\sqrt{S_i + N_d + N_r^2}}, \quad i \in \{R, G, B\},$$

where $S_i$ is the number of signal electrons obtained by carrying out the integral of formula (3) over the corresponding sub-band only. Since each $S_i$ is far smaller than the signal collected over the whole visible band, the SNR of each sub-band image is correspondingly low.
In the case of low illuminance at night, the illuminance is about $10^{-1}$ lx under moonlight and $10^{-3}$ lx under starlight. Although the light energy in the visible (380–760 nm) range is very low, there is abundant near-infrared (NIR; 760–1,100 nm) radiation in the night sky. The curves in Fig. 2 show the spectral distribution under full moon, starlight and airglow conditions [18].
If we make full use of the large amount of visible and NIR light at night, the system response band will be greatly expanded, thereby improving the SNR of the true color night vision images. The SNR of the full-band (380–1,100 nm) image is

$$\mathrm{SNR}_{full} = \frac{S_{full}}{\sqrt{S_{full} + N_d + N_r^2}},$$

where $S_{full}$ is the number of signal electrons integrated over the whole 380–1,100 nm band. Since $S_{full}$ is much larger than any sub-band signal $S_i$, the full-band image has a much higher SNR than the sub-band images.
Therefore, we can expand the response range of the true color LLL night vision system by adding a full-band channel, thereby improving the SNR of the output images.
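To see why the added channel helps, consider illustrative electron counts for the three sub-bands and the NIR contribution that only the full-band channel collects (all values hypothetical):

```python
import numpy as np

N_d, N_r = 100.0, 10.0                                # assumed detector noise (electrons)
S_R, S_G, S_B, S_NIR = 300.0, 250.0, 200.0, 900.0     # illustrative signals

def snr(S):
    return S / np.sqrt(S + N_d + N_r ** 2)

# The full-band channel integrates the whole 380-1,100 nm range, so its
# signal is the sum of the visible sub-bands plus the NIR contribution.
S_full = S_R + S_G + S_B + S_NIR
print(all(snr(S_full) > snr(S) for S in (S_R, S_G, S_B)))  # True
```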
In order to improve the SNR of the system, we added a full-band channel to the true color night vision system. The overall scheme of the double-channel four-band true color night vision system we designed is shown in Fig. 3; it is mainly composed of two channels (full-band and sub-band). Each channel consists of four parts: 1. Optical system, 2. Adjusting device, 3. Image acquisition system, and 4. Image processing and fusion system. The image processing and fusion system is mainly composed of a computer or DSP. The working principle of the system is as follows:
First, part of the reflected light from the targets passes through the full-band channel, while the rest passes through the sub-band channel, where it is divided into the three primary colors by a color-separation prism. Next, the full-band and sub-band images are captured by the ICCDs and the image acquisition device. Finally, after image registration and fusion in the computer, we obtain the true color night vision image in real time. In addition, because the adjusting devices align the four optical paths onto the ICCDs, the difficulty of multi-band image registration is greatly reduced.
In order to make full use of the full-band image details and sub-band image (R, G and B) spectral information, we proposed an image fusion method based on PCA and NSST to get the true color night vision image. The steps of the fusion algorithm are as follows:
(1) PCA is performed on the sub-band images (R, G and B) to obtain the principal components $P_1$, $P_2$ and $P_3$.
(2) The spatial detail information in the full-band image ($I_F$) is preserved by histogram matching: $I_F$ is matched to the first principal component $P_1$ so that its mean and variance agree with those of $P_1$:

$$I_F' = \frac{\sigma_{P_1}}{\sigma_F}\left(I_F - \mu_F\right) + \mu_{P_1},$$

where $\mu_F$ and $\sigma_F$ are the mean and standard deviation of $I_F$, and $\mu_{P_1}$ and $\sigma_{P_1}$ are those of $P_1$.
(3) NSST decomposition is performed on the first principal component $P_1$ and the matched full-band image $I_F'$ to obtain their high-frequency components ($H_A$, $H_B$) and low-frequency components ($L_A$, $L_B$).
(4) Different fusion rules are used for the high- and low-frequency components. According to the matching degree between the images, the maximum value or weighted average strategy is used for the low-frequency components, and the absolute maximum strategy is used for the high-frequency components. We get the fused high- and low-frequency components $H_F$ and $L_F$.
(5) The inverse NSST (INSST) is then performed on $H_F$ and $L_F$ to obtain the new first principal component $P_1'$.
(6) The inverse PCA (IPCA) transform is performed on $P_1'$, $P_2$ and $P_3$ to obtain the fused true color night vision image.
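The six steps can be sketched end-to-end. NSST is not available in standard libraries, so a simple box-filter low-pass split stands in for the NSST decomposition, and the fusion rules are simplified; this illustrates the pipeline's structure, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic registered inputs: three sub-band images and one full-band image.
h, w = 32, 32
R, G, B = (rng.random((h, w)) for _ in range(3))
F = (R + G + B) / 3 + 0.1 * rng.random((h, w))

# Step 1: PCA on the sub-band images (each pixel is a 3-vector).
X = np.stack([R, G, B], axis=-1).reshape(-1, 3)
mean = X.mean(axis=0)
Xc = X - mean
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]          # sort components by variance
eigvec = eigvec[:, order]
P = Xc @ eigvec                           # principal components
P1 = P[:, 0].reshape(h, w)

# Step 2: mean/variance (histogram) match the full-band image to P1.
Fm = (F - F.mean()) * (P1.std() / F.std()) + P1.mean()

# Step 3: split P1 and Fm into low/high-frequency parts. A box-filter
# low-pass stands in here for the NSST decomposition.
def lowpass(img, k=5):
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

LA, LB = lowpass(P1), lowpass(Fm)
HA, HB = P1 - LA, Fm - LB

# Step 4: average the low-frequency parts; absolute maximum for the
# high-frequency parts (simplified versions of the LCFR/HCFR rules).
Lf = (LA + LB) / 2
Hf = np.where(np.abs(HA) >= np.abs(HB), HA, HB)

# Step 5: the inverse of the decomposition gives the new first component.
P1_new = Lf + Hf

# Step 6: inverse PCA with the replaced first component gives the fused image.
P[:, 0] = P1_new.ravel()
fused = (P @ eigvec.T + mean).reshape(h, w, 3)
print(fused.shape)  # (32, 32, 3)
```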
Next, we introduce the fusion rules of high- and low-frequency components in Step 4.
a) The low-frequency components fusion rule (LCFR).
Let $L_A(x, y)$ and $L_B(x, y)$ denote the low-frequency components of $P_1$ and the full-band image at position $(x, y)$, and define the regional energy of each component in a local window $w$ (e.g., $3 \times 3$) centered at $(x, y)$:

$$E_X(x, y) = \sum_{(m, n) \in w} \left[L_X(x + m, y + n)\right]^2, \quad X \in \{A, B\},$$

where a larger $E_X$ indicates richer local information. Let the matching degree between the two components be

$$M(x, y) = \frac{2 \sum_{(m, n) \in w} L_A(x + m, y + n)\, L_B(x + m, y + n)}{E_A(x, y) + E_B(x, y)},$$

where the value of $M(x, y)$ lies in $[0, 1]$; the closer it is to 1, the more similar the two regions are.

Assuming that $T$ is the matching threshold (typically $0.5 \le T \le 1$), the fusion strategy is chosen as follows.

When $M(x, y) < T$, the two regions differ greatly, and the maximum value strategy is used:

$$L_F(x, y) = \begin{cases} L_A(x, y), & E_A(x, y) \ge E_B(x, y) \\ L_B(x, y), & E_A(x, y) < E_B(x, y) \end{cases}$$

When $M(x, y) \ge T$, the two regions are similar, and the weighted average strategy is used:

$$L_F(x, y) = W_{max} L_{max}(x, y) + W_{min} L_{min}(x, y),$$

where $L_{max}$ and $L_{min}$ are the components with the larger and smaller regional energy, respectively, and the weights are $W_{min} = \frac{1}{2} - \frac{1}{2} \left( \frac{1 - M(x, y)}{1 - T} \right)$ and $W_{max} = 1 - W_{min}$.
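A direct sketch of this low-frequency rule, with a 3 × 3 window and an assumed threshold T = 0.7 (the paper's exact window size and threshold are not specified here):

```python
import numpy as np

def lcfr(LA, LB, T=0.7, k=3):
    """Fuse low-frequency components LA, LB by regional energy and matching
    degree over a k x k window; T is an assumed matching threshold."""
    pad = k // 2
    pA = np.pad(LA.astype(float), pad, mode="edge")
    pB = np.pad(LB.astype(float), pad, mode="edge")
    out = np.zeros(LA.shape, dtype=float)
    for i in range(LA.shape[0]):
        for j in range(LA.shape[1]):
            wa = pA[i:i + k, j:j + k]
            wb = pB[i:i + k, j:j + k]
            EA, EB = (wa ** 2).sum(), (wb ** 2).sum()
            M = 2 * (wa * wb).sum() / (EA + EB + 1e-12)
            if M < T:
                # Regions differ: take the component with larger energy.
                out[i, j] = LA[i, j] if EA >= EB else LB[i, j]
            else:
                # Regions match: weighted average, biased to the larger energy.
                w_min = 0.5 - 0.5 * (1 - M) / (1 - T)
                w_max = 1.0 - w_min
                hi, lo = (LA[i, j], LB[i, j]) if EA >= EB else (LB[i, j], LA[i, j])
                out[i, j] = w_max * hi + w_min * lo
    return out

# Identical inputs match perfectly, so the fusion returns the input unchanged.
A = np.arange(16.0).reshape(4, 4)
print(np.allclose(lcfr(A, A), A))  # True
```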
b) The high-frequency components fusion rule (HCFR).
Since the high-frequency components mainly include the detail information of an image, and the high-frequency detail of the full-band image ($I_F'$) is generally richer than that of $P_1$, the absolute maximum strategy is adopted:

$$H_F(x, y) = \begin{cases} H_A(x, y), & \left|H_A(x, y)\right| \ge \left|H_B(x, y)\right| \\ H_B(x, y), & \text{otherwise,} \end{cases}$$

where $H_A$ and $H_B$ are the high-frequency components of $P_1$ and $I_F'$, respectively.
For the image fusion algorithm, we give an algorithm flow chart in Fig. 4.
According to the above design, the double-channel four-band true color night vision system can be built and debugged, which can realize the display of true color night vision images.
We used a Gen III image intensifier (ANDOR iStar DH334T-18-U-93) with a spectral response range of 380–1,090 nm, coupled with a CCD to form the ICCD, to build the true color night vision system. Using the system, experiments were carried out on targets in different scenes, with the same exposure time for all four channel sensors when capturing images. The four targets were: a standard colorimetric plate (Scene 1, 0.02 lx), a building (Scene 2, 0.02 lx), an indoor rainbow umbrella (Scene 3, 0.26 lx) and an urban night scene (Scene 4, 0.63 lx). The captured and registered images (B-band, G-band, R-band and full-band) of each channel are shown in Figs. 5–8.
In order to compare this with conventional true color night vision technology, we set up a single channel tri-band system, that is, the full-band channel of our system was abandoned, and only the sub-band channel worked to obtain the true color night vision images. Here, the conventional true color night vision image fusion technology is to map the sub-band image to the RGB space. The true color night vision images obtained by conventional tri-band and four-band image fusion methods are shown in Figs. 9–12, respectively.
From the point of view of target spectral characteristics, targets of the same kind have the same or similar spectral characteristics and therefore the same or similar gray values in an image, so there is a high correlation between signals from the same kind of target within a uniform region, whereas noise is independent and uncorrelated. In other words, the spectral reflection information of identical targets in a uniform region is correlated, so if the signal value of a pixel can be estimated from its neighbors, the signal and noise can be separated to obtain the noise value. Multiple linear regression (MLR) can be used to fit the signal value of a pixel from the signal values of surrounding pixels in the uniform region [21]. Generally speaking, the targets in an image are neither unique nor uniform, and structural features exist between different targets. The signal values of identical or uniform targets can be fitted after these structural features are eliminated (the decorrelation method). This not only weakens the mutual interference between different targets, but also allows the noise value to be estimated from the high correlation between identical, uniform targets. Therefore, we proposed a method based on edge extraction of the targets and spatial dimension decorrelation to calculate the SNR of the output true color night vision images. The steps are as follows:
(1) Use the Canny operator to detect and mark the edges of the targets in the detected image.
(2) Divide the edge-marked image into blocks (the block size is selected according to the image).
(3) The image blocks marked by the edges are removed, and the others containing the same targets or that are uniform are retained.
(4) Use MLR to fit each pixel of the reserved image blocks, excluding the outermost pixels, from its neighboring pixels:

$$\hat{x}_{i, j} = a\, x_{i-1, j} + b\, x_{i, j-1} + c\, x_{i-1, j-1} + d,$$

where $x_{i, j}$ is the gray value of the pixel in row $i$ and column $j$, and $a$, $b$, $c$ and $d$ are the regression coefficients obtained by least squares fitting. Then the residual $r_{i, j} = x_{i, j} - \hat{x}_{i, j}$ is the decorrelated noise component of the pixel, and the standard deviation of the residuals within a block is taken as the noise estimate of that block.
(5) Calculate the mean value $\overline{DN}$ of the corresponding image block:

$$\overline{DN} = \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} x_{i, j},$$

where $\overline{DN}$ is the mean gray value of an $M \times N$ image block and serves as the signal estimate of that block.
(6) Finally, select the interval with the largest mode according to the standard deviations of all these image blocks, and take the mean of the standard deviations of the blocks falling into this interval as the noise mean value $\sigma_n$; the SNR of the image is then the ratio of the corresponding mean signal value $\overline{DN}$ to $\sigma_n$.
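Steps (4)–(6) can be sketched on a synthetic uniform scene (so the edge-extraction steps are trivially empty). The causal-neighbor regression below is one plausible form of the MLR fit, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

def block_noise_std(block):
    """MLR fit of each interior pixel from three causal neighbors; the
    residual standard deviation estimates the block's noise."""
    x = block.astype(float)
    y = x[1:, 1:].ravel()
    A = np.column_stack([
        x[:-1, 1:].ravel(),   # pixel above
        x[1:, :-1].ravel(),   # pixel to the left
        x[:-1, :-1].ravel(),  # diagonal neighbor
        np.ones(y.size),      # constant term
    ])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return (y - A @ coef).std()

def estimate_snr(img, bs=8, n_bins=10):
    """Block the image, estimate per-block noise, and use the modal interval
    of the block noise estimates (steps 4-6 of the method)."""
    h, w = (s - s % bs for s in img.shape)
    blocks = [img[i:i + bs, j:j + bs]
              for i in range(0, h, bs) for j in range(0, w, bs)]
    stds = np.array([block_noise_std(b) for b in blocks])
    means = np.array([b.mean() for b in blocks])
    counts, edges = np.histogram(stds, bins=n_bins)
    k = counts.argmax()                   # interval with the largest mode
    sel = (stds >= edges[k]) & (stds <= edges[k + 1])
    return means[sel].mean() / stds[sel].mean()

# Uniform scene (no edges to remove) plus Gaussian noise of sigma = 5 around
# a mean of 100: the estimate should land near the true SNR of 20.
img = 100.0 + rng.normal(0.0, 5.0, (64, 64))
snr_est = estimate_snr(img)
assert 10 < snr_est < 30
```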
According to the above method, we calculated the SNR of the images obtained by the two true color night vision schemes as shown in Fig. 13.
According to Figs. 9–12, both the conventional tri-band system and the double-channel four-band true color night vision system we designed can obtain true color night vision images that are consistent with the human visual system. However, the obtained images visually confirm the difference between the tri-band and four-band systems: the four-band system increases the output image sharpness compared with the tri-band system. In order to show how much of an effect SNR has on image quality, we calculated the correlation coefficient (CC) [22] between the edge graphs of the obtained and reference images. The reference images we selected were the full-band images of the four scenes, because they have rich detail and edge information. First, we used the Canny operator, which is robust to noise, to detect the edges of the obtained (tri-band and four-band) true color images and the reference images. Second, we calculated and compared the CC between the tri-band or four-band images and the full-band images. The CC values are listed in Table 1.
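The CC comparison can be reproduced in outline. A plain gradient-threshold edge detector stands in for the Canny operator here, and the Pearson correlation coefficient is computed between the binary edge maps:

```python
import numpy as np

def edge_map(img, t=0.2):
    """Gradient-magnitude edge map; a crude stand-in for the Canny operator."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > t * mag.max()).astype(float)

def cc(a, b):
    """Pearson correlation coefficient between two (edge) images."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# A sharp vertical step and a mildly noisy copy: their edge maps should be
# strongly correlated.
base = np.zeros((32, 32))
base[:, 16:] = 1.0
noisy = base + np.random.default_rng(2).normal(0.0, 0.05, base.shape)
assert cc(edge_map(base), edge_map(noisy)) > 0.5
```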
TABLE 1 Correlation coefficient (CC) between obtained and reference images of two systems
Scenes | Tri-band CC | Four-band CC
---|---|---
Standard Colorimetric Plate | 0.0727 | 0.2263
Building | 0.0585 | 0.2237
Indoor Rainbow Umbrella | 0.4398 | 0.5011
Urban Night Scene | 0.5562 | 0.6071
From Fig. 13 and Table 1, the SNR of the images obtained by the four-band system was 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than that of the conventional tri-band system, and the CC values were also higher. This shows that the four-band system we designed outputs more distinct true color night vision images than the conventional tri-band system. Under weak light, our four-band system has a clear advantage over the tri-band system, although the performance gap narrows as the light level increases.
By analyzing the SNR theory of the true color night vision system, we know that we can expand the system response to improve the output image SNR. Based on the conventional tri-band true color night vision scheme, we added a full-band channel and proposed an image fusion method based on PCA and NSST to build a double-channel four-band true color night vision system. Experiments were carried out with the built system, and we got true color night vision images that are consistent with the human visual system. In addition, we proposed a method based on edge extraction of the targets and spatial dimension decorrelation to calculate the SNR of the true color night vision images obtained. In the meantime, we calculated and compared the CC between obtained and reference edge graphs. The results demonstrated that our four-band true color night vision system can greatly improve the SNR of the output images, which may have a positive significance for the development of true color night vision technology.
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Dongsheng Wu thanks the National Natural Science Foundation for help in identifying collaborators for this work.
National Natural Science Foundation of China (61801507).
Curr. Opt. Photon. 2022; 6(6): 608-618
Published online December 25, 2022 https://doi.org/10.3807/COPP.2022.6.6.608
Copyright © Optical Society of Korea.
Yunfeng Jiang, Dongsheng Wu , Jie Liu, Kuo Tian, Dan Wang
Department of Electronics and Optical Engineering, Army Engineering University of PLA, Shijiazhuang 050000, China
Correspondence to:*jyf1optics@163.com, ORCID 0000-0002-6848-238X
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response λ1 and λ2. Therefore, we built a double-channel four-band true color night vision system to expand the system response to improve the output image SNR. In the meantime, we proposed an image fusion method based on principal component analysis (PCA) and nonsubsampled shearlet transform (NSST) to obtain the true color night vision images. Through experiments, a method based on edge extraction of the targets and spatial dimension decorrelation was proposed to calculate the SNR of the obtained images and we calculated the correlation coefficient (CC) between the edge graphs of obtained and reference images. The results showed that the SNR of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than that of the conventional tri-band system and CC was also higher, which demonstrated that our system can get true color images with better quality.
Keywords: Decorrelation, Nonsubsampled shearlet transform (NSST), Principal component analysis (PCA), Signal-to-noise ratio (SNR), True color night vision
Patrolling and driving at night led to the birth of night vision technology. It has expanded the time wars are fought, and even makes some wars occur more at night than in the daytime. Night vision technology uses photoelectric imaging devices to detect the radiation or reflection information of targets at night. The technology can convert the target information at night that cannot be distinguished by human eyes into a visual image through the acquisition, processing and display of a photodetector and imaging equipment. Many studies have shown that human eyes can only recognize dozens of gray levels, but can recognize thousands of colors, so the technology greatly improves the recognition probability of human eyes [1]. Experiments have shown that the target recognition rate in a color image is 30% higher than that in a gray image, and the error recognition rate can be reduced by 60% compared with the latter [2]. We can use the color information data stored in the brain or a computer database to better realize target recognition and understand a scene [3]. Therefore, the time to understand targets is shorter and the recognition is more accurate in color images than in gray images for humans.
At present, low-light-level (LLL) and infrared night vision technology products take a larger share of the night vision market. Although they are widely used, their output images are monochromatic and contain limited detail information. The complex background interferes with the target recognition of human eyes, which is their technical defect [4, 5]. In addition, most of the current color night vision technology products work by means of false color fusion and color transfer technology [6–8], but their output images have poor natural color, which is quite different from the true color information observed by human eyes in the daytime. Therefore, the pursuit of true color information of scenes at night has become one of the main study directions of night vision technology.
In recent years, true color night vision technology has developed rapidly and is widely applied to military, traffic, reconnaissance and other observation fields. In 2006, the company OKSI produced a color night vision product [8] that placed a liquid crystal filter in front of a third-generation image intensifier CCD (ICCD) that can adjust the transmission of different bands by changing the voltage, and finally fused the collected images of different bands to obtain a true color night vision image. However, it was not suitable for dynamic scenes. In 2010, OKSI also produced a product that obtained the color information of the scene by covering the Bayer filters in front of the detector [7]. The principle is that each pixel of the detector includes part of the spectrum, and then the three primary RGB color values are obtained by interpolation. There are many interpolation methods (neighbor interpolation, linear interpolation and 3*3 interpolation methods, etc.) [9, 10]. Since then, true color night vision products have been developed based on the principle of the Bayer array, which uses conventional tri-band image fusion methods. However, due to the low illumination conditions at night, the signal-to-noise ratio (SNR) of the true color images obtained by conventional technology is very low, which seriously affects the image quality and human visual perception [11]. In order to increase the details and information of output images, some studies have attempted to improve system performance by the four-band system. For example, [12] used a portable four-band sensor system for the classification of oil palm fresh fruit bunches based on their maturity, which made full use of the four-band information. Therefore, a four-band (RYYB and RGGB) true color system was proposed to increase the signal value under weak light. However, it is at the expense of color accuracy, especially green and yellow [13, 14]. 
For this purpose, we proposed a new double-channel four-band system to realize outputting of night vision images with higher SNR and true color information.
In our work, we first analyzed the SNR theory of the conventional tri-band true color night vision system and found that the SNR of output images can be limited by the wavelength range of the system response
The principle of true color night vision technology is to divide the visible light band (380–760 nm) at night into three sub-bands (R, G and B), and use the LLL night vision system and image acquisition device to capture the three sub-band images for registration, fusion and other processing, so as to obtain a true color night vision image consistent with the human visual system [15]. The following formula expresses this principle:
where R, G and B represent R, G and B-band images, respectively.
The SNR (
where
where
The total output noise of the system includes temporal noise and spatial noise, but spatial noise can be corrected by the algorithms. Therefore, the noise analyzed in this paper only relates to temporal noise. The temporal noise of the LLL imaging system is composed of photon noise, dark current noise and readout circuit noise. The above noise components are independent and uncorrelated, so the total noise power of the system is the superposition value of their power, that is, the total number of noise electrons is
where
The formula for calculating the SNR of the LLL imaging system can be obtained by formula (2) and formula (4):
Therefore, the rate of change of system SNR with the signal value can be obtained, that is the partial derivative of system SNR (
For the concrete working LLL night vision system, the
It can be seen that when the signal value is low, the change rate of SNR is much greater than that when the signal value is high, that is, under low illumination conditions, increasing the number of signal electrons will quickly improve the SNR. Therefore, according to formula (3), the parameters of the optical system and detector for the LLL imaging system are constant, so changing the signal intensity can only be achieved by adjusting the wavelength range of the system response
At present, the three primary color image fusion methods based on liquid crystal tunable filter or optical thin film filter technology are mainly aimed at the visible light band to obtain true color night vision images [17]. The true color night vision system captures the sub-band (R: 380–490 nm, G: 490–580 nm, B: 580–760 nm) images, which greatly reduces the wavelength range of the system response. The SNR of sub-band (R, G and B) images are
where
In the case of low illuminance at night, for example, the illuminance is 10−1 lx in moonlight, and is 10−3 lx in starlight. Although the light energy in the visible light (380–760 nm) range is very low, there are many near-infrared rays (NIR; 760–1,100 nm) in the atmosphere. The curves in Fig. 2 are the spectral distribution under the conditions of full moon, starlight and glow [18].
If we make full use of the large amount of visible light and NIR information at night, the system response wavelength will be greatly expanded and thus improve the SNR of the true color night vision images. The SNR of the full-band (380–1,100) image is
where
Therefore, we can expand the response range of the true color LLL night vision system by adding a full-band channel, thereby improving the SNR of the output images.
In order to improve the SNR of the system, we increased the full-band channel in the true color night vision system. The overall scheme of the double-channel four-band true color night vision system we designed is shown in Fig. 3, and is mainly composed of two channels (full-band and sub-band). Each channel consists of four parts: 1. Optical system, 2. Adjusting device, 3. Image acquisition system, and 4. Image processing and fusion system. The image processing and fusion system is mainly composed of a computer or DSP. The working principle of the system is as follows:
First, some of the reflected light of the targets passes through the full-band channel and the rest passes through the sub-band channel, which is divided into three primary colors after color separation by a prism. Next, the full-band and sub-band images are obtained by the ICCD and image acquisition device. Finally, after image registration and fusion in the computer, we can get the true color night vision image in real time. In addition, due to aligning the four light paths of the ICCD through the adjusting device, the difficulty of multi-band image registration will be greatly reduced.
In order to make full use of the full-band image details and sub-band image (R, G and B) spectral information, we proposed an image fusion method based on PCA and NSST to get the true color night vision image. The steps of the fusion algorithm are as follows:
(1) PCA analysis is performed on the sub-band image (R, G and B) to obtain the principal component components,
(2) The spatial detail information in the full-band image (
where
(3) According to the first principal component
(4) Different fusion rules are used for high- and low-frequency components. According to the matching degree between images, the maximum value or average fusion strategy is used for low-frequency components. The absolute value strategy is used for high-frequency components. We get the fused high- and low-frequency components
(5) Then inverse NSST transform (INSST) is performed on
(6) Inverse PCA (IPCA) transform is performed on
Next, we introduce the fusion rules of high- and low-frequency components in Step 4.
a) The low-frequency components fusion rule (LCFR).
Let’s set
where
Let’s set
where the value of
Assuming that
When
When
where,
b) The high-frequency components fusion rule (HCFR).
Since the high-frequency component mainly includes the detail information of an image, generally speaking, the high-frequency detail component of the full-color image (
For the image fusion algorithm, we give an algorithm flow chart in Fig. 4.
According to the above design, the double-channel four-band true color night vision system can be built and debugged, which can realize the display of true color night vision images.
We used a Gen Ⅲ image intensifier (ANDOR: iStar DH334T-18-U-93) with a spectral response range of 380–1,090 nm coupled with a CCD to synthesize the ICCD to build the true color night vision system. Using the system, experiments were carried out for targets in different scenes and the exposure time was the same when capturing images for the four channel sensors. The four targets are: A standard colorimetric plate (Scene 1, 0.02 lx), a building (Scene 2, 0.02 lx), an indoor rainbow umbrella (Scene 3, 0.26 lx) and an urban night scene (Scene 4, 0.63 lx). The captured images (B-band, G-band, R-band and full-band images) of each channel, which were operated by image registration, are shown in Figs. 5–8.
In order to compare with conventional true color night vision technology, we also set up a single-channel tri-band system; that is, the full-band channel of our system was disabled and only the sub-band channel worked to obtain the true color night vision images. Here, the conventional true color night vision fusion technique maps the sub-band images to RGB space. The true color night vision images obtained by the conventional tri-band and our four-band image fusion methods are shown in Figs. 9–12, respectively.
From the point of view of target spectral characteristics, targets of the same kind have the same or similar spectral characteristics and therefore the same or similar gray values in an image, so there is a high correlation between signals from the same kind of target within a uniform region, whereas noise is independent and uncorrelated. That is, the spectral reflection information of identical targets in a uniform region is correlated: if the signal value of a pixel can be estimated, the signal and noise can be separated to obtain the noise value. Multiple linear regression (MLR) can be used to fit the signal value of a pixel from the signal values of surrounding pixels in the uniform region [21]. Generally speaking, the targets in an image are neither unique nor uniform, and structural features exist between different targets. The signal values of identical or uniform targets can be fitted after these structural features are eliminated (the decorrelation method), which both weakens the mutual interference between different targets and allows the noise value to be estimated from the high correlation between identical, uniform targets. Therefore, we proposed a method based on edge extraction of the targets and spatial-dimension decorrelation to calculate the SNR of the output true color night vision images. The steps are as follows:
(1) Use the Canny operator to detect and mark the edges of the targets in the detected image.
(2) Divide the edge-marked image into blocks (the block size is selected according to the image).
(3) The image blocks marked by the edges are removed, and those containing the same kind of targets or uniform regions are retained.
(4) Use the formula to perform a regression calculation (numerical fitting) on the interior pixels (excluding the outermost pixels) of the reserved image blocks. The formula is as follows:
where
Then
(5) Calculate the mean value DN of the corresponding image block.
where DN
(6) Finally, histogram the standard deviations of all these image blocks, select the modal interval, and take the mean of the standard deviations of the blocks falling into this interval as the noise mean value
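The SNR-estimation steps above can be sketched as follows. This illustrative version substitutes a gradient-magnitude edge detector for the Canny operator, uses ordinary least squares for the MLR fit of each interior pixel from its eight neighbours, and uses placeholder values for the block size and edge threshold.

```python
import numpy as np

def edge_map(img, thresh):
    """Gradient-magnitude edge detection (a simple stand-in for the
    Canny operator used in the paper)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy) > thresh

def block_snr(img, block=8, edge_thresh=30.0):
    """Steps (1)-(6): mark edges, discard blocks containing edges,
    MLR-fit each interior pixel of the remaining blocks from its 8
    neighbours, take the fit residual as noise, and estimate SNR as
    the mean DN over the modal noise level."""
    img = img.astype(float)
    edges = edge_map(img, edge_thresh)
    stds, means = [], []
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if edges[i:i + block, j:j + block].any():
                continue                      # block touches an edge: drop it
            b = img[i:i + block, j:j + block]
            y = b[1:-1, 1:-1].ravel()         # interior pixels
            X = np.stack([b[1 + dy:block - 1 + dy, 1 + dx:block - 1 + dx].ravel()
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if not (dy == 0 and dx == 0)] + [np.ones_like(y)],
                         axis=1)
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            stds.append((y - X @ coef).std()) # residual = noise estimate
            means.append(b.mean())
    if not stds:
        return float("nan")
    stds, means = np.array(stds), np.array(means)
    hist, bins = np.histogram(stds, bins=10)  # modal interval of noise levels
    k = np.argmax(hist)
    sigma = stds[(stds >= bins[k]) & (stds <= bins[k + 1])].mean()
    return means.mean() / max(sigma, 1e-12)
```

Fitting interior pixels from their neighbours underestimates the noise slightly (the residual loses degrees of freedom to the regression), but the modal-interval selection keeps isolated structured blocks from biasing the estimate.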
According to the above method, we calculated the SNR of the images obtained by the two true color night vision schemes as shown in Fig. 13.
According to Figs. 9–12, both the conventional tri-band system and the double-channel four-band true color night vision system we designed can obtain true color night vision images consistent with the human visual system. However, the obtained images visually confirm the difference between the tri-band and four-band systems: the four-band system increases the output image sharpness compared with the tri-band system. To show how much of an effect SNR has on image quality, we calculated the correlation coefficient (CC) [22] between the edge graphs of the obtained and reference images. The reference images we selected were the full-band images of the four scenes, because they have rich detail and edge information. First, we used the Canny operator, which is robust to noise, to detect the edges of the obtained (tri-band and four-band) true-color images and of the reference images. Second, we calculated and compared the CC between the tri-band or four-band images and the full-band images. The CC values are listed in Table 1.
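Assuming binary edge maps have already been extracted with the Canny operator, the CC between two edge graphs is the ordinary Pearson correlation coefficient; a minimal sketch:

```python
import numpy as np

def edge_cc(edge_a, edge_b):
    """Pearson correlation coefficient between two (binary) edge maps,
    as used here to compare fused images against the full-band reference."""
    a = edge_a.astype(float).ravel()
    b = edge_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0
```

A CC of 1.0 means the obtained image reproduces every edge of the reference; values near 0 mean the edge structure is largely lost, which is the pattern seen for the tri-band system in the low-illumination scenes of Table 1.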
TABLE 1. Correlation coefficient (CC) between obtained and reference images of the two systems.

Scenes | CC (Tri-band) | CC (Four-band)
---|---|---
Standard Colorimetric Plate | 0.0727 | 0.2263
Building | 0.0585 | 0.2237
Indoor Rainbow Umbrella | 0.4398 | 0.5011
Urban Night Scene | 0.5562 | 0.6071
From Fig. 13 and Table 1, we see that the SNR of the images obtained by the four-band system was 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than that of the conventional tri-band system, and the CC values were also higher. Thus the four-band system designed by us can output more distinct true color night vision images than the conventional tri-band system. Under weak-light conditions, our four-band system has a clear advantage over the tri-band system; however, as the light level increases, the performance difference between the two systems decreases.
By analyzing the SNR theory of the true color night vision system, we showed that expanding the system response improves the output image SNR. Based on the conventional tri-band true color night vision scheme, we added a full-band channel and proposed an image fusion method based on PCA and NSST to build a double-channel four-band true color night vision system. Experiments were carried out with the built system, and we obtained true color night vision images consistent with the human visual system. In addition, we proposed a method based on edge extraction of the targets and spatial-dimension decorrelation to calculate the SNR of the obtained true color night vision images, and we calculated and compared the CC between the obtained and reference edge graphs. The results demonstrated that our four-band true color night vision system can greatly improve the SNR of the output images, which may be of positive significance for the development of true color night vision technology.
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Dongsheng Wu thanks the National Natural Science Foundation for help in identifying collaborators for this work.
National Natural Science Foundation of China (61801507).