
## Article

Curr. Opt. Photon. 2022; 6(6): 608-618

Published online December 25, 2022 https://doi.org/10.3807/COPP.2022.6.6.608

## A Double-channel Four-band True Color Night Vision System

Yunfeng Jiang, Dongsheng Wu, Jie Liu, Kuo Tian, Dan Wang

Department of Electronics and Optical Engineering, Army Engineering University of PLA, Shijiazhuang 050000, China

Corresponding author: jyf1optics@163.com, ORCID 0000-0002-6848-238X

Received: August 19, 2022; Revised: November 1, 2022; Accepted: November 2, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response, $\lambda_1$ to $\lambda_2$. We therefore built a double-channel four-band true color night vision system that expands the system response to improve the output image SNR, and we proposed an image fusion method based on principal component analysis (PCA) and nonsubsampled shearlet transform (NSST) to obtain the true color night vision images. Through experiments, a method based on edge extraction of the targets and spatial-dimension decorrelation was proposed to calculate the SNR of the obtained images, and we calculated the correlation coefficient (CC) between the edge graphs of the obtained and reference images. The results showed that the SNRs of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than those of the conventional tri-band system, and the CC was also higher, demonstrating that our system can obtain true color images with better quality.

Keywords: Decorrelation, Nonsubsampled shearlet transform (NSST), Principal component analysis (PCA), Signal-to-noise ratio (SNR), True color night vision

OCIS codes: (110.2960) Image analysis; (110.2970) Image detection systems; (110.3000) Image quality assessment

### I. INTRODUCTION

The need to patrol and drive at night led to the birth of night vision technology. It has extended the hours in which wars are fought, to the point that some operations now occur more at night than in the daytime. Night vision technology uses photoelectric imaging devices to detect the radiation or reflection of targets at night, converting target information that human eyes cannot distinguish into a visual image through the acquisition, processing and display stages of a photodetector and imaging equipment. Many studies have shown that human eyes can recognize only dozens of gray levels but thousands of colors, so the technology greatly improves the recognition probability of human eyes [1]. Experiments have shown that the target recognition rate in a color image is 30% higher than that in a gray image, and the error recognition rate can be reduced by 60% compared with the latter [2]. We can use the color information stored in the brain or a computer database to better recognize targets and understand a scene [3]. Therefore, humans understand targets faster and recognize them more accurately in color images than in gray images.

At present, low-light-level (LLL) and infrared night vision products take the larger share of the night vision market. Although they are widely used, their output images are monochromatic and contain limited detail, and the complex background interferes with target recognition by human eyes, which is their technical defect [4, 5]. In addition, most current color night vision products work by means of false color fusion and color transfer technology [6–8], but their output images have poor natural color, quite different from the true color information observed by human eyes in the daytime. Therefore, the pursuit of true color information of scenes at night has become one of the main study directions of night vision technology.

In recent years, true color night vision technology has developed rapidly and is widely applied in military, traffic, reconnaissance and other observation fields. In 2006, the company OKSI produced a color night vision product [8] that placed a liquid crystal filter, whose transmission band can be adjusted by changing the voltage, in front of a third-generation image intensifier CCD (ICCD), and fused the collected images of different bands to obtain a true color night vision image. However, it was not suitable for dynamic scenes. In 2010, OKSI also produced a product that obtained the color information of the scene by covering the detector with Bayer filters [7]. The principle is that each pixel of the detector captures part of the spectrum, and the three primary RGB color values are then obtained by interpolation; there are many interpolation methods (nearest-neighbor, linear and 3×3 interpolation, etc.) [9, 10]. Since then, true color night vision products have been developed based on the principle of the Bayer array, which uses conventional tri-band image fusion methods. However, due to the low illumination conditions at night, the signal-to-noise ratio (SNR) of the true color images obtained by conventional technology is very low, which seriously affects image quality and human visual perception [11]. To increase the detail and information content of output images, some studies have attempted to improve system performance with a four-band system. For example, [12] used a portable four-band sensor system to classify oil palm fresh fruit bunches by maturity, making full use of the four-band information. Four-band (RYYB and RGGB) true color systems were therefore proposed to increase the signal value under weak light, but at the expense of color accuracy, especially for green and yellow [13, 14]. For this purpose, we propose a new double-channel four-band system that outputs night vision images with higher SNR and true color information.

In our work, we first analyzed the SNR theory of the conventional tri-band true color night vision system and found that the SNR of output images can be limited by the wavelength range of the system response λ1 and λ2. Next, we built a double-channel four-band true color night vision system to expand the system response, and proposed an image fusion method based on principal component analysis (PCA) and nonsubsampled shearlet transform (NSST). Through experiments, we obtained true color night vision images consistent with the human visual system. Finally, we proposed a method based on the edge extraction of targets and spatial dimension decorrelation to calculate the SNR of the obtained images. Our experiment results showed that the SNR of the image obtained by our double-channel four-band true color night vision system is much higher than that of the conventional tri-band system under the condition of weak light, which may be of great significance for studies on improving image quality in color night vision technology.

### II. DESIGN OF THE TRUE COLOR NIGHT VISION SYSTEM

The principle of true color night vision technology is to divide the visible light band (380–760 nm) at night into three sub-bands (R, G and B), and use the LLL night vision system and image acquisition device to capture the three sub-band images for registration, fusion and other processing, so as to obtain a true color night vision image consistent with the human visual system [15]. The following formula expresses this principle:

$$C = R + G + B$$

where R, G and B represent R, G and B-band images, respectively.
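As a minimal illustration of this composition, the three registered sub-band images can be stacked into the channels of one color image (a sketch with synthetic arrays; the array size and values are hypothetical):

```python
import numpy as np

# Hypothetical 4x4 sub-band images (gray values 0-255) standing in for the
# registered R-, G- and B-band captures.
rng = np.random.default_rng(0)
r, g, b = (rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(3))

# C = R + G + B: the three sub-band images become the RGB channels of one
# true color image (channel-wise composition, not an arithmetic sum).
color = np.dstack([r, g, b])
print(color.shape)  # (4, 4, 3)
```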

### 2.1. SNR Analysis of the True Color Imaging System

The SNR ($R_{SN}$) of an LLL imaging system is defined as [16]:

$$R_{SN} = \frac{N_S}{n_n}$$

where $N_S$ and $n_n$ are the numbers of signal and noise electrons collected by the detector, respectively. According to the imaging theory of photoelectric systems, the number of photoelectrons generated on each pixel of the detector of the night vision instrument is

$$N_S = \int_{\lambda_1}^{\lambda_2} \frac{\pi L(\lambda)\,\tau_o(\lambda)\,\tau_a(\lambda)\,\eta(\lambda)\,\rho(\lambda)\,\lambda\, A_{pix}\, T_{in}}{F^2 hc}\,d\lambda$$

where $L(\lambda)$ is the spectral radiance of the night sky light, $\tau_o(\lambda)$ and $\tau_a(\lambda)$ are the transmittance of the optical system and the atmosphere, respectively, $\eta(\lambda)$ is the quantum efficiency of the detector, and $\rho(\lambda)$ is the reflectivity of the target. $A_{pix}$ is the area of a detector pixel, $T_{in}$ is the integration time of the imaging system, and $F$ is the F-number of the optical system. $h$ and $c$ are the Planck constant and the speed of light in vacuum, respectively. $\lambda_1$ and $\lambda_2$ are the wavelength limits of the system response.

The total output noise of the system includes temporal and spatial noise, but spatial noise can be corrected by algorithms; therefore, only temporal noise is analyzed in this paper. The temporal noise of the LLL imaging system is composed of photon noise, dark current noise and readout circuit noise. These components are independent and uncorrelated, so the total noise power of the system is the sum of their powers; that is, the total number of noise electrons is

$$n_{total} = N_S + N_d + n_{cir}^2$$

where $N_S$ is the number of photogenerated electrons of the target signal, $N_d$ is the number of dark current noise electrons of the detector, and $n_{cir}^2$ is the number of readout circuit noise electrons of the detector.

The SNR of the LLL imaging system follows from formulas (2) and (4):

$$R_{SN} = \frac{N_S}{\sqrt{N_S + N_d + n_{cir}^2}}$$

The rate of change of system SNR with the signal value, i.e. the partial derivative of $R_{SN}$ with respect to $N_S$, is therefore

$$\frac{\partial R_{SN}}{\partial N_S} = \frac{N_S + 2\left(N_d + n_{cir}^2\right)}{2\left(N_S + N_d + n_{cir}^2\right)^{3/2}}$$

For a concrete working LLL night vision system, $N_d$ and $n_{cir}^2$ are constant. The relationship between the change rate of SNR and the signal value can then be simulated; the result is shown in Fig. 1.

Figure 1.Relationship between change rate of signal-to-noise ratio (SNR) and signal value.

It can be seen that when the signal value is low, the change rate of SNR is much greater than when the signal value is high; that is, under low illumination conditions, increasing the number of signal electrons improves the SNR quickly. According to formula (3), the parameters of the optical system and detector of the LLL imaging system are constant, so the signal intensity can only be changed by adjusting the wavelength range of the system response, $\lambda_1$ and $\lambda_2$.
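The relationship plotted in Fig. 1 can be reproduced numerically from formulas (5) and (6); the noise constants below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative (assumed) noise constants: dark-current and readout electrons.
Nd, ncir2 = 50.0, 100.0
Ns = np.linspace(1.0, 5000.0, 500)           # signal electrons

snr = Ns / np.sqrt(Ns + Nd + ncir2)                              # Eq. (5)
dsnr = (Ns + 2 * (Nd + ncir2)) / (2 * (Ns + Nd + ncir2) ** 1.5)  # Eq. (6)

# The derivative is largest at low signal: adding electrons under low
# illumination improves SNR fastest, matching the trend in Fig. 1.
print(dsnr[0] > dsnr[-1])  # True
```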

At present, the three-primary-color image fusion methods based on liquid crystal tunable filter or optical thin-film filter technology mainly target the visible light band to obtain true color night vision images [17]. The true color night vision system captures the sub-band images (B: 380–490 nm, G: 490–580 nm, R: 580–760 nm), which greatly reduces the wavelength range of the system response. The SNRs of the sub-band (R, G and B) images are $R_{SNR}$, $R_{SNG}$ and $R_{SNB}$:

$$R_{SNR} = \frac{N_{SR}}{\sqrt{N_{SR} + N_d + n_{cir}^2}}$$

$$R_{SNG} = \frac{N_{SG}}{\sqrt{N_{SG} + N_d + n_{cir}^2}}$$

$$R_{SNB} = \frac{N_{SB}}{\sqrt{N_{SB} + N_d + n_{cir}^2}}$$

where NSR, NSG and NSB are the number of photogenerated electrons of the target signals collected by R, G and B channels, respectively.

Illuminance at night is very low: about $10^{-1}$ lx under full moonlight and $10^{-3}$ lx under starlight. Although the light energy in the visible (380–760 nm) range is very low, there is abundant near-infrared (NIR; 760–1,100 nm) radiation in the atmosphere. The curves in Fig. 2 show the spectral distribution of night sky light under full moon, starlight and airglow conditions [18].

Figure 2.Spectral distribution of night sky light under different conditions.

If we make full use of the abundant visible and NIR light at night, the system response wavelength range is greatly expanded, improving the SNR of the true color night vision images. The SNR of the full-band (380–1,100 nm) image, $R_{SNw}$, is much higher than that of the sub-band images:

$$R_{SNw} = \frac{N_{SW}}{\sqrt{N_{SW} + N_d + n_{cir}^2}}$$

where $N_{SW}$ is the number of photogenerated electrons of the target signal over the full band. Ignoring the attenuation of the system, we have

$$R_{SNw} > R_{SNR}, \quad R_{SNw} > R_{SNG}, \quad R_{SNw} > R_{SNB}$$

Therefore, we can expand the response range of the true color LLL night vision system by adding a full-band channel, thereby improving the SNR of the output images.
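A quick numeric check of formula (7) makes the point concrete: with the same noise floor, the larger full-band signal always yields the higher SNR. All electron counts below are hypothetical:

```python
import numpy as np

# Assumed noise constants and sub-band signal electron counts (hypothetical).
Nd, ncir2 = 50.0, 100.0
sub = np.array([120.0, 150.0, 90.0])   # N_SR, N_SG, N_SB
full = sub.sum()                        # N_SW, attenuation ignored

def snr(n):
    # Eq. (5)/(7): shot-noise-limited SNR with fixed dark/readout noise.
    return n / np.sqrt(n + Nd + ncir2)

# SNR is monotonically increasing in the signal, so the full band wins.
print(all(snr(full) > snr(s) for s in sub))  # True
```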

### 2.2. Working Principle and Components of the Designed System

In order to improve the SNR of the system, we added a full-band channel to the true color night vision system. The overall scheme of the double-channel four-band true color night vision system we designed is shown in Fig. 3; it is mainly composed of two channels (full-band and sub-band). Each channel consists of four parts: 1. optical system, 2. adjusting device, 3. image acquisition system, and 4. image processing and fusion system. The image processing and fusion system is mainly composed of a computer or DSP. The working principle of the system is as follows:

Figure 3.Block diagram of double-channel four-band true color night vision system.

First, part of the reflected light from the targets passes through the full-band channel, and the rest passes through the sub-band channel, where it is divided into the three primary colors by a color-separation prism. Next, the full-band and sub-band images are captured by the ICCDs and the image acquisition device. Finally, after image registration and fusion in the computer, the true color night vision image is obtained in real time. In addition, because the four optical paths to the ICCDs are aligned by the adjusting device, the difficulty of multi-band image registration is greatly reduced.

In order to make full use of the full-band image details and sub-band image (R, G and B) spectral information, we proposed an image fusion method based on PCA and NSST to get the true color night vision image. The steps of the fusion algorithm are as follows:

(1) PCA is performed on the sub-band images (R, G and B) to obtain the principal components $C_1$, $C_2$ and $C_3$ [19].

(2) The spatial detail information of the full-band image ($P$) is extracted by weighted least squares (WLS) filtering and imported into $C_1$ to obtain $C_1^H$ [20]:

$$C_1^H = C_1 + \varepsilon \sum_{m=1}^{K} P_H^m$$

$$\varepsilon = \frac{D_{C_1}}{D_P}$$

$$P_H^m = P_L^{m-1} - P_L^m, \quad m = 1, 2, \ldots, K$$

where $P$ is the full-band image, and the subscripts $H$ and $L$ denote high- and low-frequency information, respectively. $C_1$ is the first principal component and $C_1^H$ is the first principal component after the high-frequency information is imported. $\varepsilon$ is the gain coefficient, the ratio of $D_{C_1}$ (standard deviation of $C_1$) to $D_P$ (standard deviation of $P$). At scale $m$, $P_H^m$ and $P_L^m$ are the filtered high- and low-frequency information, respectively, and $K$ is the scale factor. When $m = 1$, $P_L^{m-1} = P$.

(3) According to the first principal component $C_1$, the full-band image ($P$) is histogram-matched to obtain $\tilde{P}$. $\tilde{P}$ and $C_1^H$ are decomposed by the NSST to obtain the decomposition coefficients $H_{C_1^H}^{I,K}(i,j)$, $L_{C_1^H}^{I,K}(i,j)$ and $H_{\tilde{P}}^{I,K}(i,j)$, $L_{\tilde{P}}^{I,K}(i,j)$. Here, $H_{C_1^H}^{I,K}(i,j)$ is the high-frequency coefficient of $C_1^H$ at decomposition level $I$, direction $K$ and spatial position $(i, j)$, and $L_{C_1^H}^{I,K}(i,j)$ is the corresponding low-frequency coefficient; $H_{\tilde{P}}^{I,K}(i,j)$ and $L_{\tilde{P}}^{I,K}(i,j)$ are the high- and low-frequency coefficients of $\tilde{P}$, respectively.

(4) Different fusion rules are used for the high- and low-frequency components. According to the matching degree between the images, either the maximum-value or the average fusion strategy is used for the low-frequency components, and the maximum-absolute-value strategy is used for the high-frequency components. This yields the fused high- and low-frequency components $H_F(i, j)$ and $L_F(i, j)$.

(5) The inverse NSST (INSST) is applied to $H_F(i, j)$ and $L_F(i, j)$ to obtain the fused first principal component $C_1'$.

(6) The inverse PCA (IPCA) transform is applied to $C_1'$, $C_2$ and $C_3$ to obtain the final fused image $F$.
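Steps (1), (2) and (6) can be sketched as follows. This is a simplified illustration under stated assumptions: a box filter stands in for the WLS filter of step (2), the NSST stages of steps (3)–(5) are omitted (the imported component is passed straight to the inverse PCA), and all images are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
bands = rng.random((3, 32, 32))   # hypothetical registered R, G, B sub-bands
P = rng.random((32, 32))          # hypothetical registered full-band image

# --- Step 1: PCA over the three sub-bands -------------------------------
X = bands.reshape(3, -1)
mean = X.mean(axis=1, keepdims=True)
_, V = np.linalg.eigh(np.cov(X - mean))
V = V[:, ::-1]                    # columns sorted so C1 carries most variance
C = V.T @ (X - mean)              # principal components C1, C2, C3
C1 = C[0].reshape(32, 32)

# --- Step 2: multiscale detail import, Eqs. (8)-(10) --------------------
def box_blur(img, k=3):
    # Separable box filter standing in for the paper's WLS low-pass (assumption).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    ker = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, ker, "valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, "valid"), 0, p)

eps = C1.std() / P.std()          # gain coefficient, Eq. (9)
PL_prev, detail = P, np.zeros_like(P)
for _ in range(3):                # K = 3 scales (assumed)
    PL = box_blur(PL_prev)
    detail += PL_prev - PL        # P_H^m = P_L^{m-1} - P_L^m, Eq. (10)
    PL_prev = PL
C1H = C1 + eps * detail           # Eq. (8)

# --- Step 6: inverse PCA (NSST fusion of steps 3-5 omitted here) --------
C[0] = C1H.ravel()
F = (V @ C + mean).reshape(3, 32, 32)
```

Note how the per-scale details telescope: their sum equals the full-band image minus its coarsest low-pass, so the injected term is exactly the multiscale high-frequency content.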

Next, we introduce the fusion rules of high- and low-frequency components in Step 4.

a) The low-frequency component fusion rule (LCFR).

Let $M(X)$ be the low-frequency coefficient matrix of the wavelet decomposition of image $X$, and let $p = (m, n)$ be a spatial position in $M(X)$. Region $Q$ is the neighborhood of $p$, $M(X, p)$ is the element value at $(m, n)$ in the low-frequency coefficient matrix, and $\mu(X, p)$ is the mean of the elements in the neighborhood $Q$ of $p$. The regional variance $G(X, p)$ at position $p$ in $X$ is calculated as:

$$G(X,p) = \sum_{q \in Q} w(q)\left(M(X,q) - \mu(X,p)\right)^2$$

where $w(q)$ is a weight; the closer $q$ is to $p$, the greater the value of $w(q)$.

Let $G(A, p)$ and $G(B, p)$ be the regional variances of the low-frequency coefficient matrices of $A$ and $B$ at $p$. The variance matching degree $T_p$ of $A$ and $B$ at $p$ is then:

$$T_p = \frac{2\sum_{q \in Q} w(q)\left|M(A,q)-\mu(A,p)\right|\left|M(B,q)-\mu(B,p)\right|}{G(A,p) + G(B,p)}$$

where the value of $T_p$ varies from 0 to 1; the higher it is, the higher the correlation between the low-frequency coefficient matrices of the two images ($A$ and $B$).

Let $U$ be the matching degree threshold, whose value generally ranges over 0.5–1. In our work, $U = 0.7$ was selected according to the experiments.

When $T_p < U$, the matching degree between $A$ and $B$ is low, and the maximum-value fusion strategy is used:

$$M(F,p) = \begin{cases} M(A,p), & G(A,p) \ge G(B,p) \\ M(B,p), & G(A,p) < G(B,p) \end{cases}$$

When $T_p \ge U$, the weighted-mean fusion method is used:

$$M(F,p) = \begin{cases} W_{max}M(A,p) + W_{min}M(B,p), & G(A,p) \ge G(B,p) \\ W_{min}M(A,p) + W_{max}M(B,p), & G(A,p) < G(B,p) \end{cases}$$

where $W_{min} = 0.5 - 0.5(\cdot)$ and $W_{max} = 1 - W_{min}$.
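This low-frequency rule can be sketched as follows, with two simplifying assumptions: the weight $w(q)$ is taken uniform over a 3×3 neighborhood, and the averaging branch uses equal 0.5/0.5 weights as a stand-in for the weighted mean of Eq. (18):

```python
import numpy as np

def window(M, i, j, Q=1):
    # 3x3 (truncated at borders) neighborhood of position (i, j).
    return M[max(i - Q, 0):i + Q + 1, max(j - Q, 0):j + Q + 1]

def fuse_low(MA, MB, U=0.7):
    # Low-frequency fusion: the variance matching degree Tp (Eq. 16) selects
    # between max-value selection (Eq. 17) and averaging. Uniform w(q) and
    # equal averaging weights are assumptions of this sketch.
    F = np.zeros_like(MA, dtype=float)
    h, w = MA.shape
    for i in range(h):
        for j in range(w):
            wa = window(MA, i, j) - window(MA, i, j).mean()
            wb = window(MB, i, j) - window(MB, i, j).mean()
            GA, GB = (wa ** 2).mean(), (wb ** 2).mean()      # Eq. (15)
            Tp = 2 * (np.abs(wa) * np.abs(wb)).mean() / (GA + GB + 1e-12)
            if Tp < U:   # low matching: keep the higher-variance source
                F[i, j] = MA[i, j] if GA >= GB else MB[i, j]
            else:        # high matching: average the two sources
                F[i, j] = 0.5 * (MA[i, j] + MB[i, j])
    return F
```

By the AM-GM inequality the numerator never exceeds $G(A,p)+G(B,p)$, which is why $T_p$ stays in [0, 1].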

b) The high-frequency component fusion rule (HCFR).

Since the high-frequency components mainly contain the detail information of an image, and the high-frequency detail component of the full-band image ($\tilde{P}$) generally carries more information, the maximum-absolute-value method is applied to the fusion of the high-frequency components in order to better preserve the detail texture:

$$M(F,p) = \begin{cases} M(A,p), & \left|M(A,p)\right| \ge \left|M(B,p)\right| \\ M(B,p), & \left|M(A,p)\right| < \left|M(B,p)\right| \end{cases}$$
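The high-frequency rule reduces to a one-line elementwise selection:

```python
import numpy as np

def fuse_high(HA, HB):
    # Eq. (19): at each position keep the coefficient with the larger
    # absolute value, preserving the stronger detail response.
    return np.where(np.abs(HA) >= np.abs(HB), HA, HB)

print(fuse_high(np.array([1.0, -3.0]), np.array([-2.0, 1.0])))  # [-2. -3.]
```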

For the image fusion algorithm, we give an algorithm flow chart in Fig. 4.

Figure 4.Image fusion algorithm flow chart.

According to the above design, the double-channel four-band true color night vision system can be built and debugged to display true color night vision images.

### III. EXPERIMENTS AND RESULTS

We used a Gen III image intensifier (Andor iStar DH334T-18-U-93) with a spectral response range of 380–1,090 nm, coupled with a CCD to form the ICCD, to build the true color night vision system. Using the system, experiments were carried out on targets in different scenes, with the same exposure time for all four channel sensors. The four targets are: a standard colorimetric plate (Scene 1, 0.02 lx), a building (Scene 2, 0.02 lx), an indoor rainbow umbrella (Scene 3, 0.26 lx) and an urban night scene (Scene 4, 0.63 lx). The captured and registered images (B-band, G-band, R-band and full-band) of each channel are shown in Figs. 5–8.

Figure 5.Standard colorimetric plate: (a) B, (b) G, (c) R, and (d) full.

Figure 6.The building: (a) B, (b) G, (c) R, and (d) full.

Figure 7.Indoor rainbow umbrella: (a) B, (b) G, (c) R, and (d) full.

Figure 8.Urban night scene: (a) B, (b) G, (c) R, and (d) full.

For comparison with conventional true color night vision technology, we also set up a single-channel tri-band system; that is, the full-band channel of our system was disabled and only the sub-band channel worked to obtain the true color night vision images. Here, the conventional true color night vision fusion simply maps the sub-band images to RGB space. The true color night vision images obtained by the conventional tri-band and our four-band fusion methods are shown in Figs. 9–12.

Figure 9.Standard colorimetric plate: (a) Tri-band, (b) four-band.

Figure 10.The building: (a) Tri-band, (b) four-band.

Figure 11.Indoor rainbow umbrella: (a) Tri-band, (b) four-band.

Figure 12.Urban night scene: (a) Tri-band, (b) four-band.

From the point of view of target spectral characteristics, targets of the same kind have the same or similar spectral characteristics and thus the same or similar gray values in an image, so there is a high correlation between same-kind target signals within a uniform region, whereas noise is independent and uncorrelated. In other words, the spectral reflection information of identical targets in a uniform region is correlated: if the signal value of a pixel can be estimated, signal and noise can be separated to obtain the noise value. Multiple linear regression (MLR) can be used to fit the signal value of a pixel from the signal values around it in the uniform region [21]. Generally, the targets in an image are neither unique nor uniform, and there are structural features between different targets. The signal values of identical or uniform targets can be fitted after these structural features are eliminated (the decorrelation method), which both weakens the mutual interference between different targets and allows the noise value to be estimated from the high correlation between identical and uniform targets. Therefore, we proposed a method based on edge extraction of the targets and spatial-dimension decorrelation to calculate the SNR of the output true color night vision images. The steps are as follows:

(1) Use the Canny operator to detect and mark the edges of the targets in the detected image.

(2) Divide the edge-marked image into blocks (the block size is selected according to the image).

(3) Image blocks containing marked edges are removed; the remaining blocks, which contain the same targets or are uniform, are retained.

(4) Perform a regression (numerical fitting) on the pixels of each retained image block, excluding its outermost pixels, using the following formula:

$$\hat{x}_{i,j} = a x_{i+1,j} + b x_{i-1,j} + c x_{i,j-1} + d x_{i,j+1} + e$$

where $x_{i,j}$ is the gray value of the pixel in row $i$ and column $j$ of an image block, and $\hat{x}_{i,j}$ is its fitted value. $a$, $b$, $c$, $d$ and $e$ are the linear regression coefficients, chosen to minimize the residuals $r_{i,j,k}$, the differences between the fitted values $\hat{x}_{i,j}$ and the original values $x_{i,j}$ within the same image block. The residual is:

$$r_{i,j,k} = \hat{x}_{i,j} - x_{i,j}$$

Then $S^2 = \sum_{i=1}^{w}\sum_{j=1}^{h} r_{i,j,k}^2$, and the image noise variance $\sigma_n^2$ is:

$$\sigma_n^2 = \frac{S^2}{M-1}$$

where $M = (w - 2) \times (h - 2)$, and $w$ and $h$ are the width and height of the image block, respectively.

(5) Calculate the mean value $\overline{DN}$ of the corresponding image block:

$$\overline{DN} = \frac{1}{M} \sum_{i=1}^{M} DN_i$$

where $DN_i$ is the gray value of each pixel in the selected region and $\overline{DN}$ is the mean value of the region.

(6) Finally, bin the standard deviations of all the image blocks, select the interval containing the mode, and take the mean standard deviation of the blocks falling into this interval as the noise value $\sigma$ of the whole image. The SNR of the image is then calculated according to formula (24):

$$SNR = \frac{\overline{DN}}{\sigma}$$
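Steps (4)–(6) can be sketched on a single synthetic block; the edge detection and blocking of steps (1)–(3) are assumed done, and the block size and noise level below are assumptions:

```python
import numpy as np

# A uniform target (mean gray 100) with additive Gaussian noise stands in
# for one retained, edge-free image block.
rng = np.random.default_rng(2)
block = 100.0 + rng.normal(0.0, 3.0, (16, 16))

def block_noise_std(x):
    # Fit each interior pixel from its four neighbors (Eq. 20), then take
    # the residual standard deviation (Eqs. 21-23).
    w, h = x.shape
    center = x[1:-1, 1:-1].ravel()
    A = np.column_stack([
        x[2:, 1:-1].ravel(),    # x_{i+1,j}
        x[:-2, 1:-1].ravel(),   # x_{i-1,j}
        x[1:-1, :-2].ravel(),   # x_{i,j-1}
        x[1:-1, 2:].ravel(),    # x_{i,j+1}
        np.ones_like(center),   # intercept e
    ])
    coef, *_ = np.linalg.lstsq(A, center, rcond=None)
    r = A @ coef - center       # residuals r_{i,j,k}
    M = (w - 2) * (h - 2)
    return np.sqrt((r ** 2).sum() / (M - 1))

snr = block.mean() / block_noise_std(block)   # Eq. (24) for one block
```

For an uncorrelated-noise block like this, the regression finds near-zero neighbor weights, so the residual spread approximates the injected noise level.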

According to the above method, we calculated the SNR of the images obtained by the two true color night vision schemes as shown in Fig. 13.

Figure 13.The signal-to-noise ratio (SNR) of the images of scenes obtained by the two systems.

According to Figs. 9–12, both the conventional tri-band system and the double-channel four-band true color night vision system we designed can obtain true color night vision images consistent with the human visual system. However, the obtained images visually confirm the difference between the tri-band and four-band systems: the four-band system increases output image sharpness compared with the tri-band system. To show how much SNR affects image quality, we calculated the correlation coefficient (CC) [22] between the edge graphs of the obtained and reference images. We selected the full-band images of the four scenes as references because they have rich detail and edge information. First, we used the Canny operator, which is robust to noise, to detect the edges of the obtained (tri-band and four-band) true color images and the reference images. Second, we calculated and compared the CC between the tri-band or four-band images and the full-band images. The CC values are listed in Table 1.

TABLE 1 Correlation coefficient (CC) between obtained and reference images of two systems

| Scene | CC (Tri-band) | CC (Four-band) |
|---|---|---|
| Standard Colorimetric Plate | 0.0727 | 0.2263 |
| Building | 0.0585 | 0.2237 |
| Indoor Rainbow Umbrella | 0.4398 | 0.5011 |
| Urban Night Scene | 0.5562 | 0.6071 |
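The edge-graph CC computation can be sketched as follows; to keep the sketch NumPy-only, a simple gradient-magnitude threshold stands in for the Canny detector used in the paper:

```python
import numpy as np

def edge_map(img, thresh=0.5):
    # Binary edge map: gradient magnitude above a fraction of its maximum
    # (a crude stand-in for the Canny operator).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(float)

def edge_cc(a, b):
    # Pearson correlation coefficient between the two edge maps.
    ea, eb = edge_map(a).ravel(), edge_map(b).ravel()
    return np.corrcoef(ea, eb)[0, 1]
```

A higher CC against the full-band reference indicates that more of the reference's edge structure survives in the fused image, which is how Table 1 separates the two systems.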

From Fig. 13 and Table 1, the SNRs of the images obtained by the four-band system were 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than those of the conventional tri-band system, and the CC values were also higher. The four-band system we designed can thus output more distinct true color night vision images than the conventional tri-band system. Under weak light, our four-band system has a clear advantage over the tri-band system; however, as the light level increases, the performance difference between the two systems diminishes.

### IV. CONCLUSION

By analyzing the SNR theory of the true color night vision system, we showed that expanding the system response improves the output image SNR. Based on the conventional tri-band true color night vision scheme, we added a full-band channel and proposed an image fusion method based on PCA and NSST to build a double-channel four-band true color night vision system. Experiments with the built system produced true color night vision images consistent with the human visual system. In addition, we proposed a method based on edge extraction of the targets and spatial-dimension decorrelation to calculate the SNR of the obtained true color night vision images, and we calculated and compared the CC between the obtained and reference edge graphs. The results demonstrated that our four-band true color night vision system can greatly improve the SNR of the output images, which may have positive significance for the development of true color night vision technology.

The authors declare no conflicts of interest.

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Dongsheng Wu thanks the National Natural Science Foundation for help in identifying collaborators for this work.

National Natural Science Foundation of China (61801507).

1. D. Li, L. Deng, B. B. Gupta, H. Wang, and C. Choi, “A novel CNN based security guaranteed image watermarking generation scenario for smart city applications,” Inform. Sci. 479, 432-447 (2019).
2. H. Naeem, F. Ullah, M. R. Naeem, S. Khalid, D. Vasan, S. Jabbar, and S. J. A. H. N. Saeed, “Malware detection in industrial internet of things based on hybrid image visualization and deep learning model,” Ad Hoc Networks 105, 102154 (2020).
3. M. Zhang, M. Zhu, and X. Zhao, “Recognition of high-risk scenarios in building construction based on image semantics,” J. Comput. Civ. Eng. 34, 04020019 (2020).
4. Y. Yang, W. Zhang, W. He, Q. Chen, and G. Gu, “Research and implementation of color night vision imaging system based on FPGA and CMOS,” SPIE 11434, 114340U (2020).
5. J.-B. Yang Sr, Y. Lu, L. Wang, K. Zhao, C. Yang, Y. Liu, and X.-H. Chai, “Research on starlight level broad spectrum full color imaging technology,” SPIE 11338, 113381V (2019).
6. A. Toet and M. A. Hogervorst, “Progress in color night vision,” Opt. Eng. 51, 010901 (2012).
7. B. Ren, G. Jiao, Y. Li, Z. Zheng Jr, K. Qiao, Y. Yang, L. Yan, and Y. Yuan, “Research progress of true color digital night vision technology,” SPIE 11763, 117636C (2021).
8. K. Chrzanowski, “Review of night vision technology,” Opto-Electron. Rev. 21, 153-181 (2013).
9. M. R. Khosravi, M. Sharif-Yazd, M. K. Moghimi, A. Keshavarz, H. Rostami, and S. Mansouri, “MRF-based multispectral image fusion using an adaptive approach based on edge-guided interpolation,” arXiv:1512.08475 (2015).
10. M. Morimatsu, Y. Monno, M. Tanaka, and M. Okutomi, “Monochrome and color polarization demosaicking using edge-aware residual interpolation,” in Proc. IEEE International Conference on Image Processing-ICIP (Abu Dhabi, United Arab Emirates, Oct. 25-28, 2020), pp. 2571-2575.
11. L. Yu and B. Pan, “Color stereo-digital image correlation method using a single 3CCD color camera,” Exp. Mech. 57, 649-657 (2017).
12. O. M. B. Saeed, S. Sankaran, A. R. M. Shariff, H. Z. M. Shafri, R. Ehsani, M. S. Alfatni, and M. H. M. Hazir, “Classification of oil palm fresh fruit bunches based on their maturity using portable four-band sensor system,” Comput. Electron. Agric. 82, 55-60 (2012).
13. X. Dong, W. Xu, Z. Miao, L. Ma, C. Zhang, J. Yang, Z. Jin, A. B. J. Teoh, and J. Shen, “Abandoning the Bayer-filter to see in the dark,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (New Orleans, LA, USA, Jun. 21-24, 2022) pp. 17431-17440.
14. G. G. Jeon, “Optimally determined modified Bayer color array for imagery,” Adv. Mater. Res. 717, 501-505 (2013).
15. J. Kriesel and N. Gat, “True-color night vision cameras,” SPIE 6540, 65400D (2007).
16. Y. Jiang and D. Wu, “Analysis and test of signal-to-noise ratio of LLL night vision system,” in Proc. IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference-IMCEC (Chongqing, China, Oct. 11-13, 2019), pp. 648-651.
17. M. Aboelaze and F. Aloul, “Current and future trends in sensor networks: a survey,” in Proc. Second IFIP International Conference on Wireless and Optical Communications Networks (Dubai, United Arab Emirates, Mar. 6-8, 2005), pp. 551-555.
18. P. Cinzano, “Night sky photometry with sky quality meter,” ISTIL Int. Rep. (2005), Number 9, Version 1.4.
19. R. P. Desale and S. V. Verma, “Study and analysis of PCA, DCT & DWT based image fusion techniques,” in Proc. 2013 International Conference on Signal Processing, Image Processing & Pattern Recognition (Coimbatore, India, Feb. 7-8, 2013), pp. 66-69.
20. Y. Song, W. Wei, L. Zheng, X. Yang, L. Kai, and L. Wei, “An adaptive pansharpening method by using weighted least squares filter,” IEEE Geosci. Remote Sens. Lett. 13, 18-22 (2016).
21. L. Sun and F. Zhong, “Mixed noise estimation for hyperspectral image based on multiple bands prediction,” IEEE Geosci. Remote Sens. Lett. 19, 6007705 (2022).
22. K. Peng, X. Shen, F. Huang, and X. He, “A joint transform correlator encryption system based on binary encoding for grayscale images,” Curr. Opt. Photonics 3, 548-554 (2019).

### Article

#### Article

Curr. Opt. Photon. 2022; 6(6): 608-618

Published online December 25, 2022 https://doi.org/10.3807/COPP.2022.6.6.608

## A Double-channel Four-band True Color Night Vision System

Yunfeng Jiang, Dongsheng Wu , Jie Liu, Kuo Tian, Dan Wang

Department of Electronics and Optical Engineering, Army Engineering University of PLA, Shijiazhuang 050000, China

Correspondence to:*jyf1optics@163.com, ORCID 0000-0002-6848-238X

Received: August 19, 2022; Revised: November 1, 2022; Accepted: November 2, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response λ1 and λ2. Therefore, we built a double-channel four-band true color night vision system to expand the system response to improve the output image SNR. In the meantime, we proposed an image fusion method based on principal component analysis (PCA) and nonsubsampled shearlet transform (NSST) to obtain the true color night vision images. Through experiments, a method based on edge extraction of the targets and spatial dimension decorrelation was proposed to calculate the SNR of the obtained images and we calculated the correlation coefficient (CC) between the edge graphs of obtained and reference images. The results showed that the SNR of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than that of the conventional tri-band system and CC was also higher, which demonstrated that our system can get true color images with better quality.

Keywords: Decorrelation, Nonsubsampled shearlet transform (NSST), Principal component analysis (PCA), Signal-to-noise ratio (SNR), True color night vision

### I. INTRODUCTION

The needs of patrolling and driving at night led to the birth of night vision technology. It has extended the hours during which wars are fought, and has even made some conflicts take place more at night than in the daytime. Night vision technology uses photoelectric imaging devices to detect the radiation or reflection information of targets at night. Through acquisition, processing and display by a photodetector and imaging equipment, it converts target information that cannot be distinguished by human eyes at night into a visible image. Many studies have shown that human eyes can only recognize dozens of gray levels but thousands of colors, so the technology greatly improves the recognition probability of human eyes [1]. Experiments have shown that the target recognition rate in a color image is 30% higher than that in a gray image, and the error recognition rate can be reduced by 60% compared with the latter [2]. We can use the color information stored in the brain or in a computer database to better recognize targets and understand a scene [3]. Therefore, humans understand targets faster and recognize them more accurately in color images than in gray images.

At present, low-light-level (LLL) and infrared night vision products take the largest share of the night vision market. Although they are widely used, their output images are monochromatic and contain limited detail, and the complex background interferes with target recognition by human eyes, which is their technical defect [4, 5]. In addition, most current color night vision products work by means of false color fusion and color transfer technology [6-8], but their output images have poor natural color, which differs considerably from the true color information observed by human eyes in the daytime. Therefore, the pursuit of true color information of scenes at night has become one of the main study directions of night vision technology.

In recent years, true color night vision technology has developed rapidly and is widely applied in military, traffic, reconnaissance and other observation fields. In 2006, the company OKSI produced a color night vision product [8] that placed a liquid crystal filter, whose transmission band can be adjusted by changing the voltage, in front of a third-generation image intensifier CCD (ICCD), and fused the collected images of different bands to obtain a true color night vision image. However, it was not suitable for dynamic scenes. In 2010, OKSI also produced a product that obtained the color information of the scene by covering the detector with Bayer filters [7]. The principle is that each pixel of the detector receives part of the spectrum, and the three primary RGB color values are then obtained by interpolation; many interpolation methods exist (nearest-neighbor, linear and 3×3 interpolation, etc.) [9, 10]. Since then, true color night vision products have been developed based on the principle of the Bayer array, which uses conventional tri-band image fusion methods. However, due to the low illumination at night, the signal-to-noise ratio (SNR) of the true color images obtained by conventional technology is very low, which seriously affects the image quality and human visual perception [11]. In order to increase the detail and information content of output images, some studies have attempted to improve system performance with a four-band system. For example, [12] used a portable four-band sensor system for the classification of oil palm fresh fruit bunches based on their maturity, making full use of the four-band information. Four-band (RYYB and RGGB) true color systems have therefore been proposed to increase the signal value under weak light, but at the expense of color accuracy, especially for green and yellow [13, 14].
For this purpose, we proposed a new double-channel four-band system to output night vision images with a higher SNR and true color information.

In our work, we first analyzed the SNR theory of the conventional tri-band true color night vision system and found that the SNR of output images can be limited by the wavelength range of the system response λ1 and λ2. Next, we built a double-channel four-band true color night vision system to expand the system response, and proposed an image fusion method based on principal component analysis (PCA) and nonsubsampled shearlet transform (NSST). Through experiments, we obtained true color night vision images consistent with the human visual system. Finally, we proposed a method based on the edge extraction of targets and spatial dimension decorrelation to calculate the SNR of the obtained images. Our experiment results showed that the SNR of the image obtained by our double-channel four-band true color night vision system is much higher than that of the conventional tri-band system under the condition of weak light, which may be of great significance for studies on improving image quality in color night vision technology.

### II. DESIGN OF THE TRUE COLOR NIGHT VISION SYSTEM

The principle of true color night vision technology is to divide the visible light band (380–760 nm) at night into three sub-bands (R, G and B), and use the LLL night vision system and image acquisition device to capture the three sub-band images for registration, fusion and other processing, so as to obtain a true color night vision image consistent with the human visual system [15]. The following formula expresses this principle:

$C=R+G+B$

where R, G and B represent R, G and B-band images, respectively.

### 2.1. SNR Analysis of the True Color Imaging System

The SNR ($R_{SN}$) of an LLL imaging system is defined as [16]:

$R_{SN}=\frac{N_S}{n_n}$

where NS and nn are the number of signal and noise electrons collected by the detector, respectively. According to the imaging theory of photoelectric systems, the number of photoelectrons generated on each pixel of the detector of the night vision instrument is

$N_S=\int_{\lambda_1}^{\lambda_2}\frac{\pi L(\lambda)\tau_o(\lambda)\tau_a(\lambda)\eta(\lambda)\rho(\lambda)\lambda A_{pix}T_{in}}{F^2hc}d\lambda$

where L(λ) is the spectral radiance of the night sky light. τo(λ) and τa(λ) are the transmittance of the optical system and the atmosphere, respectively. η(λ) is the quantum efficiency of the detector, and ρ(λ) is the reflectivity of the target. Apix is the area of the detector pixel, and Tin is the integration time of the imaging system. F is the F-number (F#) of the optical system. h and c are the Planck constant and the speed of light in vacuum, respectively. λ1 and λ2 are the wavelength range limits of the system response.

The total output noise of the system includes temporal noise and spatial noise, but spatial noise can be corrected by algorithms. Therefore, the noise analyzed in this paper relates only to temporal noise. The temporal noise of the LLL imaging system is composed of photon noise, dark current noise and readout circuit noise. These noise components are independent and uncorrelated, so the total noise power of the system is the superposition of their powers; that is, the total number of noise electrons is

$n_{total}=\sqrt{N_S+N_d+n_{cir}^2}$

where $N_S$ is the number of photogenerated electrons of the target signal, $N_d$ is the number of dark-current noise electrons of the detector, and $n_{cir}^2$ is the number of readout-circuit noise electrons of the detector.

The formula for calculating the SNR of the LLL imaging system can be obtained by formula (2) and formula (4):

$R_{SN}=\frac{N_S}{\sqrt{N_S+N_d+n_{cir}^2}}$

Therefore, the rate of change of the system SNR with the signal value can be obtained, that is, the partial derivative of the system SNR ($R_{SN}$) with respect to the signal value ($N_S$):

$\frac{\partial R_{SN}}{\partial N_S}=\frac{N_S+2\left(N_d+n_{cir}^2\right)}{2\left(N_S+N_d+n_{cir}^2\right)^{3/2}}$

For a given working LLL night vision system, $N_d$ and $n_{cir}^2$ are constant. The relationship between the change rate of the SNR and the signal value can then be simulated, and the result is shown in Fig. 1.

Figure 1. Relationship between change rate of signal-to-noise ratio (SNR) and signal value.

It can be seen that the change rate of the SNR is much greater at low signal values than at high ones; that is, under low illumination conditions, increasing the number of signal electrons will quickly improve the SNR. According to formula (3), the parameters of the optical system and the detector of an LLL imaging system are constant, so the signal intensity can only be changed by adjusting the wavelength range of the system response, λ1 and λ2.
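The trend in Fig. 1 can be checked numerically with formula (6). In this short sketch the dark-current and readout-circuit noise counts are illustrative assumptions, not measured parameters of the system.

```python
# Sketch of Eq. (6): dR_SN/dN_S = (N_S + 2(N_d + n_cir^2)) / (2 (N_S + N_d + n_cir^2)^(3/2)).
# N_d and n_cir^2 below are assumed, illustrative electron counts.
Nd, ncir2 = 50.0, 100.0

def snr_change_rate(Ns):
    return (Ns + 2 * (Nd + ncir2)) / (2 * (Ns + Nd + ncir2) ** 1.5)

low_signal = snr_change_rate(10.0)
high_signal = snr_change_rate(10000.0)
# The derivative is far larger at low signal levels, so adding signal
# electrons pays off most under weak illumination.
print(low_signal > high_signal)  # True
```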

At present, the three-primary-color image fusion methods based on liquid crystal tunable filters or optical thin-film filter technology mainly target the visible light band to obtain true color night vision images [17]. The true color night vision system captures the sub-band (B: 380–490 nm, G: 490–580 nm, R: 580–760 nm) images, which greatly reduces the wavelength range of the system response. The SNRs of the sub-band (R, G and B) images are $R_{SN}^R$, $R_{SN}^G$ and $R_{SN}^B$:

$R_{SN}^R=\frac{N_{SR}}{\sqrt{N_{SR}+N_d+n_{cir}^2}}$

$R_{SN}^G=\frac{N_{SG}}{\sqrt{N_{SG}+N_d+n_{cir}^2}}$

$R_{SN}^B=\frac{N_{SB}}{\sqrt{N_{SB}+N_d+n_{cir}^2}}$

where NSR, NSG and NSB are the number of photogenerated electrons of the target signals collected by R, G and B channels, respectively.

Illuminance at night is low: for example, about $10^{-1}$ lx under moonlight and $10^{-3}$ lx under starlight. Although the light energy in the visible (380–760 nm) range is very low, there is abundant near-infrared (NIR; 760–1,100 nm) radiation in the atmosphere. The curves in Fig. 2 show the spectral distribution under full moon, starlight and airglow conditions [18].

Figure 2. Spectral distribution of night sky light under different conditions.

If we make full use of the abundant visible and NIR light at night, the system response wavelength range will be greatly expanded, improving the SNR of the true color night vision images. The SNR of the full-band (380–1,100 nm) image, $R_{SN}^w$, is much higher than that of the sub-band images. Then we can get

$R_{SN}^w=\frac{N_{SW}}{\sqrt{N_{SW}+N_d+n_{cir}^2}}$

where NSW is the number of photogenerated electrons of the target signal in full-band. When we ignore the attenuation effect of the system, we know that

$R_{SN}^w>R_{SN}^R,\quad R_{SN}^w>R_{SN}^G,\quad R_{SN}^w>R_{SN}^B$

Therefore, we can expand the response range of the true color LLL night vision system by adding a full-band channel, thereby improving the SNR of the output images.
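As a toy numerical check of formulas (7)–(11), the sketch below compares the shot-noise-limited SNR of hypothetical sub-band and full-band channels; all electron counts are assumptions chosen for illustration only.

```python
import math

# R_SN = N_S / sqrt(N_S + N_d + n_cir^2), cf. Eqs. (7)-(10).
Nd, ncir2 = 50.0, 100.0               # assumed noise electron counts

def snr(Ns):
    return Ns / math.sqrt(Ns + Nd + ncir2)

N_R, N_G, N_B = 120.0, 150.0, 90.0    # hypothetical sub-band signal electrons
N_W = N_R + N_G + N_B + 800.0         # full band also collects the NIR signal

# The wider response range yields the ordering of inequality (11).
print(snr(N_W) > max(snr(N_R), snr(N_G), snr(N_B)))  # True
```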

### 2.2. Working Principle and Components of the Designed System

In order to improve the SNR of the system, we added a full-band channel to the true color night vision system. The overall scheme of the double-channel four-band true color night vision system we designed is shown in Fig. 3; it is mainly composed of two channels (full-band and sub-band). Each channel consists of four parts: 1. optical system, 2. adjusting device, 3. image acquisition system, and 4. image processing and fusion system. The image processing and fusion system is mainly composed of a computer or DSP. The working principle of the system is as follows:

Figure 3. Block diagram of double-channel four-band true color night vision system.

First, some of the light reflected by the targets passes through the full-band channel, and the rest passes through the sub-band channel, where it is split into the three primary colors by a color-separation prism. Next, the full-band and sub-band images are captured by the ICCDs and the image acquisition device. Finally, after image registration and fusion in the computer, we obtain the true color night vision image in real time. In addition, because the four ICCD light paths are aligned by the adjusting device, the difficulty of multi-band image registration is greatly reduced.

In order to make full use of the full-band image details and sub-band image (R, G and B) spectral information, we proposed an image fusion method based on PCA and NSST to get the true color night vision image. The steps of the fusion algorithm are as follows:

(1) PCA is performed on the sub-band images (R, G and B) to obtain the principal components $C_1$, $C_2$ and $C_3$ [19].

(2) The spatial detail information in the full-band image (P) is extracted by weighted least squares (WLS) filtering and imported into $C_1$ to obtain $C_1^H$ [20].

$C_1^H=C_1+\varepsilon\sum_{m=1}^{K}P_H^m$

$\varepsilon=\frac{D_{C_1}}{D_P}$

$P_H^m=P_L^{m-1}-P_L^m,\quad m=1,2,...,K$

where P is the full-band image, and H and L denote the high- and low-frequency information, respectively. $C_1$ is the first principal component and $C_1^H$ is the first principal component after the high-frequency information is imported. $\varepsilon$ is the gain coefficient, the ratio of $D_{C_1}$ (the standard deviation of $C_1$) to $D_P$ (the standard deviation of P). At scale m, $P_H^m$ and $P_L^m$ are the filtered high- and low-frequency information, respectively, and K is the scale factor. When m = 1, $P_L^{m-1}=P$.

(3) According to the first principal component $C_1$, the full-band image (P) is histogram matched to get $\tilde{P}$. $\tilde{P}$ and $C_1^H$ are decomposed by the NSST to obtain the decomposition coefficients $H_{C_1^H}^{I,K}(i,j)$, $L_{C_1^H}^{I,K}(i,j)$ and $H_{\tilde{P}}^{I,K}(i,j)$, $L_{\tilde{P}}^{I,K}(i,j)$. Here, $H_{C_1^H}^{I,K}(i,j)$ is the high-frequency coefficient of $C_1^H$ at level I, direction K and spatial position (i, j), and $L_{C_1^H}^{I,K}(i,j)$ is the corresponding low-frequency coefficient. $H_{\tilde{P}}^{I,K}(i,j)$ and $L_{\tilde{P}}^{I,K}(i,j)$ are the high- and low-frequency coefficients of $\tilde{P}$, respectively.

(4) Different fusion rules are used for high- and low-frequency components. According to the matching degree between images, the maximum value or average fusion strategy is used for low-frequency components. The absolute value strategy is used for high-frequency components. We get the fused high- and low-frequency components HF(i, j) and LF(i, j).

(5) Then inverse NSST transform (INSST) is performed on HF(i, j) and LF(i, j) to obtain the fused first principal component C′1.

(6) Inverse PCA (IPCA) transform is performed on C′1, C2 and C3 to obtain the final fused image F.
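A minimal sketch of steps (1), (2) and (6) is given below: PCA of the three sub-band images, multi-scale detail injection per Eqs. (12)–(14), and inverse PCA. A simple box blur stands in for the paper's WLS filter, and the NSST stage (steps 3–5) is omitted, so this only illustrates the data flow, not the full method; K = 3 and all image data are assumptions.

```python
import numpy as np

def box_blur(img, r=1):
    # Box filter as a stand-in low-pass for the WLS filter used in the paper.
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

rng = np.random.default_rng(0)
h, w = 16, 16
rgb = rng.random((3, h * w))              # R, G, B bands, flattened (toy data)
P = rng.random((h, w))                    # full-band image (toy data)

# Step (1): PCA of the sub-band images.
mean = rgb.mean(axis=1, keepdims=True)
X = rgb - mean
_, vecs = np.linalg.eigh(X @ X.T / (h * w - 1))
vecs = vecs[:, ::-1]                      # C1 = largest-variance component
C = vecs.T @ X                            # rows: C1, C2, C3

# Step (2): multi-scale detail extraction and injection, Eqs. (12)-(14).
eps = C[0].std() / P.std()                # Eq. (13): gain = D_C1 / D_P
PL, detail = P.copy(), np.zeros_like(P)
for _ in range(3):                        # K = 3 scales (assumed)
    PL_next = box_blur(PL)
    detail += PL - PL_next                # Eq. (14): P_H^m = P_L^(m-1) - P_L^m
    PL = PL_next
C[0] = C[0] + eps * detail.ravel()        # Eq. (12)

# Step (6): inverse PCA back to a fused RGB image.
fused = (vecs @ C + mean).reshape(3, h, w)
print(fused.shape)  # (3, 16, 16)
```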

Next, we introduce the fusion rules of high- and low-frequency components in Step 4.

a) The low-frequency components fusion rule (LCFR).

Let’s set M(X) as the low-frequency coefficient matrix for wavelet decomposition of image X. p = (m, n) is the spatial position of M(X). Region Q is the neighborhood space of p, and M(X, p) is the element value on (m, n) in the low-frequency coefficient matrix. μ(X, p) is the mean of the elements in the neighborhood space Q of p. Assuming that G(X, p) is the regional variance of position p in X, G(X, p) can be calculated according to the following equation:

$G(X,p)=\sum_{q\in Q}w(q)\left(M(X,q)-\mu(X,p)\right)^2$

where w(q) is the weight value; the closer q is to p, the greater w(q) is.

Let’s set G(A, p) and G(B, p) as the regional variance of the low-frequency coefficient matrices of A and B in p, so the variance matching degree Tp of A and B in p is expressed as follows:

$T_p=\frac{2\sum_{q\in Q}w(q)\left(M(A,q)-\mu(A,p)\right)\left(M(B,q)-\mu(B,p)\right)}{G(A,p)+G(B,p)}$

where the value of $T_p$ varies from 0 to 1. The higher it is, the higher the correlation between the low-frequency coefficient matrices of the two images (A and B).

Assuming that U is the matching degree threshold, the value range of U is generally 0.5–1. In our work, U = 0.7 was selected according to the experiments.

When Tp < U, the matching degree between A and B is low, and the maximum value fusion strategy is used.

$M(F,p)=\begin{cases}M(A,p), & G(A,p)\ge G(B,p)\\M(B,p), & G(A,p)<G(B,p)\end{cases}$

When Tp ≥ U, the weighted mean fusion method is used.

$M(F,p)=\begin{cases}W_{\max}M(A,p)+W_{\min}M(B,p), & G(A,p)\ge G(B,p)\\W_{\min}M(A,p)+W_{\max}M(B,p), & G(A,p)<G(B,p)\end{cases}$

where $W_{\min}=0.5-0.5\left(\frac{1-T_p}{1-U}\right)$ and $W_{\max}=1-W_{\min}$.
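The low-frequency rule can be sketched for a single position p with a uniform 3×3 neighborhood weight. The coefficient matrices below are toy data, and the explicit form of Wmin, 0.5 − 0.5(1 − Tp)/(1 − U), is the standard variance-matching weighting assumed here.

```python
import numpy as np

U = 0.7  # matching-degree threshold used in the paper

def region_stats(M, i, j, w):
    # Neighborhood mean and weighted regional variance, cf. Eq. (15).
    Q = M[i - 1:i + 2, j - 1:j + 2]
    mu = (w * Q).sum()                  # w sums to 1
    var = (w * (Q - mu) ** 2).sum()
    return mu, var, Q

def fuse_low(MA, MB, i, j):
    w = np.ones((3, 3)) / 9.0
    muA, GA, QA = region_stats(MA, i, j, w)
    muB, GB, QB = region_stats(MB, i, j, w)
    Tp = 2 * (w * (QA - muA) * (QB - muB)).sum() / (GA + GB)   # Eq. (16)
    if Tp < U:
        # Low matching: maximum (larger regional variance) strategy, Eq. (17).
        return MA[i, j] if GA >= GB else MB[i, j]
    # High matching: weighted mean strategy, Eq. (18).
    Wmin = 0.5 - 0.5 * (1 - Tp) / (1 - U)   # assumed standard weighting form
    Wmax = 1 - Wmin
    if GA >= GB:
        return Wmax * MA[i, j] + Wmin * MB[i, j]
    return Wmin * MA[i, j] + Wmax * MB[i, j]

rng = np.random.default_rng(2)
MA, MB = rng.random((5, 5)), rng.random((5, 5))
v = fuse_low(MA, MB, 2, 2)
# The fused coefficient always lies between the two source coefficients.
print(min(MA[2, 2], MB[2, 2]) <= v <= max(MA[2, 2], MB[2, 2]))  # True
```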

b) The high-frequency components fusion rule (HCFR).

Since the high-frequency components mainly contain the detail information of an image, and the high-frequency detail component of the full-band image ($\tilde{P}$) generally carries more information, the maximum absolute value method is applied to the fusion of the high-frequency components in order to better preserve the detail texture of the image.

$M(F,p)=\begin{cases}M(A,p), & \left|M(A,p)\right|\ge\left|M(B,p)\right|\\M(B,p), & \left|M(A,p)\right|<\left|M(B,p)\right|\end{cases}$
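Per position, Eq. (19) simply keeps the coefficient of larger absolute value; a two-by-two toy example:

```python
import numpy as np

# High-frequency fusion rule, Eq. (19): keep the larger-magnitude coefficient.
HA = np.array([[0.2, -0.9], [0.1, 0.4]])
HB = np.array([[-0.5, 0.3], [0.0, -0.6]])
HF = np.where(np.abs(HA) >= np.abs(HB), HA, HB)
print(HF.tolist())  # [[-0.5, -0.9], [0.1, -0.6]]
```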

For the image fusion algorithm, we give an algorithm flow chart in Fig. 4.

Figure 4. Image fusion algorithm flow chart.

According to the above design, the double-channel four-band true color night vision system can be built and debugged, which can realize the display of true color night vision images.

### III. EXPERIMENTS AND RESULTS

We used a Gen III image intensifier (ANDOR iStar DH334T-18-U-93) with a spectral response range of 380–1,090 nm coupled with a CCD to form the ICCD and build the true color night vision system. Using the system, experiments were carried out on targets in different scenes, with the same exposure time for the four channel sensors when capturing images. The four targets were: a standard colorimetric plate (Scene 1, 0.02 lx), a building (Scene 2, 0.02 lx), an indoor rainbow umbrella (Scene 3, 0.26 lx) and an urban night scene (Scene 4, 0.63 lx). The captured images (B-band, G-band, R-band and full-band) of each channel, after image registration, are shown in Figs. 5–8.

Figure 5. Standard colorimetric plate: (a) B, (b) G, (c) R, and (d) full.

Figure 6. The building: (a) B, (b) G, (c) R, and (d) full.

Figure 7. Indoor rainbow umbrella: (a) B, (b) G, (c) R, and (d) full.

Figure 8. Urban night scene: (a) B, (b) G, (c) R, and (d) full.

For comparison with conventional true color night vision technology, we set up a single-channel tri-band system; that is, the full-band channel of our system was abandoned and only the sub-band channel worked to obtain the true color night vision images. Here, the conventional true color night vision image fusion technique maps the sub-band images to RGB space. The true color night vision images obtained by the conventional tri-band and our four-band image fusion methods are shown in Figs. 9–12.

Figure 9. Standard colorimetric plate: (a) Tri-band, (b) four-band.

Figure 10. The building: (a) Tri-band, (b) four-band.

Figure 11. Indoor rainbow umbrella: (a) Tri-band, (b) four-band.

Figure 12. Urban night scene: (a) Tri-band, (b) four-band.

From the point of view of target spectral characteristics, the same kind of target has the same or similar spectral characteristics and therefore the same or similar gray values in an image, so there is a high correlation between signals of the same kind of target in a uniform region. Noise, however, is independent and uncorrelated. In other words, the spectral reflection information of identical targets in a uniform region is correlated. If the signal value of a pixel can be estimated, the signal and noise can be separated to obtain the noise value. Multiple linear regression (MLR) can be used to fit the signal value of a pixel from the signal values of surrounding pixels in a uniform region [21]. Generally speaking, the targets in an image are not unique and uniform, and there are structural features between different targets. The signal values of identical or uniform targets can be fitted after the structural features are eliminated (the decorrelation method). This not only weakens the mutual interference between different targets, but also allows the noise value to be estimated from the high correlation between identical and uniform targets. Therefore, we proposed a method based on edge extraction of the targets and spatial-dimension decorrelation to calculate the SNR of the output true color night vision images. The steps are as follows:

(1) Use the Canny operator to detect and mark the edges of the targets in the detected image.

(2) Divide the image marked by the edges into blocks (the block size is selected according to the image).

(3) The image blocks marked by the edges are removed, and the others, which contain the same targets or are uniform, are retained.

(4) Perform a regression calculation (numerical fitting) on the interior pixels (excluding the outermost pixels) of the retained image blocks. The formula is as follows:

$\hat{x}_{i,j}=ax_{i+1,j}+bx_{i-1,j}+cx_{i,j-1}+dx_{i,j+1}+e$

where $x_{i,j}$ is the gray value of the pixel in row i and column j of an image block, and $\hat{x}_{i,j}$ is its fitted value. a, b, c, d and e are the linear regression coefficients, which are chosen to minimize the residual $r_{i,j,k}$, the difference between the fitted value $\hat{x}_{i,j}$ and the original value $x_{i,j}$ within the same image block. The residual is:

$r_{i,j,k}=\hat{x}_{i,j}-x_{i,j}$

Then $S^2=\sum_{i=1}^{w}\sum_{j=1}^{h}r_{i,j,k}^2$, and the image noise variance $\sigma_n^2$ is:

$\sigma_n^2=\frac{S^2}{M-1}$

where M = (w − 2) × (h − 2), and w and h are the width and height of the image block, respectively.

(5) Calculate the mean value $\overline{DN}$ of the corresponding image block.

$\overline{DN}=\frac{1}{M}\sum_{i=1}^{M}DN_i$

where $DN_i$ is the gray value of each pixel in the selected region, and $\overline{DN}$ is the mean value of the region.

(6) Finally, select the interval into which the standard deviations of the image blocks most frequently fall (the mode interval), take the mean standard deviation of the blocks in this interval as the noise value σ of the whole image, and then calculate the SNR of the image according to formula (24).

$SNR=\frac{\overline{DN}}{\sigma}$
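Steps (4) and (5) can be sketched for a single uniform block: each interior pixel is regressed on its four neighbors by least squares (Eq. 20), and the residual variance estimates the noise (Eqs. 21–22). The block below is synthetic, with a known injected noise level; it is an illustration of the estimator, not the paper's full block-selection pipeline.

```python
import numpy as np

def block_noise_var(block):
    # Fit x_hat(i,j) = a*x(i+1,j) + b*x(i-1,j) + c*x(i,j-1) + d*x(i,j+1) + e
    # over the interior pixels of one block (Eq. 20), then return the
    # residual-based noise variance estimate of Eq. (22).
    h, w = block.shape
    rows, targets = [], []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            rows.append([block[i + 1, j], block[i - 1, j],
                         block[i, j - 1], block[i, j + 1], 1.0])
            targets.append(block[i, j])
    A, y = np.array(rows), np.array(targets)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # a, b, c, d, e
    r = A @ coef - y                               # residuals, Eq. (21)
    M = (w - 2) * (h - 2)
    return (r ** 2).sum() / (M - 1)                # Eq. (22)

rng = np.random.default_rng(3)
block = 100.0 + rng.normal(0.0, 2.0, (12, 12))     # uniform block, sigma = 2
sigma_est = np.sqrt(block_noise_var(block))
print(abs(sigma_est - 2.0) < 1.0)                  # rough recovery of sigma
```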

According to the above method, we calculated the SNR of the images obtained by the two true color night vision schemes as shown in Fig. 13.

Figure 13. The signal-to-noise ratio (SNR) of the images of scenes obtained by the two systems.

From Figs. 9–12, we can see that both the conventional tri-band system and the double-channel four-band true color night vision system we designed obtain true color night vision images consistent with the human visual system. However, the difference between the tri-band and four-band results is visible: the four-band system increases the output image sharpness compared with the tri-band system. In order to show how much of an effect the SNR has on image quality, we calculated the correlation coefficient (CC) [22] between the edge graphs of the obtained and reference images. As reference images we selected the full-band images of the four scenes, because they have rich detail and edge information. First, we used the Canny operator, which is robust to noise, to detect the edges of the obtained (tri-band and four-band) true color images and the reference images. Second, we calculated the CC between the tri-band or four-band images and the full-band images and compared them. The CC values are listed in Table 1.

TABLE 1. Correlation coefficient (CC) between obtained and reference images of the two systems.

| Scenes | CC (Tri-band) | CC (Four-band) |
|---|---|---|
| Standard Colorimetric Plate | 0.0727 | 0.2263 |
| Building | 0.0585 | 0.2237 |
| Indoor Rainbow Umbrella | 0.4398 | 0.5011 |
| Urban Night Scene | 0.5562 | 0.6071 |
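The CC between two edge maps is the Pearson correlation of the flattened images; a self-contained sketch with toy binary edge maps (the paper uses Canny edge images):

```python
import numpy as np

def cc(a, b):
    # Pearson correlation coefficient between two (edge) images.
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

e_ref = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])   # toy reference edge map
# Identical maps give CC = 1; complementary maps give CC = -1.
print(round(cc(e_ref, e_ref), 6), round(cc(e_ref, 1 - e_ref), 6))  # 1.0 -1.0
```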

From Fig. 13 and Table 1, the SNR of the images obtained by the four-band system was 125.0%, 145.8%, 86.0% and 51.8% higher, respectively, than that of the conventional tri-band system, and the CC values are also higher. This shows that the four-band system we designed can output more distinct true color night vision images than the conventional tri-band system. Under weak light, our four-band system has clear advantages over the tri-band system; however, as the light level increases, the performance difference between the two systems decreases.

### IV. CONCLUSIONS

By analyzing the SNR theory of the true color night vision system, we know that we can expand the system response to improve the output image SNR. Based on the conventional tri-band true color night vision scheme, we added a full-band channel and proposed an image fusion method based on PCA and NSST to build a double-channel four-band true color night vision system. Experiments were carried out with the built system, and we got true color night vision images that are consistent with the human visual system. In addition, we proposed a method based on edge extraction of the targets and spatial dimension decorrelation to calculate the SNR of the true color night vision images obtained. In the meantime, we calculated and compared the CC between obtained and reference edge graphs. The results demonstrated that our four-band true color night vision system can greatly improve the SNR of the output images, which may have a positive significance for the development of true color night vision technology.

### DISCLOSURES

The authors declare no conflicts of interest.

### DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

### ACKNOWLEDGMENT

Dongsheng Wu thanks the National Natural Science Foundation for help in identifying collaborators for this work.

### FUNDING

National Natural Science Foundation of China (61801507).


### References

1. D. Li, L. Deng, B. B. Gupta, H. Wang, and C. Choi, “A novel CNN based security guaranteed image watermarking generation scenario for smart city applications,” Inform. Sci. 479, 432-447 (2019).
2. H. Naeem, F. Ullah, M. R. Naeem, S. Khalid, D. Vasan, S. Jabbar, and S. J. A. H. N. Saeed, “Malware detection in industrial internet of things based on hybrid image visualization and deep learning model,” Ad Hoc Networks 105, 102154 (2020).
3. M. Zhang, M. Zhu, and X. Zhao, “Recognition of high-risk scenarios in building construction based on image semantics,” J. Comput. Civ. Eng. 34, 04020019 (2020).
4. Y. Yang, W. Zhang, W. He, Q. Chen, and G. Gu, “Research and implementation of color night vision imaging system based on FPGA and CMOS,” SPIE 11434, 114340U (2020).
5. J.-B. Yang Sr, Y. Lu, L. Wang, K. Zhao, C. Yang, Y. Liu, and X.-H. Chai, “Research on starlight level broad spectrum full color imaging technology,” SPIE 11338, 113381V (2019).
6. A. Toet and M. A. Hogervorst, “Progress in color night vision,” Opt. Eng. 51, 010901 (2012).
7. B. Ren, G. Jiao, Y. Li, Z. Zheng Jr, K. Qiao, Y. Yang, L. Yan, and Y. Yuan, “Research progress of true color digital night vision technology,” SPIE 11763, 117636C (2021).
8. K. Chrzanowski, “Review of night vision technology,” Opto-Electron. Rev. 21, 153-181 (2013).
9. M. R. Khosravi, M. Sharif-Yazd, M. K. Moghimi, A. Keshavarz, H. Rostami, and S. Mansouri, “MRF-based multispectral image fusion using an adaptive approach based on edge-guided interpolation,” arXiv:1512.08475 (2015).
10. M. Morimatsu, Y. Monno, M. Tanaka, and M. Okutomi, “Monochrome and color polarization demosaicking using edge-aware residual interpolation,” in Proc. IEEE International Conference on Image Processing-ICIP (Abu Dhabi, United Arab Emirates, Oct. 25-28, 2020), pp. 2571-2575.
11. L. Yu and B. Pan, “Color stereo-digital image correlation method using a single 3CCD color camera,” Exp. Mech. 57, 649-657 (2017).
12. O. M. B. Saeed, S. Sankaran, A. R. M. Shariff, H. Z. M. Shafri, R. Ehsani, M. S. Alfatni, and M. H. M. Hazir, “Classification of oil palm fresh fruit bunches based on their maturity using portable four-band sensor system,” Comput. Electron. Agric. 82, 55-60 (2012).
13. X. Dong, W. Xu, Z. Miao, L. Ma, C. Zhang, J. Yang, Z. Jin, A. B. J. Teoh, and J. Shen, “Abandoning the Bayer-filter to see in the dark,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (New Orleans, LA, USA, Jun. 21-24, 2022) pp. 17431-17440.
14. G. G. Jeon, “Optimally determined modified Bayer color array for imagery,” Adv. Mater. Res. 717, 501-505 (2013).
15. J. Kriesel and N. Gat, “True-color night vision cameras,” SPIE 6540, 65400D (2007).
16. Y. Jiang and D. Wu, “Analysis and test of signal-to-noise ratio of LLL night vision system,” in Proc. IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference-IMCEC (Chongqing, China, Oct. 11-13, 2019), pp. 648-651.
17. M. Aboelaze and F. Aloul, “Current and future trends in sensor networks: a survey,” in Proc. Second IFIP International Conference on Wireless and Optical Communications Networks (Dubai, United Arab Emirates, Mar. 6-8, 2005), pp. 551-555.
18. P. Cinzano, “Night sky photometry with sky quality meter,” ISTIL Int. Rep. (2005), Number 9, Version 1.4.
19. R. P. Desale and S. V. Verma, “Study and analysis of PCA, DCT & DWT based image fusion techniques,” in Proc. 2013 International Conference on Signal Processing, Image Processing & Pattern Recognition (Coimbatore, India, Feb. 7-8, 2013), pp. 66-69.
20. Y. Song, W. Wei, L. Zheng, X. Yang, L. Kai, and L. Wei, “An adaptive pansharpening method by using weighted least squares filter,” IEEE Geosci. Remote Sens. Lett. 13, 18-22 (2016).
21. L. Sun and F. Zhong, “Mixed noise estimation for hyperspectral image based on multiple bands prediction,” IEEE Geosci. Remote Sens. Lett. 19, 6007705 (2022).
22. K. Peng, X. Shen, F. Huang, and X. He, “A joint transform correlator encryption system based on binary encoding for grayscale images,” Curr. Opt. Photonics 3, 548-554 (2019).
