
Research Paper

Curr. Opt. Photon. 2023; 7(4): 408-418

Published online August 25, 2023 https://doi.org/10.3807/COPP.2023.7.4.408

Copyright © Optical Society of Korea.

A Non-uniform Correction Algorithm Based on Scene Nonlinear Filtering Residual Estimation

Hongfei Song, Kehang Zhang, Wen Tan, Fei Guo, Xinren Zhang, Wenxiao Cao

School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun, Jilin 130022, China

Corresponding author: cust_zkh123@163.com, ORCID 0000-0002-6744-666X

Received: January 13, 2023; Revised: May 11, 2023; Accepted: June 13, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Due to the technological limitations of infrared thermography, infrared focal plane array (IFPA) imaging exhibits stripe non-uniformity, which is typically fixed pattern noise that changes over time and temperature on top of existing non-uniformities. This paper proposes a stripe non-uniformity correction algorithm based on scene-adaptive nonlinear filtering. The algorithm first uses a nonlinear filter to remove single-column non-uniformities and calculates the actual residual with respect to the original image. Then, the current residual is obtained by using the predicted residual from the previous frame and the actual residual. Finally, we adaptively calculate the gain and bias coefficients according to global motion parameters to reduce artifacts. Experimental results show that the proposed algorithm protects image edges to a certain extent, converges fast, has high quality, and effectively removes column stripes and non-uniform random noise compared to other adaptive correction algorithms.

Keywords: Adaptive correction, Nonlinear filter, Non-uniformity correction, Residual estimation

OCIS codes: (110.2650) Fringe analysis; (110.3080) Infrared imaging; (110.4155) Multiframe image processing; (110.4280) Noise in imaging systems

I. INTRODUCTION

As a main development direction of infrared thermal imaging, the infrared focal plane array (IFPA) has been widely employed in military, aerospace, civil, and other fields due to its high sensitivity, strong detection capability, and strong adaptability to harsh climate conditions [1]. However, due to the limitations of non-uniformity correction technology, different pixels in the focal plane array (FPA) have different response rates, resulting in fixed-pattern noise that seriously affects the imaging quality of the infrared system. In particular, pixels in the same column of the FPA share the same integral readout circuit, resulting in column stripe noise in the imaging. Moreover, because the fixed noise shifts with temperature and time, it cannot be accurately calibrated [2].

There are currently two directions for non-uniformity correction algorithms: calibration-based and scene-based. The former includes typical methods such as the two-point correction method, the multi-point correction method, and the baffle correction method, which rely on the periodic use of radiation sources or baffles to provide uniform scenes for calibration [3]. The advantage of these methods is their simplicity and obvious correction effect; however, in practical applications, periodic loss of targets may occur. The latter direction, scene-based correction, includes representative methods such as the generative adversarial network method proposed by Mou et al. [4, 5], the constant statistics (CS) method proposed by Narendra and Foss [6], the method based on adaptive moving window moment matching proposed by Yan et al. [7], and the space low-pass and temporal high-pass algorithm proposed by Qian et al. [8]. These methods adaptively estimate the gain and bias coefficients based on the statistical characteristics of the image itself [4–8].

Figure 1 compares an infrared image before and after non-uniformity correction.

Figure 1. Comparison before and after non-uniformity correction: (a) the non-uniform infrared image, (b) the corrected image.

In this paper, a scene-based non-uniformity correction (NUC) algorithm is proposed for removing column stripes and random noise caused by non-uniformity in infrared (IR) imaging systems. IR non-uniformity is often considered fixed pattern noise, which varies with time and temperature and has a significant impact on image quality. Traditional methods ignore the characteristics of non-uniformity, so changes in scene details may be incorrectly identified as non-uniformity and filtered out. Based on the characteristics of visible light imaging, where the mutual influence of pixels adjacent to a target pixel follows a Gaussian distribution, it is assumed that the mutual influence between the target column and adjacent columns in an IR image also follows a Gaussian distribution. Additionally, pixels in the same column of an FPA share the same integration readout circuit, and the impact of fixed noise between different columns can be ignored. Therefore, it is assumed that a single column contains sufficient imaging information and that the impact of fixed noise between column elements can be ignored. The proposed method uses a nonlinear filtering approach to remove non-uniformity and random noise from a single column, obtaining a single-frame NUC image, and calculates the actual residual between the NUC image and the original image. Then, the current residual is obtained using the predicted residual from the previous frame and the actual residual. Finally, the algorithm adaptively calculates the gain and offset coefficients based on global motion parameters to reduce artifacts. Compared with other adaptive correction methods, this method can protect image edges to some extent and has a faster convergence rate.

2.1. The Nonlinear Filter

The adjacent columns of an infrared image have similar gray-level distributions [9–11]. We assume that a single column contains enough imaging information, that the fixed noises between the column elements are independent of each other, and that their mutual influence can be ignored. We also assume that the influence of adjacent columns on the imaging of the target column follows a normal distribution, so the imaging of the target column can be represented by its adjacent columns. Suppose the image has M rows and N columns, with j ∈ [0, N − 1] and i ∈ [0, M − 1]. First, we sort each column of the image to obtain the sequence $Seq_j[i]$, keeping the original index as $Index_j[i]$. Then we filter with a Gaussian kernel and calculate the sequence of each column as $Seq_{gauss_j}[i]$; the jth column transformation sequence obtained by Gaussian kernel filtering is:

$$Seq_{gauss_j}[i] = \sum_{k=-d}^{d} Gauss(k)\, Seq_{k+j}[i]$$

where d = 3σ, k is the column offset between a neighboring column and the jth column, and Gauss(k) is the Gaussian filter kernel:

$$Gauss(k) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{k^2}{2\sigma^2}\right)$$

where σ is the Gaussian standard deviation, which we can choose empirically. Different standard deviations produce different correction effects on the same image, and the convergence time of the algorithm also differs. The influence only exists in adjacent columns, so σ should not be too large: a value that is too large easily causes image distortion, while one that is too small yields no obvious filtering effect. Figure 2 shows the predicted images of the same non-uniform image under different σ.

Figure 2. Predicted images under different Gaussian standard deviation σ: (a) original image, (b) predicted image when σ = 0.5, (c) predicted image when σ = 1, (d) predicted image when σ = 5.
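As a concrete sketch, the discrete kernel above can be sampled for k ∈ [−d, d] with d = 3σ (a minimal pure-Python sketch; the function name is ours, not from the paper):

```python
import math

def gaussian_kernel(sigma):
    """Sample Gauss(k) = exp(-k^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
    for k in [-d, d] with d = ceil(3 * sigma)."""
    d = int(math.ceil(3 * sigma))
    return [math.exp(-k * k / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)
            for k in range(-d, d + 1)]
```

With σ = 1 this gives a 7-tap kernel whose weights sum to roughly 1; in practice the taps are often renormalized so that filtering preserves mean brightness.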

Finally, restore each column’s pixels according to the transformation sequence and the original index:

$$F_j\left[Index_j[i]\right] = Seq_{gauss_j}[i]$$

where Fj[Indexj[i] is the gray value of Indexj[i]th row and jth column, and the reduction formula for pixel transformation of each column is as follows:

$$F(i, j) = Seq_{gauss_j} \circ Seq_j \circ u(i, j)$$

where the $\circ$ operation transforms $Seq_j$ into $Seq_{gauss_j}$ and restores the pixel value. $F(i, j)$ is the pixel value after reverse mapping, and $u(i, j)$ is the original gray value at coordinate $(i, j)$.
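Putting the sort, Gaussian-filter, and index-restore steps together, the column-wise nonlinear filter might be sketched as follows (helper names and edge clamping are our assumptions; the paper does not specify border handling):

```python
import math

def column_nonlinear_filter(img, sigma=1.0):
    """Sketch of the column-wise nonlinear filter.

    For each column j: sort the column (Seq_j, keeping Index_j), replace each
    rank i by a Gaussian-weighted average of the same rank in neighboring
    columns, then scatter the filtered values back to their original rows.
    """
    M, N = len(img), len(img[0])
    d = int(math.ceil(3 * sigma))
    gauss = [math.exp(-k * k / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)
             for k in range(-d, d + 1)]
    norm = sum(gauss)  # renormalize so a constant image stays constant

    # Seq_j[i]: sorted column values; Index_j[i]: original row of each rank.
    seqs, idxs = [], []
    for j in range(N):
        order = sorted(range(M), key=lambda i: img[i][j])
        idxs.append(order)
        seqs.append([img[i][j] for i in order])

    out = [[0.0] * N for _ in range(M)]
    for j in range(N):
        for i in range(M):
            # Gaussian-filter rank i across columns j-d..j+d (clamped at edges).
            acc = 0.0
            for k in range(-d, d + 1):
                jj = min(max(j + k, 0), N - 1)
                acc += gauss[k + d] * seqs[jj][i]
            out[idxs[j][i]][j] = acc / norm
    return out
```

Applied to an image with one bright stripe column, the stripe is pulled toward its neighbors while uniform columns are left nearly unchanged.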

2.2. Cross Sampling

After nonlinear filtering, most of the fringe non-uniform noise is removed. However, the random non-uniform noise still needs to be sampled. The purpose of cross sampling is to filter the random noise in the condition of protecting the details of the original pixels so as to obtain the final prediction image of the current frame. The rule of cross sampling is:

$$D_{i,j}(n) = \frac{1}{4\mu+1}\left[\sum_{\varphi=-\mu}^{\mu} F_{i+\varphi,j}(n) + \sum_{\varphi=-\mu}^{\mu} F_{i,j+\varphi}(n) - F_{i,j}(n)\right]$$

where $D_{i,j}(n)$ is the pixel value at coordinate $(i, j)$ after cross sampling, $F_{i,j}(n)$ is the pixel value at coordinate $(i, j)$ before sampling, and µ is the sampling parameter.
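The equation above averages over a cross of 4µ + 1 distinct pixels (the center is shared by both arms, hence the subtraction). A small sketch, with clamped borders as our own choice since the paper leaves border handling unspecified:

```python
def cross_sample(F, i, j, mu=1):
    """Cross sampling: average a (2mu+1)-pixel vertical arm and a
    (2mu+1)-pixel horizontal arm, counting the shared center only once."""
    M, N = len(F), len(F[0])
    total = 0.0
    for phi in range(-mu, mu + 1):
        total += F[min(max(i + phi, 0), M - 1)][j]   # vertical arm
        total += F[i][min(max(j + phi, 0), N - 1)]   # horizontal arm
    total -= F[i][j]  # center pixel was counted in both arms
    return total / (4 * mu + 1)
```

On a uniform patch the result equals the pixel value itself, so the sampler only smooths genuine deviations.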

2.3. Single Layer Residual Estimation Neural Network

The traditional neural network method uses the mean of four neighborhoods as the forecast image [4] for stripe noise, but its effect is not ideal. Moreover, the traditional neural network method severely loses image information and cannot protect image edges [12–14]. The traditional residual calculation ignores the characteristics of non-uniformity, so changes in scene details can be mistaken for non-uniformity and filtered out. In this paper, we use nonlinear filtering to obtain the forecast image and calculate the actual residual with respect to the original image, then predict the residual in the time domain and the temperature response, and use weights to calculate the current frame residual. Finally, the adaptive correction calculates the gain and bias coefficients. The proposed method estimates the dynamic growth of non-uniformity, protects image details, and effectively improves the convergence speed and the degree of non-uniformity correction. The experiments show that the proposed algorithm has obvious effects on stripe noise and random noise. The linear correction model output function is expressed as:

$$Y_{i,j} = G_{i,j}\,X_{i,j} + O_{i,j} + \varepsilon$$

where $X_{i,j}$ is the response value of the focal plane detector, $Y_{i,j}$ is the output value after NUC, $G_{i,j}$ is the gain coefficient, $O_{i,j}$ is the offset coefficient [15, 16], and $\varepsilon$ is a random disturbance.

For the linear response correction model, to achieve dynamic correction we need to update the gain and bias coefficients dynamically. We assume that the pixel value of the nth frame after nonlinear filtering and cross sampling is $D_{i,j}(n)$; the output value of the original image after non-uniformity correction, $Y_{i,j}(n)$, is:

$$Y_{i,j}(n) = G_{i,j}(n-1)\,X_{i,j}(n) + O_{i,j}(n-1) + \varepsilon$$

and the error function $E_{i,j}(n)$ is:

$$E_{i,j}(n) = Y_{i,j}(n) - D_{i,j}(n)$$

In Eq. (8), non-uniformity is approximated by residuals. The residual of the nth frame is spatially similar to the residual of the first frame, but its value is gradually influenced by the surrounding environment. Therefore, the residual of the nth frame can be predicted from the residual of the (n − 1)th frame using a temporal-temperature model. By controlling the weight coefficient between the current residual and the predicted residual, the difference between the residual and the actual non-uniformity can be reduced, thus improving the correction effect of non-uniformity. The current residual can be expressed as:

$$E_{i,j}(n) = \begin{cases} W_n\,E_{i,j}(n) + W_{n-1}\,E_{i,j}(n-1) + R(n), & n \geq 2, \\ E_{i,j}(n), & \text{otherwise}, \end{cases}$$

where $E_{i,j}(n-1)$ is the residual of the previous frame, $E_{i,j}(n)$ is the current residual, $W_n$ is the confidence weight of the actual residual, and $W_{n-1}$ is the weight of the predicted residual; $W_n$ and $W_{n-1}$ obey the following rule:

$$W_n + W_{n-1} = 1$$

where the residual estimate R(n) can be simulated using a temporal-temperature model:

$$R(n) = \frac{n-2}{n}\,\gamma + \frac{T_n - t_n}{T_{\max} - T_{\min}}\,\beta$$

where β is the temperature noise growth parameter, which is proportional to the temperature change, $T_{\max}$ is the maximum detectable temperature, $T_{\min}$ is the minimum detectable temperature, $t_n$ is the FPA temperature, and γ is the time-domain noise growth parameter, which gradually increases over time. γ is the original inter-frame difference:

$$\gamma = E_{i,j}(n) - E_{i,j}(n-1)$$
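The weighted fusion of the actual and predicted residuals (the n ≥ 2 case) can be sketched as follows, using the experimentally determined weights reported in Section 3.1 and taking the growth estimate R(n) as an input rather than recomputing it:

```python
# Confidence weights determined experimentally in Sec. 3.1; W_n + W_{n-1} = 1.
W_N, W_PREV = 0.347, 0.653

def fused_residual(e_actual, e_prev, r_n):
    """Current-frame residual: Wn * E(n) + W(n-1) * E(n-1) + R(n)."""
    return W_N * e_actual + W_PREV * e_prev + r_n
```

Because the weights sum to one, a residual that is stable across frames (and has no growth term) passes through unchanged.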

In order to minimize the error function, we use the stochastic gradient descent method to update the gain and bias coefficients of each pixel point [17]. According to the error function, the variation of the gain and bias coefficients can be obtained as:

$$\Delta G_{i,j}(n) = 2\left[W_n\,E_{i,j}(n) + W_{n-1}\,E_{i,j}(n-1) + R(n)\right]X_{i,j}(n)$$

$$\Delta O_{i,j}(n) = 2\left[W_n\,E_{i,j}(n) + W_{n-1}\,E_{i,j}(n-1) + R(n)\right]$$

According to the stochastic gradient descent method:

$$\tilde{G}_{i,j}(n) = G_{i,j}(n-1) - \lambda\,\Delta G_{i,j}(n)$$

$$\tilde{O}_{i,j}(n) = O_{i,j}(n-1) - \lambda\,\Delta O_{i,j}(n)$$

where λ is the step size of the stochastic gradient descent method. Changes in the coefficients have a great influence on the gray value, so the iterative step size should not be too large. $\tilde{G}_{i,j}(n)$ and $\tilde{O}_{i,j}(n)$ are the gain and offset of the (n + 1)th frame, and the output of the next frame after the update is:

$$Y_{i,j}(n+1) = \tilde{G}_{i,j}(n)\,X_{i,j}(n+1) + \tilde{O}_{i,j}(n) + \varepsilon$$
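A single gradient step on one pixel's gain and offset, following the ΔG and ΔO expressions above, might look like this (the step-size value is illustrative; the paper only notes that it should be small):

```python
LAM = 0.01  # illustrative step size lambda (assumed value, not from the paper)

def sgd_update(g, o, fused_e, x):
    """One stochastic-gradient step on gain g and offset o for a pixel with
    detector response x, given the fused residual Wn*E(n)+W(n-1)*E(n-1)+R(n)."""
    delta_g = 2.0 * fused_e * x   # Delta G: gradient of the squared error w.r.t. gain
    delta_o = 2.0 * fused_e       # Delta O: gradient w.r.t. offset
    return g - LAM * delta_g, o - LAM * delta_o
```

Each frame then reuses the updated pair to form the next corrected output Y(n + 1).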

In the practical application of the algorithm, to reduce the negative impact of rapid scene changes, this paper uses a scene change threshold $M_t$. When $M_t$ is 1, the algorithm only filters the image and the learning rate is 0. When $M_t$ is not 1, the gain and bias of the image are corrected.

First, we calculate the difference image df for adjacent frames:

$$df_{i,j}(n) = X_{i,j}(n) - X_{i,j}(n-1)$$

where df is the difference image and X is the original image. Then the difference image is binarized, which can be expressed as follows:

$$B_s(i,j,n) = \begin{cases} 1, & df_{i,j}(n) > T_s, \\ 0, & df_{i,j}(n) < T_s, \end{cases}$$

where $B_s$ is the binary image (with values 0 or 1), and $T_s$ is the change threshold. We then calculate the motion parameters of the current frame:

$$M_o(n) = \sum_{i=1}^{M}\sum_{j=1}^{N} B_s(i,j,n)$$

where $M_o$ is the global motion parameter. $M_o$ is then binarized to obtain $B_g$ as follows:

$$B_g(n) = \begin{cases} 1, & M_o(n) > T_g, \\ 0, & M_o(n) < T_g, \end{cases}$$

where Tg is the scene movement threshold. Finally, the correction formula of the output of the next frame can be expressed as:

$$M_t(n) = B_s(i,j,n)\,B_g(n)$$

$$Y_{i,j}(n+1) = \begin{cases} G_{i,j}(n-1)\,X_{i,j}(n+1) + O_{i,j}(n-1) + \varepsilon, & M_t(n) = 1, \\ \tilde{G}_{i,j}(n)\,X_{i,j}(n+1) + \tilde{O}_{i,j}(n) + \varepsilon, & \text{otherwise}, \end{cases}$$

where $M_t(n)$ is the motion judgment parameter. With motion frame estimation, we can judge whether the current frame is moving, which greatly reduces the impact of scene changes on imaging, such as artifacts.
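The frame-difference gating above might be sketched as follows. We binarize on the absolute difference and return only the global flag $B_g$ (the per-pixel mask $B_s$ combines with it as in the equations); the default thresholds follow Section 3.1:

```python
def scene_motion_flag(prev_frame, cur_frame, ts=15, tg=10922):
    """Count pixels whose inter-frame difference exceeds Ts (the B_s mask),
    then threshold the count M_o against Tg to get the global flag B_g."""
    moved = 0
    for row_prev, row_cur in zip(prev_frame, cur_frame):
        for p, c in zip(row_prev, row_cur):
            if abs(c - p) > ts:   # B_s(i, j) = 1
                moved += 1        # accumulate M_o
    return 1 if moved > tg else 0  # B_g: 1 means the scene moved, freeze updates
```

When the flag is 1, the algorithm keeps the previous gain and offset instead of learning from the moving frame, which is what suppresses ghosting.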

Figure 3 is a general flowchart of the algorithm.

Figure 3. Algorithm flowchart.

3.1. Experimental Equipment

In this experiment, we acquired three sets of raw video streams using the IRay Photoelectric Xcore_LT series temperature-measurement uncooled infrared detector (Fig. 4). The image size was 640 × 512, and the experimental scenes were two cups at room temperature, a blackbody radiation source at 40 ℃, and a palm at room temperature.

Figure 4. IRay Photoelectric Xcore_LT series temperature-measurement uncooled infrared movement components.

In this experiment, data storage and computation were carried out using data structures from the OpenCV 4.4 open-source library, and the infrared images were acquired using the RTSP protocol. The nonlinear filtering residual estimation algorithm employed a Gaussian filter with a standard deviation of σ = 1.25 and a sampling parameter of µ = 1; the gain parameters $G_{i,j}$ were initialized as a 640 × 512 matrix of ones and the bias parameters as a 640 × 512 matrix of zeros. To reduce algorithm complexity and better address fixed-pattern non-uniform noise, the random perturbation error ε was ignored, and the weight values $W_n$ and $W_{n-1}$ were determined through multiple experiments as 0.347 and 0.653, respectively. The temperature noise growth parameter β was set to 2.42, the change threshold $T_s$ to 15, and the scene motion threshold $T_g$ to 10,922. Certain parameters need to be adjusted for different infrared images. The algorithm was implemented in C++ in the VS2015 environment and executed on an i7-6700HQ CPU @ 2.60 GHz with 16 GB of RAM.

3.2. Analytical Metrics

To evaluate the performance and effectiveness of the proposed algorithm, we use roughness (ρ) to quantitatively evaluate the non-uniformity of the original infrared image and the non-uniformity-corrected image [18, 19]. Roughness (ρ) is commonly used to evaluate streak non-uniform noise in infrared images, and is defined as:

$$\rho = \frac{\|h * I\|_{L_1} + \|h^T * I\|_{L_1}}{\|I\|_{L_1}}$$

where ρ is the roughness of the image, I is the original image matrix, h is the column-difference convolution term, $h^T$ is the row-difference convolution term, and $\|I\|_{L_1}$ represents the L1 norm of the matrix I. The smaller the ρ value, the smaller the column and row differences of the image, and the lower the image noise [20].
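A direct reading of the roughness definition, taking h = [1, −1] as the horizontal (column) difference and its transpose as the vertical one, and summing absolute differences over the valid region (our interpretation of the convolution terms):

```python
def roughness(img):
    """Roughness rho = (||h*I||_1 + ||hT*I||_1) / ||I||_1 with h = [1, -1]."""
    M, N = len(img), len(img[0])
    col_diff = sum(abs(img[i][j] - img[i][j - 1])
                   for i in range(M) for j in range(1, N))   # ||h * I||_1
    row_diff = sum(abs(img[i][j] - img[i - 1][j])
                   for i in range(1, M) for j in range(N))   # ||hT * I||_1
    l1 = sum(abs(v) for row in img for v in row)             # ||I||_1
    return (col_diff + row_diff) / l1
```

A perfectly uniform image has roughness 0, and a column-striped image scores high, which is why the metric tracks stripe noise well.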

In special cases, such as a uniform radiation field, peak signal-to-noise ratio (PSNR) is used as an auxiliary evaluation metric in this paper. For a noisy image K with M rows and N columns and its corresponding noise-free ideal image I, the MSE is defined as:

$$MSE = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I(i,j) - K(i,j)\right]^2$$

and the definition of PSNR (in dB) is:

$$PSNR = 10\log_{10}\left(\frac{MAX_I^2}{MSE}\right) = 20\log_{10}\left(\frac{MAX_I}{\sqrt{MSE}}\right)$$
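The two definitions combine into a few lines; here MAX_I defaults to 255 for 8-bit images (an assumption, since the detector output may use a wider range):

```python
import math

def psnr(ideal, noisy, max_i=255.0):
    """PSNR in dB from the MSE between an ideal image I and a noisy image K."""
    M, N = len(ideal), len(ideal[0])
    mse = sum((ideal[i][j] - noisy[i][j]) ** 2
              for i in range(M) for j in range(N)) / (M * N)
    return 10.0 * math.log10(max_i ** 2 / mse)
```

Identical images would make the MSE zero (infinite PSNR), so in practice the metric is only evaluated on images that still contain noise.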

3.3. Comparison of Results

In our experiment, we employed roughness (ρ) in different scenarios as the primary evaluation metric [21], with PSNR used as a complementary metric [22]. The correction effect was jointly assessed by analyzing the error images between the corrected images and the original images [23, 24]. For the same infrared FPA, we analyzed video frame sequences captured from three different scenarios: two types of cups at room temperature, a blackbody radiation source at 40 ℃, and a palm at room temperature. We compared the results of the co-occurring filter neural network (CoFNN) method [3], the CS method [5], and our proposed algorithm. Figure 5 depicts a three-dimensional comparison of the non-uniform and corrected images in a scene with a cup as the background. It can be seen in Fig. 5 that the non-uniform image appears extremely rough, while the corrected image appears significantly smoother.

Figure 5. 3D view of non-uniform image and corrected image. (a) Non-uniform image, (b) corrected image.

Figures 6–8 show a comparison of the correction results of different algorithms in three different scenarios. The smoother the image, the better the correction effect. As indicated in the figures, non-uniformity severely affects the imaging effect. The proposed algorithm provides the most intuitive removal of non-uniformity and effectively reduces stripe non-uniformity while suppressing white noise.

Figure 6. The correction results of different algorithms in the scenario with a cup as the background. (a) Original, (b) CoFNN, (c) constant statistics (CS), and (d) our method.

Figure 7. The correction results of different algorithms in the uniform radiation scenario. (a) Original, (b) CoFNN, (c) constant statistics (CS), and (d) our method.

Figure 8. The correction results of different algorithms in the scenario with a palm as the background. (a) Original, (b) CoFNN, (c) constant statistics (CS), and (d) our method.

In terms of the correction effect of images in the same scenario, the CoFNN algorithm has a lower degree of convergence, and the correction effect of random non-uniformity is obvious, but the non-uniformity of stripes is still clearly visible. The CS algorithm did not achieve complete convergence, and there are still some artifacts in the results. The algorithm proposed in this paper performs the best in terms of degree of convergence.

These results demonstrate the effectiveness of the proposed algorithm in removing non-uniformity and reducing noise in stripe patterns. The findings suggest that it can be applied in various scenarios and has great potential for further development.

Figures 9–11 show the error images (grayscale) between the correction results of each algorithm and the original image. Subjectively, the striped information in the images represents the filtered non-uniformity: the clearer the stripes, the stronger the filtering effect, while the darker the image, the less obvious the filtering of non-uniformity and noise. From the images, it can be seen that the proposed algorithm has the highest pixel values in the annotated region and appears brighter, indicating that it effectively filters non-uniformity. As for stripe texture, the proposed algorithm shows clear column stripes in the annotated region, while the other two algorithms do not fully reflect the striped information. Therefore, the algorithm filters most of the striped non-uniformity, and the obtained error image shows a significant filtering effect with clearer stripes. This indicates that the striped non-uniformity has been effectively suppressed and that the proposed algorithm has a significant advantage over other scene-based non-uniformity correction algorithms.

Figure 9. Non-uniformity corrected by different algorithms in the cup scenario. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

Figure 10. Non-uniformity corrected by different algorithms in the uniform radiation scenario. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

Figure 11. Non-uniformity corrected by different algorithms in the palm scenario. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

In Fig. 12, the output of each algorithm after moving the cup following a period of time in the focused state is presented. In Fig. 12(a), the CoFNN method failed to update parameters in time for the rapid transformation of the scene and used the correction parameters from before the cup and the bottle were moved, resulting in an incorrect output, i.e., a ghost. As shown in Fig. 12(b), the CS method requires multiple frames to converge, and an obvious ghost is generated after quickly moving the cup. It can be seen in Fig. 12(c) that the method proposed in this paper does not produce a ghost, which demonstrates that it can effectively suppress ghosting.

Figure 12. The correction results of each algorithm under scene change. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

As shown in Table 1, this experiment used PSNR and roughness ρ as objective evaluation indicators. The PSNR of the original image compared to the ideal image (Fig. 13) was 35.97. The PSNR after CoFNN correction was 38.3, which was 6.7% higher than the original image; after CS correction it was 36.5, which was 1.9% higher; and for the algorithm proposed in this paper it was 40.7, which was 13.4% higher. From the perspective of PSNR, the corrected results of the proposed algorithm were closest to the ideal image. Regarding roughness ρ, the roughness of the original image was approximately 0.028. After CoFNN correction it was 0.019, which was 32.1% lower than the original image; after CS correction it was 0.023, which was 17.9% lower; and for the proposed algorithm it was 0.016, which was 42.8% lower. From the perspective of roughness, the proposed algorithm had significant advantages, objectively demonstrating its significant correction effect on non-uniformity.

TABLE 1 Peak signal-to-noise ratio (PSNR) and roughness of images corrected by different algorithms under uniform radiation

Analytical Method    Original    CoFNN      CS         Our
PSNR                 35.9738     38.3276    36.5379    40.7353
Roughness            0.02796     0.01935    0.02315    0.01569


Figure 13. An ideal infrared image with a radiation temperature of 40 ℃.

As shown in Table 2, in terms of convergence frames, the proposed algorithm converged from the 100th frame, while the other two algorithms still maintained a high level of roughness at 100 frames. Within 10 frames, the CoFNN algorithm reduced non-uniformity by 47.6%, the CS algorithm by 21.4%, and the proposed algorithm by 52.3%. By the 200th frame, the CoFNN algorithm had removed 55.5% of the non-uniformity, the CS algorithm 24.6%, and the proposed algorithm 61.3%. The non-uniformity removal of the proposed algorithm was significantly better than that of the other algorithms, and the data demonstrate its obvious advantages in non-uniformity correction.

TABLE 2 Roughness of image corrected by different algorithms in general environment

Frame    Original    CoFNN       CS          Our
1        0.202478    0.202478    0.202478    0.202478
10       0.210027    0.110701    0.165516    0.100062
50       0.208834    0.100453    0.170952    0.088301
100      0.208776    0.094651    0.167137    0.083539
150      0.208607    0.092677    0.162539    0.082162
200      0.207228    0.092182    0.156324    0.080299


Figure 14 shows the roughness variation curve of 200 frames of images corrected by different algorithms. It can be seen in the figure that the roughness of the original image fluctuates around 0.2, while the corrected image tends to converge after several frames. The curve in the figure indicates that the proposed algorithm has a better convergence effect than CS and CoFNN. In summary, the proposed method balances noise reduction and protection of image details, and has good suppression effects on non-uniformity and random noise, as well as on ghosting artifacts. Both subjective and objective evaluations confirm that the proposed algorithm has an ideal correction effect in practical applications.

Figure 14. The roughness variation curve of 200 frames of images in the cup scene.

We propose a scene-based non-uniformity correction method for the IFPA to solve the issue of IR image non-uniformity. The proposed method employs nonlinear filtering to remove the non-uniformity and random noise in a single column and calculates the actual residual with respect to the original image. Then, the current residual is obtained by using the predicted residual from the previous frame and the actual residual. Finally, we adaptively calculate the gain and bias coefficients based on the global motion parameters to reduce artifacts. The results show that, compared with other correction methods, our method can effectively suppress ghosting and has advantages in convergence, image quality, correction effect, and detail preservation. It also significantly suppresses stripe non-uniformity and random noise.

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

1. L. Song and H. Huang, “Spatial and temporal adaptive nonuniformity correction for infrared focal plane arrays,” Opt. Express 30, 44681-44700 (2022).
2. Y. Tendero, J. Gilles, S. Landeau, and J. M. Morel, “Efficient single image non-uniformity correction algorithm,” Proc. SPIE 7834, 96-107 (2010).
3. L. Li, Q. Li, H. Feng, Z. Xu, and Y. Chen, “A novel infrared focal plane non-uniformity correction method based on co-occurrence filter and adaptive learning rate,” IEEE Access 7, 40941-40950 (2019).
4. X. Mou, T. Zhu, and X. Zhou, “Visible-image-assisted nonuniformity correction of infrared images using the GAN with SEBlock,” Sensors 23, 3282 (2023).
5. Y. JunLu, “Nonuniformity correction design and implementation for infrared image based on FPGA and artificial neural networks,” J. Phys.: Conf. Ser. 1693, 012177 (2020).
6. P. M. Narendra and N. A. Foss, “Shutterless fixed pattern noise correction for infrared imaging arrays,” Proc. SPIE 282, 44-51 (1981).
7. J. Yan, Y. Kang, Y. Ni, Y. Zhang, J. Fan, and X. Hu, “Non-uniformity correction method of remote sensing images based on adaptive moving window moment matching,” J. Imaging Sci. Technol. 66, 50502 (2022).
8. W. Qian, Q. Chen, and G. Gu, “Space low-pass and temporal high-pass nonuniformity correction algorithm,” Opt. Rev. 17, 24-29 (2010).
9. T. Guillemot and J. Delon, “Implementation of the midway image equalization,” Image Process. On Line 6, 114-129 (2016).
10. Y. Tendero, S. Landeau, and J. Gilles, “Non-uniformity correction of infrared images by midway equalization,” Image Process. On Line 2, 134-146 (2012).
11. S. Yang, H. Qin, X. Yan, S. Yuan, and Q. Zeng, “Mid-wave infrared snapshot compressive spectral imager with deep infrared denoising prior,” Remote Sens. 15, 280 (2023).
12. Y. Cao, M. Y. Yang, and C.-L. Tisse, “Effective strip noise removal for low-textured infrared images based on 1-D guided filtering,” IEEE Trans. Circuits Syst. Video Technol. 26, 2176-2188 (2015).
13. N. Liu and J. Xie, “Interframe phase-correlated registration scene-based nonuniformity correction technology,” Infrared Phys. Technol. 69, 198-205 (2015).
14. B. Lv, S. Tong, Q. Liu, and H. Sun, “Statistical scene-based non-uniformity correction method with interframe registration,” Sensors 19, 5395 (2019).
15. C. H. Lu, “Stripe non-uniformity correction of infrared images using parameter estimation,” Infrared Phys. Technol. 107, 103313 (2020).
16. R. C. Calik, E. Tunali, B. Ercan, and S. Oz, “A study on calibration methods for infrared focal plane array cameras,” in Proc. 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Funchal, Madeira, Portugal, Jan. 27-29, 2018), pp. 219-226.
17. Y. Zhang, X. Li, X. Zheng, and Q. Wu, “Adaptive temporal high-pass infrared non-uniformity correction algorithm based on guided filter,” in Proc. 7th International Conference on Computing and Artificial Intelligence (Tianjin, China, Apr. 23-26, 2021), pp. 459-464.
18. G. Ness, A. Oved, and I. Kakon, “Derivative based focal plane array nonuniformity correction,” arXiv:1702.06118 (2017).
19. Y. Sheng, X. Dun, W. Jin, F. Zhou, X. Wang, F. Mi, and S. Xiao, “The on-orbit non-uniformity correction method with modulated internal calibration sources for infrared remote sensing systems,” Remote Sens. 10, 830 (2018).
20. L. Geng, Q. Chen, W. Qian, and Y. Zhang, “Scene-based nonuniformity correction algorithm based on temporal median filter,” J. Opt. Soc. Korea 17, 255-261 (2013).
21. W. Qian, Q. Chen, and G. Gu, “Minimum mean square error method for stripe nonuniformity correction,” Chin. Opt. Lett. 9, 051003 (2011).
22. T. Orżanowski, “Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays,” SpringerPlus 5, 1831 (2016).
23. T. Li, Y. Zhao, Y. Li, and G. Zhou, “Non-uniformity correction of infrared images based on improved CNN with long-short connections,” IEEE Photonics J. 13, 7800213 (2021).
24. B. Gutschwager and J. Hollandt, “Nonuniformity correction of infrared cameras by reading radiance temperatures with a spatially nonhomogeneous radiation source,” Meas. Sci. Technol. 28, 015401 (2016).

Article

Research Paper

Curr. Opt. Photon. 2023; 7(4): 408-418

Published online August 25, 2023 https://doi.org/10.3807/COPP.2023.7.4.408

Copyright © Optical Society of Korea.

A Non-uniform Correction Algorithm Based on Scene Nonlinear Filtering Residual Estimation

Hongfei Song, Kehang Zhang , Wen Tan, Fei Guo, Xinren Zhang, Wenxiao Cao

School of Optoelectronic Engineering, Changchun University of Science and Technology, Changchun, Jilin 130022, China

Correspondence to:*cust_zkh123@163.com, ORCID 0000-0002-6744-666X

Received: January 13, 2023; Revised: May 11, 2023; Accepted: June 13, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Due to the technological limitations of infrared thermography, infrared focal plane array (IFPA) imaging exhibits stripe non-uniformity, which is typically fixed pattern noise that changes over time and temperature on top of existing non-uniformities. This paper proposes a stripe non-uniformity correction algorithm based on scene-adaptive nonlinear filtering. The algorithm first uses a nonlinear filter to remove single-column non-uniformities and calculates the actual residual with respect to the original image. Then, the current residual is obtained by using the predicted residual from the previous frame and the actual residual. Finally, we adaptively calculate the gain and bias coefficients according to global motion parameters to reduce artifacts. Experimental results show that the proposed algorithm protects image edges to a certain extent, converges fast, has high quality, and effectively removes column stripes and non-uniform random noise compared to other adaptive correction algorithms.

Keywords: Adaptive correction, Nonlinear filter, Non-uniformity correction, Residual estimation

I. INTRODUCTION

As a main development direction of infrared thermal imaging, the infrared focal plane array (IFPA) has been widely employed in military, aerospace, civil, and other fields due to its high sensitivity, strong detection capability, and adaptability to harsh climate conditions [1]. However, owing to the limitations of non-uniformity correction technology, different pixels in the focal plane array (FPA) have different response rates, resulting in fixed-pattern noise that seriously degrades the imaging quality of the infrared system. In particular, pixels in the same column of the FPA share the same integral readout circuit, which produces column stripe noise in the image. Moreover, because the fixed-pattern noise drifts with temperature and time, it cannot be accurately calibrated [2].

There are currently two directions for non-uniformity correction algorithms: calibration-based and scene-based. The former includes typical methods such as the two-point correction method, the multi-point correction method, and the baffle correction method, which rely on the periodic use of radiation sources or baffles to provide uniform scenes for calibration [3]. Calibration-based correction is simple and its effect is obvious; however, in practical applications, periodic loss of targets may occur. The latter direction, scene-based correction, includes representative methods such as the generative adversarial network method proposed by Mou et al. [4, 5], the constant statistics (CS) method proposed by Narendra and Foss [6], the adaptive moving window moment matching method proposed by Yan et al. [7], and the space low-pass and temporal high-pass algorithm proposed by Qian et al. [8]. These methods adaptively estimate the gain and bias coefficients from the statistical characteristics of the image itself [4-8].

Figure 1 compares an infrared image before and after correction by a non-uniformity correction algorithm.

Figure 1. Comparison chart before and after non-uniform correction: (a) is the non-uniform infrared image, (b) is the corrected image.

In this paper, a scene-based non-uniformity correction (NUC) algorithm is proposed for removing column stripes and random noise caused by non-uniformity in infrared (IR) imaging systems. IR non-uniformity is often treated as fixed pattern noise, which varies with time and temperature and has a significant impact on image quality. Traditional methods ignore the characteristics of non-uniformity, so changes in scene details may be incorrectly identified as non-uniformity and filtered out. Based on the characteristic of visible light imaging that the mutual influence between a target pixel and its adjacent pixels follows a Gaussian distribution, it is assumed that the mutual influence between a target column and adjacent columns in an IR image also follows a Gaussian distribution. Additionally, pixels in the same column of an FPA share the same integration readout circuit, so it is assumed that a single column contains sufficient imaging information and that the fixed noise between different columns can be ignored. The proposed method uses a nonlinear filtering approach to remove non-uniformity and random noise from a single column, obtaining a single-frame NUC image, and calculates the actual residual between the NUC image and the original image. Then, the current residual is obtained using the predicted residual from the previous frame and the actual residual. Finally, the algorithm adaptively calculates the gain and offset coefficients based on global motion parameters to reduce artifacts. Compared with other adaptive correction methods, this method protects image edges to some extent and has a faster convergence rate.

II. Algorithm Concepts and Analysis

2.1. The Nonlinear Filter

The adjacent columns of an infrared image have similar gray-level distributions [9-11]. We assume that a single column contains enough imaging information, that the fixed noises between the column elements are independent of each other so their mutual influence can be ignored, and that the influence of the target column and its adjacent columns on the scene imaging follows a normal distribution, so the imaging of the target column can be represented by the adjacent columns. Suppose the image has M rows and N columns, with j ∈ [0, N − 1] and i ∈ [0, M − 1]. First, we sort each column of the image to obtain the sequence Seq_j[i], keeping the original index as Index_j[i]. Then we filter with a Gaussian kernel to obtain the sequence Seq_gauss^j[i] of each column; the jth column transformation sequence obtained by Gaussian kernel filtering is:

$$\mathrm{Seq}_{\mathrm{gauss}}^{j}[i] = \sum_{k=-d}^{d} \mathrm{Gauss}(k)\,\mathrm{Seq}^{j+k}[i]$$

where d = 3σ, k is the distance between adjacent columns and the jth column, and Gauss(k) is the Gaussian filter kernel:

$$\mathrm{Gauss}(k) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{k^{2}}{2\sigma^{2}}\right)$$

where σ is the Gaussian standard deviation, chosen empirically. Different standard deviations yield different correction effects on the same image, and the convergence time of the algorithm also differs. The influence only exists between adjacent columns, so σ should not be too large: when it is too large it easily causes image distortion, and when it is too small the filtering effect is not obvious. Figure 2 shows the predicted images of the same non-uniform image under different σ.

Figure 2. Predicted images under different Gaussian standard deviation σ: (a) original image, (b) predicted image when σ = 0.5, (c) predicted image when σ = 1, (d) predicted image when σ = 5.

Finally, restore each column’s pixels according to the transformation sequence and the original index:

$$F_{j}\left[\mathrm{Index}_{j}[i]\right] = \mathrm{Seq}_{\mathrm{gauss}}^{j}[i]$$

where Fj[Indexj[i] is the gray value of Indexj[i]th row and jth column, and the reduction formula for pixel transformation of each column is as follows:

$$F(i,j) = \mathrm{Seq}_{\mathrm{gauss}}^{j}[i] \circ \mathrm{Seq}_{j} \circ u(i,j)$$

where the ∘ operation denotes transforming Seq_j into Seq_gauss^j and restoring the pixel value, F(i, j) is the pixel value after reverse mapping, and u(i, j) is the original gray value at coordinate (i, j).
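The sort-filter-restore procedure of Section 2.1 can be sketched in a few lines. The paper's implementation uses C++ with OpenCV; the following is a minimal stdlib Python sketch in which the function names, the clamped border handling, and the normalization of the discrete kernel are illustrative assumptions not specified in the text:

```python
import math

def gaussian_kernel(sigma):
    """Discrete Gaussian weights over offsets -d..d, with d = ceil(3*sigma).
    Weights are normalized to sum to 1 (an assumption; the paper states d = 3*sigma)."""
    d = max(1, math.ceil(3 * sigma))
    w = [math.exp(-k * k / (2.0 * sigma * sigma)) for k in range(-d, d + 1)]
    s = sum(w)
    return d, [x / s for x in w]

def column_nonlinear_filter(img, sigma=1.25):
    """Sort each column, Gaussian-filter across the sorted sequences of
    neighbouring columns at equal rank i, then restore the original row order."""
    rows, cols = len(img), len(img[0])
    d, w = gaussian_kernel(sigma)
    seq, idx = [], []
    for j in range(cols):
        order = sorted(range(rows), key=lambda i: img[i][j])  # Index_j
        idx.append(order)
        seq.append([img[i][j] for i in order])                # Seq_j (ascending)
    out = [[0.0] * cols for _ in range(rows)]
    for j in range(cols):
        for i in range(rows):
            # Seq_gauss^j[i] = sum_k Gauss(k) * Seq^{j+k}[i]; columns clamped at borders
            v = sum(w[k + d] * seq[min(max(j + k, 0), cols - 1)][i]
                    for k in range(-d, d + 1))
            out[idx[j][i]][j] = v                             # restore via the saved index
    return out
```

Normalizing the discrete kernel keeps a uniform column unchanged, so a flat region passes through the filter untouched while rank-matched stripe offsets between neighbouring columns are averaged away.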

2.2. Cross Sampling

After nonlinear filtering, most of the stripe non-uniform noise is removed; however, random non-uniform noise remains. The purpose of cross sampling is to filter this random noise while protecting the details of the original pixels, so as to obtain the final prediction image of the current frame. The rule of cross sampling is:

$$D_{i,j}(n) = \frac{1}{4\mu+1}\left[\sum_{\varphi=-\mu}^{\mu} F_{i+\varphi,j}(n) + \sum_{\varphi=-\mu}^{\mu} F_{i,j+\varphi}(n) - F_{i,j}(n)\right]$$

where D_{i,j}(n) is the pixel value at coordinate (i, j) after cross sampling, F_{i,j}(n) is the pixel value at coordinate (i, j) before sampling, and µ is the sampling parameter.
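A minimal sketch of the cross-sampling rule follows; clamped border handling is an assumption not given in the text. The centre pixel appears in both the vertical and horizontal sums, which is why it is subtracted once and the divisor is 4µ + 1:

```python
def cross_sample(F, mu=1):
    """Cross sampling: average over a (4*mu+1)-pixel cross centred at (i, j)."""
    rows, cols = len(F), len(F[0])
    D = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = -F[i][j]                              # centre is in both arms: count it once
            for p in range(-mu, mu + 1):
                s += F[min(max(i + p, 0), rows - 1)][j]   # vertical arm (clamped)
                s += F[i][min(max(j + p, 0), cols - 1)]   # horizontal arm (clamped)
            D[i][j] = s / (4 * mu + 1)
    return D
```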

2.3. Single Layer Residual Estimation Neural Network

The traditional neural network method uses the mean of the four neighborhoods as the forecast image [4] for stripe noise, but its effect is not ideal; moreover, it severely loses image information and cannot protect image edges [12-14]. The traditional residual calculation ignores the characteristics of non-uniformity, so changes in scene details are mistaken for non-uniformity and filtered out. In this paper, we use nonlinear filtering to obtain the forecast image and calculate the actual residual against the original image, then predict the residual from the time domain and the temperature response, and use weights to compute the current frame residual. Finally, the adaptive correction calculates the gain and bias coefficients. The proposed method estimates the dynamic growth of non-uniformity, protects image details, and effectively improves the convergence speed and the degree of non-uniformity correction. The experiments show that the proposed algorithm has an obvious effect on both stripe noise and random noise. The linear correction model output function is expressed as:

$$Y_{i,j} = G_{i,j}\,X_{i,j} + O_{i,j} + \epsilon$$

where X_{i,j} is the response value of the focal plane detector, Y_{i,j} is the output value after NUC, G_{i,j} is the gain coefficient, O_{i,j} is the offset coefficient [15, 16], and ϵ is a random disturbance.

For the linear response correction model, to achieve dynamic correction we need to dynamically update the gain and bias coefficients. Assuming that the pixel value of the nth frame after nonlinear filtering and cross sampling is D_{i,j}(n), the output value of the original image after non-uniformity correction is Y_{i,j}(n):

$$Y_{i,j}(n) = G_{i,j}(n-1)\,X_{i,j}(n) + O_{i,j}(n-1) + \epsilon$$

and the error function E_{i,j}(n) is:

$$E_{i,j}(n) = Y_{i,j}(n) - D_{i,j}(n)$$

In Eq. (8), non-uniformity is approximated by residuals. The residual of the nth frame is spatially similar to the residual of the first frame, but its value is gradually influenced by the surrounding environment. Therefore, the residual of the nth frame can be predicted from the residual of the (n − 1)th frame using a temporal-temperature model. By controlling the weight coefficients between the current residual and the predicted residual, the difference between the residual and the actual non-uniformity can be reduced, thus improving the correction effect of non-uniformity. The actual residual can be expressed as:

$$E_{i,j}(n) = \begin{cases} W_{n}\,E_{i,j}(n) + W_{n-1}\,E_{i,j}(n-1) + R(n), & n \geq 2, \\ E_{i,j}(n), & \text{otherwise}, \end{cases}$$

where E_{i,j}(n − 1) is the residual of the previous frame, E_{i,j}(n) is the current residual, W_n is the confidence weight of the actual residual, and W_{n−1} is the weight of the predicted residual; W_n and W_{n−1} obey the following rule:

$$W_{n} + W_{n-1} = 1$$

The residual estimate R(n) is simulated using a temporal-temperature model:

$$R(n) = \frac{n-2}{n}\,\gamma + \frac{T_{n} - t_{n}}{T_{\max} - T_{\min}}\,\beta$$

where β is the temperature noise growth parameter, which is proportional to the temperature change; T_max is the maximum detectable temperature; T_min is the minimum detectable temperature; t_n is the FPA temperature at frame n; and γ is the time-domain noise growth parameter, which gradually increases over time. γ is the original inter-frame difference:

$$\gamma = E_{i,j}(n) - E_{i,j}(n-1)$$

In order to minimize the error function, we use the stochastic gradient descent method to update the gain and bias coefficients of each pixel [17]. According to the error function, the variations of the gain and bias coefficients are:

$$\Delta G_{i,j}(n) = 2\left[W_{n}\,E_{i,j}(n) + W_{n-1}\,E_{i,j}(n-1) + R(n)\right] X_{i,j}(n)$$

$$\Delta O_{i,j}(n) = 2\left[W_{n}\,E_{i,j}(n) + W_{n-1}\,E_{i,j}(n-1) + R(n)\right]$$

According to the stochastic gradient descent method:

$$\tilde{G}_{i,j}(n) = G_{i,j}(n-1) - \lambda\,\Delta G_{i,j}(n)$$

$$\tilde{O}_{i,j}(n) = O_{i,j}(n-1) - \lambda\,\Delta O_{i,j}(n)$$

where λ is the step size of the stochastic gradient descent method. Changes in the coefficients have a great influence on the gray value, so the iteration step size should not be too large. $\tilde{G}_{i,j}(n)$ and $\tilde{O}_{i,j}(n)$ are the gain and offset used for the (n + 1)th frame, and the output of the next frame after the update is:

$$Y_{i,j}(n+1) = \tilde{G}_{i,j}(n)\,X_{i,j}(n+1) + \tilde{O}_{i,j}(n) + \epsilon$$
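The per-pixel stochastic-gradient update above can be sketched as follows. This is an illustrative Python sketch, not the paper's C++ implementation: the step size, the precomputed residual-model term R_n, and the in-place update style are assumptions, while the default weights 0.347/0.653 are the values reported in the experiments:

```python
def update_coefficients(G, O, X, D, E_prev, R_n, lam=0.05, Wn=0.347, Wn1=0.653):
    """One stochastic-gradient step per pixel: form the corrected output and the
    actual residual, weight it with the previous-frame residual and the
    residual-model term R_n, then update gain G and offset O in place."""
    rows, cols = len(X), len(X[0])
    E_cur = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            Y = G[i][j] * X[i][j] + O[i][j]      # corrected output (epsilon ignored)
            E = Y - D[i][j]                      # actual residual vs. filtered image
            E_cur[i][j] = E
            grad = 2.0 * (Wn * E + Wn1 * E_prev[i][j] + R_n)
            G[i][j] -= lam * grad * X[i][j]      # gain update
            O[i][j] -= lam * grad                # offset update
    return G, O, E_cur
```

Because the gradient of the gain is the shared term multiplied by X_{i,j}(n), large detector responses move the gain faster than the offset, which is one reason the step size λ must stay small.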

In the practical application of the algorithm, to reduce the negative impact of rapid scene changes, this paper uses a scene change threshold Mt. When Mt is 1, the algorithm only filters the image and the learning rate is set to 0; when Mt is not 1, the gain and bias of the image are corrected.

First, we calculate the difference image df for adjacent frames:

$$df_{i,j}(n) = X_{i,j}(n) - X_{i,j}(n-1)$$

where df is the difference image and X is the original image. Then the difference image is binarized, which can be expressed as follows:

$$B_{s}^{i,j}(n) = \begin{cases} 1, & df_{i,j}(n) > T_{s}, \\ 0, & df_{i,j}(n) \leq T_{s}, \end{cases}$$

where B_s is the binary image with values in {0, 1}, and T_s is the change threshold. Then we calculate the motion parameter of the current frame:

$$M_{o}(n) = \sum_{i=1}^{M}\sum_{j=1}^{N} B_{s}^{i,j}(n)$$

where M_o is the global motion parameter. Then M_o is binarized to obtain B_g as follows:

$$B_{g}(n) = \begin{cases} 1, & M_{o}(n) > T_{g}, \\ 0, & M_{o}(n) \leq T_{g}, \end{cases}$$

where Tg is the scene movement threshold. Finally, the correction formula of the output of the next frame can be expressed as:

$$M_{t}(n) = B_{s}^{i,j}(n)\,B_{g}(n)$$

$$Y_{i,j}(n+1) = \begin{cases} G_{i,j}(n-1)\,X_{i,j}(n+1) + O_{i,j}(n-1) + \epsilon, & M_{t}(n) = 1, \\ \tilde{G}_{i,j}(n)\,X_{i,j}(n+1) + \tilde{O}_{i,j}(n) + \epsilon, & \text{otherwise}, \end{cases}$$

where Mt(n) is the motion judgment parameter. With motion frame estimation, we can judge whether the current frame is moving or not, which can greatly reduce the impact of scene changes on imaging, such as artifacts.
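The motion-gating steps above (difference image, binarization against T_s, summation to M_o, comparison with T_g) can be sketched as follows. The absolute difference and the default T_g (a fraction of the pixel count) are illustrative assumptions; the paper tunes the thresholds per sequence, e.g. T_s = 15 and T_g = 10,922 for 640 × 512 frames:

```python
def motion_gate(X_cur, X_prev, Ts=15, Tg=None):
    """Scene-change test: binarize the inter-frame difference against Ts, sum
    the changed pixels into the global motion parameter Mo, and compare with Tg.
    Returns 1 (scene moved: freeze the coefficient update) or 0."""
    rows, cols = len(X_cur), len(X_cur[0])
    if Tg is None:
        Tg = rows * cols // 30            # illustrative default, not from the paper
    Mo = sum(1 for i in range(rows) for j in range(cols)
             if abs(X_cur[i][j] - X_prev[i][j]) > Ts)
    return 1 if Mo > Tg else 0
```

When the gate returns 1, the previous coefficients are reused for the next frame and the gradient update is skipped, which is what suppresses ghosting during fast motion.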

Figure 3 is a general flowchart of the algorithm.

Figure 3. Algorithm flowchart.

III. Experiment and Result Analysis

3.1. Experimental Equipment

In this experiment, we acquired three sets of raw video streams using the IRay Photoelectric Xcore_LT series temperature-measurement uncooled infrared detector (Fig. 4). The image size was 640 × 512, and the experimental scenes were two cups at room temperature, a blackbody radiation source at 40 ℃, and a palm at room temperature.

Figure 4. IRay Photoelectric Xcore_LT series temperature measuring type uncooled infrared movement components.

In this experiment, data storage and computation were carried out using data structures from the OpenCV 4.4 open-source library, and the infrared images were acquired using the RTSP protocol. The nonlinear filtering residual estimation algorithm employed a Gaussian filter with a standard deviation of σ = 1.25 and a sampling parameter of µ = 1; the gain parameters G_{i,j} were initialized as a 640 × 512 matrix of all ones and the bias parameters as a 640 × 512 matrix of all zeros. To reduce algorithm complexity and better address fixed-pattern non-uniform noise, the random perturbation error ϵ was ignored, and the weights W_n and W_{n−1} were determined through multiple experiments as 0.347 and 0.653, respectively. The temperature noise growth parameter β was set to 2.42, the change threshold T_s to 15, and the scene motion threshold T_g to 10,922. Certain parameters need to be adjusted for different infrared images. The algorithm was implemented in C++ in the VS2015 environment and executed on an Intel i7-6700HQ CPU @ 2.60 GHz with 16 GB of RAM.

3.2. Analytical Metrics

To evaluate the performance and effectiveness of the proposed algorithm, we use roughness (ρ) to quantitatively evaluate the non-uniformity of the original infrared image and the corrected image [18, 19]. Roughness is commonly used to evaluate stripe non-uniform noise in infrared images, and is defined as:

$$\rho = \frac{\|h \ast I\|_{L_1} + \|h^{T} \ast I\|_{L_1}}{\|I\|_{L_1}}$$

where ρ is the roughness of the image, I is the original image matrix, h is the column-difference convolution kernel, hᵀ is the row-difference convolution kernel, and ‖I‖_{L1} represents the L1 norm of the matrix I. The smaller the ρ value, the smaller the column and row differences of the image, and hence the smaller the noise [20].
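Taking h as the one-dimensional difference kernel [1, −1] (the usual choice for this metric, assumed here since the paper does not spell it out), ρ reduces to the L1 norms of the horizontal and vertical first differences divided by the L1 norm of the image:

```python
def roughness(I):
    """Roughness rho: L1 norms of horizontal and vertical first differences
    (h = [1, -1] and its transpose), normalized by the L1 norm of the image."""
    rows, cols = len(I), len(I[0])
    l1 = sum(abs(v) for row in I for v in row)
    h_diff = sum(abs(I[i][j] - I[i][j - 1])              # column (horizontal) differences
                 for i in range(rows) for j in range(1, cols))
    v_diff = sum(abs(I[i][j] - I[i - 1][j])              # row (vertical) differences
                 for i in range(1, rows) for j in range(cols))
    return (h_diff + v_diff) / l1
```

A perfectly uniform image scores ρ = 0, and residual column stripes raise the horizontal-difference term, which is why ρ tracks stripe non-uniformity well.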

In special cases, such as a uniform radiation field, the peak signal-to-noise ratio (PSNR) is used as an auxiliary evaluation metric in this paper. For a noisy image K with M rows and N columns and its corresponding noise-free ideal image I, the MSE is defined as:

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I(i,j) - K(i,j)\right]^{2}$$

and the definition of PSNR (in dB) is:

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}_{I}^{2}}{\mathrm{MSE}}\right) = 20\log_{10}\!\left(\frac{\mathrm{MAX}_{I}}{\sqrt{\mathrm{MSE}}}\right)$$
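The two formulas above translate directly into a short sketch, with MAX_I = 255 assumed for 8-bit images:

```python
import math

def psnr(I, K, max_i=255.0):
    """PSNR in dB between ideal image I and noisy image K (MAX_I = 255 for 8-bit)."""
    rows, cols = len(I), len(I[0])
    mse = sum((I[i][j] - K[i][j]) ** 2
              for i in range(rows) for j in range(cols)) / (rows * cols)
    # identical images give MSE = 0, i.e. infinite PSNR
    return float('inf') if mse == 0.0 else 10.0 * math.log10(max_i * max_i / mse)
```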

3.3. Comparison of Results

In our experiment, we employed roughness (ρ) in different scenarios as the primary evaluation metric [21], with PSNR as a complementary metric [22]. The correction effect was jointly assessed by analyzing the error images between the corrected images and the original images [23, 24]. For the same infrared FPA, we analyzed video frame sequences captured in three different scenarios: two cups at room temperature, a blackbody radiation source at 40 ℃, and a palm at room temperature. We compared the results of the co-occurrence filter neural network (CoFNN) method [3], the CS method [5], and our proposed algorithm. Figure 5 depicts a three-dimensional comparison of the non-uniform and corrected images in the scene with a cup as the background. It can be seen in Fig. 5 that the non-uniform image appears extremely rough, while the corrected image is significantly smoother.

Figure 5. 3D view of non-uniform image and corrected image. (a) Non-uniform image, (b) corrected image.

Figures 6-8 show a comparison of the correction results of different algorithms in three different scenarios. The smoother the image, the better the correction effect. As the figures indicate, non-uniformity severely degrades the imaging. The proposed algorithm provides the most intuitive removal of non-uniformity, effectively reducing the stripe non-uniformity while suppressing white noise.

Figure 6. The correction results of different algorithms in the scenario with a cup as the background. (a) Original, (b) CoFNN, (c) constant statistics (CS), and (d) our method.

Figure 7. The correction results of different algorithms in the uniform radiation scenario. (a) Original, (b) CoFNN, (c) constant statistics (CS), and (d) our method.

Figure 8. The correction results of different algorithms in the scenario with a palm as the background. (a) Original, (b) CoFNN, (c) constant statistics (CS), and (d) our method.

In terms of the correction effect within the same scenario, the CoFNN algorithm converges to a lesser degree: its correction of random non-uniformity is obvious, but the stripe non-uniformity remains clearly visible. The CS algorithm did not achieve complete convergence, and some artifacts remain in its results. The algorithm proposed in this paper performs best in terms of degree of convergence.

These results demonstrate the effectiveness of the proposed algorithm in removing non-uniformity and reducing noise in stripe patterns. The findings suggest that it can be applied in various scenarios and has great potential for further development.

Figures 9-11 show the error images (grayscale) between the correction results of each algorithm and the original image. Subjectively, the stripe information in these images represents the filtered-out non-uniformity: the clearer the stripes, the stronger the filtering effect, while the darker the image, the less non-uniformity and noise have been removed. It can be seen that the proposed algorithm has the highest pixel values in the annotated region and appears brighter, indicating that it effectively filters non-uniformity. As for stripe texture, the proposed algorithm shows clear column stripes in the annotated region, while the other two algorithms do not fully reflect the stripe information. Therefore, the proposed algorithm filters out most of the stripe non-uniformity, and its error image shows a significant filtering effect with clearer stripes. This indicates that the stripe non-uniformity has been effectively suppressed and that the proposed algorithm has a significant advantage over other scene-based non-uniformity correction algorithms.

Figure 9. Non-uniformity corrected by different algorithms in the cup scenario. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

Figure 10. Non-uniformity corrected by different algorithms in the uniform radiation scenario. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

Figure 11. Non-uniformity corrected by different algorithms in the palm scenario. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

Figure 12 presents the output of each algorithm after the cup was moved, following a period of time in the converged state. In Fig. 12(a), the CoFNN method failed to update its parameters in time for the rapid scene change and applied the pre-motion correction parameters to the cup and bottle, resulting in an incorrect output, i.e., ghosting. As shown in Fig. 12(b), the CS method requires multiple frames to converge, and an obvious ghost is generated after quickly moving the cup. It can be seen in Fig. 12(c) that the method proposed in this paper does not produce a ghost, demonstrating that it can effectively suppress ghosting.

Figure 12. The correction results of each algorithm under scene change. (a) CoFNN, (b) constant statistics (CS), and (c) our method.

As shown in Table 1, this experiment used PSNR and roughness ρ as objective evaluation indicators. The PSNR of the original image compared to the ideal image (Fig. 13) was 35.97. The PSNR after CoFNN correction was 38.3, 6.7% higher than the original image; after CS correction it was 36.5, 1.9% higher; and for the algorithm proposed in this paper it was 40.7, 13.4% higher. From the perspective of PSNR, the corrected result of the proposed algorithm was closest to the ideal image. Regarding roughness, the roughness of the original image was approximately 0.028. The roughness after CoFNN correction was 0.019, 32.1% lower than the original image; after CS correction it was 0.023, 17.9% lower; and for the proposed algorithm it was 0.016, 42.8% lower. From the perspective of roughness, the proposed algorithm had significant advantages. This objectively demonstrates its significant correction effect on non-uniformity.

TABLE 1. Peak signal-to-noise ratio (PSNR) and roughness of images corrected by different algorithms under uniform radiation.

Analytical Method   Original   CoFNN     CS        Our
PSNR                35.9738    38.3276   36.5379   40.7353
Roughness           0.02796    0.01935   0.02315   0.01569


Figure 13. An ideal infrared image with a radiation temperature of 40 ℃.

As shown in Table 2, in terms of convergence frames, the proposed algorithm reached convergence by the 100th frame, while the other two algorithms still maintained a high level of roughness at 100 frames. Within 10 frames, the CoFNN algorithm reduced non-uniformity by 47.6%, the CS algorithm by 21.4%, and the proposed algorithm by 52.3%. By the 200th frame, the CoFNN algorithm had removed 55.5% of the non-uniformity, the CS algorithm 24.6%, and the proposed algorithm 61.3%. The non-uniformity removal of the proposed algorithm was significantly better than that of the other algorithms, and the data demonstrate its obvious advantage in non-uniformity correction.

TABLE 2. Roughness of image corrected by different algorithms in general environment.

Frame   Original   CoFNN      CS         Our
1       0.202478   0.202478   0.202478   0.202478
10      0.210027   0.110701   0.165516   0.100062
50      0.208834   0.100453   0.170952   0.088301
100     0.208776   0.094651   0.167137   0.083539
150     0.208607   0.092677   0.162539   0.082162
200     0.207228   0.092182   0.156324   0.080299


Figure 14 shows the roughness variation curves over 200 frames of images corrected by the different algorithms. It can be seen that the roughness of the original image fluctuates around 0.2, while the corrected images tend to converge after several frames. The curves indicate that the proposed algorithm converges better than CS and CoFNN. In summary, the proposed method balances noise reduction against the protection of image details and suppresses non-uniformity, random noise, and ghosting artifacts well. Both subjective and objective evaluations confirm that the proposed algorithm has an ideal correction effect in practical applications.

Figure 14. The roughness variation curve of 200 frames of images in the cup scene.

IV. Conclusion

We propose a scene-based non-uniformity correction method for the IFPA to solve the issue of IR image non-uniformity. The proposed method employs nonlinear filtering to remove the non-uniformity and random noise in a single column and calculates the actual residual against the original image. Then, the current residual is obtained using the predicted residual from the previous frame and the actual residual. Finally, we adaptively calculate the gain and bias coefficients under the control of the global motion parameters to reduce artifacts. The results show that, compared with other correction methods, our method effectively suppresses ghosting and has advantages in convergence, image quality, correction effect, and detail preservation. It also has a significant suppressing effect on stripe non-uniformity and random noise.

DISCLOSURES

The authors declare no conflicts of interest.

DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

FUNDING

Science and Technology Development Program of Jilin Province (Grant No. 20200401066GX).



References

  1. L. Song and H. Huang, “Spatial and temporal adaptive nonuniformity correction for infrared focal plane arrays,” Opt. Express 30, 44681-44700 (2022).
    Pubmed CrossRef
  2. Y. Tendero, J. Gilles, S. Landeau, and J. M. Morel, “Efficient single image non-uniformity correction algorithm,” Proc. SPIE 7834, 96-107 (2010).
    CrossRef
  3. L. Li, Q. Li, H. Feng, Z. Xu, and Y. Chen, “A novel infrared focal plane non-uniformity correction method based on co-occurrence filter and adaptive learning rate,” IEEE Access 7, 40941-40950 (2019).
    CrossRef
  4. X. Mou, T. Zhu, and X. Zhou, “Visible-image-assisted nonuniformity correction of infrared images using the GAN with SEBlock,” Sensors 23, 3282 (2023).
    Pubmed KoreaMed CrossRef
  5. Y. JunLu, “Nonuniformity correction design and implementation for infrared image based on FPGA and artificial neural networks,” J. Phys.: Conf. Ser. 1693, 012177 (2020).
    CrossRef
  6. P. M. Narendra and N. A. Foss, “Shutterless fixed pattern noise correction for infrared imaging arrays,” Proc. SPIE 282, 44-51 (1981).
    CrossRef
  7. J. Yan, Y. Kang, Y. Ni, Y. Zhang, J. Fan, and X. Hu, “Non-uniformity correction method of remote sensing images based on adaptive moving window moment matching,” J. Imaging Sci. Technol 66, 50502 (2022).
    CrossRef
  8. W. Qian, Q. Chen, and G. Gu, “Space low-pass and temporal high-pass nonuniformity correction algorithm,” Opt. Rev. 17, 24-29 (2010).
    CrossRef
  9. T. Guillemot and J. Delon, “Implementation of the midway image equalization,” Image Process. Line. 6, 114-129 (2016).
    CrossRef
  10. Y. Tendero, S. Landeau, and J. Gilles, “Non-uniformity correction of infrared images by midway equalization,” Image Process. Line 2, 134-146 (2012).
    CrossRef
  11. S. Yang, H. Qin, X. Yan, S. Yuan, and Q. Zeng, “Mid-wave infrared snapshot compressive spectral imager with deep infrared denoising prior,” Remote Sens. 15, 280 (2023).
    CrossRef
  12. Y. Cao, M. Y. Yang, and C.-L. Tisse, “Effective strip noise removal for low-textured infrared images based on 1-D guided filtering,” IEEE Trans. Cir. Syst. Video Technol. 26, 2176-2188 (2015).
    CrossRef
  13. N. Liu and J. Xie, “Interframe phase-correlated registration scene-based nonuniformity correction technology,” Infrar. Phys. Technol. 69, 198-205 (2015).
    CrossRef
  14. B. Lv, S. Tong, Q. Liu, and H. Sun, “Statistical scene-based non-uniformity correction method with interframe registration,” Sensors 19, 5395 (2019).
  15. C. H. Lu, “Stripe non-uniformity correction of infrared images using parameter estimation,” Infrared Phys. Technol. 107, 103313 (2020).
  16. R. C. Calik, E. Tunali, B. Ercan, and S. Oz, “A study on calibration methods for infrared focal plane array cameras,” in Proc. 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Funchal, Madeira, Portugal, Jan. 27-29, 2018), pp. 219-226.
  17. Y. Zhang, X. Li, X. Zheng, and Q. Wu, “Adaptive temporal high-pass infrared non-uniformity correction algorithm based on guided filter,” in Proc. 7th International Conference on Computing and Artificial Intelligence (Tianjin, China, Apr. 23-26, 2021), pp. 459-464.
  18. G. Ness, A. Oved, and I. Kakon, “Derivative based focal plane array nonuniformity correction,” arXiv:1702.06118 (2017).
  19. Y. Sheng, X. Dun, W. Jin, F. Zhou, X. Wang, F. Mi, and S. Xiao, “The on-orbit non-uniformity correction method with modulated internal calibration sources for infrared remote sensing systems,” Remote Sens. 10, 830 (2018).
  20. L. Geng, Q. Chen, W. Qian, and Y. Zhang, “Scene-based nonuniformity correction algorithm based on temporal median filter,” J. Opt. Soc. Korea 17, 255-261 (2013).
  21. W. Qian, Q. Chen, and G. Gu, “Minimum mean square error method for stripe nonuniformity correction,” Chin. Opt. Lett. 9, 051003 (2011).
  22. T. Orżanowski, “Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays,” SpringerPlus 5, 1831 (2016).
  23. T. Li, Y. Zhao, Y. Li, and G. Zhou, “Non-uniformity correction of infrared images based on improved CNN with long-short connections,” IEEE Photonics J. 13, 7800213 (2021).
  24. B. Gutschwager and J. Hollandt, “Nonuniformity correction of infrared cameras by reading radiance temperatures with a spatially nonhomogeneous radiation source,” Meas. Sci. Technol. 28, 015401 (2016).