Current Optics and Photonics
Curr. Opt. Photon. 2022; 6(2): 161-170
Published online April 25, 2022 https://doi.org/10.3807/COPP.2022.6.2.161
Copyright © Optical Society of Korea.
Hyung Tae Kim¹, Duk Yeon Lee², Dongwoon Choi², Jaehyeon Kang², Dong-Wook Lee²
¹Digital Transformation R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
²Robotics R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
Corresponding author: *htkim@kitech.re.kr, ORCID 0000-0001-5711-551X
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Three-dimensional (3D) geometric models are introduced to correct vignetting, and a downhill simplex search is applied to determine the coefficients of a 3D model used in digital microscopy. Vignetting is nonuniform illuminance with a geometric regularity on a two-dimensional (2D) image plane, which allows the illuminance distribution to be estimated using 3D models. The 3D models are defined using generalized polynomials and arbitrary coefficients. Because the 3D models are nonlinear, their coefficients are determined using a simplex search. The cost function of the simplex search is defined to minimize the error between the 3D model and the reference image of a standard white board. The conventional and proposed methods for correcting the vignetting are used in experiments on four inspection systems based on machine vision and microscopy. The methods are investigated using various performance indices, including the coefficient of determination, the mean absolute error, and the uniformity after correction. The proposed method is intuitive and shows performance similar to the conventional approach, using a smaller number of coefficients.
Keywords: Machine vision, Mathematical model, Microscopy camera calibration, Simplex search, Vignetting correction
OCIS codes: (100.2980) Image enhancement; (110.0180) Microscopy; (150.1135) Algorithms; (150.1488) Calibration; (220.3630) Lenses
Although digital microscopy originated in bioengineering, through technical advances it has become industrialized and applied in scientific metrology [1]. A digital microscopy system is constructed by attaching a digital camera and an image-processing unit to a conventional optical microscope. This simple combination has been used in many technical applications, such as autofocusing [2], filtering [3], white balancing [4], calibration [5], detection [6], stitching, and multifocus fusion. Because of the structural problems that occur in optical microscopes, distortion [7], aberrations [8], and vignetting [2] are inevitable when acquiring an image using digital microscopy. Vignetting is a nonuniform distribution of light intensity in an image, owing to different optical paths in a microscope: the center of the image is usually bright, and the brightness decreases toward the periphery [9]. Vignetting commonly occurs in current imaging devices, such as smartphones [10], digital single-lens reflex cameras (DSLRs) [11], industrial cameras [12], microscopes [13], and line-scanning systems [14]. Because the correction of such vignetting is essentially hidden within these digital imaging devices, users rarely notice it. However, vignetting is frequently seen in microscopy when applied to bioimaging [15], semiconductors [16], flat-panel displays [17], and printed electronics [18]. Vignetting is especially conspicuous in the output of image fusion, such as stitched and panoramic images. Image fusion has become popular in digital imaging devices as well as digital microscopy, and the correction of vignetting is important for synthesizing a natural and continuous image.
Vignetting is caused by the partial obstruction of the light path from an object plane to an image plane [19]. The sources of vignetting can be classified as the geometric optics of the image plane, the angular sensitivity of the digital sensor, and blocking of the light path within and in front of a lens [20]. Vignetting in digital microscopy is corrected using hardware or software. Hardware-based correction is achieved through geometric optics, and is thus usually expensive. The computational cost required to analyze the optics is also high, and the light from commercial illumination is scattered; it is therefore difficult to design optics that correct vignetting. Furthermore, it is impractical to design correction optics for every combination of commercial lens and illumination; thus, software correction is preferred in practice.
Software correction is conventionally conducted by applying gain to an actual image after determining the gain from a reference image. The reference image is acquired using a standard target, and the gain of each pixel is then determined by comparing a target value to the gray levels of pixels in the reference image [21]. Flat-field correction (FFC) is a popular method applied in microscopy, machine vision, and telescopy. Although illumination is continuous in a reference image, pixel-based FFC can be discontinuous, owing to camera noise and local damage occurring in a standard target [22]. Thus, regression approaches such as polynomials of the pixel gain [23, 24], an off-axis illumination model [13, 19, 25], and radial polynomials [26] have been discussed for continuity when applying FFC. Kordecki et al. [27] proposed a parabolic array model, in which a parabola is fitted to each row and column of the reference image; this model is used as the conventional method in this study.
The illuminance distribution of the reference image forms a geometric surface; thus, high-order polynomials and regression have been applied to model the vignetting. Mitsunaga and Nayar [9] proposed a general formulation of polynomials whose coefficients were determined using least squares. Vignetting forms symmetric distributions in many cases; thus, simplified low-order polynomials are advantageous for accelerating computation [12, 15, 28, 29]. However, these polynomial models have been derived from 2D models; a 3D model dealing with vignetting has yet to be investigated. In previous studies, the isotropic Gaussian model was briefly discussed as having the potential to correct vignetting [15, 30]. However, a method for determining the coefficients of the Gaussian function was not presented, because the regression is nonlinear.
Considering the illumination distribution of the reference image, many 3D surfaces, such as spheres, ellipsoids, paraboloids, and Gaussian surfaces, are applicable for modeling vignetting. These 3D models have only a few coefficients, providing a simple and intuitive formulation. The nonlinear regression used to determine the coefficients can be achieved through multidimensional optimum methods. Thus, this study proposes a vignetting-correction method using 3D models and a downhill simplex search. The performances of the conventional and proposed methods are investigated using four inspection systems, ranging from the ultraviolet (UV) to the near infrared (NIR).
The remainder of this paper is organized as follows. Section 2 describes the generalized 3D model and nonlinear regression using a downhill simplex search. Section 3 presents the experimental conditions and performance indices. The experimental results and a discussion are presented in Section 4. Finally, concluding remarks are presented in Section 5.
An FFC adjusts the gray level of each pixel using a reference image of the vignetting. In a simple case, the gain of an individual pixel is obtained from the reference image and the target value, and the gray level of an actual image is then corrected using the gain. Equation (1) shows the normalized relation between the corrected image I_c, the actual image I_a, and the reference image I_r:

I_c(x, y) = [T / I_r(x, y)] · I_a(x, y).    (1)

Here T is the target gray level, (x, y) are the pixel coordinates, and the per-pixel gain is g(x, y) = T / I_r(x, y).
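The flat-field gain relation above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function name and the zero-division guard are our own, and a synthetic parabolic reference surface stands in for a measured one.

```python
import numpy as np

def flat_field_correct(actual, reference, target):
    """Apply the per-pixel flat-field gain g(x, y) = T / I_r(x, y)."""
    gain = target / np.maximum(reference.astype(np.float64), 1e-9)
    return np.clip(actual.astype(np.float64) * gain, 0.0, target)

# Synthetic check: a scene of constant radiance seen through a
# parabolic vignetting profile should come back flat after correction.
y, x = np.mgrid[0:100, 0:100].astype(np.float64)
reference = 200.0 - 0.01 * ((x - 50.0) ** 2 + (y - 50.0) ** 2)
scene = 120.0 * reference / 200.0            # vignetted flat scene
corrected = flat_field_correct(scene, reference, target=200.0)
print(bool(np.allclose(corrected, 120.0)))   # True
```

The clip to [0, target] mirrors what a fixed-bit-depth pipeline would do after multiplying by the gain.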
A cost function for the optimum method is defined to minimize the error between the reference image and the 3D model, as follows:

E(q) = Σ_x Σ_y [I_r(x, y) − V(x, y; q)]²,    (2)

where V is the 3D vignetting model and q is the vector of its unknown coefficients.
The light distribution under vignetting is typically axisymmetric; thus, radial polynomials have been proposed in previous studies [19, 20], the generalized form of which is as follows:

V(r) = a_0 + a_1 r² + a_2 r⁴ + ⋯ + a_n r^(2n),

where r is the radial distance from the center of vignetting and a_i are the polynomial coefficients.
The cross section of the vignetting is approximated as a parabola; thus, the vignetting model can be arranged using parabolas, after slicing the reference image along the x- and y-directions. Each slice is fitted with a parabola of three coefficients:

V_i(t) = a_i t² + b_i t + c_i,

where i is the index of the row or column and t is the pixel coordinate along the slice.
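The row-and-column slicing of the parabolic array model can be sketched with NumPy's polynomial fitting; the synthetic reference surface and array sizes below are our own choices for illustration.

```python
import numpy as np

# Sketch of the parabolic array model: one parabola (3 coefficients)
# is fitted to every row and every column of the reference image,
# giving 3 * (width + height) coefficients in total.
h, w = 64, 96
yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
ref = 180.0 - 0.02 * (xx - w / 2.0) ** 2 - 0.03 * (yy - h / 2.0) ** 2

cols = np.arange(w)
rows = np.arange(h)
row_fits = np.array([np.polyfit(cols, ref[i, :], 2) for i in range(h)])
col_fits = np.array([np.polyfit(rows, ref[:, j], 2) for j in range(w)])

print(row_fits.size + col_fits.size)      # 3 * (96 + 64) = 480
```

Even at this toy size the model already needs 480 coefficients, which is why the count explodes for a megapixel camera.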
The parabolic array model is quite accurate, and its coefficients are easily determined. However, the number of coefficients becomes 3 × (width + height), which reaches an extremely high value in the case of a megapixel camera. Considering the light distribution in the reference image, the shape can instead be approximated as a 3D surface such as a paraboloid, ellipsoid, or Gaussian surface [15, 30]. The normalized coordinates of the 3D vignetting model can be written as generalized polynomials, as follows:

V(u, v) = Σ_i Σ_j c_ij u^i v^j,  with  u = (x − x_c)/a,  v = (y − y_c)/b,

where (x_c, y_c) is the center of the vignetting, a and b are the normalizing axis lengths, and c_ij are the model coefficients.
These generalized models are formulated as elliptical and anisotropic polynomials and are extended from previous studies. These elliptical and anisotropic formulations present geometric properties such as the aspect ratio and center of vignetting. As an existing method, a parabolic array model is applied, and the elliptical and anisotropic Gaussian models are extended from conventional models. The paraboloid and ellipsoid models are additionally tested in this study.
The 3D models described above and the cost function in Eq. (2) are nonlinear; thus, optimum methods are required to determine the coefficients. The well-known Nelder-Mead downhill simplex search is one of the most popular methods for derivative-free, multidimensional nonlinear optimization [31]. Thus, the coefficients of the 3D models can be determined by minimizing the cost function between the vignetting and the 3D model. The unknown variables for the optimum method are the center, offset, and shape coefficients, collected into a generalized coordinate vector whose dimension equals the number of unknown coefficients of each model.
Table 1 lists the geometric equations for the 3D models. The elliptical polynomial and anisotropic Gaussian models are extensions of the radial models used in previous research. The intuitive forms are simple, because the generalized coefficients reduce to a few geometric parameters, such as the vignetting center, the axis lengths, and the offset.
TABLE 1 3D vignetting models and unknown coefficients for the simplex search

Shapes | Intuitive Form | Unknown Coefficients |
---|---|---|
Elliptic Polynomials (O6) | V = c_0 + c_1 ρ² + c_2 ρ⁴ + c_3 ρ⁶, ρ² = ((x − x_c)/a)² + ((y − y_c)/b)² | x_c, y_c, a, b, c_0–c_3 (8) |
Anisotropic Gaussian | V = A exp{−[(x − x_c)²/(2σ_x²) + (y − y_c)²/(2σ_y²)]} + d | A, x_c, y_c, σ_x, σ_y, d (6) |
Paraboloid | V = d − ((x − x_c)/a)² − ((y − y_c)/b)² | x_c, y_c, a, b, d (5) |
Ellipsoid | V = d − c √(1 − ((x − x_c)/a)² − ((y − y_c)/b)²) | x_c, y_c, a, b, c, d (6) |
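As a concrete sketch of one such surface, a common anisotropic Gaussian form (amplitude A, center (x_c, y_c), widths σ_x and σ_y, offset d; the parameter names are ours) can be evaluated over an image grid:

```python
import numpy as np

def anisotropic_gaussian(shape, A, xc, yc, sx, sy, d):
    """V(x, y) = A exp(-[(x-xc)^2/(2 sx^2) + (y-yc)^2/(2 sy^2)]) + d."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(np.float64)
    return A * np.exp(-((x - xc) ** 2 / (2.0 * sx ** 2)
                        + (y - yc) ** 2 / (2.0 * sy ** 2))) + d

# Six coefficients describe the whole surface: the peak sits at the
# vignetting center and decays anisotropically toward the periphery.
V = anisotropic_gaussian((100, 120), A=80.0, xc=60.0, yc=50.0,
                         sx=40.0, sy=30.0, d=100.0)
print(V.shape, round(float(V.max()), 1))   # (100, 120) 180.0
```

The ratio σ_x/σ_y directly exposes the aspect ratio of the vignetting, one of the geometric properties discussed later.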
Because the dimension of the generalized coordinate is n, the initial simplex consists of n + 1 vertices. The cost function is evaluated at every vertex, and the vertices are sorted to identify the maximum, minimum, and middle points.
To test for a lower cost, a reflection point is then defined on the opposite side of the centroid of the remaining vertices from the maximum point. If the reflection point is lower than the maximum point, an expansion point is defined beyond the reflection point, and the test is repeated.
If either of the two points is lower than the maximum, the maximum vertex is replaced by the test point, and the terminal condition is checked. Otherwise, the next test point is designated inside the vertices. This contraction is applied to determine the test point between the maximum and middle points.
If the contraction point is lower than the maximum, the maximum is replaced by the contraction point, and the terminal condition is applied. Otherwise, a shrinkage is applied to reduce the vertices toward the current minimum.
After these tests, the terminal condition is examined, and the sorting step is repeated if the error is unsatisfactory. A common form of the terminal condition compares the cost values of the best and worst vertices:

2 |f_max − f_min| / (|f_max| + |f_min|) < ε,

where ε is the terminal tolerance.
Figure 2 summarizes these steps of the simplex search in a flow chart.
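The procedure above is what an off-the-shelf Nelder-Mead implementation performs internally. As a sketch under our own assumptions (a noiseless synthetic reference image, our variable names, and a sum-of-squared-errors cost), the six coefficients of the anisotropic Gaussian model can be recovered with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_model(q, x, y):
    """Anisotropic Gaussian surface with coefficients q = (A, xc, yc, sx, sy, d)."""
    A, xc, yc, sx, sy, d = q
    return A * np.exp(-((x - xc) ** 2 / (2.0 * sx ** 2)
                        + (y - yc) ** 2 / (2.0 * sy ** 2))) + d

def cost(q, x, y, ref):
    # Sum of squared errors between the reference image and the 3D model
    return np.sum((ref - gaussian_model(q, x, y)) ** 2)

y, x = np.mgrid[0:60, 0:80].astype(np.float64)
true_q = np.array([70.0, 42.0, 28.0, 30.0, 25.0, 110.0])
ref = gaussian_model(true_q, x, y)          # noiseless synthetic reference

q0 = np.array([50.0, 40.0, 30.0, 20.0, 20.0, 100.0])   # rough initial guess
res = minimize(cost, q0, args=(x, y, ref), method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8,
                        'maxiter': 20000, 'maxfev': 20000})
print(np.round(res.x, 3))
```

In practice the initial guess would come from the image itself (e.g. the brightest pixel for the center), which shortens the iteration considerably.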
Reference images are acquired using four inspection systems, as shown in Table 2. The common components of the inspection systems include an industrial camera, a zoom lens, and a light source, although their specifications differ. The inspection systems have different resolutions, acquisition speeds, and spectral ranges from UV to NIR. The reference images are obtained using a standard white board, after finding the focus and adjusting the light intensity. The light intensity is set at the maximum optical power without pixel saturation. The reference images are then transferred to a PC for image processing. The vignetting models mentioned above are applied to the reference images, and the coefficients are determined using a simplex search. The terminal condition of the simplex search is 10⁻⁶. The gain is calculated using Eq. (1) and is multiplied by the actual images to correct the vignetting. The performance indices are computed using the reference and corrected images.
TABLE 2 Specifications of the inspection machines used in the test
Camera | Spectral Range | Resolution | Bits | Lens | Light | Magnification |
---|---|---|---|---|---|---|
UV-Vis-NIR | 193–1100 nm | 1392 × 1040 | 14 | Motorized Zoom | Halogen | 0.58–12.5 | |
Color | 390–1100 nm | 2592 × 2048 | 10 | Motorized Zoom | White LED | 0.09–393.8 | |
Vis-NIR | 350–1100 nm | 2592 × 2048 | 10 | Manual Zoom | Halogen | 0.75–4.5 | |
High Speed | 390–1050 nm | 1280 × 1024 | 12 | Motorized Zoom | White LED | 0.58–12 |
The conventional and proposed methods are implemented in C++ using open-source libraries under Linux. The code is organized into generalized subroutines that can handle various image formats. The subroutines are integrated into a library and reused as a software development kit (SDK). The PC for the vignetting correction is a high-performance parallel system consisting of a hexacore CPU, a GPU, and 64 GB of memory.
Performance indices are defined to compare the results of the conventional and proposed methods. These performance indices are the coefficients of determination (R₁² and R₂²), the mean absolute error (MAE), the root-mean-square error (RMSE), the signal-to-reconstruction-error ratio (SRE), and the uniformity after correction (UFM), as listed in Tables 3–6.
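Under assumed definitions (errors reported as percentages of the target gray level, and UFM as the max-min contrast of the corrected image; the exact formulas are ours, not taken from the paper), the core indices can be sketched as:

```python
import numpy as np

def performance_indices(ref, model, target):
    """Compare a reference image to a fitted vignetting model."""
    err = ref - model
    r2 = 1.0 - np.sum(err ** 2) / np.sum((ref - ref.mean()) ** 2)
    mae = 100.0 * np.mean(np.abs(err)) / target          # in percent
    rmse = 100.0 * np.sqrt(np.mean(err ** 2)) / target   # in percent
    corrected = ref * target / np.maximum(model, 1e-9)
    ufm = 100.0 * (corrected.max() - corrected.min()) \
          / (corrected.max() + corrected.min())
    return r2, mae, rmse, ufm

# A perfect model reproduces the "Ideal" row of the tables.
y, x = np.mgrid[0:50, 0:50].astype(np.float64)
ref = 150.0 - 0.02 * ((x - 25.0) ** 2 + (y - 25.0) ** 2)
r2, mae, rmse, ufm = performance_indices(ref, ref.copy(), target=150.0)
print(round(r2, 6), round(mae, 6), round(rmse, 6), round(ufm, 6))  # 1.0 0.0 0.0 0.0
```

A model that matches the reference exactly yields R² = 1 and zero error, matching the ideal row listed in Tables 3–6.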
The experimental results using the four inspection systems are summarized in Tables 3–6. In the case of the UV-Vis-NIR inspection system, as shown in Table 3, the parabolic array model is the most accurate, while the proposed 3D models achieve slightly lower but comparable accuracy using only 5–8 coefficients. The UFM after correction is 1.0%–1.7% for all models.
TABLE 3 Performance indices of the UV-Vis-NIR inspection system applied in the experiment
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 7296 | 0.9873 | 0.9873 | 0.9871 | 1.2384 | 38.1948 | 1.0523 |
Elliptic Polynomials | 8 | 0.9764 | 0.9766 | 1.3399 | 1.6856 | 35.5171 | 1.5031 |
Anisotropic Gaussian | 6 | 0.9760 | 0.9763 | 1.3387 | 1.7013 | 35.4366 | 1.5268 |
Paraboloid | 5 | 0.9742 | 0.9744 | 1.3970 | 1.7619 | 35.1328 | 1.5757 |
Ellipsoid | 6 | 0.9731 | 0.9733 | 1.4242 | 1.8005 | 34.9441 | 1.6190 |
TABLE 4 Performance indices of the color inspection system used in the experiment
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 13920 | 0.9953 | 0.9954 | 1.5822 | 1.9743 | 34.4414 | 0.9091 |
Elliptic Polynomials | 8 | 0.9963 | 0.9964 | 1.3128 | 1.7479 | 35.4995 | 0.9241 |
Anisotropic Gaussian | 6 | 0.9968 | 0.9969 | 1.2075 | 1.6258 | 36.1283 | 0.8515 |
Paraboloid | 5 | 0.9744 | 0.9756 | 3.4691 | 4.6282 | 27.0412 | 4.3283 |
Ellipsoid | 6 | 0.9715 | 0.9727 | 3.6695 | 4.8854 | 26.5716 | 4.5206 |
TABLE 5 Performance indices of the Vis-NIR inspection system utilized in the experiment
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 13920 | 0.9807 | 0.9808 | 3.0061 | 3.8423 | 28.6294 | 1.2655 |
Elliptic Polynomials | 8 | 0.9788 | 0.9793 | 2.9340 | 4.0338 | 28.2069 | 1.4277 |
Anisotropic Gaussian | 6 | 0.9819 | 0.9823 | 2.7410 | 3.7223 | 28.9050 | 1.2428 |
Paraboloid | 5 | 0.9193 | 0.9264 | 5.7328 | 7.8689 | 22.4029 | 7.2386 |
Ellipsoid | 6 | 0.9114 | 0.9193 | 5.9974 | 8.2452 | 21.9971 | 7.5272 |
TABLE 6 Performance indices of a high-speed inspection system applied in the experiment
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 6912 | 0.9705 | 0.9705 | 1.5240 | 1.9227 | 34.3759 | 0.6445 |
Elliptic Polynomials | 8 | 0.9602 | 0.9603 | 1.7858 | 2.2310 | 33.0840 | 0.7651 |
Anisotropic Gaussian | 6 | 0.9604 | 0.9605 | 1.7815 | 2.2257 | 33.1049 | 0.7638 |
Paraboloid | 5 | 0.9571 | 0.9572 | 1.8478 | 2.3172 | 32.7549 | 0.7979 |
Ellipsoid | 6 | 0.9550 | 0.9551 | 1.8896 | 2.3726 | 32.5496 | 0.8217 |
Table 4 shows the performance of the color inspection system, for which the anisotropic Gaussian model achieves the highest accuracy, followed by the elliptic polynomial.
The performance of the Vis-NIR inspection system is shown in Table 5. The overall accuracy is lower than that of the other inspection systems, because the MAE, RMSE, and UFM are higher. However, the uniformity after vignetting correction is approximately 1.2%–1.4%, and in this case the anisotropic Gaussian model is the most accurate.
Table 6 shows the results of the high-speed inspection system, where the parabolic array model is the most accurate, followed by the anisotropic Gaussian model. However, the other models show similar accuracy using far fewer coefficients: the parabolic array model requires 6912 coefficients, while the anisotropic Gaussian model requires only 6. The UFM after correction is below 1.0% for all models.
Table 7 shows the results of vignetting correction using the parabolic array and anisotropic Gaussian models in the experiments. Some of the reference images are dark, owing to the reflectance of the standard white board. For inspection samples such as semiconductors and flat-panel displays, the brightness is sufficient, owing to their high reflectivity. The images show little difference after vignetting correction when viewed with the naked eye.
TABLE 7 Reference and corrected images in the experiment
Specific Features of the Camera | Reference Image for Vignetting | Image Correction Using a Parabolic Array Model | Image Correction Using an Anisotropic Gaussian Model |
---|---|---|---|
UV-Vis-NIR | |||
Color | |||
Vis-NIR | |||
High-speed |
Figure 3 summarizes the processing time required to determine the coefficients and to correct the vignetting in the experiment. The conventional parabolic array model is advantageous in terms of processing time, because iteration is unnecessary. The simplex search of the proposed models iterates the pixel-based cost function until the coefficients converge. After the coefficients are determined, the difference in the processing time of the vignetting correction itself decreases to under 50 ms. Among the proposed models, the paraboloid model is the fastest, because it is similar to the conventional model and its initial condition is obtained from the conventional model. The processing times of the proposed models are longer than that of the conventional model, but are tolerable in practice.
The results generally show that the conventional parabolic array model has advantages in terms of accuracy and processing time in the experimental cases. However, the anisotropic Gaussian model shows the highest accuracy in two cases, and performance similar to that of the conventional model elsewhere. This result is unexpected, because the Gaussian model has received little attention and has been discussed in only a few previous studies. Although the isotropic Gaussian model in previous research used an axisymmetric radial exponent [30], a method for determining its coefficients was not presented. Theoretical analyses of the parabolic array and polynomial models have been reported in previous studies; the anisotropic Gaussian model is treated experimentally in this study, and a theoretical approach remains for future work. Considering the various optical combinations of cameras, lenses, and illumination in microscopy, the models for vignetting are not limited to the conventional ones, and the best-fitting vignetting model for a given microscope system is determined by the inspection conditions.
A simplex search makes it possible to determine the coefficients of the generalized equations, as well as those of the other 3D models described in this study, and thus enables various geometric models and formulations for the correction of vignetting. The experimental models achieve accuracy similar to that of the parabolic array model using 1/1000–1/2000 as many coefficients. The 3D models are intuitive and simple compared to conventional models, and they also provide geometric properties of the vignetting, such as the vignetting center and aspect ratio. These geometric properties can be used to align the optical and illumination axes. The proposed method is also applicable to aspherical surfaces and inhomogeneous polynomials. In the future, we plan to provide a parallel-processing architecture for real-time vignetting correction using a graphics processing unit and a multicore processor.
A geometric modeling method for the correction of vignetting using 3D equations and a simplex search was proposed in this study. The 3D models were implemented as generalized nonlinear polynomials, considering conventional models. The coefficients of the 3D models were determined using a simplex search, and performance indices were defined for the experiment. Reference images were acquired using four inspection systems, and the performance indices were obtained for the proposed models. The C++ code for the experiment was implemented using open-source libraries and could handle various test conditions. Although the parabolic array model generally showed good performance in the experiments, the anisotropic Gaussian model was unexpectedly accurate in certain cases. The proposed 3D models showed accuracy similar to that of the parabolic array model while using only 1/1000–1/2000 as many coefficients. These 3D vignetting models are intuitive and convey the overall characteristics of the vignetting. The proposed method also provides solutions for the alignment of optics and illumination, as well as the construction of various vignetting models.
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.
This research was supported by the Year 2022 Culture Technology R&D Program developed by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) (Development of the system for digital data acquisition of modern and contemporary fine arts and supporting science-based art credibility analysis).
Korea Creative Content Agency (KOCCA R2020060004).
Curr. Opt. Photon. 2022; 6(2): 161-170
Published online April 25, 2022 https://doi.org/10.3807/COPP.2022.6.2.161
Copyright © Optical Society of Korea.
Hyung Tae Kim1 , Duk Yeon Lee2, Dongwoon Choi2, Jaehyeon Kang2, Dong-Wook Lee2
1Digital Transformation R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
2Robotics R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
Correspondence to:*htkim@kitech.re.kr, ORCID 0000-0001-5711-551X
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Three-dimensional (3D) geometric models are introduced to correct vignetting, and a downhill simplex search is applied to determine the coefficients of a 3D model used in digital microscopy. Vignetting is nonuniform illuminance with a geometric regularity on a two-dimensional (2D) image plane, which allows the illuminance distribution to be estimated using 3D models. The 3D models are defined using generalized polynomials and arbitrary coefficients. Because the 3D models are nonlinear, their coefficients are determined using a simplex search. The cost function of the simplex search is defined to minimize the error between the 3D model and the reference image of a standard white board. The conventional and proposed methods for correcting the vignetting are used in experiments on four inspection systems based on machine vision and microscopy. The methods are investigated using various performance indices, including the coefficient of determination, the mean absolute error, and the uniformity after correction. The proposed method is intuitive and shows performance similar to the conventional approach, using a smaller number of coefficients.
Keywords: Machine vision, Mathematical model, Microscopy camera calibration, Simplex search, Vignetting correction
Although digital microscopy originated from bioengineering through technical advances, it has become industrialized and applied in scientific metrology [1]. A digital microscopy system is constructed by attaching a digital camera and an image-processing unit to a conventional optical microscope. This simple combination has been used in many technical applications, such as autofocusing [2], filtering [3], white balancing [4], calibration [5], detection [6], stitching, and multifocus fusion. Because of the structural problems that occur in optical microscopes, distortion [7], aberrations [8], and vignetting [2] are inevitable when acquiring an image using digital microscopy. Vignetting is a nonuniform distribution of light intensity in an image, owing to different optical paths in a microscope. The center of the image is usually bright, but the brightness decreases toward the periphery [9]. Vignetting commonly occurs in current-imaging devices, such as smart phones [10], digital single-lens reflexes (DSLRs) [11], industrial cameras [12], microscopes [13], and line-scanning systems [14]. Because the correction of such vignetting is essentially hidden within these digital imaging devices, users rarely witness it. However, vignetting is frequently seen in microscopy when applied in bioimaging [15], semiconductors [16], flat-panel displays [17], and printed electronics [18]. Vignetting appears sensitively in the output of image fusion, such as in stitching and panoramic images. Image fusion has become popular in digital imaging devices, as well as digital microscopy, and the correction of vignetting is significant for synthesizing a natural and continuous image.
Vignetting is caused by the partial obstruction of the light path from an object plane to an image plane [19]. The sources of vignetting can be classified as the geometric optics of the image plane, the angular sensitivity of the digital sensors, and the light path and path blocking within and in front of a lens, respectively [20]. Vignetting in digital microscopy is corrected using hardware and software. Hardware-based correction is achieved through geometric optics, and thus is usually expensive. The computational cost required to analyze the optics is also high, and the light from commercial illumination is scattered; it is therefore difficult to design optics to correct vignetting. Furthermore, it is impossible to create optics for various commercial lenses and illumination conditions; thus, software correction is preferred in practice.
Software correction is conventionally conducted by applying gain to an actual image after determining the gain from a reference image. The reference image is acquired using a standard target, and the gain of each pixel is then determined by comparing a target value to the gray levels of pixels in the reference image [21]. Flat-field correction (FFC) is a popular method applied in microscopy, machine vision, and telescopy. Although illumination is continuous in a reference image, pixel-based FFC can be discontinuous, owing to camera noise and local damage occurring in a standard target [22]. Thus, regression approaches such as polynomials of the pixel gain [23, 24], an off-axis illumination model [13, 19, 25], and radial polynomials [26] have been discussed for continuity when applying FFC. Kordecki
The illuminance distribution of the reference image forms a geometric surface; thus, high-order polynomials and regression have been applied to model the vignetting. Mitsunaga and Nayar [9] proposed a general formulation of polynomials whose coefficients were determined using the least mean square. Vignetting forms symmetric distributions in many cases; thus, simplified low-order polynomials are advantageous for accelerating computation [12, 15, 28, 29]. However, these polynomial models have been derived from 2D models; a 3D model dealing with vignetting has yet to be investigated. In previous studies, the isotropic Gaussian model was briefly discussed for the potential to correct vignetting [15, 30]. However, a determination of the coefficients of the Gaussian function was not presented, because of nonlinear regression.
Considering the illumination distribution of the reference image, many 3D surfaces such as spheres, ellipsoids, paraboloids, and Gaussian surfaces have been applicable for modeling vignetting. These 3D models have only a few coefficients, providing a simple and intuitive formulation. The nonlinear regression used to determine the coefficients can be achieved through multi-dimensional optimum methods. Thus, this study proposes a vignetting-correction method using 3D models and a downhill simplex search. The performances of the conventional and proposed methods are investigated using four inspection systems, ranging from the ultraviolet (UV) to the near infrared (NIR).
The remainder of this paper is organized as follows. Section 2 describes the generalized 3D model and nonlinear regression using a downhill simplex search. Section 3 presents the experimental conditions and performance indices. The experimental results and a discussion are presented in section 4. Finally, concluding remarks are presented in section 5.
A FFC adjusts the gray level of each pixel using a reference image of the vignetting. In a simple case, the gain of an individual pixel is obtained from the reference image and the target value. The gray level of an actual image is then corrected using the gain. Equation (1) shows the normalized relation between the corrected image
Here
A cost function for the optimum method is defined to minimize the error between the reference image and 3D model, as follows:
The light distribution under vignetting is typically axisymmetric; thus, radial polynomials have been proposed in previous studies [19, 20], the generalized form of which is as follows:
The cross section of the vignetting is approximated as a parabola; thus, the vignetting model can be arranged using parabolas, after slicing the reference image in the
The parabolic array model is quite accurate and can easily determine the coefficients. However, the number of coefficients becomes 3 × (width + height), which reaches an extremely high value in the case of a megapixel camera. Considering the light distribution in the reference image, the shape is approximated as a 3D shape such as a paraboloid, ellipsoid, or Gaussian surface [15, 30]. The normalized coordinates of the 3D vignetting model can be written as generalized polynomials, as follows:
where (
These generalized models are formulated as elliptical and anisotropic polynomials and are extended from previous studies. These elliptical and anisotropic formulations present geometric properties such as the aspect ratio and center of vignetting. As an existing method, a parabolic array model is applied, and the elliptical and anisotropic Gaussian models are extended from conventional models. The paraboloid and ellipsoid models are additionally tested in this study.
The 3D models in Eqs. (6)–(8) and the cost function in Eq. (2) are nonlinear; thus, optimum methods are required to determine the coefficients. As a downhill simplex search, the well-known Nelder-Mead approach is one of the most popular methods for nonderivative, multidimensional nonlinear optimization [31]. Thus, the coefficients of the 3D models can be determined by minimizing the cost function between the vignetting and 3D model. The unknown variables for the optimum methods are the center, offset, and coefficients. The unknown variables are defined as generalized coordinates with dimensions of 3 ×
Table 1 lists the geometric equations for the 3D models. The elliptical polynomial and anisotropic Gaussian models are extensions of the radial model used in previous research. The intuitive forms are simple, because they directly present geometric properties of the vignetting, such as the center and aspect ratio.
TABLE 1. 3D vignetting models and unknown coefficients for the simplex search.
Shapes | Geometric Equations | Generalized Form | Intuitive Form |
---|---|---|---|
Elliptic Polynomials (O6) | |||
Anisotropic Gaussian | |||
Paraboloid | |||
Ellipsoid |
Because the dimension of the generalized coordinate is n, the simplex search begins with n + 1 vertices, which are sorted by the value of the cost function at each iteration.
To test for a lower value, a reflection point is then defined by reflecting the maximum vertex through the middle (centroid) of the remaining vertices. If the reflection point is lower than the maximum point, an expansion point is defined beyond the reflection point to repeat the test.
If either of the two points is lower than the maximum, the maximum vertex is replaced by the test point, and the terminal condition is checked. Otherwise, the next test point is designated inside the vertices. This contraction is applied to determine the test point between the maximum and middle points.
If the contraction point is lower than the maximum, the maximum is replaced by the contraction point, and the terminal condition is applied. Otherwise, a shrinkage is applied to reduce the vertices toward the current minimum.
After these tests, the following terminal condition is examined, and the sorting step is repeated if the error is unsatisfactory.
Figure 2 summarizes these procedures of the simplex search in a flow chart.
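The sort-reflect-expand-contract-shrink loop described above can be sketched as a compact, textbook Nelder-Mead implementation (not the authors' code; for simplicity, the inside contraction is used for all contraction cases):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Compact Nelder-Mead downhill simplex search: sort the n + 1 vertices,
// then reflect, expand, contract, or shrink until the spread of cost values
// across the simplex falls below `tol`.
Vec nelderMead(const std::function<double(const Vec&)>& f, Vec start,
               double step = 1.0, double tol = 1e-6, int maxIter = 1000) {
    const std::size_t n = start.size();
    std::vector<Vec> v(n + 1, start);               // n + 1 vertices
    for (std::size_t i = 0; i < n; ++i) v[i + 1][i] += step;
    std::vector<double> fv(n + 1);
    for (std::size_t i = 0; i <= n; ++i) fv[i] = f(v[i]);

    for (int it = 0; it < maxIter; ++it) {
        // Sort vertices by cost, ascending (v[0] best, v[n] worst).
        std::vector<std::size_t> idx(n + 1);
        for (std::size_t i = 0; i <= n; ++i) idx[i] = i;
        std::sort(idx.begin(), idx.end(),
                  [&](std::size_t a, std::size_t b) { return fv[a] < fv[b]; });
        std::vector<Vec> sv(n + 1);
        std::vector<double> sf(n + 1);
        for (std::size_t i = 0; i <= n; ++i) { sv[i] = v[idx[i]]; sf[i] = fv[idx[i]]; }
        v = sv; fv = sf;

        if (std::fabs(fv[n] - fv[0]) < tol) break;  // terminal condition

        // Centroid of all vertices except the worst.
        Vec c(n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) c[j] += v[i][j] / n;

        auto mix = [&](double t) {                   // c + t * (c - worst)
            Vec p(n);
            for (std::size_t j = 0; j < n; ++j) p[j] = c[j] + t * (c[j] - v[n][j]);
            return p;
        };

        Vec xr = mix(1.0); double fr = f(xr);        // reflection
        if (fr < fv[0]) {
            Vec xe = mix(2.0); double fe = f(xe);    // expansion
            if (fe < fr) { v[n] = xe; fv[n] = fe; } else { v[n] = xr; fv[n] = fr; }
        } else if (fr < fv[n - 1]) {
            v[n] = xr; fv[n] = fr;                   // accept reflection
        } else {
            Vec xc = mix(-0.5); double fc = f(xc);   // inside contraction
            if (fc < fv[n]) { v[n] = xc; fv[n] = fc; }
            else {                                    // shrink toward the best
                for (std::size_t i = 1; i <= n; ++i) {
                    for (std::size_t j = 0; j < n; ++j)
                        v[i][j] = v[0][j] + 0.5 * (v[i][j] - v[0][j]);
                    fv[i] = f(v[i]);
                }
            }
        }
    }
    return v[0];
}
```

Passing the pixel-wise cost of a 3D model as `f` recovers the coefficient-determination step; because only cost values are compared, no derivatives of the model are required.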
Reference images are acquired using four inspection systems, as shown in Table 2. The common components of the inspection systems include an industrial camera, a zoom lens, and a light source, although their specifications differ. The inspection systems have different resolutions, acquisition speeds, and spectral ranges from UV to NIR. The reference images are obtained using a standard white board, after finding the focus and adjusting the light intensity. The light intensity is set at the maximum optical power without pixel saturation. The reference images are then transferred to a PC for image processing. The vignetting models mentioned above are applied to the reference images, and the coefficients are determined using a simplex search. The terminal condition of the simplex search is 10−6. The gain is calculated using Eq. (1) and is multiplied by the actual images to correct the vignetting. The performance indices are computed using reference and corrected images.
TABLE 2. Specifications of the inspection machines used in the test.
Camera | Spectral Range | Resolution | Bits | Lens | Light | Magnification | Photo |
---|---|---|---|---|---|---|---|
UV-Vis-NIR | 193–1100 nm | 1392 × 1040 | 14 | Motorized Zoom | Halogen | 0.58–12.5 | |
Color | 390–1100 nm | 2592 × 2048 | 10 | Motorized Zoom | White LED | 0.09–393.8 | |
Vis-NIR | 350–1100 nm | 2592 × 2048 | 10 | Manual Zoom | Halogen | 0.75–4.5 | |
High Speed | 390–1050 nm | 1280 × 1024 | 12 | Motorized Zoom | White LED | 0.58–12 |
The conventional and proposed methods are implemented in C++ using open-source libraries under Linux. The C++ code is organized into generalized subroutines that can handle various image formats. The subroutines are integrated into a library and reused as a software development kit (SDK). The PC for the vignetting correction is a high-performance parallel system consisting of a hexa-core CPU, a GPU, and 64 GB of memory.
Performance indices are defined to compare the results of the conventional and proposed methods. These performance indices are the coefficients of determination (R₁², R₂²), the mean absolute error (MAE), the root-mean-square error (RMSE), the signal-to-reconstruction error (SRE), and the uniformity after correction (UFM).
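The MAE and RMSE follow their standard definitions; the paper's percentage scaling relative to full-scale intensity is an assumption here. A sketch of both indices:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mean absolute error between a reference image and a fitted model.
double mae(const std::vector<double>& ref, const std::vector<double>& fit) {
    double s = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i) s += std::fabs(ref[i] - fit[i]);
    return s / ref.size();
}

// Root-mean-square error between a reference image and a fitted model.
double rmse(const std::vector<double>& ref, const std::vector<double>& fit) {
    double s = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i) {
        double e = ref[i] - fit[i];
        s += e * e;
    }
    return std::sqrt(s / ref.size());
}
```

Because RMSE squares the residuals, it penalizes large local fitting errors more heavily than MAE, which is why the two indices can rank the models slightly differently.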
The experimental results using the four inspection systems are summarized in Tables 3–6. In the case of the UV-Vis-NIR inspection system, as shown in Table 3, the parabolic array model is the most accurate, and the elliptic polynomial model performs best among the proposed models. The UFM after correction is below 1.7% for all models.
TABLE 3. Performance indices of the UV-Vis-NIR inspection system applied in the experiment.
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 7296 | 0.9873 | 0.9873 | 0.9871 | 1.2384 | 38.1948 | 1.0523 |
Elliptic Polynomials | 8 | 0.9764 | 0.9766 | 1.3399 | 1.6856 | 35.5171 | 1.5031 |
Anisotropic Gaussian | 6 | 0.9760 | 0.9763 | 1.3387 | 1.7013 | 35.4366 | 1.5268 |
Paraboloid | 5 | 0.9742 | 0.9744 | 1.3970 | 1.7619 | 35.1328 | 1.5757 |
Ellipsoid | 6 | 0.9731 | 0.9733 | 1.4242 | 1.8005 | 34.9441 | 1.6190 |
TABLE 4. Performance indices of the color inspection system used in the experiment.
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 13920 | 0.9953 | 0.9954 | 1.5822 | 1.9743 | 34.4414 | 0.9091 |
Elliptic Polynomials | 8 | 0.9963 | 0.9964 | 1.3128 | 1.7479 | 35.4995 | 0.9241 |
Anisotropic Gaussian | 6 | 0.9968 | 0.9969 | 1.2075 | 1.6258 | 36.1283 | 0.8515 |
Paraboloid | 5 | 0.9744 | 0.9756 | 3.4691 | 4.6282 | 27.0412 | 4.3283 |
Ellipsoid | 6 | 0.9715 | 0.9727 | 3.6695 | 4.8854 | 26.5716 | 4.5206 |
TABLE 5. Performance indices of the Vis-NIR inspection system utilized in the experiment.
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 13920 | 0.9807 | 0.9808 | 3.0061 | 3.8423 | 28.6294 | 1.2655 |
Elliptic Polynomials | 8 | 0.9788 | 0.9793 | 2.9340 | 4.0338 | 28.2069 | 1.4277 |
Anisotropic Gaussian | 6 | 0.9819 | 0.9823 | 2.7410 | 3.7223 | 28.9050 | 1.2428 |
Paraboloid | 5 | 0.9193 | 0.9264 | 5.7328 | 7.8689 | 22.4029 | 7.2386 |
Ellipsoid | 6 | 0.9114 | 0.9193 | 5.9974 | 8.2452 | 21.9971 | 7.5272 |
TABLE 6. Performance indices of a high-speed inspection system applied in the experiment.
Model | No. of Coeff. | R₁² | R₂² | MAE (%) | RMSE (%) | SRE | UFM (%) |
---|---|---|---|---|---|---|---|
Ideal | - | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
Parabolic Array | 6912 | 0.9705 | 0.9705 | 1.5240 | 1.9227 | 34.3759 | 0.6445 |
Elliptic Polynomials | 8 | 0.9602 | 0.9603 | 1.7858 | 2.2310 | 33.0840 | 0.7651 |
Anisotropic Gaussian | 6 | 0.9604 | 0.9605 | 1.7815 | 2.2257 | 33.1049 | 0.7638 |
Paraboloid | 5 | 0.9571 | 0.9572 | 1.8478 | 2.3172 | 32.7549 | 0.7979 |
Ellipsoid | 6 | 0.9550 | 0.9551 | 1.8896 | 2.3726 | 32.5496 | 0.8217 |
Table 4 shows the performance of the color inspection system, for which the anisotropic Gaussian model achieves the highest accuracy, followed by the elliptic polynomial.
The performance of the Vis-NIR inspection system is shown in Table 5. The overall accuracy is lower than that of the other inspection systems, as the MAE, RMSE, and UFM are higher. However, the uniformity after correction is approximately 1.2%–1.4% for the parabolic array, elliptic polynomial, and anisotropic Gaussian models. In this case, the anisotropic Gaussian model is the most accurate.
Table 6 shows the results of the high-speed inspection system, where the parabolic array model is the most accurate, followed by the anisotropic Gaussian approach. However, the other models show similar accuracy using a much smaller number of coefficients: 6912 for the parabolic array model versus 6 for the anisotropic Gaussian model. The UFM after correction is below 1.0% for all models.
Table 7 shows the results of vignetting correction using the parabolic array and anisotropic Gaussian models in the experiments. Some of the reference images are dark, owing to the reflectance of the standard white board. For inspection samples such as semiconductor wafers and flat-panel displays, the brightness is sufficient, owing to their high reflectivity. The images show little difference after vignetting correction when viewed with the naked eye.
TABLE 7. Reference and corrected images in the experiment.
Specific Features of the Camera | Reference Image for Vignetting | Image Correction Using a Parabolic Array Model | Image Correction Using an Anisotropic Gaussian Model |
---|---|---|---|
UV-Vis-NIR | |||
Color | |||
Vis-NIR | |||
High-speed |
Figure 3 summarizes the processing time required for coefficient determination and vignetting correction in the experiment. The conventional parabolic array model is advantageous in terms of processing time, because iteration is unnecessary. The simplex search of the proposed models iterates the cost function, based on pixel operations, until the coefficients converge. After the coefficients are determined, the difference in the processing time of vignetting correction falls below 50 ms. Among the proposed models, the paraboloid model is the fastest, because it is similar to the conventional model, and its initial condition is obtained from the conventional model. The processing times of the proposed models are longer than that of the conventional model, but are tolerable in practice.
The results generally show that the conventional parabolic array model has advantages in accuracy and processing time for the experimental cases. The anisotropic Gaussian model shows the highest accuracy in two cases, and a performance similar to that of the conventional model elsewhere. This result is unexpected, because the Gaussian model has received little attention and has only been discussed in a few previous studies. Although the isotropic Gaussian model in previous research used an axisymmetric radial exponent [30], the coefficient determination was not presented. Theoretical analyses of the parabolic array and polynomial models have been reported in previous studies. However, the anisotropic Gaussian model is considered only experimentally in this study, and a theoretical approach is required in the future. Considering the various optical combinations of cameras, lenses, and illumination in microscopy, the models for vignetting are not limited to the conventional ones. The best-fitting vignetting model for a microscope system is determined by its inspection conditions.
A simplex search makes it possible to determine the coefficients of the generalized equations, as well as those of the other 3D models described in this study. The simplex search provides the possibility of constructing various geometric models and formulations for the correction of vignetting. The experimental models achieve a similar accuracy using 1/1000–1/2000 the number of coefficients of the parabolic array model. The 3D models are intuitive and simple, compared to conventional models. The 3D models also provide geometric properties of the vignetting, such as the vignetting center and aspect ratio. These geometric properties can be used to align the optical and illumination axes. The proposed method is also applicable to aspherical surfaces and inhomogeneous polynomials. In the future, we plan to develop a parallel-processing architecture for real-time vignetting correction using a graphics processing unit and a multicore processor.
A geometric modeling method for the correction of vignetting using 3D equations and a simplex search was proposed in this study. The 3D models were implemented as generalized nonlinear polynomials, considering conventional models. The coefficients of the 3D models were determined using a simplex search, and performance indices were defined for the experiment. Reference images were acquired using four inspection systems, and performance indices were obtained using the proposed models. The C++ code for the experiment was implemented using open-source libraries and could handle various test conditions. Although the parabolic array model generally showed good performance during the experiments, the results of this study found that the anisotropic Gaussian model was unexpectedly accurate in certain cases. The proposed 3D models achieved accuracy similar to that of the parabolic array model, while using 1/1000–1/2000 as many coefficients. These 3D vignetting models are intuitive and provide the overall characteristics of the vignetting. The proposed method provides solutions for the alignment of optics and illumination, as well as for the construction of various vignetting models.
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.
This research was supported by the Year 2022 Culture Technology R&D Program developed by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) (Development of the system for digital data acquisition of modern and contemporary fine arts and supporting science-based art credibility analysis).
Korea Creative Content Agency (KOCCA R2020060004).