Curr. Opt. Photon. 2022; 6(2): 161-170

Published online April 25, 2022 https://doi.org/10.3807/COPP.2022.6.2.161

Copyright © Optical Society of Korea.

Vignetting Correction Using Three-dimensional Geometric Models and a Downhill Simplex Search

Hyung Tae Kim1, Duk Yeon Lee2, Dongwoon Choi2, Jaehyeon Kang2, Dong-Wook Lee2

1Digital Transformation R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
2Robotics R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea

Corresponding author: htkim@kitech.re.kr, ORCID 0000-0001-5711-551X

Received: October 15, 2021; Revised: December 23, 2021; Accepted: December 28, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Three-dimensional (3D) geometric models are introduced to correct vignetting, and a downhill simplex search is applied to determine the coefficients of a 3D model used in digital microscopy. Vignetting is nonuniform illuminance with a geometric regularity on a two-dimensional (2D) image plane, which allows the illuminance distribution to be estimated using 3D models. The 3D models are defined using generalized polynomials and arbitrary coefficients. Because the 3D models are nonlinear, their coefficients are determined using a simplex search. The cost function of the simplex search is defined to minimize the error between the 3D model and the reference image of a standard white board. The conventional and proposed methods for correcting the vignetting are used in experiments on four inspection systems based on machine vision and microscopy. The methods are investigated using various performance indices, including the coefficient of determination, the mean absolute error, and the uniformity after correction. The proposed method is intuitive and shows performance similar to the conventional approach, using a smaller number of coefficients.

Keywords: Machine vision, Mathematical model, Microscopy camera calibration, Simplex search, Vignetting correction

OCIS codes: (100.2980) Image enhancement; (110.0180) Microscopy; (150.1135) Algorithms; (150.1488) Calibration; (220.3630) Lenses

I. INTRODUCTION

Although digital microscopy originated from bioengineering through technical advances, it has become industrialized and applied in scientific metrology [1]. A digital microscopy system is constructed by attaching a digital camera and an image-processing unit to a conventional optical microscope. This simple combination has been used in many technical applications, such as autofocusing [2], filtering [3], white balancing [4], calibration [5], detection [6], stitching, and multifocus fusion. Because of the structural problems that occur in optical microscopes, distortion [7], aberrations [8], and vignetting [2] are inevitable when acquiring an image using digital microscopy. Vignetting is a nonuniform distribution of light intensity in an image, owing to different optical paths in a microscope. The center of the image is usually bright, but the brightness decreases toward the periphery [9]. Vignetting commonly occurs in current imaging devices, such as smartphones [10], digital single-lens reflex (DSLR) cameras [11], industrial cameras [12], microscopes [13], and line-scanning systems [14]. Because the correction of such vignetting is essentially hidden within these digital imaging devices, users rarely witness it. However, vignetting is frequently seen in microscopy when applied in bioimaging [15], semiconductors [16], flat-panel displays [17], and printed electronics [18]. Vignetting is especially noticeable in the output of image fusion, such as in stitching and panoramic images. Image fusion has become popular in digital imaging devices, as well as digital microscopy, and the correction of vignetting is essential for synthesizing a natural and continuous image.

Vignetting is caused by the partial obstruction of the light path from an object plane to an image plane [19]. The sources of vignetting can be classified as the geometric optics of the image plane, the angular sensitivity of the digital sensors, and the light path and path blocking within and in front of a lens [20]. Vignetting in digital microscopy is corrected using hardware and software. Hardware-based correction is achieved through geometric optics, and thus is usually expensive. The computational cost required to analyze the optics is also high, and the light from commercial illumination is scattered; it is therefore difficult to design optics to correct vignetting. Furthermore, it is impossible to create optics for every combination of commercial lenses and illumination conditions; thus, software correction is preferred in practice.

Software correction is conventionally conducted by applying gain to an actual image after determining the gain from a reference image. The reference image is acquired using a standard target, and the gain of each pixel is then determined by comparing a target value to the gray levels of pixels in the reference image [21]. Flat-field correction (FFC) is a popular method applied in microscopy, machine vision, and telescopy. Although illumination is continuous in a reference image, pixel-based FFC can be discontinuous, owing to camera noise and local damage occurring in a standard target [22]. Thus, regression approaches such as polynomials of the pixel gain [23, 24], an off-axis illumination model [13, 19, 25], and radial polynomials [26] have been discussed for continuity when applying FFC. Kordecki et al. [27] proposed a fast and accurate correction method using a parabolic array, achieved by slicing the reference image in the horizontal and vertical directions. The gain is determined by averaging second-order polynomials in both directions; however, the number of coefficients increases significantly, to 3 × (width + height). Recent studies have applied machine learning to the correction of vignetting, and convolutional neural networks have been tested on unevenly illuminated medical images [28].

The illuminance distribution of the reference image forms a geometric surface; thus, high-order polynomials and regression have been applied to model the vignetting. Mitsunaga and Nayar [9] proposed a general formulation of polynomials whose coefficients were determined using the least-mean-squares method. Vignetting forms symmetric distributions in many cases; thus, simplified low-order polynomials are advantageous for accelerating computation [12, 15, 28, 29]. However, these polynomial models have been derived from 2D models; a 3D model dealing with vignetting has yet to be investigated. In previous studies, the isotropic Gaussian model was briefly discussed for its potential to correct vignetting [15, 30]. However, a determination of the coefficients of the Gaussian function was not presented, because it requires nonlinear regression.

Considering the illumination distribution of the reference image, many 3D surfaces, such as spheres, ellipsoids, paraboloids, and Gaussian surfaces, are applicable to modeling vignetting. These 3D models have only a few coefficients, providing a simple and intuitive formulation. The nonlinear regression used to determine the coefficients can be achieved through multi-dimensional optimum methods. Thus, this study proposes a vignetting-correction method using 3D models and a downhill simplex search. The performances of the conventional and proposed methods are investigated using four inspection systems, ranging from the ultraviolet (UV) to the near infrared (NIR).

The remainder of this paper is organized as follows. Section 2 describes the generalized 3D model and nonlinear regression using a downhill simplex search. Section 3 presents the experimental conditions and performance indices. The experimental results and a discussion are presented in section 4. Finally, concluding remarks are presented in section 5.

II. METHODS

2.1. 3D vignetting model

An FFC adjusts the gray level of each pixel using a reference image of the vignetting. In a simple case, the gain of an individual pixel is obtained from the reference image and the target value. The gray level of an actual image is then corrected using the gain. Equation (1) shows the normalized relation between the corrected image O(x, y), the source image I(x, y), and the reference image V(x, y) [21, 22].

$$O(x,y)=\frac{I(x,y)-B}{V(x,y)-B}\,\overline{V}=G(x,y)\,I(x,y).\tag{1}$$

Here B is the background noise, V̄ is the average vignetting level in the reference image V(x, y), and G(x, y) is the pixel gain. The reference image is acquired from a standard white target; however, direct pixel-to-pixel correction is undesirable, owing to camera noise and stains on the standard white target. Thus, a regression of the vignetting is advantageous for guaranteeing continuity and estimating the overall tendency. Vignetting is observed as the distribution of light intensity on a 2D image plane; thus, a vignetting model can be defined using a 3D shape, as shown in Fig. 1. Here (x, y) indicates the image coordinates and z is the gray level, which follows a 3D model. The gain of each pixel is calculated using the 3D model and the target level, as shown in Eq. (1). The gray level of the corrected image becomes uniform according to the target level. A 3D model of the vignetting, f(x, y), is defined as an arbitrary polynomial, the coefficients of which can be determined using optimization based on nonlinear regression.
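As an illustration of Eq. (1), the following is a minimal sketch of the per-pixel gain correction in C++, assuming 8-bit grayscale buffers stored row-major; the function name, container choices, and clamping policy are ours, not the paper's SDK.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Flat-field correction per Eq. (1): O = (I - B) / (V - B) * Vbar = G * I.
// No guard is included here for pixels where V(x, y) is close to B.
std::vector<uint8_t> correctVignetting(const std::vector<uint8_t>& I,  // source image I(x, y)
                                       const std::vector<double>& V,   // reference V(x, y), or a fitted model f(x, y)
                                       double B)                       // background noise level
{
    double vbar = 0.0;                          // average vignetting level V-bar
    for (double v : V) vbar += v;
    vbar /= static_cast<double>(V.size());

    std::vector<uint8_t> O(I.size());
    for (std::size_t i = 0; i < I.size(); ++i) {
        double gain = vbar / (V[i] - B);        // pixel gain G(x, y)
        double o = (static_cast<double>(I[i]) - B) * gain;
        O[i] = static_cast<uint8_t>(std::clamp(o, 0.0, 255.0));
    }
    return O;
}
```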

Figure 1. Concept of vignetting correction, using a 3D model from a reference image to the corrected image.

A cost function for the optimum method is defined to minimize the error between the reference image and 3D model, as follows:

$$E=\min\sum_{x}\sum_{y}\left[V(x,y)-f(x,y)\right]^{2},\tag{2}$$

where the minimum is taken over the coefficients of the model f(x, y).

The light distribution under vignetting is typically axisymmetric; thus, radial polynomials have been proposed in previous studies [19, 20], the generalized form of which is as follows:

$$f(r)=\sum_{i=0}^{n}a_{i}\,r^{i},\tag{3}$$

where r is the radial distance from the vignetting center on the (x, y) image plane.

The cross section of the vignetting is approximated as a parabola; thus, the vignetting model can be arranged using parabolas, after slicing the reference image in the x and y directions [27]. The vignetting model is organized as an (x, y) parabola array, and the coefficients of each parabola can be determined using the least-mean-squares method.

$$f(x,y)=\frac{1}{2}\sum_{i=0}^{2}\left(a_{xi}\,x^{i}+a_{yi}\,y^{i}\right).\tag{4}$$
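As an aside, the per-slice fit behind Eq. (4) can be sketched in C++ as follows: a second-order polynomial is fitted to one row or column of the reference image via the 3 × 3 normal equations. This is a minimal illustration under our own naming, not the implementation of [27]; Cramer's rule is used for brevity and has no guard against a singular system.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Least-squares fit of a0 + a1*t + a2*t^2 to a single image slice z(t).
std::array<double, 3> fitParabola(const std::vector<double>& slice)
{
    double s[5] = {0, 0, 0, 0, 0};              // sums of t^0 .. t^4
    double b[3] = {0, 0, 0};                    // sums of z * t^0 .. z * t^2
    for (std::size_t t = 0; t < slice.size(); ++t) {
        double td = static_cast<double>(t), tp = 1.0;
        for (int k = 0; k <= 4; ++k) {
            s[k] += tp;
            if (k <= 2) b[k] += slice[t] * tp;
            tp *= td;
        }
    }
    auto det3 = [](const double m[3][3]) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    };
    const double M[3][3] = {{s[0], s[1], s[2]}, {s[1], s[2], s[3]}, {s[2], s[3], s[4]}};
    std::array<double, 3> a{};                  // coefficients a0, a1, a2
    for (int c = 0; c < 3; ++c) {
        double Mc[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                Mc[i][j] = (j == c) ? b[i] : M[i][j];
        a[c] = det3(Mc) / det3(M);
    }
    return a;
}
```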

The parabolic array model is quite accurate, and its coefficients are easy to determine. However, the number of coefficients becomes 3 × (width + height), which reaches an extremely high value in the case of a megapixel camera. Considering the light distribution in the reference image, the distribution can instead be approximated using a 3D shape such as a paraboloid, ellipsoid, or Gaussian surface [15, 30]. Using normalized coordinates, the 3D vignetting model can be written as a generalized polynomial, as follows:

$$(u,v)=\left(\frac{x-x_{c}}{w/2},\;\frac{y-y_{c}}{h/2}\right),\tag{5}$$

$$f(x,y)=f_{0}+\left(d+\sum_{i=1}^{n}\left(a_{i}u^{i}+b_{i}u^{i}v^{\,n-i}+c_{i}v^{i}\right)\right)^{\gamma},\tag{6}$$

where (xc, yc) is the center of vignetting, w and h are the image width and height, a–d are coefficients, γ is an exponent, and f0 is the offset. In the case of a paraboloid model, for example, n = 2, γ = 1, and bi = 0. An ellipsoid model can be defined with n = 2, γ = 1/2, and bi = 0. In addition, the radial and Gaussian formulations are generalized to elliptical polynomial and anisotropic Gaussian models:

$$f(x,y)=f_{0}+\sum_{i=1}^{n}c_{i}\left(a_{i}u^{2}+b_{i}v^{2}\right)^{i},\tag{7}$$

$$f(x,y)=f_{0}+e^{\,d+\sum_{i=1}^{n}\left(a_{i}u^{i}+b_{i}u^{i}v^{\,n-i}+c_{i}v^{i}\right)}.\tag{8}$$

These generalized models are formulated as elliptical and anisotropic polynomials, extending those of previous studies. The elliptical and anisotropic formulations expose geometric properties such as the aspect ratio and the center of vignetting. The parabolic array model is applied as the existing method, the elliptical polynomial and anisotropic Gaussian models are extended from conventional models, and the paraboloid and ellipsoid models are additionally tested in this study.
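To make the intuitive forms concrete, here is a sketch of evaluators for three of the models (cf. Table 1 in the next subsection), written in C++ from Eqs. (6)–(8). The parameter names follow the intuitive forms; the sign of the Gaussian exponent, and of A and B in the other models, is our reading of the text, since for vignetting the gray level must decrease away from the center.

```cpp
#include <cmath>

struct Center { double xc, yc; };   // center of vignetting (xc, yc)

// Paraboloid: Eq. (6) with n = 2, gamma = 1, b_i = 0; for a bright center,
// A and B (or the merged coefficients) carry a negative sign.
double paraboloid(double x, double y, Center c, double f0, double A, double B)
{
    double u = x - c.xc, v = y - c.yc;
    return f0 + u * u / A + v * v / B;
}

// Anisotropic Gaussian: Eq. (8), with the exponent written as e^(-...).
double anisotropicGaussian(double x, double y, Center c,
                           double f0, double A, double B, double C)
{
    double u = x - c.xc, v = y - c.yc;
    return f0 + C * std::exp(-(u * u / A + v * v / B));
}

// Elliptic polynomial of order 6: Eq. (7) in its intuitive form,
// z = f0 + c2*r^2 + c4*r^4 + c6*r^6 with r^2 = (x-xc)^2/A + (y-yc)^2/B.
double ellipticPolynomial(double x, double y, Center c, double f0,
                          double A, double B, double c2, double c4, double c6)
{
    double r2 = (x - c.xc) * (x - c.xc) / A + (y - c.yc) * (y - c.yc) / B;
    return f0 + c2 * r2 + c4 * r2 * r2 + c6 * r2 * r2 * r2;
}
```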

2.2. Nonlinear regression using downhill simplex search

The 3D models in Eqs. (6)–(8) and the cost function in Eq. (2) are nonlinear; thus, optimum methods are required to determine the coefficients. The downhill simplex search, also known as the Nelder-Mead method, is one of the most popular derivative-free methods for multidimensional nonlinear optimization [31]. Thus, the coefficients of the 3D models can be determined by minimizing the cost function between the vignetting and the 3D model. The unknown variables for the optimum method are the center, the offset, and the coefficients. They are defined as a generalized coordinate of dimension 3m + 4, as shown in Eq. (9):

$$q=\left(x_{c},\,y_{c},\,d,\,f_{0},\,a_{1},\,b_{1},\,c_{1},\,\ldots,\,a_{m},\,b_{m},\,c_{m}\right).\tag{9}$$

Table 1 lists the geometric equations for the 3D models. The elliptical polynomial and anisotropic Gaussian models are extensions of the radial models used in previous research. The intuitive forms are simpler, because b is zero and the constant terms c can be merged.

TABLE 1. 3D vignetting models and unknown coefficients for the simplex search

| Shape | Geometric Equation | Generalized Form | Intuitive Form |
|---|---|---|---|
| Elliptic Polynomials (O6) | $r^2=(x-x_c)^2/A+(y-y_c)^2/B$, $z=f_0+c_2 r^2+c_4 r^4+c_6 r^6$ | $(x_c, y_c, f_0, a_2, a_4, a_6, b_2, b_4, b_6, c_2, c_4, c_6)$ | $(x_c, y_c, f_0, A, B, c_2, c_4, c_6)$ |
| Anisotropic Gaussian | $z=f_0+C e^{-\left[(x-x_c)^2/A+(y-y_c)^2/B\right]}$ | $(x_c, y_c, f_0, d, a_1, c_1, a_2, c_2)$ | $(x_c, y_c, f_0, A, B, C)$ |
| Paraboloid | $z=f_0+(x-x_c)^2/A+(y-y_c)^2/B$ | $(x_c, y_c, f_0, d, a_1, c_1, a_2, c_2)$ | $(x_c, y_c, f_0, A, B)$ |
| Ellipsoid | $z=f_0+C\sqrt{1-(x-x_c)^2/A-(y-y_c)^2/B}$ | $(x_c, y_c, f_0, d, a_1, c_1, a_2, c_2)$ | $(x_c, y_c, f_0, A, B, C)$ |


Because the dimension of the generalized coordinate is m, m + 1 vertices are constructed for the simplex search. The initial conditions for the vertices are obtained using the parabolic array model [27]: the coefficients at the center of the parabolic array model are converted to the initial conditions according to the geometric relations. In the first step of the simplex search, the vertices are sorted according to the value of the cost function. The maximum, next (second-maximum), and minimum points are then determined. After the centroid of the vertices excluding the maximum is calculated, the vector from the centroid to the maximum is computed as follows:

$$q_{mid}=\frac{1}{m}\sum_{i=1,\,i\neq max}^{m+1}q_{i},\tag{10}$$

$$\Delta q_{mid}=q_{max}-q_{mid}.\tag{11}$$

To test for a lower value, a reflection point is then defined beyond the centroid along the middle vector. If the reflection point is lower than the maximum point, an expansion point is defined beyond the reflection point to repeat the test.

$$\left(q_{ref},\,q_{exp}\right)=\left(q_{max}-2\Delta q_{mid},\;q_{ref}-\Delta q_{mid}\right).\tag{12}$$

If either of the two points is lower than the maximum, the maximum vertex is replaced by the test point, and the terminal condition is checked. Otherwise, the next test point is designated inside the simplex: a contraction determines the test point between the maximum and the centroid.

$$q_{con}=\frac{q_{max}+q_{mid}}{2}.\tag{13}$$

If the contraction point is lower than the maximum, the maximum is replaced by the contraction point, and the terminal condition is checked. Otherwise, a shrinkage is applied to reduce the vertices toward the current minimum.

$$q_{i,new}=\frac{q_{min}+q_{i,old}}{2}.\tag{14}$$

After these tests, the following terminal condition is examined, and the sorting step is repeated if the error is unsatisfactory.

$$\varepsilon=\frac{E\left(q_{max}\right)-E\left(q_{min}\right)}{E\left(q_{max}\right)+E\left(q_{min}\right)}.\tag{15}$$

Figure 2 summarizes these steps of the simplex search in a flow chart.

Figure 2. Flow chart of the simplex search and transformation of a vertex, for a two-variable case.
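The loop in Fig. 2 can be condensed into a short routine. The following is a minimal C++ sketch of the downhill simplex search following Eqs. (10)–(15), with the cost function E(q) of Eq. (2) passed in as a callable; it is an illustration under our own naming, not the authors' implementation, and for brevity it re-evaluates the cost during sorting.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// q holds m + 1 vertices of dimension m; returns the best vertex found.
Vec simplexSearch(std::function<double(const Vec&)> E, std::vector<Vec> q, double tol = 1e-6)
{
    const std::size_t m = q[0].size();
    auto axpy = [m](const Vec& a, const Vec& b, double s) {   // a + s * b
        Vec r(m);
        for (std::size_t j = 0; j < m; ++j) r[j] = a[j] + s * b[j];
        return r;
    };
    for (;;) {
        // Sort vertices by cost: q.front() = minimum, q.back() = maximum.
        std::sort(q.begin(), q.end(),
                  [&](const Vec& a, const Vec& b) { return E(a) < E(b); });
        double Emax = E(q.back()), Emin = E(q.front());
        if ((Emax - Emin) / (Emax + Emin) < tol) return q.front();   // Eq. (15)

        Vec mid(m, 0.0);                         // centroid excluding max, Eq. (10)
        for (std::size_t i = 0; i + 1 < q.size(); ++i)
            for (std::size_t j = 0; j < m; ++j) mid[j] += q[i][j] / m;
        Vec dmid(m);                             // middle vector, Eq. (11)
        for (std::size_t j = 0; j < m; ++j) dmid[j] = q.back()[j] - mid[j];

        Vec ref = axpy(q.back(), dmid, -2.0);    // reflection, Eq. (12)
        Vec exp_ = axpy(ref, dmid, -1.0);        // expansion, Eq. (12)
        if (E(ref) < Emax) {
            q.back() = (E(exp_) < E(ref)) ? exp_ : ref;
        } else {
            Vec con = axpy(q.back(), dmid, -0.5);   // contraction, Eq. (13)
            if (E(con) < Emax) q.back() = con;
            else                                     // shrink toward minimum, Eq. (14)
                for (std::size_t i = 1; i < q.size(); ++i)
                    for (std::size_t j = 0; j < m; ++j)
                        q[i][j] = 0.5 * (q.front()[j] + q[i][j]);
        }
    }
}
```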

III. EXPERIMENTS

3.1. Experiments

Reference images are acquired using four inspection systems, as shown in Table 2. The common components of the inspection systems include an industrial camera, a zoom lens, and a light source, although their specifications differ. The inspection systems have different resolutions, acquisition speeds, and spectral ranges from UV to NIR. The reference images are obtained using a standard white board, after finding the focus and adjusting the light intensity. The light intensity is set at the maximum optical power without pixel saturation. The reference images are then transferred to a PC for image processing. The vignetting models mentioned above are applied to the reference images, and the coefficients are determined using a simplex search. The terminal condition of the simplex search is 10⁻⁶. The gain is calculated using Eq. (1) and is multiplied by the actual images to correct the vignetting. The performance indices are computed using reference and corrected images.

TABLE 2. Specifications of the inspection machines used in the test

| Camera | Spectral Range | Resolution | Bits | Lens | Light | Magnification |
|---|---|---|---|---|---|---|
| UV-Vis-NIR | 193–1100 nm | 1392 × 1040 | 14 | Motorized Zoom | Halogen | 0.58–12.5 |
| Color | 390–1100 nm | 2592 × 2048 | 10 | Motorized Zoom | White LED | 0.09–393.8 |
| Vis-NIR | 350–1100 nm | 2592 × 2048 | 10 | Manual Zoom | Halogen | 0.75–4.5 |
| High Speed | 390–1050 nm | 1280 × 1024 | 12 | Motorized Zoom | White LED | 0.58–12 |


The conventional and proposed methods are implemented in C++ with open-source libraries under Linux. The C++ code is organized into generalized subroutines that can handle various image formats. The subroutines are integrated into a library and reused as a software development kit (SDK). The PC used for the vignetting correction is a high-performance parallel system consisting of a six-core CPU, a GPU, and 64 GB of memory.

3.2. Performance Indices

Performance indices are defined to compare the results of the conventional and proposed methods. These performance indices are the coefficient of determination (R1²), the Pearson correlation coefficient (R2²), the mean absolute error (MAE), the root-mean-square error (RMSE), the signal-to-reconstruction error (SRE) [32], and the uniformity after the vignetting correction (UFM). Here R² indicates the correlation; a perfect correlation is 1.0, and no correlation is 0.0. In the ideal case the MAE, RMSE, and UFM approach 0.0, whereas the SRE approaches infinity. These indices are calculated from the reference and corrected images, as shown in Eqs. (16)–(20):

$$R_{1}^{2}=1-\frac{\sum_{y=1}^{h}\sum_{x=1}^{w}\left[V(x,y)-f(x,y)\right]^{2}}{\sum_{y=1}^{h}\sum_{x=1}^{w}\left[V(x,y)-\bar{V}\right]^{2}},\tag{16}$$

$$R_{2}^{2}=\frac{\mathrm{Cov}\left[V(x,y),\,f(x,y)\right]}{\sigma_{V(x,y)}\,\sigma_{f(x,y)}},\tag{17}$$

$$\mathrm{MAE}=\frac{1}{hw}\sum_{y=1}^{h}\sum_{x=1}^{w}\left|V(x,y)-f(x,y)\right|,\tag{18}$$

$$\mathrm{RMSE}=\sqrt{\frac{1}{hw}\sum_{y=1}^{h}\sum_{x=1}^{w}\left[V(x,y)-f(x,y)\right]^{2}},\tag{19}$$

$$\mathrm{SRE}=10\log_{10}\frac{\sum_{y=1}^{h}\sum_{x=1}^{w}V(x,y)^{2}}{\sum_{y=1}^{h}\sum_{x=1}^{w}\left[V(x,y)-f(x,y)\right]^{2}}.\tag{20}$$
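As a sketch, the error indices of Eqs. (18)–(20) can be computed in a single pass over the reference image V and the fitted model f, both flattened to equal-length arrays; the struct layout and names below are ours.

```cpp
#include <cmath>
#include <vector>

struct ErrorIndices { double mae, rmse, sre_dB; };

ErrorIndices evaluate(const std::vector<double>& V, const std::vector<double>& f)
{
    double sumAbs = 0.0, sumSq = 0.0, sumRef = 0.0;
    for (std::size_t i = 0; i < V.size(); ++i) {
        double e = V[i] - f[i];
        sumAbs += std::fabs(e);       // for MAE, Eq. (18)
        sumSq  += e * e;              // for RMSE and SRE, Eqs. (19) and (20)
        sumRef += V[i] * V[i];
    }
    double n = static_cast<double>(V.size());   // n = h * w pixels
    return { sumAbs / n,
             std::sqrt(sumSq / n),
             10.0 * std::log10(sumRef / sumSq) };
}
```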

IV. RESULTS and DISCUSSION

The experimental results using the four inspection systems are summarized in Tables 3–6. In the case of the UV-Vis-NIR inspection system, shown in Table 3, R1² and R2² are approximately 1.0, which indicates high correlation. The MAE and RMSE range from 0.98% to 1.80%, and the uniformity after correction of the vignetting is 1.05%–1.62%. The SRE is a sensitive indicator with higher discrimination, and here the parabolic array model scores the highest. Although the parabolic array model is the most accurate for the UV-Vis-NIR inspection system, its number of coefficients is 7296. The other geometric models show similar accuracy with fewer than ten coefficients.

TABLE 3. Performance indices of the UV-Vis-NIR inspection system applied in the experiment

| Model | No. of Coeff. | R1² | R2² | MAE (%) | RMSE (%) | SRE (dB) | UFM (%) |
|---|---|---|---|---|---|---|---|
| Ideal | – | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
| Parabolic Array | 7296 | 0.9873 | 0.9873 | 0.9871 | 1.2384 | 38.1948 | 1.0523 |
| Elliptic Polynomials | 8 | 0.9764 | 0.9766 | 1.3399 | 1.6856 | 35.5171 | 1.5031 |
| Anisotropic Gaussian | 6 | 0.9760 | 0.9763 | 1.3387 | 1.7013 | 35.4366 | 1.5268 |
| Paraboloid | 5 | 0.9742 | 0.9744 | 1.3970 | 1.7619 | 35.1328 | 1.5757 |
| Ellipsoid | 6 | 0.9731 | 0.9733 | 1.4242 | 1.8005 | 34.9441 | 1.6190 |


TABLE 4. Performance indices of the color inspection system used in the experiment

| Model | No. of Coeff. | R1² | R2² | MAE (%) | RMSE (%) | SRE (dB) | UFM (%) |
|---|---|---|---|---|---|---|---|
| Ideal | – | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
| Parabolic Array | 13920 | 0.9953 | 0.9954 | 1.5822 | 1.9743 | 34.4414 | 0.9091 |
| Elliptic Polynomials | 8 | 0.9963 | 0.9964 | 1.3128 | 1.7479 | 35.4995 | 0.9241 |
| Anisotropic Gaussian | 6 | 0.9968 | 0.9969 | 1.2075 | 1.6258 | 36.1283 | 0.8515 |
| Paraboloid | 5 | 0.9744 | 0.9756 | 3.4691 | 4.6282 | 27.0412 | 4.3283 |
| Ellipsoid | 6 | 0.9715 | 0.9727 | 3.6695 | 4.8854 | 26.5716 | 4.5206 |


TABLE 5. Performance indices of the Vis-NIR inspection system utilized in the experiment

| Model | No. of Coeff. | R1² | R2² | MAE (%) | RMSE (%) | SRE (dB) | UFM (%) |
|---|---|---|---|---|---|---|---|
| Ideal | – | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
| Parabolic Array | 13920 | 0.9807 | 0.9808 | 3.0061 | 3.8423 | 28.6294 | 1.2655 |
| Elliptic Polynomials | 8 | 0.9788 | 0.9793 | 2.9340 | 4.0338 | 28.2069 | 1.4277 |
| Anisotropic Gaussian | 6 | 0.9819 | 0.9823 | 2.7410 | 3.7223 | 28.9050 | 1.2428 |
| Paraboloid | 5 | 0.9193 | 0.9264 | 5.7328 | 7.8689 | 22.4029 | 7.2386 |
| Ellipsoid | 6 | 0.9114 | 0.9193 | 5.9974 | 8.2452 | 21.9971 | 7.5272 |


TABLE 6. Performance indices of the high-speed inspection system applied in the experiment

| Model | No. of Coeff. | R1² | R2² | MAE (%) | RMSE (%) | SRE (dB) | UFM (%) |
|---|---|---|---|---|---|---|---|
| Ideal | – | 1.0000 | 1.0000 | 0.0000 | 0.0000 | ∞ | 0.0000 |
| Parabolic Array | 6912 | 0.9705 | 0.9705 | 1.5240 | 1.9227 | 34.3759 | 0.6445 |
| Elliptic Polynomials | 8 | 0.9602 | 0.9603 | 1.7858 | 2.2310 | 33.0840 | 0.7651 |
| Anisotropic Gaussian | 6 | 0.9604 | 0.9605 | 1.7815 | 2.2257 | 33.1049 | 0.7638 |
| Paraboloid | 5 | 0.9571 | 0.9572 | 1.8478 | 2.3172 | 32.7549 | 0.7979 |
| Ellipsoid | 6 | 0.9550 | 0.9551 | 1.8896 | 2.3726 | 32.5496 | 0.8217 |


Table 4 shows the performance of the color inspection system, for which the anisotropic Gaussian model achieves the highest accuracy, followed by the elliptic polynomial model. R1² and R2² show high correlation, whereas the paraboloid and ellipsoid models show large errors. The uniformities of the parabolic array, elliptic polynomial, and anisotropic Gaussian models are below 1.0%. Whereas the number of coefficients of the parabolic array model reaches 13920, that of the anisotropic Gaussian model is 6. Considering the number of parameters, the anisotropic Gaussian model is simple and competitive in this case.

The performance for the Vis-NIR inspection system is shown in Table 5. The overall accuracy is lower than for the other inspection systems, because the MAE, RMSE, and UFM are higher. However, the uniformity after correction of the vignetting is still approximately 1.2%–1.4%. In this case, the anisotropic Gaussian model is the most accurate.

Table 6 shows the results for the high-speed inspection system, where the parabolic array model is the most accurate, followed by the anisotropic Gaussian approach. However, the other models show similar accuracy with far fewer coefficients: the numbers of coefficients are 6912 and 6 for the parabolic array and anisotropic Gaussian models, respectively. The UFM after correction is below 1.0% for all models.

Table 7 shows the results of vignetting correction using the parabolic array and anisotropic Gaussian models in the experiments. Some of the reference images are dark, owing to the reflectance of the standard white board. For inspection samples such as semiconductors and flat-panel displays, the brightness is sufficient, owing to their high reflectivity. The images show little difference after vignetting correction when viewed with the naked eye.

TABLE 7. Reference and corrected images in the experiment: for each camera (UV-Vis-NIR, color, Vis-NIR, and high-speed), the reference image for vignetting, the image corrected using the parabolic array model, and the image corrected using the anisotropic Gaussian model.


Figure 3 summarizes the processing times required to determine the coefficients and to correct the vignetting in the experiment. The conventional parabolic array model is advantageous in terms of processing time, because iteration is unnecessary. The simplex search of the proposed models repeatedly evaluates the pixel-based cost function until the coefficients are sufficiently determined. After the coefficients are determined, the difference in the processing time of the vignetting correction itself decreases to under 50 ms. Among the proposed models, the paraboloid model is the fastest, because it is similar to the conventional model; furthermore, its initial condition is obtained using the conventional model. The processing times of the proposed models are longer than that of the conventional model, but are tolerable in practice.

Figure 3. Comparison of processing time required to (a) determine coefficients and (b) correct vignetting.

The results generally show that the conventional parabolic array model has advantages in terms of accuracy and processing time in the experimental cases. The anisotropic Gaussian model shows the highest accuracy in two cases, and performance similar to that of the conventional model elsewhere. This result is unexpected, because the Gaussian model has received little attention and has only been discussed in a few previous studies. Although the isotropic Gaussian model in previous research used an axisymmetric radial exponent [30], the determination of its coefficients was not presented. Theoretical analyses of the parabolic array and polynomial models have been reported in previous studies; however, the anisotropic Gaussian model is considered here only experimentally, and a theoretical approach is required in the future. Considering the various optical combinations of cameras, lenses, and illumination in microscopy, the models for vignetting are not limited to the conventional ones. The best-fitting vignetting model for a microscope system is determined by the inspection conditions.

A simplex search makes it possible to determine the coefficients of the generalized equations, as well as those of the other 3D models described in this study. The simplex search thus opens the possibility of constructing various geometric models and formulations for the correction of vignetting. The experimental models achieve similar accuracy using 1/1000–1/2000 the number of coefficients of the parabolic array model. The 3D models are intuitive and simple, compared to conventional models. They also provide geometric properties of the vignetting, such as the vignetting center and aspect ratio, which can be used to align the optical and illumination axes. The proposed method is also applicable to aspherical surfaces and inhomogeneous polynomials. In the future, we will also provide a parallel-processing architecture for the correction of vignetting in real-time imaging, using a graphics processing unit and a multicore processor.

V. CONCLUSION

A geometric modeling method for the correction of vignetting using 3D equations and a simplex search was proposed in this study. The 3D models were implemented as generalized nonlinear polynomials, considering conventional models. The coefficients of the 3D models were determined using a simplex search, and performance indices were defined for the experiment. Reference images were acquired using four inspection systems, and the performance indices were obtained for the proposed models. The C++ code for the experiment was implemented using open-source libraries, and could handle various test conditions. Although the parabolic array model generally showed good performance during the experiments, the results of this study found that the anisotropic Gaussian model was unexpectedly accurate in certain cases. The proposed 3D models showed accuracy similar to that of the parabolic array model, while using 1/1000–1/2000 as many coefficients. These 3D vignetting models are intuitive and provide the overall characteristics of the vignetting. The proposed method provides solutions for the alignment of optics and illumination, as well as for the construction of various vignetting models.

DISCLOSURES

The authors declare no conflicts of interest.

DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

FUNDING

This research was supported by the Year 2022 Culture Technology R&D Program developed by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) (Development of the system for digital data acquisition of modern and contemporary fine arts and supporting science-based art credibility analysis).

REFERENCES

1. E. S. Statnik, A. I. Salimon, and A. M. Korsunsky, "On the application of digital optical microscopy in the study of materials structure and deformation," Mater. Today 33, 1917-1923 (2020).
2. S. Pertuz, D. Puig, and M. A. Garcia, "Analysis of focus measure operators for shape-from-focus," Pattern Recognit. 46, 1415-1432 (2013).
3. G. Wang, C. Lopez-Molina, and B. De Baets, "Automated blob detection using iterative Laplacian of Gaussian filtering and unilateral second-order Gaussian kernels," Digit. Signal Process. 96, 102592 (2020).
4. J. Quintana, R. Garcia, and L. Neumann, "A novel method for color correction in epiluminescence microscopy," Comput. Med. Imaging Graph. 35, 646-652 (2011).
5. J. A. M. Rodríguez, "Microscope self-calibration based on micro laser line imaging and soft computing algorithms," Opt. Lasers Eng. 105, 75-85 (2018).
6. P. Nagy, G. Vámosi, A. Bodnár, S. J. Lockett, and J. Szöllősi, "Intensity-based energy transfer measurements in digital imaging microscopy," Eur. Biophys. J. 27, 377-389 (1998).
7. S. Van der Jeught, J. A. N. Buytaert, and J. J. J. Dirckx, "Real-time geometric lens distortion correction using a graphics processing unit," Opt. Eng. 51, 027002 (2012).
8. T. Masunari and M. Hisaka, "Optical microscopy using annular full-color LED for quantitative phase and spectroscopic imaging of biological tissues," Proc. SPIE 11140, 1114001-197 (2019).
9. T. Mitsunaga and S. K. Nayar, "Radiometric self calibration," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Fort Collins, CO, USA, June 1999), pp. 374-380.
10. Y. Bai, M. Chin, and T. Tajbakhsh, "Lens shading modulation," US Patent US10171786B2 (2019).
11. D. Reddy, J. Bai, and R. Ramamoorthi, "External mask based depth and light field camera," in Proc. IEEE International Conference on Computer Vision Workshops - ICCV (Sydney, Australia, Dec. 2-8, 2013), pp. 37-44.
12. D. Karatzas, M. Rusinol, C. Antens, and M. Ferrer, "Segmentation robust to the vignetting effect for machine vision systems," in Proc. International Conference on Pattern Recognition (Tampa, FL, USA, Dec. 8-11, 2008), pp. 1-4.
13. L. Mignard-Debise and I. Ihrke, "A vignetting model for light field cameras with an application to light field microscopy," IEEE Trans. Comput. Imaging 5, 585-595 (2019).
14. H. T. Kim, E. J. Ha, K. C. Jin, and B. W. Kim, "Optimal color lighting for scanning images of flat panel display using simplex search," J. Imaging 4, 133 (2018).
15. S. S. Lee, S. Pelet, M. Peter, and R. Dechant, "A rapid and effective vignetting correction for quantitative microscopy," RSC Adv. 4, 52727-52733 (2014).
16. H. Park, H. J. Choi, K. Rhew, H. Lim, H. Lee, I. Jang, and C. H. Min, "Development of accurate dose evaluation technique of X-ray inspection for quality assurance of semiconductor with Monte Carlo simulation," Appl. Radiat. Isot. 154, 108851 (2019).
17. J. Kwak, K. B. Lee, J. Jang, K. S. Chang, and C. O. Kim, "Automatic inspection of salt-and-pepper defects in OLED panels using image processing and control chart techniques," J. Intell. Manuf. 30, 1047-1055 (2019).
18. H. T. Kim, Y. J. Moon, H. Kang, and J. Y. Hwang, "On-machine measurement for surface flatness of transparent and thin film in laser ablation process," Coatings 10, 885 (2020).
19. S. B. Kang and R. S. Weiss, "Can we calibrate a camera using an image of a flat, textureless Lambertian surface?," in Computer Vision - ECCV 2000 (Lecture Notes in Computer Science) (Springer, Germany, 2000), pp. 640-653.
20. D. B. Goldman, "Vignetting and exposure calibration and compensation," IEEE Trans. Pattern Anal. Machine Intell. 32, 2276-2288 (2010).
21. F. Piccinini and A. Bevilacqua, "Colour vignetting correction for microscopy image mosaics used for quantitative analyses," BioMed Res. Int. 2018, 7082154 (2018).
22. Y.-O. Tak, A. Park, J. Choi, J. Eom, H.-S. Kwon, and J. B. Eom, "Simple shading correction method for brightfield whole slide imaging," Sensors 20, 3084 (2020).
23. F. Piccinini, E. Lucarelli, A. Gherardi, and A. Bevilacqua, "Multi-image based method to correct vignetting effect in light microscopy images," J. Microsc. 248, 6-22 (2012).
24. J. Manfroid, "On CCD standard stars and flat-field calibration," Astron. Astrophys. Suppl. Ser. 118, 391-395 (1996).
25. S. J. Kim and M. Pollefeys, "Robust radiometric calibration and vignetting correction," IEEE Trans. Pattern Anal. Machine Intell. 30, 562-576 (2008).
26. J. Stumpfel, A. Jones, A. Wenger, C. Tchou, T. Hawkins, and P. Debevec, "Direct HDR capture of the sun and sky," in Proc. Special Interest Group on Computer Graphics and Interactive Techniques Conference - SIGGRAPH '06 (Boston, MA, USA, Jul. 30-Aug. 3, 2006), pp. 5-es.
27. A. Kordecki, H. Palus, and A. Bal, "Practical vignetting correction method for digital camera with measurement of surface luminance distribution," Signal Image Video Process. 10, 1417-1424 (2016).
28. J. Wang, X. Wang, P. Zhang, S. Xie, S. Fu, Y. Li, and H. Han, "Correction of uneven illumination in color microscopic image based on fully convolutional network," Opt. Express 29, 28503-28520 (2021).
29. Y. Zheng, J. Yu, S. B. Kang, S. Lin, and C. Kambhamettu, "Single-image vignetting correction using radial gradient symmetry," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, AK, USA, Jun. 23-28, 2008), pp. 1-8.
30. F. J. W.-M. Leong, M. Brady, and J. O'D. McGee, "Correction of uneven illumination (vignetting) in digital microscopy images," J. Clin. Pathol. 56, 619-621 (2003).
31. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge University Press, UK, 1992).
32. S. Zhang, W. Hua, J. Liu, G. Li, and Q. Wang, "Multiview-based spectral weighted and low-rank for row-sparsity hyperspectral unmixing," Curr. Opt. Photonics 5, 431-443 (2021).

Article

Article

Curr. Opt. Photon. 2022; 6(2): 161-170

Published online April 25, 2022 https://doi.org/10.3807/COPP.2022.6.2.161

Copyright © Optical Society of Korea.

Vignetting Dimensional Geometric Models and a Downhill Simplex Search

Hyung Tae Kim1 , Duk Yeon Lee2, Dongwoon Choi2, Jaehyeon Kang2, Dong-Wook Lee2

1Digital Transformation R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea
2Robotics R&D Department, Korea Institute of Industrial Technology, Ansan 15588, Korea

Correspondence to:*htkim@kitech.re.kr, ORCID 0000-0001-5711-551X

Received: October 15, 2021; Revised: December 23, 2021; Accepted: December 28, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Three-dimensional (3D) geometric models are introduced to correct vignetting, and a downhill simplex search is applied to determine the coefficients of a 3D model used in digital microscopy. Vignetting is nonuniform illuminance with a geometric regularity on a two-dimensional (2D) image plane, which allows the illuminance distribution to be estimated using 3D models. The 3D models are defined using generalized polynomials and arbitrary coefficients. Because the 3D models are nonlinear, their coefficients are determined using a simplex search. The cost function of the simplex search is defined to minimize the error between the 3D model and the reference image of a standard white board. The conventional and proposed methods for correcting the vignetting are used in experiments on four inspection systems based on machine vision and microscopy. The methods are investigated using various performance indices, including the coefficient of determination, the mean absolute error, and the uniformity after correction. The proposed method is intuitive and shows performance similar to the conventional approach, using a smaller number of coefficients.

Keywords: Machine vision, Mathematical model, Microscopy camera calibration, Simplex search, Vignetting correction

I. INTRODUCTION

Although digital microscopy originated from bioengineering through technical advances, it has become industrialized and applied in scientific metrology [1]. A digital microscopy system is constructed by attaching a digital camera and an image-processing unit to a conventional optical microscope. This simple combination has been used in many technical applications, such as autofocusing [2], filtering [3], white balancing [4], calibration [5], detection [6], stitching, and multifocus fusion. Because of the structural problems that occur in optical microscopes, distortion [7], aberrations [8], and vignetting [2] are inevitable when acquiring an image using digital microscopy. Vignetting is a nonuniform distribution of light intensity in an image, owing to different optical paths in a microscope. The center of the image is usually bright, but the brightness decreases toward the periphery [9]. Vignetting commonly occurs in current-imaging devices, such as smart phones [10], digital single-lens reflexes (DSLRs) [11], industrial cameras [12], microscopes [13], and line-scanning systems [14]. Because the correction of such vignetting is essentially hidden within these digital imaging devices, users rarely witness it. However, vignetting is frequently seen in microscopy when applied in bioimaging [15], semiconductors [16], flat-panel displays [17], and printed electronics [18]. Vignetting appears sensitively in the output of image fusion, such as in stitching and panoramic images. Image fusion has become popular in digital imaging devices, as well as digital microscopy, and the correction of vignetting is significant for synthesizing a natural and continuous image.

Vignetting is caused by the partial obstruction of the light path from an object plane to an image plane [19]. The sources of vignetting can be classified as the geometric optics of the image plane, the angular sensitivity of the digital sensors, and the light path and path blocking within and in front of a lens, respectively [20]. Vignetting in digital microscopy is corrected using hardware and software. Hardware-based correction is achieved through geometric optics, and thus is usually expensive. The computational cost required to analyze the optics is also high, and the light from commercial illumination is scattered; it is therefore difficult to design optics to correct vignetting. Furthermore, it is impossible to create optics for various commercial lenses and illumination conditions; thus, software correction is preferred in practice.

Software correction is conventionally conducted by applying gain to an actual image after determining the gain from a reference image. The reference image is acquired using a standard target, and the gain of each pixel is then determined by comparing a target value to the gray levels of pixels in the reference image [21]. Flat-field correction (FFC) is a popular method applied in microscopy, machine vision, and telescopy. Although illumination is continuous in a reference image, pixel-based FFC can be discontinuous, owing to camera noise and local damage occurring in a standard target [22]. Thus, regression approaches such as polynomials of the pixel gain [23, 24], an off-axis illumination model [13, 19, 25], and radial polynomials [26] have been discussed for continuity when applying FFC. Kordecki et al. [27] proposed a fast and accurate correction method using a parabolic array, achieved by slicing the reference image in the horizontal and vertical directions. The gain is determined by averaging second-order polynomials in both directions; however the number of coefficients increases significantly to 3 × (width + height). Recent studies have applied machine learning to the correction of vignetting, and convolutional neural networks have been experimented on using uneven illumination for medical imaging [28].

The illuminance distribution of the reference image forms a geometric surface; thus, high-order polynomials and regression have been applied to model the vignetting. Mitsunaga and Nayar [9] proposed a general formulation of polynomials whose coefficients were determined using the least mean square. Vignetting forms symmetric distributions in many cases; thus, simplified low-order polynomials are advantageous for accelerating computation [12, 15, 28, 29]. However, these polynomial models have been derived from 2D models; a 3D model dealing with vignetting has yet to be investigated. In previous studies, the isotropic Gaussian model was briefly discussed for the potential to correct vignetting [15, 30]. However, a determination of the coefficients of the Gaussian function was not presented, because of nonlinear regression.

Considering the illumination distribution of the reference image, many 3D surfaces such as spheres, ellipsoids, paraboloids, and Gaussian surfaces have been applicable for modeling vignetting. These 3D models have only a few coefficients, providing a simple and intuitive formulation. The nonlinear regression used to determine the coefficients can be achieved through multi-dimensional optimum methods. Thus, this study proposes a vignetting-correction method using 3D models and a downhill simplex search. The performances of the conventional and proposed methods are investigated using four inspection systems, ranging from the ultraviolet (UV) to the near infrared (NIR).

The remainder of this paper is organized as follows. Section 2 describes the generalized 3D model and nonlinear regression using a downhill simplex search. Section 3 presents the experimental conditions and performance indices. The experimental results and a discussion are presented in section 4. Finally, concluding remarks are presented in section 5.

II. METHODS

2.1. 3D vignetting model

A FFC adjusts the gray level of each pixel using a reference image of the vignetting. In a simple case, the gain of an individual pixel is obtained from the reference image and the target value. The gray level of an actual image is then corrected using the gain. Equation (1) shows the normalized relation between the corrected image O(x, y), the source image I(x, y), and the reference image V(x, y) [21, 22].

Ox,y=I(x,y)Bvx,yBV¯=Gx,yIx,y.

Here B is the background noise, V¯ is the average vignetting in the reference image V(x, y), and G(x, y) is the pixel gain. The reference image is acquired from a standard white target; however, these direct pixel-to-pixel corrections are undesirable, owing to camera noise and stains of the standard white target. Thus, a regression of the vignetting is advantageous for guaranteeing continuity and estimating the overall tendency. Vignetting is observed as the distribution of light intensity on a 2D image plane; thus, a vignetting model can be defined using a 3D shape, as shown in Fig. 1. Here (x, y) indicates the image coordinates and z is the gray level, which follows a 3D model. The gain of each pixel is calculated using the 3D model and the target level, as shown in Eq. (1). The gray level of the corrected image becomes uniform according to the target level. A 3D model of the vignetting, f (x, y), is defined as an arbitrary polynomial, the coefficients of which can be determined using the optimization based on nonlinear regression.

Figure 1. Concept of vignetting correction, using a 3D model from a reference image to the corrected image.

A cost function for the optimum method is defined to minimize the error between the reference image and 3D model, as follows:

E=minx,yx y Vx,yfx,y2.

The light distribution under vignetting is typically axisymmetric; thus, radial polynomials have been proposed in previous studies [19, 20], the generalized form of which is as follows:

fr=i=0nairi x,y

The cross section of the vignetting is approximated as a parabola; thus, the vignetting model can be arranged using parabolas, after slicing the reference image in the xy direction [27]. The vignetting model is organized as an (x, y) parabola array, and the coefficients of each parabola can be determined using the least-mean-squares method.

fx,y=12 i=0 2axixi+ayiyi.

The parabolic array model is quite accurate and can easily determine the coefficients. However, the number of coefficients becomes 3 × (width + height), which reaches an extremely high value in the case of a megapixel camera. Considering the light distribution in the reference image, the shape is approximated as a 3D shape such as a paraboloid, ellipsoid, or Gaussian surface [15, 30]. The normalized coordinates of the 3D vignetting model can be written as generalized polynomials, as follows:

u,v=xxe2,yych,

fx,y=f0+ d+i=1n ai ui+bi uiv ni+ci viY,

where (xc, yc) is the center of vignetting, ad are coefficients, γ is an exponent and f0 is the offset. In the case of a paraboloid model, for example, n = 2, γ = 1, and bi = 0. An ellipsoid model can be defined as n = 2, γ = 1 / 2, and bi = 0. In addition, the formulation of the radial and Gaussian model is generalized to elliptical and anisotropic Gaussian models.

fx,y=f0+ i=1ncia iu2+b iv2i,

fx,y=f0+ed+i=1n aiui+biuiv ni+civi.

These generalized models are formulated as elliptical and anisotropic polynomials and are extended from previous studies. These elliptical and anisotropic formulations present geometric properties such as the aspect ratio and center of vignetting. As an existing method, a parabolic array model is applied, and the elliptical and anisotropic Gaussian models are extended from conventional models. The paraboloid and ellipsoid models are additionally tested in this study.

2.2. Nonlinear regression using downhill simplex search

The 3D models in Eqs. (6)–(8) and the cost function in Eq. (2) are nonlinear; thus, optimum methods are required to determine the coefficients. As a downhill simplex search, the well-known Nelder-Mead approach is one of the most popular methods for nonderivative, multidimensional nonlinear optimization [31]. Thus, the coefficients of the 3D models can be determined by minimizing the cost function between the vignetting and 3D model. The unknown variables for the optimum methods are the center, offset, and coefficients. The unknown variables are defined as generalized coordinates with dimensions of 3 × m + 4, as shown in Eq. (9):

q=xc,ycd,fo,a1,b1,c1,,am,bm,cm.

Table 1 lists the geometric equations for the 3D model. The elliptical polynomial and anisotropic Gaussian models are extensions of the radial model used in previous research. The intuitive forms are simple, because b is zero and constant terms c can be merged.

TABLE 1. 3D vignetting models and unknown coefficients for the simplex search.

ShapesGeometric EquationsGeneralized FormIntuitive Form
Elliptic Polynomials (O6)r=xxc2A+yyc2Bz=f0+c2r2+c4r4+c6r6(xc,yc,f0,a2,a4,a6,b2,b4,b6,c2,c4,c6)xc,yc,f0,A,B,c2,c4,c6
Anisotropic Gaussianz=f0+Ce xxc2A+ yyc2Bxc,yc,f0,d,a1,c1,a2,c2xc,yc,f0,A,B,C
Paraboloidz=f0+ xxc2A+ yyc2Bxc,yc,f0,d,a1,c1,a2,c2xc,yc,f0,A,B
Ellipsoidz=f0+C+ xxc2A+ yyc2Bxc,yc,f0,d,a1,c1,a2,c2xc,yc,f0,A,B,C


Because the dimension of the generalized coordinate is m, m + 1 vertices are constructed for the simplex search. The initial conditions for the vertices are obtained using the parabolic array model [27]. The coefficients at the center of the parabolic array model are converted to the initial conditions according to the geometric relations. In the first step of the simplex search, the vertices are sorted according to the value of the cost function. The maximum, next (second maximum), and minimum points are then determined. After the middle point excluding the maximum is calculated, the middle vector between the two points is calculated as follows:

qmid=1m i=1,imax m+1qi,

qmid=qmaxqmid.

To test for a lower value, a reflection point is then defined from among the vertices along the middle vector. If the reflection point is lower than the maximum point, an expansion point is defined from the reflection point to repeat the test.

qref,qexp=qmax qmid,qref qmid.

If either of the two points is lower than the maximum, the maximum vertex is replaced by the test point, and the terminal condition is checked. Otherwise, the next test point is designated inside the vertices. This contraction is applied to determine the test point between the maximum and middle points.

qcon=qmax+qmid2.

If the contraction point is lower than the maximum, the maximum is replaced by the contraction point, and the terminal condition is applied. Otherwise, a shrinkage is applied to reduce the vertices toward the current minimum.

qi,new=qmin+qi,old2.

After these tests, the following terminal condition is examined, and the sorting step is repeated if the error is unsatisfactory.

ε=E q maxE q minE q max+E q min

Figure 2 shows these procedures of the simplex search summarized in a flow chart.

Figure 2. Flow chart of the simplex search and transformation of a vertex, for a two-variable case.

III. EXPERIMENTS

3.1. Experiments

Reference images are acquired using four inspection systems, as shown in Table 2. The common components of the inspection systems include an industrial camera, a zoom lens, and a light source, although their specifications differ. The inspection systems have different resolutions, acquisition speeds, and spectral ranges from UV to NIR. The reference images are obtained using a standard white board, after finding the focus and adjusting the light intensity. The light intensity is set at the maximum optical power without pixel saturation. The reference images are then transferred to a PC for image processing. The vignetting models mentioned above are applied to the reference images, and the coefficients are determined using a simplex search. The terminal condition of the simplex search is 10−6. The gain is calculated using Eq. (1) and is multiplied by the actual images to correct the vignetting. The performance indices are computed using reference and corrected images.

TABLE 2. Specifications of the inspection machines used in the test.

CameraSpectral RangeResolutionBitsLensLightMagnificationPhoto
UV-Vis-NIR193–1100 nm1392 × 104014Motorized ZoomHalogen0.58–12.5
Color390–1100 nm2592 × 204810Motorized ZoomWhite LED0.09–393.8
Vis-NIR350–1100 nm2592 × 204810Manual ZoomHalogen0.75–4.5
High Speed390–1050 nm1280 × 102412Motorized ZoomWhite LED0.58–12


The conventional and proposed methods are implemented using C++ codes with open sources under Linux. The C++ codes are generated into generalized subroutines that can handle various image formats. The subroutines are integrated into the library and reused as a software development kit (SDK). The PC for the vignetting correction is a high-performance parallel system consisting of a hexacore, a GPU, and 64 GB of memory.

3.2. Performance Indices

Performance indices are defined to compare the results of the conventional and proposed methods. These performance indices are the coefficient of determination (R12), Pearson correlation coefficient (R22), mean absolute error (MAE), root-mean-square error (RMSE), signal-to-reconstruction error (SRE) [32], and uniformity after the vignetting correction (UFM). Here R2 indicates the correlation; a perfect correlation is 1.0, and no correlation is 0.0. MAE, whereas for an ideal case the RMSE and UFM approach 0.0, and the SRE reaches infinity. These indices are calculated from the reference and corrected images, as shown in Eqs. (16)–(20).

R12=1 y=1h x=1w V x,yf x,y2 y=1h x=1w V x,yV¯2,

R22=CovV x,y,f x,yσvx,yσf(x,y),

MAE=1hw y=1hx=1wVx,yfx,y,

RMSE=1hw y=1hx=1wV x,yf x,y2,

SRE=10log10y=1h x=1w Vx,y2y=1h x=1w Vx,yfx,y2.

IV. RESULTS and DISCUSSION

The experimental results using the four inspection systems are summarized in Tables 36. In the case of the UV-Vis-NIR inspection system, as shown in Table 3, R1 and R2 are approximately 1.0, which indicates high correlation. The MAE and RMSE range from 0.98% to 1.80%, and the uniformity after correction of vignetting is 1.05%–1.62%. The SRE is a sensitive indicator for higher discrimination, and here the parabolic array model scores the highest. Although the parabolic array model is the most accurate for the UV-Vis-NIR inspection system, the number of coefficients is 7296. The other geometric models show similar accuracy, at fewer than ten coefficients.

TABLE 3. Performance indices of the UV-Vis-NIR inspection system applied in the experiment.

ModelNo. of Coeff.R12R22MAE (%)RMSE (%)SREUFM (%)
Ideal-1.00001.00000.00000.00000.0000
Parabolic Array72960.98730.98730.98711.238438.19481.0523
Elliptic Polynomials80.97640.97661.33991.685635.51711.5031
Anisotropic Gaussian60.97600.97631.33871.701335.43661.5268
Paraboloid50.97420.97441.39701.761935.13281.5757
Ellipsoid60.97310.97331.42421.800534.94411.6190


TABLE 4. Performance indices of the color inspection system used in the experiment.

ModelNo. of Coeff.R12R22MAE (%)RMSE (%)SREUFM (%)
Ideal-1.00001.00000.00000.00000.0000
Parabolic Array139200.99530.99541.58221.974334.44140.9091
Elliptic Polynomials80.99630.99641.31281.747935.49950.9241
Anisotropic Gaussian60.99680.99691.20751.625836.12830.8515
Paraboloid50.97440.97563.46914.628227.04124.3283
Ellipsoid60.97150.97273.66954.885426.57164.5206


TABLE 5. Performance indices of the Vis-NIR inspection system utilized in the experiment.

ModelNo. of Coeff.R12R22MAE (%)RMSE (%)SREUFM (%)
Ideal-1.00001.00000.00000.00000.0000
Parabolic Array139200.98070.98083.00613.842328.62941.2655
Elliptic Polynomials80.97880.97932.93404.033828.20691.4277
Anisotropic Gaussian60.98190.98232.74103.722328.90501.2428
Paraboloid50.91930.92645.73287.868922.40297.2386
Ellipsoid60.91140.91935.99748.245221.99717.5272


TABLE 6. Performance indices of a high-speed inspection system applied in the experiment.

ModelNo. of Coeff.R12R22MAE (%)RMSE (%)SREUFM (%)
Ideal-1.00001.00000.00000.00000.0000
Parabolic Array69120.97050.97051.52401.922734.37590.6445
Elliptic Polynomials80.96020.96031.78582.231033.08400.7651
Anisotropic Gaussian60.96040.96051.78152.225733.10490.7638
Paraboloid50.95710.95721.84782.317232.75490.7979
Ellipsoid60.95500.95511.88962.372632.54960.8217


Table 4 shows the performance of the color inspection system, for which the anisotropic Gaussian model achieves the highest accuracy, followed by the elliptic polynomial. R1 and R2 show high correlation, whereas the paraboloid and ellipsoid show large errors. The uniformity of the parabolic array, elliptic polynomial, and anisotropic Gaussian polynomial models are below 1.0%. Whereas the number of coefficients of the parabolic array model reaches 13920, that of the anisotropic Gaussian was 6. Considering the number of parameters, the anisotropic Gaussian model is simple and competitive in this case.

The performance of the Vis-NIR inspection system is shown in Table 5. The overall accuracy is lower than that of the other inspection systems, because the MAE, RMSE, and UFM were higher. However, uniformity after correction of vignetting is approximately 1.2%–1.4%. In this case, the anisotropic Gaussian model is the most accurate.

Table 6 shows the results of the high-speed inspection system, where the parabolic array model is the most accurate, followed by the anisotropic Gaussian approach. However, the other models show similar accuracy, using a lower number of coefficients. The number of coefficients is 6912 and 6 for the parabolic array and anisotropic Gaussian models respectively. The UFM after correction is below 1.0% for all models.

Table 7 shows the results of vignetting correction using the parabolic array and the anisotropic Gaussian models in the experiments. Some of the reference images are dark, due to the reflectance of the standard white board. For inspection samples such as semiconductor and flat-panel displays, the brightness is sufficient, owing to high reflectivity. The images show little difference after vignetting correction, when viewed with the naked eye.

TABLE 7. Reference and corrected images in the experiment.

Specific Features of the CameraReference Image for VignettingImage Correction Using a Parabolic Array ModelImage Correction Using an Anisotropic Gaussian Model
UV-Vis-NIR
Color
Vis-NIR
High-speed


Figure 3 shows a summary of the processing time required to determine coefficients and vignetting correction in the experiment. The conventional parabolic array model is advantageous in terms of processing time, because iteration is unnecessary. A simplex search of the proposed model iterates the cost function based on pixel operation until the coefficients are sufficiently determined. After determining the coefficients, the difference in processing time of vignetting correction decreases to under 50 ms. Among the proposed models, the paraboloid model is the fastest because it is similar to the conventional model. Furthermore, the initial condition is obtained using the conventional model. The processing times of the proposed models are longer than that of the conventional model, but are tolerable in practice.

Figure 3. Comparison of processing time required to (a) determine coefficients and (b) correct vignetting.

The results generally show that the conventional parabolic array model has advantages in terms of accuracy and processing time for the experimental cases. The anisotropic Gaussian model shows the highest accuracy in two cases, and performance similar to that of the conventional model elsewhere. This result is unexpected, because the Gaussian model has received little attention and has been discussed in only a few previous studies. Although an isotropic Gaussian model with an axisymmetric radial exponent was used in previous research [30], a method for determining its coefficients was not presented. Theoretical analyses of the parabolic array and polynomial models have been reported in previous studies; however, the anisotropic Gaussian model is examined here only experimentally, and a theoretical treatment remains for future work. Considering the various optical combinations of cameras, lenses, and illumination in microscopy, the models for vignetting are not limited to the conventional ones. The best-fitting vignetting model for a microscope system is determined by the inspection conditions.

A simplex search makes it possible to determine the coefficients of the generalized equations, as well as those of the other 3D models described in this study. The simplex search thus allows various geometric models and formulations to be constructed for the correction of vignetting. The experimental models achieve similar accuracy while using 1/1000–1/2000 as many coefficients as the parabolic array model. The 3D models are intuitive and simple, compared to conventional models. They also provide geometric properties of the vignetting, such as the vignetting center and aspect ratio, which can be used to align the optical and illumination axes. The proposed method is also applicable to aspherical surfaces and inhomogeneous polynomials. In the future, we will also provide a parallel-processing architecture for the correction of vignetting, for real-time imaging using a graphics processing unit and a multicore processor.
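
For readers who wish to experiment with such fitting, a minimal downhill simplex (Nelder–Mead) routine in the spirit of the amoeba routine of Numerical Recipes [31] is sketched below, with the standard reflection, expansion, contraction, and shrink steps. It is an illustrative implementation rather than the authors' code: the fixed iteration budget, the step size, and the transformation coefficients are common defaults, and no convergence tolerance is applied.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

using Point = std::vector<double>;

// Minimal downhill simplex (Nelder-Mead) search over an n-dimensional
// coefficient vector; an illustrative sketch, not the authors' code.
Point nelderMead(const std::function<double(const Point&)>& f,
                 Point x0, double step = 1.0, int maxIter = 2000)
{
    const double alpha = 1.0;  // reflection
    const double gamma = 2.0;  // expansion
    const double rho   = 0.5;  // contraction
    const double sigma = 0.5;  // shrink
    const std::size_t n = x0.size();

    // Initial simplex: x0 plus one vertex perturbed along each dimension.
    std::vector<Point> s(n + 1, x0);
    std::vector<double> fv(n + 1);
    for (std::size_t i = 0; i < n; ++i) s[i + 1][i] += step;
    for (std::size_t i = 0; i <= n; ++i) fv[i] = f(s[i]);

    for (int iter = 0; iter < maxIter; ++iter) {
        // Vertex indices sorted by cost (best first).
        std::vector<std::size_t> idx(n + 1);
        for (std::size_t i = 0; i <= n; ++i) idx[i] = i;
        std::sort(idx.begin(), idx.end(),
                  [&fv](std::size_t a, std::size_t b) { return fv[a] < fv[b]; });
        const std::size_t best = idx[0], worst = idx[n];

        // Centroid of all vertices except the worst.
        Point c(n, 0.0);
        for (std::size_t i = 0; i <= n; ++i)
            if (i != worst)
                for (std::size_t j = 0; j < n; ++j) c[j] += s[i][j] / n;

        // Transformed vertex c + t * (c - s[worst]).
        auto mix = [&c, &s, worst, n](double t) {
            Point p(n);
            for (std::size_t j = 0; j < n; ++j)
                p[j] = c[j] + t * (c[j] - s[worst][j]);
            return p;
        };

        Point xr = mix(alpha);                    // reflection
        double fr = f(xr);
        if (fr < fv[best]) {
            Point xe = mix(gamma);                // expansion
            double fe = f(xe);
            if (fe < fr) { s[worst] = xe; fv[worst] = fe; }
            else         { s[worst] = xr; fv[worst] = fr; }
        } else if (fr < fv[idx[n - 1]]) {         // better than second worst
            s[worst] = xr; fv[worst] = fr;
        } else {
            Point xk = mix(-rho);                 // inside contraction
            double fk = f(xk);
            if (fk < fv[worst]) { s[worst] = xk; fv[worst] = fk; }
            else {                                // shrink toward the best vertex
                for (std::size_t i = 0; i <= n; ++i) {
                    if (i == best) continue;
                    for (std::size_t j = 0; j < n; ++j)
                        s[i][j] = s[best][j] + sigma * (s[i][j] - s[best][j]);
                    fv[i] = f(s[i]);
                }
            }
        }
    }
    std::size_t bi = 0;
    for (std::size_t i = 1; i <= n; ++i) if (fv[i] < fv[bi]) bi = i;
    return s[bi];
}
```

With such a routine, the six intuitive-form coefficients of the anisotropic Gaussian model could be fitted by wrapping a pixel-based cost function (such as the gaussianCost sketch given earlier) in a lambda, and passing the conventional fit, or simply the image center, as the initial vertex x0.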

V. CONCLUSION

A geometric modeling method for the correction of vignetting, using 3D equations and a simplex search, was proposed in this study. The 3D models were implemented as generalized nonlinear polynomials, based on the conventional models. The coefficients of the 3D models were determined using a simplex search, and performance indices were defined for the experiment. Reference images were acquired using four inspection systems, and the performance indices were obtained for the proposed models. The C++ code for the experiment was implemented using open-source libraries, and could handle various test conditions. Although the parabolic array model generally showed good performance during the experiments, the results of this study revealed that the anisotropic Gaussian model was unexpectedly accurate in certain cases. The proposed 3D models showed accuracy similar to that of the parabolic array model while using 1/1000–1/2000 as many coefficients. These 3D vignetting models are intuitive and capture the overall characteristics of the vignetting. The proposed method provides solutions for the alignment of optics and illumination, as well as for the construction of various vignetting models.

DISCLOSURES

The authors declare no conflicts of interest.

DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

ACKNOWLEDGMENT

This research was supported by the Year 2022 Culture Technology R&D Program developed by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) (Development of the system for digital data acquisition of modern and contemporary fine arts and supporting science-based art credibility analysis).

FUNDING

Korea Creative Content Agency (KOCCA R2020060004).

Figure 1. Concept of vignetting correction, using a 3D model from a reference image to the corrected image.

Figure 2. Flow chart of the simplex search and transformation of a vertex, for a two-variable case.

Figure 3. Comparison of processing time required to (a) determine coefficients and (b) correct vignetting.

TABLE 1. 3D vignetting models and unknown coefficients for the simplex search.

Shape | Geometric Equation | Generalized Form | Intuitive Form
Elliptic Polynomials (O6) | r^2 = (x − xc)^2/A + (y − yc)^2/B; z = f0 + c2·r^2 + c4·r^4 + c6·r^6 | xc, yc, f0, a2, a4, a6, b2, b4, b6, c2, c4, c6 | xc, yc, f0, A, B, c2, c4, c6
Anisotropic Gaussian | z = f0 + C·exp(−[(x − xc)^2/A + (y − yc)^2/B]) | xc, yc, f0, d, a1, c1, a2, c2 | xc, yc, f0, A, B, C
Paraboloid | z = f0 + (x − xc)^2/A + (y − yc)^2/B | xc, yc, f0, d, a1, c1, a2, c2 | xc, yc, f0, A, B
Ellipsoid | z = f0 + C·sqrt(1 − (x − xc)^2/A − (y − yc)^2/B) | xc, yc, f0, d, a1, c1, a2, c2 | xc, yc, f0, A, B, C

TABLE 2. Specifications of the inspection machines used in the test.

Camera | Spectral Range | Resolution | Bits | Lens | Light | Magnification
UV-Vis-NIR | 193–1100 nm | 1392 × 1040 | 14 | Motorized Zoom | Halogen | 0.58–12.5
Color | 390–1100 nm | 2592 × 2048 | 10 | Motorized Zoom | White LED | 0.09–393.8
Vis-NIR | 350–1100 nm | 2592 × 2048 | 10 | Manual Zoom | Halogen | 0.75–4.5
High Speed | 390–1050 nm | 1280 × 1024 | 12 | Motorized Zoom | White LED | 0.58–12


References

  1. E. S. Statnik, A. I. Salimon, and A. M. Korsunsky, “On the application of digital optical microscopy in the study of materials structure and deformation,” Mater. Today 33, 1917-1923 (2020).
  2. S. Pertuz, D. Puig, and M. A. Garcia, “Analysis of focus measure operators for shape-from-focus,” Pattern Recognit. 46, 1415-1432 (2013).
  3. G. Wang, C. Lopez-Molina, and B. De Baets, “Automated blob detection using iterative Laplacian of Gaussian filtering and unilateral second-order Gaussian kernels,” Digit. Signal Process. 96, 102592 (2020).
  4. J. Quintana, R. Garcia, and L. Neumann, “A novel method for color correction in epiluminescence microscopy,” Comput. Med. Imaging Graph. 35, 646-652 (2011).
  5. J. A. M. Rodríguez, “Microscope self-calibration based on micro laser line imaging and soft computing algorithms,” Opt. Lasers Eng. 105, 75-85 (2018).
  6. P. Nagy, G. Vámosi, A. Bodnár, S. J. Lockett, and J. Szöllősi, “Intensity-based energy transfer measurements in digital imaging microscopy,” Eur. Biophys. J. 27, 377-389 (1998).
  7. S. Van der Jeught, J. A. N. Buytaert, and J. J. J. Dirckx, “Real-time geometric lens distortion correction using a graphics processing unit,” Opt. Eng. 51, 027002 (2012).
  8. T. Masunari and M. Hisaka, “Optical microscopy using annular full-color LED for quantitative phase and spectroscopic imaging of biological tissues,” Proc. SPIE 11140, 1114001-197 (2019).
  9. T. Mitsunaga and S. K. Nayar, “Radiometric self calibration,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Fort Collins, CO, USA, June 1999), pp. 374-380.
  10. Y. Bai, M. Chin, and T. Tajbakhsh, “Lens shading modulation,” US Patent US10171786B2 (2019).
  11. D. Reddy, J. Bai, and R. Ramamoorthi, “External mask based depth and light field camera,” in Proc. IEEE International Conference on Computer Vision Workshops-ICCV (Sydney, Australia, Dec. 2-8, 2013), pp. 37-44.
  12. D. Karatzas, M. Rusinol, C. Antens, and M. Ferrer, “Segmentation robust to the vignetting effect for machine vision systems,” in Proc. International Conference on Pattern Recognition (Tampa, FL, USA, Dec. 8-11, 2008), pp. 1-4.
  13. L. Mignard-Debise and I. Ihrke, “A vignetting model for light field cameras with an application to light field microscopy,” IEEE Trans. Comput. Imaging 5, 585-595 (2019).
  14. H. T. Kim, E. J. Ha, K. C. Jin, and B. W. Kim, “Optimal color lighting for scanning images of flat panel display using simplex search,” J. Imaging 4, 133 (2018).
  15. S. S. Lee, S. Pelet, M. Peter, and R. Dechant, “A rapid and effective vignetting correction for quantitative microscopy,” RSC Adv. 4, 52727-52733 (2014).
  16. H. Park, H. J. Choi, K. Rhew, H. Lim, H. Lee, I. Jang, and C. H. Min, “Development of accurate dose evaluation technique of X-ray inspection for quality assurance of semiconductor with Monte Carlo simulation,” Appl. Radiat. Isot. 154, 108851 (2019).
  17. J. Kwak, K. B. Lee, J. Jang, K. S. Chang, and C. O. Kim, “Automatic inspection of salt-and-pepper defects in OLED panels using image processing and control chart techniques,” J. Intell. Manuf. 30, 1047-1055 (2019).
  18. H. T. Kim, Y. J. Moon, H. Kang, and J. Y. Hwang, “On-machine measurement for surface flatness of transparent and thin film in laser ablation process,” Coatings 10, 885 (2020).
  19. S. B. Kang and R. S. Weiss, “Can we calibrate a camera using an image of a flat, textureless Lambertian surface?,” in Computer Vision – ECCV 2000, Lecture Notes in Computer Science (Springer, Germany, 2000), pp. 640-653.
  20. D. B. Goldman, “Vignetting and exposure calibration and compensation,” IEEE Trans. Pattern Anal. Machine Intell. 32, 2276-2288 (2010).
  21. F. Piccinini and A. Bevilacqua, “Colour vignetting correction for microscopy image mosaics used for quantitative analyses,” BioMed Res. Int. 2018, 7082154 (2018).
  22. Y.-O. Tak, A. Park, J. Choi, J. Eom, H.-S. Kwon, and J. B. Eom, “Simple shading correction method for brightfield whole slide imaging,” Sensors 20, 3084 (2020).
  23. F. Piccinini, E. Lucarelli, A. Gherardi, and A. Bevilacqua, “Multi-image based method to correct vignetting effect in light microscopy images,” J. Microsc. 248, 6-22 (2012).
  24. J. Manfroid, “On CCD standard stars and flat-field calibration,” Astron. Astrophys. Suppl. Ser. 118, 391-395 (1996).
  25. S. J. Kim and M. Pollefeys, “Robust radiometric calibration and vignetting correction,” IEEE Trans. Pattern Anal. Machine Intell. 30, 562-576 (2008).
  26. J. Stumpfel, A. Jones, A. Wenger, C. Tchou, T. Hawkins, and P. Debevec, “Direct HDR capture of the sun and sky,” in Proc. Special Interest Group on Computer Graphics and Interactive Techniques Conference - SIGGRAPH '06 (Boston, MA, USA, Jul. 30-Aug. 3, 2006), pp. 5-es.
  27. A. Kordecki, H. Palus, and A. Bal, “Practical vignetting correction method for digital camera with measurement of surface luminance distribution,” Signal Image Video Process. 10, 1417-1424 (2016).
  28. J. Wang, X. Wang, P. Zhang, S. Xie, S. Fu, Y. Li, and H. Han, “Correction of uneven illumination in color microscopic image based on fully convolutional network,” Opt. Express 29, 28503-28520 (2021).
  29. Y. Zheng, J. Yu, S. B. Kang, S. Lin, and C. Kambhamettu, “Single-image vignetting correction using radial gradient symmetry,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (Anchorage, AK, USA, Jun. 23-28, 2008), pp. 1-8.
  30. F. J. W.-M. Leong, M. Brady, and J. O'D. McGee, “Correction of uneven illumination (vignetting) in digital microscopy images,” J. Clin. Pathol. 56, 619-621 (2003).
  31. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge University Press, UK, 1992).
  32. S. Zhang, W. Hua, J. Liu, G. Li, and Q. Wang, “Multiview-based spectral weighted and low-rank for row-sparsity hyperspectral unmixing,” Curr. Opt. Photon. 5, 431-443 (2021).