
Curr. Opt. Photon. 2021; 5(5): 524-531

Published online October 25, 2021 https://doi.org/10.3807/COPP.2021.5.5.524

Copyright © Optical Society of Korea.

Identification and Correction of Microlens-array Error in an Integral-imaging-microscopy System

Shariar Md Imtiaz1, Ki-Chul Kwon1, Md. Shahinur Alam1, Md. Biddut Hossain1, Nam Changsup2, Nam Kim1

1School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Korea
2Department of Mechanical ICT Engineering, College of Future Convergence, Hoseo University, Cheonan 31066, Korea

Corresponding author: namkim@chungbuk.ac.kr, ORCID 0000-0001-8109-2055

Received: July 6, 2021; Revised: August 30, 2021; Accepted: September 6, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In an integral-imaging microscopy (IIM) system, a microlens array (MLA) is the primary optical element; however, surface errors impede the resolution of a raw image’s details. Calibration is a major concern with regard to incorrect projection of the light rays. A ray-tracing-based calibration method for an IIM camera is proposed, to address four errors: MLA decentering, rotational, translational, and subimage-scaling errors. All of these parameters are evaluated using the reference image obtained from the ray-traced white image. The areas and center points of the microlens are estimated using an “8-connected” and a “center-of-gravity” method respectively. The proposed approach significantly improves the rectified-image quality and nonlinear image brightness for an IIM system. Numerical and optical experiments on multiple real objects demonstrate the robustness and effectiveness of our proposed method, which achieves on average a 35% improvement in brightness for an IIM raw image.

Keywords: Integral imaging microscopy system, Microlens array, Microscopy camera calibration, Rectification of microlens image

OCIS codes: (100.2980) Image enhancement; (110.0180) Microscopy; (110.3000) Image quality assessment; (110.3010) Image reconstruction techniques

I. INTRODUCTION

Integral-imaging microscopy (IIM) is a three-dimensional (3D) microscopy modality that senses differences in color intensity and depth within a single frame, through a microlens array (MLA) [1]. It provides detailed information on parallax and depth for various applications, including digital imaging involving super-resolution [2–4], depth mapping [5, 6], and 3D reconstruction [7]. In 1908, Gabriel Lippmann invented the first light-field camera, which he referred to as an “integral photography” device [8]; this camera had a carefully spaced lens array, to allow multiple photographs to be taken of a scene. Since its introduction, this camera has undergone numerous modifications, including changes in the camera array. Hand-held models have also been introduced [9].

In the basic IIM configuration, the elemental-image array (EIA) captured by the MLA holds a collection of subimages arranged in a particular sequence for 3D reconstruction [10], as shown in Fig. 1. The main purpose is to ensure efficient use of the IIM data during the processing and restoration of 3D images, where the raw image contains the depth information of the estimated target. However, fabrication and assembly limitations lead to defects in MLAs, including rotational [11, 12], translational [12], decentering [13], and pitch [11] errors. These errors diminish the quality, brightness, and resolution of the captured image, and make it difficult to determine the MLA’s location [13]. A large amount of detail in the raw image is lost or distorted, resulting in a significant decrease in target flow efficiency [14, 15]. Therefore, it is necessary to correct the distortions in IIM images caused by MLA errors.

Figure 1. Optical setup and schematic diagram of an integral-imaging microscopy (IIM) system.

Different calibration methods for microlens-based cameras have been suggested. Recent advances in light-field-camera technology have improved the precision of newer configurations. Su et al. [16] created a nine-parameter geometric model for a plenoptic light-field camera, based on the central projection between the MLA and image sensor; rotational error is estimated using the subimage center point and an optimization algorithm. Suliga et al. [17] presented a mathematical method for calibrating intrinsic MLA parameters in plenoptic cameras, using raw images to calculate rotational and translational offsets based on the subimage center point. Li et al. [12] introduced an error-correction model based on rotational, translational, and tilt errors; errors were calculated over different ranges using the proposed method. Jin et al. [18] introduced an MLA rotation-rectification method for light-field Lytro cameras, based on locating the center of each MLA raw image. Cho et al. [19] introduced a calibration algorithm and corrected rotational error of the MLA using the pixel axis of the raw light-field image. Zhao et al. [20] developed a numerical centroid-shift method based on the geometric encircled energy. Li et al. [11] defined the rotational error between the MLA and image sensor; in addition, they introduced a method to correct for errors associated with the pitch, microlens radius, and decentering, based on the center point of the microlens subimages and edge detection. Most calibration techniques focus on commercial devices, such as the Lytro camera [12, 18]. These techniques use checkerboard-detection methods and compare their results to the ground truth. However, these conventional methods are not compatible with IIM; ray tracing, in contrast, is one of the most effective techniques for accurately identifying the “center of gravity” of an MLA.

The methods in the abovementioned studies tend to idealize the MLA’s surface parameters, and only consider image distortion. However, the surface error of the MLA and its image-distortion error are not constant [21, 22]. Therefore, local deterioration caused by microlens surface errors cannot be efficiently rectified by integral methods; moreover, such techniques suffer from considerable computational complexity and negligible correction effects. The rotation-angle calculation depends on positional data, and involves precalculation of the system’s intrinsic parameters; this increases the required time and complexity of computations. There are also various, unavoidable device faults, such as distortion of the main lens, MLA translational errors, and scaling-factor errors between optical components.

In this study, we develop an IIM-camera calibration method based on ray tracing of the image, to address four types of error: MLA decentering, rotational, translational, and subimage-scaling errors. The effectiveness and reliability of the method are verified by comparing real-object images of different targets, before and after correction.

II. MODELS AND METHODS

2.1. Geometric Camera Model

The IIM system consists of an objective lens, tube lens, MLA, and charge-coupled device (CCD). Fig. 1 illustrates the structural layout of the IIM-system model, in which the MLA is placed between the microscopy tube lens and CCD image sensor. Light information passes through the objective lens and microlenses of the MLA. An EIA is captured by the CCD, which in turn forms a series of subimages. An orthographic-view image (OVI) is produced from the EIA image’s information [10].

Suppose a specimen point with an initial coordinate (x, y, z) is imaged on the EIA plane through the (m, n)th elemental lens and camera lens as

$$X_{EI}(m,n)=\frac{f_M f_C\,(m\times P_{EL}-x)-f_C\, m\times P_{EL}\,(z-f_M)}{(g-f_M)(z-f_M)},\qquad Y_{EI}(m,n)=\frac{f_M f_C\,(n\times P_{EL}-y)-f_C\, n\times P_{EL}\,(z-f_M)}{(g-f_M)(z-f_M)},\tag{1}$$

where fM and fC are the focal lengths of the MLA and camera lens respectively, PEL is the pitch of the elemental lens, and g is the gap between the MLA and main lens of the camera. The entire MLA consists of m × n microlenses, which are densely arranged in a square-matrix form, as shown in Fig. 2. The unit is placed in the mth row and nth column of the MLA. Therefore, each microlens corresponds to pixels that provide directional light information to the CCD.

Figure 2. Surface-design model of the microlens array (MLA), and its coordinate system.
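As a numerical sketch, the mapping of Eq. (1) can be evaluated directly. The code below assumes the form X_EI(m,n) = [f_M f_C (m × P_EL − x) − f_C m × P_EL (z − f_M)] / [(g − f_M)(z − f_M)] (and likewise for Y), and the parameter values in the example (focal lengths, pitch, and gap) are illustrative assumptions, not the system’s calibrated values.

```python
def eia_coordinates(x, y, z, m, n, f_M, f_C, P_EL, g):
    """Map a specimen point (x, y, z) onto the EIA plane through the
    (m, n)-th elemental lens, under the assumed reading of Eq. (1).
    All lengths are in the same unit (e.g. mm)."""
    denom = (g - f_M) * (z - f_M)
    X = (f_M * f_C * (m * P_EL - x) - f_C * m * P_EL * (z - f_M)) / denom
    Y = (f_M * f_C * (n * P_EL - y) - f_C * n * P_EL * (z - f_M)) / denom
    return X, Y

# Illustrative (assumed) values: f_M = 2.4 mm, f_C = 50 mm,
# P_EL = 0.125 mm, g = 3.0 mm; specimen point at (0.1, 0.2, 10.0) mm
X, Y = eia_coordinates(0.1, 0.2, 10.0, 3, 4, 2.4, 50.0, 0.125, 3.0)
```

A point on the optical axis imaged through the central (0, 0) lens maps to the origin of the EIA plane, which provides a quick sanity check of the implementation.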

2.2. Microlens Error Model

2.2.1. Rotational Error

Each microlens in the IIM camera forms a subimage on the CCD sensor that matches the directional light information. To acquire the correct image, the microlenses should be arranged in a matrix format, with the image-sensor plane parallel to the MLA plane, in which case the light rays strike each microlens at a distinct angle of projection. In the manufacturing and assembly process, however, the MLA may not be aligned precisely with the image sensor. If there is an angle between the MLA and the image sensor, rotational error occurs, as shown in Fig. 3(a).

Figure 3. Schematic diagram of the microlens array’s (MLA) (a) rotational error, (b) translational error, and (c) scaling-factor error.

2.2.2. Translation Error

In the IIM camera, translational error in the MLA leads to blurring and poor-quality digitally refocused images. Each microlens forms a subimage on the CCD sensor that corresponds to directional light information. When the microlenses are arranged, a fixed matrix format is needed for efficient use of the sensor pixels and subimages, with no crosstalk. Under this condition, any deviation in the horizontal or vertical position of the microlenses from their assumed, standard matrix position introduces translational error, as shown in Fig. 3(b).

2.2.3. Scaling Error

In the MLA, deviation of the lens diameter of the microlens surface from the standard diameter introduces a scaling-factor error, as shown in Fig. 3(c). The lens-diameter selection is important, because it determines the light-propagation characteristics and field angle. The scaling step enlarges a subimage, based on its original coordinates, to the standard lenslet dimensions.

III. PROPOSED METHOD

Analyses have shown that, in some subimages of the light-field picture, error is caused by significant distortion related to a change in position, border scattering, and brightness variation [13]. Here we introduce a method for calibrating a distorted IIM image using raw white images, following the steps shown in Fig. 4.

Figure 4. Process diagram for the proposed method of microlens-error estimation and correction.

To calibrate the error of an IIM image, a white-scene image consisting of 1856 × 1856 pixels is captured; notably, this image must be both white and homogeneous. A region of interest (ROI) is selected by cropping (e.g. to obtain a 10 × 10 MLA), as shown in Fig. 5(a). Then, the red-green-blue (RGB) image is converted to a grayscale image. In image binarization, the adaptive operation of the thresholding process is exploited, as shown in Fig. 5(b). Adaptive thresholding determines the value of the threshold based on the mean value of a specific neighborhood area or pixel. The light-intensity area and center position of each microlens are computed using a center-of-gravity method, defined as the weighted average of the pixel values in each row and column, as shown in Figs. 5(c) and 5(d) respectively. Then, we calculate the rotational angle based on the center index of each row. The measured angle is applied to adjust the image based on the direction of the angle, following Eq. (2):

Figure 5. Intermediate steps of the calibration process: (a) a white integral-imaging microscopy (IIM) image with a region of interest (ROI), (b) image binarization, (c) circle detection, and (d) microlens-center identification.

$$\begin{bmatrix} x' \\ y' \end{bmatrix}=\begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}.\tag{2}$$

Here x and y denote the original coordinates of the image, and α is the rotational angle.
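The angle estimation from the row center indices, followed by the Eq. (2) rotation, can be sketched as follows. The least-squares line fit used to recover the angle is an assumed implementation detail, since the text states only that the angle is derived from the center index of each row.

```python
import numpy as np

def estimate_rotation_angle(row_centers):
    """Estimate the MLA rotational error alpha (radians) from the
    detected center points along one row of microlenses, via a
    least-squares line fit (assumed estimator)."""
    xs, ys = row_centers[:, 0], row_centers[:, 1]
    slope = np.polyfit(xs, ys, 1)[0]   # tilt of the row of centers
    return np.arctan(slope)

def rotate_coords(x, y, alpha):
    """Apply the Eq. (2) rotation to image coordinates (x, y)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return c * x - s * y, s * x + c * y

# Synthetic check: one row of centers tilted by 0.5 degrees,
# spaced at the 29-pixel subimage pitch
true_alpha = np.deg2rad(0.5)
xs = np.arange(10) * 29.0
centers = np.stack([xs, np.tan(true_alpha) * xs], axis=1)
alpha = estimate_rotation_angle(centers)
xr, yr = rotate_coords(centers[:, 0], centers[:, 1], -alpha)  # undo the tilt
```

Rotating the detected centers through −α brings them back onto a horizontal row, which is the correction the measured angle is used for.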

The MLA translational error is illustrated in Fig. 6. Extracting the subimage from a white image is an important precondition for identifying the microlens gravity-index-shift error in the MLA, as shown in Fig. 6(b). Each subimage contains 29 × 29 pixels. The subimage is transformed to a grayscale image, and Otsu’s method [23, 24] is utilized to convert the grayscale image to a binary one, as shown in Fig. 6(c). For the binary image, we define each microlens as a distinct region using the circle-detection method. Fig. 6(d) presents the intermediate results of these steps. In the proposed method, the average translation position is calculated for each row and column based on the microlens’s center-pixel position; any deviation from this average constitutes translational error. As a result, the original position is shifted toward the average center, as shown in Fig. 7(b).

Figure 6. Microlens subimage division: (a) a raw white image, (b) extraction of a microlens subimage, (c) binarization of a subimage, and (d) subimage area shifted from the center point.

Figure 7. The MLA’s first subimage: (a) without correction, (b) after average translation, and (c) after scaling correction.
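A minimal sketch of the center-of-gravity and average-translation steps, using a synthetic 29 × 29 subimage; the bright disc and its offset are illustrative assumptions, and a real pipeline would operate on the binarized subimages described above.

```python
import numpy as np

def center_of_gravity(subimage):
    """Intensity-weighted centroid (x, y) of a subimage: the weighted
    average of pixel positions in each row and column."""
    total = subimage.sum()
    ys, xs = np.indices(subimage.shape)
    return (xs * subimage).sum() / total, (ys * subimage).sum() / total

def translation_offsets(centers):
    """Deviation of each detected microlens center from the average
    center position; shifting each subimage by -offset moves it toward
    the average center, as described in the text."""
    centers = np.asarray(centers, dtype=float)
    return centers - centers.mean(axis=0)

# Synthetic 29 x 29 subimage with a bright disc off the geometric center
sub = np.zeros((29, 29))
yy, xx = np.indices(sub.shape)
sub[(xx - 16) ** 2 + (yy - 13) ** 2 <= 64] = 1.0  # disc centered at (16, 13)
cx, cy = center_of_gravity(sub)
```

The recovered centroid matches the disc center, and the offsets returned by `translation_offsets` are exactly the shifts needed to move each subimage to the average position.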

To address scaling-factor error, the subimage’s center position and area of light intensity must first be calculated. The determined scaling factor is applied to convert the original subimage to the maximum lenslet area, as shown in Fig. 7(c). In our experiments, the maximum size of a single lenslet’s area is 29 × 29 pixels. The bilinear-interpolation technique is then utilized to enlarge the subimage.
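The bilinear upscaling to the 29 × 29 lenslet area can be sketched in pure NumPy as below; the 25 × 25 input size is an assumed example, and a production system would likely use a library resampler instead.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation, used here to
    scale a subimage up to the maximum lenslet area (29 x 29 pixels)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)      # sample positions in input rows
    xs = np.linspace(0, in_w - 1, out_w)      # sample positions in input cols
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy

# Scale an (assumed) 25 x 25 subimage up to the 29 x 29 lenslet area
small = np.arange(25 * 25, dtype=float).reshape(25, 25)
scaled = bilinear_resize(small, 29, 29)
```

Bilinear interpolation preserves the corner values and varies smoothly in between, which avoids the blocking artifacts of nearest-neighbor scaling.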

In this experiment we use a BX41 microscope (Olympus, Tokyo, Japan), as shown in Fig. 8, consisting of an objective lens, tube lens, lens array, CCD, and control computer. The lens array is arranged in a 100 × 100 square matrix. Each lenslet has an area of 125 × 125 µm2 and a spherical surface. Our method is implemented in the MATLAB® environment. Detailed specifications of the optical components and personal computer are given in Table 1.

TABLE 1. Specifications of the proposed integral-imaging microscopy (IIM) calibration system

Optical devices   Item                    Specification
IIM unit          Objective lens          ×10
                  Tube lens               ×10
MLA               Number of lens array    100 × 100 (ROI 64 × 64)
                  Elemental lens pitch    125 μm
Camera            Sensor resolution       2048 × 2048 pixels (RGB)
                  Pixel pitch             5.5 μm
                  Focal length            2.4 mm
                  Frame rate              90 fps
User PC           CPU                     Intel Core i5-9400F 2.9 GHz
                  Memory                  16 GB
                  Operating system        Windows 10 Pro (64-bit)


Figure 8. Experimental setup for the proposed IIM-camera calibration system.

The rectification method of [25] is compared with the proposed method to verify its effectiveness, in terms of structural-similarity-index (SSIM) and mean-squared-error (MSE) values. In this system the MLA rotational errors are set from 0.1° to 1.0°. Figure 9 compares the results for the different rotational factors. The proposed method outperforms the traditional one in all cases, although the SSIM and MSE values of the two methods are close.

Figure 9. Comparison of the proposed method and the image-rectification method of [25]. The SSIM and MSE values are calculated according to the rotation factor (0.1°–1.0°).
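For reference, MSE and a simplified SSIM can be computed as below. The single-window (global) SSIM here is a stand-in assumption for comparison purposes, not the windowed SSIM implementation behind Fig. 9.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    a = np.asarray(a, dtype=float); b = np.asarray(b, dtype=float)
    return np.mean((a - b) ** 2)

def global_ssim(a, b, data_range=255.0):
    """Single-window SSIM computed over the whole image, a simplified
    stand-in for the windowed SSIM index."""
    a = np.asarray(a, dtype=float); b = np.asarray(b, dtype=float)
    c1 = (0.01 * data_range) ** 2      # standard SSIM stabilizers
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

# Sanity check on a random image: identical images give MSE 0 and SSIM 1
img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
```

Lower MSE and higher SSIM relative to the ground-truth image indicate better rectification, which is the comparison criterion used above.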

Our calibration method is applied to the IIM images of four specimens. Figure 10 shows the corresponding images: (i) a 2D image, (ii) the EIA used to capture the directional-view images, (iii) the initial OVI reconstructed using the computational integral-imaging reconstruction (CIIR) algorithm [26, 27], and (iv) the final reconstructed OVI. In this experiment the brightness of the initial OVI is not uniform, as shown in Fig. 10(iii). The brightness improves after applying our proposed calibration method, as shown in Fig. 10(iv).

Figure 10. Experimental results for different specimens: (a) fly, (b) microchip, (c) mosquito, and (d) sand crystal; (i) 2D image, (ii) 64 × 64 elemental-image array, (iii) initial orthographic-view image (OVI) without correction, and (iv) final OVI with correction.

Figure 11 shows the percentage of bright pixels in each of the four initial and calibrated specimen OVIs. Our experimental results show that the proposed calibration method successfully enhances the brightness uniformity of the directional-view images. The microchip and sand-crystal images show greater improvement than those of the fly and mosquito because, as can be seen in Fig. 10, those specimens are brighter and reflect more light; calibration has little effect in weak-light areas. For the same reason, the mosquito sample exhibits the smallest improvement.

Figure 11. Comparison of orthographic-view images (OVIs) before and after correction, with the measured percentage of bright pixels for different specimens (fly, microchip, mosquito, and sand crystal).
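The bright-pixel percentage plotted in Fig. 11 can be sketched as the fraction of pixels above an intensity threshold; the threshold value used below is an assumption, since the paper does not state one.

```python
import numpy as np

def bright_pixel_percentage(image, threshold=128):
    """Percentage of pixels brighter than `threshold` (the threshold
    value is an assumed choice for illustration)."""
    image = np.asarray(image)
    return 100.0 * np.count_nonzero(image > threshold) / image.size

# Synthetic example: the top half of a 10 x 10 image is bright
img = np.zeros((10, 10))
img[:5, :] = 200.0
pct = bright_pixel_percentage(img)
```

Comparing this percentage before and after correction gives a single scalar summary of the brightness improvement per specimen.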

In this work, we introduced a ray-tracing-based MLA calibration method for an IIM system. The goal was to minimize errors in MLA alignment, and thus improve image quality and brightness. The proposed method attenuated MLA errors using a reference image. An efficient error-detection and -correction method was proposed, in which a four-step calibration process is employed for EIA processing. Experiments with real IIM images showed that the proposed calibration method can accommodate nonlinear MLA errors to achieve an enhanced, error-free IIM image with improved brightness uniformity. Our experimental results for four images of various specimens (a fly, a microchip resistor, a mosquito, and a sand crystal) demonstrate that the proposed method has the robustness needed to correct for various errors. Future work is expected to include a more advanced lens-distortion model, which should improve the accuracy of the IIM image-capturing system by overcoming the various complexities associated with the microscope.

This work was supported by the National Research Foundation of Korea (NRF) (NRF-2018R1D1A3B07044041, NRF-2020R1A2C1101258), and was supported under the Grand Information Technology Research Center support program (IITP-2020-0-01462) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation), grant funded by the Korean government.

  1. M. Levoy, “Light fields and computational imaging,” Computer 39, 46-55 (2006).
  2. K.-C. Kwon, K. H. Kwon, M.-U. Erdenebat, Y.-L. Piao, Y.-T. Lim, M. Y. Kim and N. Kim, “Resolution-enhancement for an integral imaging microscopy using deep learning,” IEEE Photonics J. 11, 6900512 (2019).
  3. M. S. Alam, K.-C. Kwon, M.-U. Erdenebat, M. Y. Abbass, A. Alam and N. Kim, “Super-resolution enhancement method based on generative adversarial network for integral imaging microscopy,” Sensors 21, 2164 (2021).
  4. M. Y. Abbass, K.-C. Kwon, M. S. Alam, Y.-L. Piao, K.-Y. Lee and N. Kim, “Image super resolution based on residual dense CNN and guided filters,” Multimed. Tools Appl. 80, 5403-5421 (2021).
  5. K.-C. Kwon, M.-U. Erdenebat, S. Alam, Y.-T. Lim, K. G. Kim and N. Kim, “Integral imaging microscopy with enhanced depth-of-field using a spatial multiplexing,” Opt. Express 24, 2072-2083 (2016).
  6. C. Shin, H.-G. Jeon, Y. Yoon, I. S. Kweon and S. J. Kim, “EPINET: a fully-convolutional neural network using epipolar geometry for depth from light field images,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 4748-4757.
  7. K.-C. Kwon, K. H. Kwon, M.-U. Erdenebat, Y.-L. Piao, Y.-T. Lim, Y. Zhao, M. Y. Kim and N. Kim, “Advanced three-dimensional visualization system for an integral imaging microscope using a fully convolutional depth estimation network,” IEEE Photonics J. 12, 3900714 (2020).
  8. T. Georgiev and C. Intwala, “Light field camera design for integral view photography,” Tech. Rep. (Adobe Systems Incorporated, 2006).
  9. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CTSR-02 (Department of Computer Science, Stanford University, 2005).
  10. K.-C. Kwon, J.-S. Jeong, M.-U. Erdenebat, Y.-T. Lim, K.-H. Yoo and N. Kim, “Real-time interactive display for integral imaging microscopy,” Appl. Opt. 53, 4450-4459 (2014).
  11. S. Li, Y. Yuan, Z. Gao and H. Tan, “High-accuracy correction of a microlens array for plenoptic imaging sensors,” Sensors 19, 3922 (2019).
  12. T.-J. Li, S.-N. Li, S. Li, Y. Yuan and H.-P. Tan, “Correction model for microlens array assembly error in light field camera,” Opt. Express 24, 24524-24543 (2016).
  13. S.-N. Li, Y. Yuan, B. Liu, F.-Q. Wang and H.-P. Tan, “Influence of microlens array manufacturing errors on light-field imaging,” Opt. Commun. 410, 40-52 (2018).
  14. S. Shi, J. Wang, J. Ding, Z. Zhao and T. H. New, “Parametric study on light field volumetric particle image velocimetry,” Flow Meas. Instrum. 49, 70-88 (2016).
  15. J. Zhao, Z. Liu and B. Guo, “Three-dimensional digital image correlation method based on a light field camera,” Opt. Lasers Eng. 116, 19-25 (2019).
  16. L. Su, Q. Yan, J. Cao and Y. Yuan, “Calibrating the orientation between a microlens array and a sensor based on projective geometry,” Opt. Lasers Eng. 82, 22-27 (2016).
  17. P. Suliga and T. Wrona, “Microlens array calibration method for a light field camera,” in Proc. 19th International Carpathian Control Conference-ICCC (Szilvasvarad, Hungary, 2018), pp. 19-22.
  18. J. Jin, Y. Cao, W. Cai, W. Zheng and P. Zhou, “An effective rectification method for lenselet-based plenoptic cameras,” Proc. SPIE 10020, 100200F (2016).
  19. D. Cho, M. Lee, S. Kim and Y.-W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proc. IEEE International Conference on Computer Vision (Sydney, Australia, 2013), pp. 3280-3287.
  20. Z. Zhao, M. Hui, M. Liu, L. Dong, X. Liu and Y. Zhao, “Centroid shift analysis of microlens array detector in interference imaging system,” Opt. Commun. 354, 132-139 (2015).
  21. X. Liu, X. Zhang, F. Fang, Z. Zeng, H. Gao and X. Hu, “Influence of machining errors on form errors of microlens arrays in ultra-precision turning,” Int. J. Mach. Tools Manuf. 96, 80-93 (2015).
  22. V. Dembele, I. Choi, S. Kheiryzadehkhanghah, S. Choi, J. Kim, C. S. Kim and D. Kim, “Interferometric snapshot spectro-ellipsometry: calibration and systematic error analysis,” Curr. Opt. Photon. 4, 345-352 (2020).
  23. K. Wu, E. Otoo and A. Shoshani, “Optimizing connected component labeling algorithms,” Proc. SPIE 5747, 1965-1976 (2005).
  24. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. SMC-9, 62-66 (1979).
  25. S. Li, Y. Zhu, C. Zhang, Y. Yuan and H. Tan, “Rectification of images distorted by microlens array errors in plenoptic cameras,” Sensors 18, 2019 (2019).
  26. Y.-T. Lim, J.-H. Park, K.-C. Kwon and N. Kim, “Resolution-enhanced integral imaging microscopy that uses lens array shifting,” Opt. Express 17, 19253-19263 (2009).
  27. S. Alam, K.-C. Kwon, M.-U. Erdenebat, Y.-T. Lim, S. Imtiaz, A. Sufian, S.-H. Jeon and N. Kim, “Resolution enhancement of an integral imaging microscopy using generative adversarial network,” in Proc. Conference on Lasers and Electro-Optics Pacific Rim-CLEO-PR (Sydney, Australia, 2020), paper C3G_4.

Article

Article

Curr. Opt. Photon. 2021; 5(5): 524-531

Published online October 25, 2021 https://doi.org/10.3807/COPP.2021.5.5.524

Copyright © Optical Society of Korea.

Identification and Correction of Microlens-array Error in an Integral-imaging-microscopy System

Shariar Md Imtiaz1, Ki-Chul Kwon1, Md. Shahinur Alam1, Md. Biddut Hossain1, Nam Changsup2, Nam Kim1

1School of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Korea
2Department of Mechanical ICT Engineering, College of Future Convergence, Hoseo University, Cheonan 31066, Korea

Correspondence to:namkim@chungbuk.ac.kr, ORCID 0000-0001-8109-2055

Received: July 6, 2021; Revised: August 30, 2021; Accepted: September 6, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In an integral-imaging microscopy (IIM) system, a microlens array (MLA) is the primary optical element; however, surface errors impede the resolution of a raw image’s details. Calibration is a major concern with regard to incorrect projection of the light rays. A ray-tracing-based calibration method for an IIM camera is proposed, to address four errors: MLA decentering, rotational, translational, and subimage-scaling errors. All of these parameters are evaluated using the reference image obtained from the ray-traced white image. The areas and center points of the microlens are estimated using an “8-connected” and a “center-of-gravity” method respectively. The proposed approach significantly improves the rectified-image quality and nonlinear image brightness for an IIM system. Numerical and optical experiments on multiple real objects demonstrate the robustness and effectiveness of our proposed method, which achieves on average a 35% improvement in brightness for an IIM raw image.

Keywords: Integral imaging microscopy system, Microlens array, Microscopy camera calibration, Rectification of microlens image

I. INTRODUCTION

Integral-imaging microscopy (IIM) is a three-dimensional (3D) microscopy modality that senses differences in color intensity and depth within a single frame, through a microlens array (MLA) [1]. It provides detailed information on parallax and depth for various applications, including digital imaging involving super-resolution [24], depth mapping [5, 6], and 3D reconstruction [7]. In 1908, Gabriel Lippmann invented the first light-field camera, which he referred to as an “integral photography” device [8]; this camera had a carefully spaced lens array, to allow multiple photographs to be taken of a scene. Since its introduction, this camera has undergone numerous modifications, including changes in the camera array. Hand-held models have also been introduced [9].

In the basic IIM configuration, the elemental-image array (EIA) captured by the MLA holds a collection of subimages arranged in a particular sequence for 3D reconstruction [10], as shown in Fig. 1. The main purpose is to ensure efficient use of the IIM data during the processing and restoration of 3D images, where the raw image contains the depth information of the estimated target. However, fabrication and assembly limitations lead to defects in MLAs, including rotational [11, 12], translational [12], decentering [13], and pitch [11] errors. These errors diminish the quality, brightness, and resolution of the captured image, and make it difficult to determine the MLA’s location [13]. A large amount of detail in the raw image is lost or distorted, resulting in a significant decrease in target flow efficiency [14, 15]. Therefore, it is necessary to correct the distortions in IIM images caused by MLA errors.

Figure 1. Optical setup and schematic diagram of an integral-imaging microscopy (IIM) system.

Different calibration methods for microlens-based cameras have been suggested. Recent advances in light-field-camera technology have improved the precision of more recent configurations. Su et al. [16] created a nine-parameter geometric model for a plenoptic light-field camera, based on the central projection between the MLA and image sensor; rotational error is estimated using the subimage center point and an optimization algorithm. Suliga et al. [17] presented a mathematical method for calibrating intrinsic MLA parameters in plenoptic cameras, using raw images to calculate rotational and translational offsets based on the subimage center point. Li et al. [12] introduced an error-correction model based on rotational, translational, and tilt errors; errors were calculated over different ranges using the proposed method. Jin et al. [18] introduced an MLA rotation-rectification method for light-field Lytro cameras, based on locating the center of each MLA raw image. Cho et al. [19] introduced a calibration algorithm and corrected rotational error of the MLA using the pixel axis of the raw light-field image. Zhao et al. [20] developed a numerical centroid-shift method based on the geometric encircled energy. Li et al. [11] defined the rotational error between the MLA and image sensor; in addition, they introduced a method to correct for errors associated with the pitch, microlens radius, and decentering, based on the center point of the microlens subimages and edge detection. Most of the calibration techniques are focused on commercial devices, such as the Lytro camera [12, 18]. These techniques use the cheeseboard detection method and compare its results to the ground truth. But these conventional methods are not compatible with IIM, so ray tracing is one of the most effective techniques that can perfectly identify the “center of gravity” of a MLA.

The methods in the abovementioned studies tend to idealize the MLA’s surface parameters, and only consider image distortion. However, the surface error of the MLA and its image-distortion error are not constant [21, 22]. Therefore, local deterioration caused by microlens surface errors cannot be efficiently rectified by integral methods; moreover, such techniques suffer from considerable computational complexity and negligible correction effects. The rotation-angle calculation depends on positional data, and involves precalculation of the system’s intrinsic parameters; this increases the required time and complexity of computations. There are also various, unavoidable device faults, such as distortion of the main lens, MLA translational errors, and scaling-factor errors between optical components.

In this study, we develop an IIM-camera calibration method based on ray tracing of the image, to address four types of error: MLA decentering, rotational, translational, and subimage-scaling errors. The effectiveness and reliability of the method are verified by comparing real-object images of different targets, before and after correction.

II. MODELS AND METHODS

2.1. Geometric Camera Model

The IIM system consists of an objective lens, tube lens, MLA, and charge-coupled device (CCD). Fig. 1 illustrates the structural layout of the IIM-system model, in which the MLA is placed between the microscopy tube lens and CCD image sensor. Light information passes through the objective lens and microlenses of the MLA. An EIA is captured by the CCD, which in turn forms a series of subimages. An orthographic-view image (OVI) is produced from the EIA image’s information [10].

Suppose a specimen point with an initial coordinate (x, y, z) is imaged on the EIA plane through the (m, n)th elemental lens and camera lens as

XEI(m,n)=fMfC(m×PELx)fcm×PEL(zfM)(gfM)(zfM)YEI(m,n)=fMfC(n×PELy)fcn×PEL(zfM)(gfM)(zfM),

where fM and fC are the focal lengths of the MLA and camera lens respectively, PEL is the pitch of the elemental lens, and g is the gap between the MLA and main lens of the camera. The entire MLA consists of m × n microlenses, which are densely arranged in a square-matrix form, as shown in Fig. 2. The unit is placed in the mth row and nth column of the MLA. Therefore, each microlens corresponds to pixels that provide directional light information to the CCD.

Figure 2. Surface-design model of the microlens array (MLA), and its coordinate system.

2.2. Microlens Error Model

2.2.1. Rotational Error

Each microlens in the IIM camera forms a subimage on the CCD sensor that matches the directional light information. To acquire the correct image, the microlenses should be arranged in a matrix format, with parallel positioning of the image-sensor plane and MLA plane, in which the light rays strike each microlens at a different angle or projection. In the manufacturing and assembly process, the MLA is not aligned precisely with the image sensor. If there is an angle between the MLA and image sensor, rotational error occurs, as shown in Fig. 3(a).

Figure 3. Schematic diagram of the microlens array’s (MLA) (a) rotational error, (b) translational error, and (c) scaling-factor error.

2.2.2. Translation Error

In the IIM camera, translational error in the MLA leads to blurring and poor-quality digitally refocused images. Each microlens forms a subimage on the CCD sensor that corresponds to directional light information. When the microlenses are arranged, a fixed matrix format is needed for efficient use of the sensor pixels and subimages, with no crosstalk. Under this condition, any deviation in the horizontal or vertical position of the microlenses from their assumed, standard matrix position introduces translational error, as shown in Fig. 3(b).

2.2.3. Scaling Error

In the MLA, deviation of the lens diameter of the microlens surface from the standard diameter introduces a scaling-factor error, as shown in Fig. 3(c). Lens-diameter selection is important because it determines the light-propagation characteristics and field angle. The scaling process enlarges an object’s dimensions, based on its original coordinates, to achieve the desired size.

III. PROPOSED METHOD

Analyses have shown that, in some subimages of a light-field image, significant distortion arises from positional shifts, border scattering, and brightness variation [13]. Here we introduce a method for calibrating a distorted IIM image using raw white images, following the steps shown in Fig. 4.

Figure 4. Process diagram for the proposed method of microlens-error estimation and correction.

To calibrate the error of an IIM image, a white-scene image of 1856 × 1856 pixels is captured; notably, this image must be both white and homogeneous. A region of interest (ROI) is selected by cropping (e.g. to obtain a 10 × 10 MLA), as shown in Fig. 5(a). The red-green-blue (RGB) image is then converted to grayscale. For image binarization, adaptive thresholding is applied, as shown in Fig. 5(b); adaptive thresholding sets the threshold from the mean value of a specific neighborhood of each pixel. The light-intensity area of each microlens is identified using 8-connected component labeling, and its center position is computed using a center-of-gravity method, defined as the intensity-weighted average of the pixel coordinates in each row and column, as shown in Figs. 5(c) and 5(d) respectively. The rotational angle is then calculated from the center indices of each row, and the measured angle is used to adjust the image according to the direction of the angle, following Eq. (2):

Figure 5. Intermediate steps of the calibration process. (a) A white Integral-imaging microscopy (IIM) image with a region of interest (ROI), (b) image binarization, (c) circle detection, and (d) microlens-center identification.

x′ = x cos α − y sin α
y′ = x sin α + y cos α.    (2)

Here x and y denote the original coordinates of the image, and α is the rotational angle.

The MLA translational error is illustrated in Fig. 6. Extracting the subimages from a white image is an important precondition for identifying the microlens gravity-index-shift error in the MLA, as shown in Fig. 6(b). Each subimage contains 29 × 29 pixels. The subimage is converted to grayscale, and Otsu’s method [23, 24] is used to binarize it, as shown in Fig. 6(c). In the binary image, each microlens is defined as a distinct region using the circle-detection method; Fig. 6(d) presents the intermediate results of these steps. The average translation position is then calculated for each row and column from the microlenses’ center-pixel positions; any deviation from this average constitutes translational error. Accordingly, each original position is shifted toward the average center, as shown in Fig. 7(b).
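The average-translation step can be sketched as follows, assuming the detected centers are stored in an (R, C, 2) array of (row, column) coordinates; this layout and the helper name are illustrative, not the paper's.

```python
import numpy as np

def translation_offsets(centers):
    # Reference grid: per-row mean of the row coordinate, per-column mean of
    # the column coordinate, defining where each microlens center "should" be.
    row_ref = centers[:, :, 0].mean(axis=1, keepdims=True)   # shape (R, 1)
    col_ref = centers[:, :, 1].mean(axis=0, keepdims=True)   # shape (1, C)
    ideal = np.stack(np.broadcast_arrays(row_ref, col_ref), axis=-1)
    return ideal - centers  # shift that moves each center onto the grid

# A perfectly regular 3 x 3 grid of centers (29-pixel pitch) needs no shift.
centers = np.stack(
    np.meshgrid(np.arange(3) * 29.0, np.arange(3) * 29.0, indexing="ij"),
    axis=-1)
print(translation_offsets(centers))  # all zeros
```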

Figure 6. Microlens subimage division. (a) A raw white image, (b) extracting a microlens subimage, (c) binarization of a subimage, and (d) subimage area shifted from the center point.

Figure 7. The MLA’s first subimage: (a) without correction, (b) average translation, and (c) after scaling correction.

To address scaling-factor error, the subimage’s center position and area of light intensity must first be calculated. The determined scaling factor is applied to convert the original subimage to the maximum lenslet area, as shown in Fig. 7(c). In our experiments the maximum size of a single lenslet’s area is 29 × 29 pixels. The bilinear-interpolation technique is then utilized to maximize the subimage.
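The scaling step can be sketched with an order-1 (linear) zoom standing in for the bilinear interpolation described above; `scipy.ndimage.zoom` and the 5 × 5 test subimage are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def scale_subimage(sub, target=29):
    # Enlarge an undersized subimage to the nominal lenslet footprint
    # (29 x 29 pixels) with linear interpolation.
    zoom = (target / sub.shape[0], target / sub.shape[1])
    return ndimage.zoom(sub.astype(float), zoom, order=1)

small = np.arange(25, dtype=float).reshape(5, 5)
scaled = scale_subimage(small)
print(scaled.shape)  # (29, 29)
```

Linear interpolation cannot overshoot, so the enlarged subimage stays within the original intensity range.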

IV. RESULTS

In this experiment we use a BX41 microscope (Olympus, Tokyo, Japan), as shown in Fig. 8, consisting of an objective lens, tube lens, lens array, CCD, and control computer. The lens array is arranged in a 100 × 100 square matrix. Each lenslet has an area of 125 × 125 μm² and a spherical surface. Our method is implemented in the MATLAB® environment. Detailed specifications of the optical components and personal computer are given in Table 1.

TABLE 1. Specifications of the proposed Integral-imaging microscopy (IIM) calibration system.

Optical devices  |  Specifications
IIM unit         |  Objective lens: ×10
                 |  Tube lens: ×10
MLA              |  Number of lenses: 100 × 100 (ROI 64 × 64)
                 |  Elemental lens pitch: 125 μm
Camera           |  Sensor resolution: 2048 × 2048 pixels (RGB)
                 |  Pixel pitch: 5.5 μm
                 |  Focal length: 2.4 mm
                 |  Frame rate: 90 fps
User PC          |  CPU: Intel Core i5-9400F 2.9 GHz
                 |  Memory: 16 GB
                 |  Operating system: Windows 10 Pro (64-bit)


Figure 8. Experimental setup for the proposed IIM-camera calibration system.

The rectification method of [25] is compared to the proposed method to verify its effectiveness, in terms of the structural similarity index (SSIM) and mean squared error (MSE). In this system the MLA rotational errors are set from 0.1° to 1.0°. Figure 9 compares the results for the different rotational factors: the proposed method outperforms the traditional one in all cases, although the SSIM and MSE values of the two methods remain close.
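For reference, the two comparison metrics can be computed as follows. MSE is standard; the SSIM shown is a single-window (global) simplification of the usual windowed SSIM, so its values will differ somewhat from a full implementation. Images are assumed normalized to [0, 1].

```python
import numpy as np

def mse(a, b):
    # Pixelwise mean squared error.
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window SSIM over the whole image (a simplification).
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

a = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(mse(a, a), global_ssim(a, a))  # identical images: MSE 0, SSIM ~ 1
```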

Figure 9. Comparison of the proposed method and the image-rectification method of [25]. The SSIM and MSE values are calculated according to the rotation factor (0.1°–1.0°).

Using the IIM method, our calibration method is applied to the IIM images of four specimens. Figure 10 shows the corresponding images, which include (i) a 2D image, (ii) the EIA used to capture the directional-view images, (iii) the initial OVI reconstructed using the computational integral-imaging reconstruction (CIIR) [26, 27] algorithm, and (iv) the final reconstructed OVI image. In this experiment the brightness of the initial OVI is not uniform, as shown in Fig. 10(iii). The image’s brightness improves after applying our proposed calibration method, as shown in Fig. 10(iv).

Figure 10. Experimental results for different specimens: (a) fly, (b) microchip, (c) mosquito, and (d) sand crystal; (i) 2D image, (ii) 64 × 64 elemental image, (iii) initial OVI without correction, and (iv) final orthographic-view image (OVI) with correction.

Figure 11 shows the percentage of bright pixels in each of the four initial and calibrated specimen OVIs. Our experimental results show that the proposed calibration method successfully enhances the brightness uniformity of the directional-view images. The images of the microchip and sand crystal show better results than those of the fly and mosquito, owing to the available light information: as Fig. 10 shows, the microchip and sand crystal are brighter, reflecting more light. Calibration has little effect in weak-light areas; for the same reason, the mosquito sample exhibits the worst performance.
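The bright-pixel percentage plotted in Fig. 11 can be computed as a thresholded pixel fraction; the 0.5 cutoff below is an assumed value, since the paper does not state its threshold here.

```python
import numpy as np

def bright_pixel_percentage(img, threshold=0.5):
    # Fraction of pixels at or above the brightness threshold, as a percentage.
    return 100.0 * float(np.mean(img >= threshold))

uniform = np.full((8, 8), 0.8)
print(bright_pixel_percentage(uniform))  # 100.0
```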

Figure 11. Comparison of orthographic-view images (OVI) before and after correction, with the measured percentage of bright pixel values for different specimens (fly, microchip, mosquito, and sand crystal).

V. CONCLUSION

In this work, we introduced a ray-tracing-based MLA calibration method for an IIM system. The goal was to minimize errors in MLA alignment, and thus improve image quality and brightness. The proposed method attenuated MLA errors using a reference image. An efficient error-detection and -correction method was proposed, in which a four-step calibration process is employed for EIA processing. Experiments with real IIM images showed that the proposed calibration method can accommodate nonlinear MLA errors to achieve an enhanced, error-free IIM image with improved brightness uniformity. Our experimental results for four images of various specimens (a fly, a microchip resistor, a mosquito, and a sand crystal) demonstrate that the proposed method has the robustness needed to correct for various errors. Future work is expected to include a more advanced lens-distortion model, which should improve the accuracy of the IIM image-capturing system by overcoming the various complexities associated with the microscope.

ACKNOWLEDGMENT

This work was supported by the National Research Foundation of Korea (NRF) (NRF-2018R1D1A3B07044041, NRF-2020R1A2C1101258), and by the Grand Information Technology Research Center support program (IITP-2020-0-01462) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation) and funded by the Korean government.

Figure 1. Optical setup and schematic diagram of an integral-imaging microscopy (IIM) system.
Current Optics and Photonics 2021; 5: 524-531, https://doi.org/10.3807/COPP.2021.5.5.524



References

  1. M. Levoy, “Light fields and computational imaging,” Computer 39, 46-55 (2006).
    CrossRef
  2. K.-C. Kwon, K. H. Kwon, M.-U. Erdenebat, Y.-L. Piao, Y.-T. Lim, M. Y. Kim and N. Kim, “Resolution-enhancement for an integral imaging microscopy using deep learning,” IEEE Photonics J. 11, 6900512 (2019).
    CrossRef
  3. M. S. Alam, K.-C. Kwon, M.-U. Erdenebat, M. Y. Abbass, A. Alam and N. Kim, “Super-resolution enhancement method based on generative adversarial network for integral imaging microscopy,” Sensors 21, 2164 (2021).
    Pubmed KoreaMed CrossRef
  4. M. Y. Abbass, K.-C. Kwon, M. S. Alam, Y.-L. Piao, K.-Y. Lee and N. Kim, “Image super resolution based on residual dense CNN and guided filters,” Multimed. Tools Appl. 80, 5403-5421 (2021).
    CrossRef
  5. K.-C. Kwon, M.-U. Erdenebat, S. Alam, Y.-T. Lim, K. G. Kim and N. Kim, “Integral imaging microscopy with enhanced depth-of-field using a spatial multiplexing,” Opt. Express 24, 2072-2083 (2016).
    Pubmed CrossRef
  6. C. Shin, H.-G. Jeon, Y. Yoon, I. S. Kweon and S. J. Kim, “Epinet: A fully-convolutional neural network using epipolar geometry for depth from light field images,” Proc. IEEE Conference on Computer Vision and Pattern Recognition, 4748-4757 (2018 Jun).
    CrossRef
  7. K.-C. Kwon, K. H. Kwon, M.-U. Erdenebat, Y.-L. Piao, Y.-T. Lim, Y. Zhao, M. Y. Kim and N. Kim, “Advanced three-dimensional visualization system for an integral imaging microscope using a fully convolutional depth estimation network,” IEEE Photonics J. 12, 3900714 (2020).
    CrossRef
  8. T. Georgiev and C. Intwala, “Light field camera design for integral view photography,” in Tech. Rep., (Adobe Systems Incorporated, 2006).
  9. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,,” in Tech. Rep. CTSR-02, (Department of Computer Science, Stanford University, 2005).
  10. K.-C Kwon, J.-S. Jeong, M.-U. Erdenebat, Y.-T. Lim, K.-H. Yoo and N. Kim, “Real-time interactive display for integral imaging microscopy,” Appl. Opt. 53, 4450-4459 (2014).
    Pubmed CrossRef
  11. S. Li, Y. Yuan, Z. Gao and H. Tan, “High-accuracy correction of a microlens array for plenoptic imaging sensors,” Sensors 19, 3922 (2019).
    Pubmed KoreaMed CrossRef
  12. T.-J. Li, S.-N. Li, S. Li, Y. Yuan and H.-P. Tan, “Correction model for microlens array assembly error in light field camera,” Opt. Express 24, 24524-24543 (2016).
    Pubmed CrossRef
  13. S.-N. Li, Y. Yuan, B. Liu, F.-Q. Wang and H.-P. Tan, “Influence of microlens array manufacturing errors on light-field imaging,” Opt. Commun. 410, 40-52 (2018).
    CrossRef
  14. S. Shi, J. Wang, J. Ding, Z. Zhao and T. H. New, “Parametric study on light field volumetric particle image velocimetry,” Flow Meas. Instrum. 49, 70-88 (2016).
    CrossRef
  15. J. Zhao, Z. Liu and B. Guo, “Three-dimensional digital image correlation method based on a light field camera,” Opt. Lasers Eng. 116, 19-25 (2019).
    CrossRef
  16. L. Su, Q. Yan, J. Cao and Y. Yuan, “Calibrating the orientation between a microlens array and a sensor based on projective geometry,” Opt. Lasers Eng. 82, 22-27 (2016).
    CrossRef
  17. P. Suliga and T. Wrona, “Microlens array calibration method for a light field camera,” in Proc. 19th International Carpathian Control Conference-ICCC, (Szilvasvarad, Hungary, 2018). pp. 19-22.
    CrossRef
  18. J. Jin, Y. Cao, W. Cai, W. Zheng and P. Zhou, “An effective rectification method for lenselet-based plenoptic cameras,” Proc. SPIE 10020, 100200F (2016).
  19. D. Cho, M. Lee, S. Kim and Y.-W. Tai, “Modeling the calibration pipeline of the lytro camera for high quality light-field image reconstruction,” in Proc. IEEE International Conference on Computer Vision, (Sydney, Australia, 2013). pp. 3280-3287.
    CrossRef
  20. Z. Zhao, M. Hui, M. Liu, L. Dong, X. Liu and Y. Zhao, “Centroid shift analysis of microlens array detector in interference imaging system,” Opt. Commun. 354, 132-139 (2015).
    CrossRef
  21. X. Liu, X. Zhang, F. Fang, Z. Zeng, H. Gao and X. Hu, “Influence of machining errors on form errors of microlens arrays in ultra-precision turning,” Int. J. Mach. Tools Manuf. 96, 80-93 (2015).
    CrossRef
  22. V. Dembele, I. Choi, S. Kheiryzadehkhanghah, S. Choi, J. Kim, C. S. Kim and D. Kim, “Interferometric snapshot spectro-ellipsometry: calibration and systematic error analysis,” Curr. Opt. Photon. 4, 345-352 (2020).
  23. K. Wu, E. Otoo and A. Shoshani, “Optimizing connected component labeling algorithms,” Proc. SPIE 5747, 1965-1976 (2005).
  24. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. SMC-9, 62-66 (1979).
    CrossRef
  25. S. Li, Y. Zhu, C. Zhang, Y. Yuan and H. Tan, “Rectification of images distorted by microlens array errors in plenoptic cameras,” Sensors 18, 2019 (2018).
    Pubmed KoreaMed CrossRef
  26. Y.-T. Lim, J.-H. Park, K.-C. Kwon and N. Kim, “Resolution-enhanced integral imaging microscopy that uses lens array shifting,” Opt. Express 17, 19253-19263 (2009).
    Pubmed CrossRef
  27. S. Alam, K.-C. Kwon, M.-U. Erdenebat, Y.-T. Lim, S. Imtiaz, A. Sufian, S.-H. Jeon and N. Kim, “Resolution enhancement of an integral imaging microscopy using generative adversarial network,” in Proc. Conference on Lasers and Electro-Optics Pacific Rim-CLEO-PR, (Sydney, Australia, 2020). paper C3G_4.
    KoreaMed CrossRef