Research Paper

Curr. Opt. Photon. 2023; 7(6): 701-707

Published online December 25, 2023 https://doi.org/10.3807/COPP.2023.7.6.701

Copyright © Optical Society of Korea.

Gaussian Model for Laser Image on Curved Surface

Annmarie Grant, Sy-Hung Bach, Soo-Yeong Yi

Department of Electrical and Information Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea

Corresponding author: suylee@seoultech.ac.kr, ORCID 0000-0001-8110-1468

Received: July 17, 2023; Revised: September 20, 2023; Accepted: October 9, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In laser imaging, accurate extraction of the laser's center is essential. Several methods exist to extract the laser's center in an image, such as the geometric mean, parabolic curve fitting, and Gaussian curve fitting. Gaussian curve fitting is the most suitable because it is based on the physical properties of the laser. The width of a Gaussian laser beam depends on the distance from the laser source to the target object. It is generally assumed that this distance remains constant across a laser spot, resulting in a symmetric Gaussian model for the laser image. However, on a curved object surface the distance is not constant: the laser beam is narrower on the side closer to the focal point of the laser light and wider on the side closer to the laser source, which causes the distribution of the laser beam to skew. This study presents a modified Gaussian model for laser imaging that incorporates the slant angle of a curved object. The proposed method is verified with simulations and experiments.

Keywords: Gaussian intensity, Imaging, Shape measurement, Slant angle, Stripe laser

OCIS codes: (120.5800) Scanners; (120.6085) Space instrumentation; (120.6650) Surface measurements

I. INTRODUCTION

The laser imaging system is a noncontact, active 3D measurement system that projects a collimated stripe of laser light of distinct frequency onto a target object and measures, through a camera, the deformation of the laser light according to the shape of the object [1-6]. The 3D shape data can be obtained by triangulation [7]. Laser imaging methods are used in a myriad of fields, such as 3D reconstruction of cultural relics and scenes [8, 9], welding [10-13], underwater surveying and exploration [14-16], railway inspection [17], and biometrics [18-20].

Much of the current research in stripe laser imaging has focused on noise reduction [1, 2, 10, 15] and laser-camera calibration [7, 21]. Extracting the center of the laser beam is also a crucial task for shape measurement. Because the laser light spreads across several pixels in the image, it is difficult to determine the center position of the laser light accurately. Conventional methods simply select the pixel with the maximum intensity among the laser pixels, average the positions of the two peaks of the laser pixels [22], or fit a parabolic curve to the laser pixels and take the maximum position of the curve [23-25]. Convolutional neural networks (CNNs) have also been combined with feature clustering in an automatic laser-stripe detection algorithm [26]. These methods are based on empirical observation of laser images, not on the physical phenomena of laser light.

The intensity of a laser beam is known to physically follow a Gaussian distribution. In [27], a Gaussian model of laser intensity on a target object is presented as a function of the distance to the laser source. This model can be used to determine the center position of the laser beam on the object. However, because the object to be measured generally has an irregularly curved surface, not only the distance but also the slant angle of the surface influences the laser light. On a slanted surface, each end of the laser beam is at a different distance from the laser source, causing the Gaussian beam to skew. Based on this observation, this study presents a modified Gaussian model for laser light on a slanted surface that improves accuracy and provides a mathematical interpretation of the physical cause of the skewed laser distribution. Experiments verify the proposed model and show that it can extract an accurate center of the laser light on an object surface at a slant angle.

The paper is organized as follows: Related works are reviewed in Section II. Section III presents the modified Gaussian model for laser beam intensity, which includes the slant angle of a target object. The experiments and results are discussed in Section IV, and Section V concludes the paper.

II. RELATED WORKS

Various ever-evolving methods exist to detect the center of a laser beam in an image. A common method is to detect the edges of the laser using a Sobel operator [12] and compute their geometric center. The gray gravity method is widely used [22, 28], though it is sensitive to laser saturation [4]. When derivative filters are convolved with the image, the zero-crossing of the result gives the peak location [29, 30]. Otsu image segmentation can be used to isolate the laser stripe from the background, so that center-detection algorithms can calculate the center of the laser beam more accurately [31, 32]. Automatic algorithms utilizing neural networks are useful for industrial applications [26, 33]. The optical properties of the measured surface can also guide the choice of derivative filter for increased accuracy [3]. These methods have various demerits, such as complexity, processing time, and costly memory requirements. Figure 1 shows common issues in laser imaging.

Figure 1. Common issues in laser imaging. (a) Noise, (b) saturation, and (c) asymmetry.
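The gray gravity method mentioned above is essentially an intensity-weighted centroid of the stripe cross-section. A minimal Python sketch (the pixel grid and beam width below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def gray_gravity_center(intensity, positions=None):
    """Intensity-weighted centroid of a laser stripe cross-section."""
    intensity = np.asarray(intensity, dtype=float)
    if positions is None:
        positions = np.arange(intensity.size)
    return float(np.sum(positions * intensity) / np.sum(intensity))

# Synthetic symmetric Gaussian cross-section centered at pixel 40.5
i = np.arange(81)
profile = np.exp(-2 * (i - 40.5) ** 2 / 8.0 ** 2)
center = gray_gravity_center(profile)  # ~40.5 for a symmetric profile
```

On a symmetric profile the centroid coincides with the peak; on the skewed profiles discussed below it drifts away from the true center, which is the failure mode the proposed model addresses.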

Because laser intensity physically follows a Gaussian distribution, Gaussian curve fitting is the most appropriate way to model the laser beam in an image. A sum of multiple Gaussian curves of varying variances and standard deviations can be adopted for the curve fitting, producing more accurate results than a single Gaussian curve [4]. In [34], the optimal laser center was estimated using weighted Gaussian signals of varying expectations and variances. While the effect of asymmetry due to object curvature on the gray distribution model was studied in [35], the reasoning behind it was not discussed.

This study proposes a modified mathematical model for Gaussian curve fitting of a laser beam in an image. The modification incorporates the slant angle of a target object to enhance accuracy and to explain the physical meaning behind the asymmetry of the beam distribution.

III. SLANT GAUSSIAN MODEL

3.1. Gaussian Distribution of Laser Intensity

The intensity of a Gaussian laser beam in the image plane can be expressed as a Gaussian distribution of the form:

\[ I_i = I_0 \exp\!\left( -\frac{2(i-\mu)^2}{w(y)^2} \right), \tag{1} \]

where I_0 is the laser intensity at the center of the beam, i is the pixel index in the image, μ is the mean, and w(y) represents the beam width as a function of the distance from the laser source [27]. The beam width can be expressed as

\[ w(y) = w_o \sqrt{1 + \left( \frac{D - y}{y_R} \right)^2}, \tag{2} \]

where w_o is the beam waist, y_R is the Rayleigh length, D is the distance from the laser source to the focal point, and y represents the distance from the laser source to the object. The Rayleigh length is given by y_R = πw_o²/λ, where λ denotes the wavelength of the laser. For example, for a red laser with a 650 nm wavelength and a beam waist of 0.1 mm, y_R is 48.3 mm.
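The numerical example above can be checked directly; a quick sketch using the values quoted in the text:

```python
import math

# Rayleigh length y_R = pi * w_o^2 / lambda, with the text's example values
wavelength = 650e-9  # red laser, in meters
beam_waist = 0.1e-3  # w_o, in meters
y_R = math.pi * beam_waist ** 2 / wavelength  # in meters
print(round(y_R * 1e3, 1))  # -> 48.3 (mm)
```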

In the region between the focal point and the laser source far outside the Rayleigh length, i.e., where (D − y) ≫ y_R, the beam width has an approximately linear relationship with the distance to the laser:

\[ w(y) \approx w_s (D - y), \tag{3} \]

where w_s is defined as w_o/y_R. It is assumed in this study that the target object is placed at a position where Eq. (3) is satisfied.
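The quality of the linear approximation in Eq. (3) can be checked numerically; a small sketch using the same example values as above:

```python
import math

wavelength = 650e-9          # m
w_o = 0.1e-3                 # m
y_R = math.pi * w_o ** 2 / wavelength  # ~48.3 mm
w_s = w_o / y_R              # divergence slope used in Eq. (3)

def w_exact(d):
    """Exact beam width, Eq. (2), with d = D - y the distance from the focus."""
    return w_o * math.sqrt(1 + (d / y_R) ** 2)

def w_linear(d):
    """Linear far-field approximation, Eq. (3)."""
    return w_s * d

# Ten Rayleigh lengths from the focus the linear model is within ~0.5%
d = 10 * y_R
rel_err = abs(w_exact(d) - w_linear(d)) / w_exact(d)
```

At d = 10·y_R the relative error is about 0.5%, so the linear regime assumed in the paper is a good approximation there.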

3.2. Laser Beam Width as a Function of Slant Angle

A simple stripe laser imaging system is shown in Fig. 2, where the laser illuminates a curved object. Figure 3 shows the slant Gaussian model, where θ is the slant angle at the laser spot on the object.

Figure 2. Stripe laser imaging on a curved object, with an image example.

Figure 3. Slant Gaussian model for laser light on an object.

The distance to the object at i = 0 is denoted by d_0, and the point on the object surface at i = 0 is represented by P_0 = [0, d_0]^T. The tangential vector on the object surface is A = [cos θ, sin θ]^T. Thus, a point on the object surface along i can be expressed as

\[ P = P_0 + iA. \tag{4} \]

Eq. (4) expands to

\[ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ d_0 \end{bmatrix} + i \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}. \tag{5} \]

Because the beam width depends on the distance y = d_0 + i sin θ, Eq. (1) can be rewritten as

\[ I_i = I_0 \exp\!\left( -\frac{2(i-\mu)^2}{w_s^2 (D - d_0 - i\sin\theta)^2} \right). \tag{6} \]

Computer simulations of this Gaussian model for ±50° slant angles are shown in Fig. 4. As expected, the slope of the +50° curve is steeper on the right side, since that side is closer to the focal point. The −50° curve exhibits the opposite behavior: It is steeper on the left side, which is closer to the focal point. Eq. (6) thus explains the skewness of the laser beam distribution in the image, and with this model a more accurate center of the laser beam can be extracted.

Figure 4. Simulation graphs of the slant Gaussian model. (a) 50°, (b) −50°.
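The simulation can be reproduced in a few lines. The parameter values below (w_s, D, d_0, the stripe extent) are illustrative assumptions, not the paper's:

```python
import numpy as np

# Slant Gaussian model of Eq. (6); parameter values are illustrative
I0, mu = 1.0, 0.0
w_s, D, d0 = 0.025, 220.0, 200.0   # mm; D - d0 = 20 mm from surface to focus
i = np.linspace(-2.0, 2.0, 2001)   # position along the stripe section, mm

def slant_gaussian(i, theta_deg):
    th = np.deg2rad(theta_deg)
    width = w_s * (D - d0 - i * np.sin(th))  # beam width varies along i
    return I0 * np.exp(-2 * (i - mu) ** 2 / width ** 2)

I_pos = slant_gaussian(i, 50)
I_neg = slant_gaussian(i, -50)
# For +50 deg the right side is closer to the focus (narrower, steeper),
# so less energy lies right of the center than left of it:
tail_right = I_pos[i > 0.5].sum()
tail_left = I_pos[i < -0.5].sum()
```

Plotting I_pos and I_neg reproduces the mirror-image skew of Figs. 4(a) and 4(b).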

Figures 5 and 6 show simulation results comparing the center points of the laser image extracted by the geometric mean and by the proposed Gaussian model as a function of the slant angle of the target object. The center points detected using the geometric mean method are denoted by colored circles in Fig. 5. As the slant angle increases, the skew shifts increasingly to the left, and the geometric mean moves away from the center point of the slant Gaussian curve.

Figure 5. Intensity distribution skew as a result of slant angle.

Figure 6. Difference in center positions of the laser image obtained from the geometric mean and the proposed Gaussian model. (a) Positive angles, (b) negative angles.
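The drift of the simple two-edge center estimate with slant angle can be demonstrated on the same model. Here the "geometric mean" is taken as the midpoint of the two half-maximum crossings; all parameter values are again illustrative assumptions:

```python
import numpy as np

I0, mu = 1.0, 0.0
w_s, D, d0 = 0.025, 220.0, 200.0     # illustrative values, as before
i = np.linspace(-2.0, 2.0, 40001)    # fine grid, 0.0001 mm steps

def profile(theta_deg):
    th = np.deg2rad(theta_deg)
    width = w_s * (D - d0 - i * np.sin(th))
    return I0 * np.exp(-2 * (i - mu) ** 2 / width ** 2)

def edge_midpoint(I):
    """Midpoint of the two half-maximum crossings (two-edge average)."""
    above = np.where(I >= 0.5 * I.max())[0]
    return 0.5 * (i[above[0]] + i[above[-1]])

# True peak of Eq. (6) stays at mu; the edge midpoint drifts with slant angle
shift_10 = abs(edge_midpoint(profile(10)) - mu)
shift_60 = abs(edge_midpoint(profile(60)) - mu)
```

The midpoint deviation grows with the slant angle, matching the trend shown in Figs. 5 and 6.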

IV. EXPERIMENTAL RESULTS

4.1. Experiment Setup

A monocular camera and a stripe laser source were used in the experiment to verify the proposed model. Table 1 shows the specification of the experimental setup of the laser imaging system.

TABLE 1. Specifications of the experimental setup

Camera Resolution: 2,048 × 1,536
Image Sensor Size: 1/1.8″
Pixel Size (μm): 3.45
Laser-object Distance (mm): 200
Imaging Area (mm²): 49.3 × 37
Laser Wavelength (nm): 650


An infrared laser source has a wavelength longer than 780 nm, whereas the red laser used here has a wavelength of 650 nm. The longer wavelength of an infrared laser leads to a shorter Rayleigh length. However, because infrared laser light is invisible and requires an additional optical filter in the camera, this study adopts a red laser for experimental convenience.

Instead of tilting the target object, the laser source was installed at an adjustable slant angle. A flat object covered with smooth, nonreflective white paper was placed beneath the camera. The experimental setup and a sample laser light image are shown in Fig. 7.

Figure 7. Experiment setup and experiment image.

Laser images were obtained at angles from −70° to 70° in 5° steps. The well-known Gaussian image filter was employed early in the image processing to reduce speckle noise, before curve fitting was applied to find the center of the laser image. The parameters I_0, w_0, D, d_0, μ, and θ of the proposed model in Eq. (6) were obtained by curve fitting using the gradient descent method.
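As a dependency-free stand-in for that fit, the sketch below recovers (μ, θ) from a synthetic stripe by brute-force least squares; the paper fits all six parameters by gradient descent, whereas this simplified example holds I_0, w_s, D, and d_0 at known values, and every number here is an illustrative assumption:

```python
import numpy as np

I0, w_s, D, d0 = 1.0, 0.025, 220.0, 200.0   # held fixed (illustrative values)
i = np.linspace(-2.0, 2.0, 401)

def model(mu, theta):
    width = w_s * (D - d0 - i * np.sin(theta))
    return I0 * np.exp(-2 * (i - mu) ** 2 / width ** 2)

true_mu, true_theta = 0.3, np.deg2rad(35.0)
observed = model(true_mu, true_theta)        # noiseless synthetic stripe

mus = np.linspace(-1.0, 1.0, 201)                   # 0.01 mm steps
thetas = np.deg2rad(np.linspace(-70.0, 70.0, 281))  # 0.5 deg steps
errs = np.array([((model(m, t) - observed) ** 2).sum()
                 for m in mus for t in thetas])
k = int(errs.argmin())
mu_hat = mus[k // thetas.size]
theta_hat = thetas[k % thetas.size]
```

On noiseless data the grid search lands on the true (μ, θ) to within the grid resolution; a gradient-based optimizer, as used in the paper, refines this continuously.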

4.2. Results

Figure 8 shows the curve fitting of the proposed model for ±10° and ±70°. For positive angles, the left side of the Gaussian distribution has a gentler slope than the right side because, as shown in the figures, the left side of the target object is closer to the laser source. The opposite holds for negative angles.

Figure 8. Curve fitting of the proposed Gaussian model. (a) ±10°, (b) ±70°.

The center position of the laser light extracted using the proposed method was compared with the conventional geometric mean of the two peak positions of the laser pixels [22]. Figure 9 shows the difference between the center positions extracted by the two methods at each slant angle. As the slant angle increases, the pixel difference between the two methods increases: a larger slant angle causes a larger difference in beam width between the two ends of the laser spot, which results in a Gaussian distribution that is increasingly skewed to one side. The increasing skew appears as an increasing difference between the simple geometric mean of the two laser peaks and the peak of the slant Gaussian curve. This result accords with the simulation results shown in Fig. 5.

Figure 9. Comparison of center extraction methods. (a) Positive angles, (b) negative angles.

Table 2 compares the slant angles obtained from the proposed slant Gaussian model with the ground-truth slant angles of the target object. The average error between the ground-truth values and those of the proposed Gaussian model is 0.3487° for positive angles and −0.2803° for negative angles.

TABLE 2. Ground truth vs. experimental results of slant angle

Positive Angles (deg)
Ground Truth: 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70
Experimental Result: 10.0073, 14.9970, 19.8064, 24.9512, 29.6117, 34.9572, 39.3906, 44.9498, 49.4236, 54.9543, 59.1255, 64.9804, 68.3118

Negative Angles (deg)
Ground Truth: −10, −15, −20, −25, −30, −35, −40, −45, −50, −55, −60, −65, −70
Experimental Result: −10.0599, −14.9817, −19.7952, −24.9643, −29.7594, −34.9519, −39.7336, −44.9516, −49.5814, −54.9581, −59.1232, −64.9686, −68.5273
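The quoted average errors follow directly from the Table 2 values (ground truth minus estimate, averaged separately over positive and negative angles):

```python
import numpy as np

# Table 2 values: ground-truth slant angles and estimates from the model
gt = np.arange(10, 71, 5).astype(float)   # 10, 15, ..., 70
est_pos = np.array([10.0073, 14.9970, 19.8064, 24.9512, 29.6117, 34.9572,
                    39.3906, 44.9498, 49.4236, 54.9543, 59.1255, 64.9804,
                    68.3118])
est_neg = np.array([-10.0599, -14.9817, -19.7952, -24.9643, -29.7594,
                    -34.9519, -39.7336, -44.9516, -49.5814, -54.9581,
                    -59.1232, -64.9686, -68.5273])

avg_err_pos = float(np.mean(gt - est_pos))    # ground truth - estimate
avg_err_neg = float(np.mean(-gt - est_neg))
print(round(avg_err_pos, 4), round(avg_err_neg, 4))  # -> 0.3487 -0.2803
```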

V. CONCLUSION

This study proposed an imaging model of laser light for extracting an accurate center of the laser image. The width of a laser beam is a function of the distance from the laser source. When a laser illuminates a slanted surface of a target object, the distance from the laser source to the surface varies from one side of the beam to the other, resulting in a varying width of the laser image and a skewed intensity distribution.

Based on the physical phenomena of laser light, the proposed model accounts for the difference in slope on each side of the intensity distribution by incorporating the slant angle of the object as a variable of the beam width. Thus, a more accurate Gaussian model is achieved. Experiments showed a difference between the center pixel extracted by curve fitting of the proposed model and that of the geometric center method. In addition, the proposed model was able to estimate the slant angle of a surface accurately.

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

References

1. Y. Wang and B. Geng, "Edge detection method used for red laser stripe located on microscope images," in Proc. IEEE International Conference on Mechatronics and Automation (Changchun, China, Aug. 5-8, 2018), pp. 2346-2351.
2. Q. Wang, Y. Qian, J. Yang, and Y. Wang, "A novel image evaluation method for three-dimensional laser scanning in full degrees of freedom," in Proc. 2nd International Conference on Future Computer and Communication (Wuhan, China, May 21-24, 2010), pp. V3-517.
3. J. Forest, J. Salvi, E. Cabruja, and C. Pous, "Laser stripe peak detector for 3D scanners. A FIR filter approach," in Proc. 17th International Conference on Pattern Recognition (Cambridge, UK, Aug. 26, 2004), pp. 646-649.
4. I. Besic and Z. Avdagic, "Laser stripe sub-pixel peak detection in real-time 3D scanning using power modulation," in Proc. 42nd Annual Conference of the IEEE Industrial Electronics Society (Florence, Italy, Oct. 23-26, 2016), pp. 951-956.
5. L. H. Pham, D. N.-N. Tran, C. H. Rhie, and J. W. Jeon, "Analysis of the smartphone camera exposure effect on laser extraction," in Proc. International Conference on Electronics, Information, and Communication-ICEIC (Jeju, Korea, Jan. 31-Feb. 3, 2021), pp. 1-4.
6. H. Zhao, R. Dai, and C. Xiao, "A machine vision system for stacked substrates counting with a robust stripe detection algorithm," IEEE Trans. Syst. Man Cybern.: Syst. 49, 2352-2361 (2019).
7. S. Yi and S. Min, "A practical calibration method for stripe laser imaging system," IEEE Trans. Instrum. Meas. 70, 5003307 (2021).
8. X. Gao, S. Shen, L. Zhu, T. Shi, Z. Wang, and Z. Hu, "Complete scene reconstruction by merging images and laser scans," IEEE Trans. Circ. Syst. Video Technol. 30, 3688-3701 (2020).
9. J. Liu, J. Zhang, and J. Xu, "Cultural relic 3D reconstruction from digital images and laser point clouds," in Proc. Congress on Image and Signal Processing (Sanya, China, May 27-30, 2008), pp. 349-353.
10. X. Liu, "Image processing in weld seam tracking with laser vision based on Radon transform and FCM clustering segmentation," in Proc. International Conference on Intelligent Computation Technology and Automation (Changsha, China, May 11-12, 2010), pp. 470-473.
11. Y. Zou, S. Cai, P. Li, and K. Zuo, "Features extraction of butt joint for tailored blank laser welding based on three-line stripe laser vision sensor," in Proc. 29th Chinese Control and Decision Conference (Chongqing, China, May 28-30, 2017), pp. 7736-7739.
12. C. Li and G. Gong, "Research on the adaptive recognition and location of the weld based on the characteristics of laser stripes," in Proc. 10th International Conference on Intelligent Human-Machine Systems and Cybernetics (Hangzhou, China, Aug. 25-26, 2018), pp. 163-166.
13. L. Zhang, Q. Ye, W. Yang, and J. Jiao, "Weld line detection and tracking via spatial-temporal cascaded hidden Markov models and cross structured light," IEEE Trans. Instrum. Meas. 63, 742-753 (2014).
14. Y. Yang, B. Zheng, H. Zheng, Z. Wang, G. Wu, and J. Wang, "3D reconstruction for underwater laser line scanning," in Proc. MTS/IEEE OCEANS (Bergen, Norway, Jun. 10-14, 2013), pp. 1-3.
15. J. Liu, "Research on sparse code shrinkage denoise in underwater 3D laser scanning images," in Proc. IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (Chengdu, China, Dec. 20-22, 2019), pp. 140-144.
16. J. Liu, "Research on laser stripe extraction in underwater 3D laser scanning," in Proc. IEEE International Conference on Information and Automation (Ningbo, China, Aug. 1-3, 2016), pp. 159-165.
17. C. Aytekin, Y. Rezaeitabar, S. Dogru, and I. Ulusoy, "Railway fastener inspection by real-time machine vision," IEEE Trans. Syst. Man Cybern.: Syst. 45, 1101-1107 (2015).
18. R. T. McKeon and P. J. Flynn, "Three-dimensional facial imaging using a static light screen (SLS) and a dynamic subject," IEEE Trans. Instrum. Meas. 59, 774-783 (2010).
19. D. Zhang, G. Lu, W. Li, L. Zhang, and N. Luo, "Palmprint recognition using 3-D information," IEEE Trans. Syst. Man Cybern. Part C 39, 505-519 (2009).
20. B. Zhang, W. Li, P. Qing, and D. Zhang, "Palm-print classification by global features," IEEE Trans. Syst. Man Cybern. Syst. 43, 370-378 (2013).
21. J. Hattuniemi and A. Makynen, "A calibration method of triangulation sensors for thickness measurement," in Proc. IEEE Instrumentation and Measurement Technology Conference (Singapore, May 5-7, 2009), pp. 566-569.
22. H. Wang, Y. Wang, J. Zhang, and J. Cao, "Laser stripe center detection under the condition of uneven scattering metal surface for geometric measurement," IEEE Trans. Instrum. Meas. 69, 2182-2192 (2019).
23. P. Fasogbon, L. Duvieubourg, and L. Macaire, "Fast laser stripe extraction for 3D metallic object measurement," in Proc. 42nd Annual Conference of the IEEE Industrial Electronics Society (Florence, Italy, Oct. 23-26, 2016), pp. 923-927.
24. A. Molder, O. Martens, T. Saar, and R. Land, "Laser line detection with sub-pixel accuracy," Elektron. Ir Elektrotech. 20, 132-135 (2014).
25. R. B. Fisher and D. K. Naidu, "A comparison of algorithms for subpixel peak detection," in Image Technology, J. L. C. Sanz, Ed. (Springer, Berlin, Germany, 1996), Chapter 15, pp. 384-404.
26. W. Liu, H. Di, Y. Zhang, Y. Lu, X. Cheng, J. Cui, and Z. Jia, "Automatic detection and segmentation of laser stripes for industrial measurement," IEEE Trans. Instrum. Meas. 69, 4507-4515 (2020).
27. R. Paschotta, "Laser beams," in Field Guide to Lasers (SPIE Press, USA, 2008), pp. 18-19.
28. Y. Li, J. Zhou, F. Huang, and L. Liu, "Sub-pixel extraction of laser stripe center using an improved gray-gravity method," Sensors 17, 814 (2017).
29. F. Blais and M. Rioux, "Real-time numerical peak detector," Signal Process. 11, 145-155 (1986).
30. J. Forest, J. Teixidor, J. Salvi, and E. Cabruja, "A proposal for laser scanners sub-pixel accuracy peak detector," in Proc. Workshop on European Scientific and Industrial Collaboration (2003), pp. 525-532.
31. L. Zhang, Y. Xu, and C. Wu, "Features extraction for structured light stripe image based on OTSU threshold," in Proc. International Symposium on Knowledge Acquisition and Modeling (Sanya, China, Oct. 8-9, 2011), pp. 92-95.
32. G. Rao and W. Zhang, "Laser stripe center extraction based on ridgeline tracking algorithm," in Proc. 2nd International Conference on Mechanical Engineering, Intelligent Manufacturing and Automation Technology (Guilin, China, Jan. 7-9, 2022), pp. 1-4.
33. L. Yang, J. Fan, B. Huo, E. Li, and Y. Liu, "Image denoising of seam images with deep learning for laser vision seam tracking," IEEE Sensors J. 22, 6098-6107 (2022).
34. C. Li, X. Ye, Y. Gong, and T. Wang, "A center-line extraction algorithm of laser stripes based on multi-Gaussian signals fitting," in Proc. IEEE International Conference on Information and Automation-ICIA (Ningbo, China, Aug. 1-3, 2016), pp. 189-194.
35. B. Liu, Q. Xue, and P. Sun, "Research on the gray distribution model of stripe in structured light 3D measurement," in Proc. 8th International Symposium on Next Generation Electronics-ISNE (Zhengzhou, China, Oct. 9-10, 2019), pp. 1-4.

Article

Research Paper

Curr. Opt. Photon. 2023; 7(6): 701-707

Published online December 25, 2023 https://doi.org/10.3807/COPP.2023.7.6.701

Copyright © Optical Society of Korea.

Gaussian Model for Laser Image on Curved Surface

Annmarie Grant, Sy-Hung Bach, Soo-Yeong Yi

Department of Electrical and Information Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea

Correspondence to:*suylee@seoultech.ac.kr, ORCID 0000-0001-8110-1468

Received: July 17, 2023; Revised: September 20, 2023; Accepted: October 9, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In laser imaging, accurate extraction of the laser’s center is essential. Several methods exist to extract the laser’s center in an image, such as the geometric mean, the parabolic curve fitting, and the Gaussian curve fitting, etc. The Gaussian curve fitting is the most suitable because it is based on the physical properties of the laser. The width of the Gaussian laser beam depends on the distance from the laser source to the target object. It is assumed in general that the distance remains constant at a laser spot resulting in a symmetric Gaussian model for the laser image. However, on a curved surface of the object, the distance is not constant; The laser beam is narrower on the side closer to the focal point of the laser light and wider on the side closer to the laser source, which causes the distribution of the laser beam to skew. This study presents a modified Gaussian model in the laser imaging to incorporate the slant angle of a curved object. The proposed method is verified with simulation and experiments.

Keywords: Gaussian intensity, Imaging, Shape measurement, Slant angle, Stripe laser

I. INTRODUCTION

The laser imaging system is a type of noncontact and active 3D measurement system that projects a collimated stripe laser light of distinct frequency on a target object and measures the deformation of the laser light according to the shape of the object through a camera [16]. The 3D shape data can be obtained based on triangulation [7]. The laser imaging methods are used in a myriad of fields, such as cultural relic and scene 3D reconstruction [8, 9], welding [1013], underwater surveying and exploration [1416], railway inspection [17], and biometrics [1820].

Much of the current research in stripe laser imaging has concerned with noise reduction [1, 2, 10, 15] and laser camera calibration [7, 21]. Extracting the center of the laser beam is also a crucial task for shape measurement. Because the laser light appears across several pixels in the image, it is difficult to accurately determine the center position of the laser light. Conventional methods simply select the pixel with maximum intensity among the laser pixels, calculate the average position of the two peaks of the laser pixels [22], or fit a parabolic curve to the laser pixels to find the maximum position of the curve [2325] in the laser image. Convolutional neural networks (CNN) and feature clustering were combined for an automatic laser stripe detection algorithm [26]. These methods are based on the empirical observation of laser images, not on the physical phenomena of laser light.

The intensity of a laser beam is generally known as physically based on a Gaussian distribution. In [27], a Gaussian model of laser intensity on a target object is presented as a function of distance to the laser source. This model can be used to determine the center position of the laser beam on the object. However, because the object to be measured has an irregularly curved surface, not only the distance but also the slant angle of the surface influences the laser light. On the slant surface of the object, each end of the laser beam is at a different distance from the laser source causing the Gaussian beam to skew. Based on that observation, this study aims to present a modified Gaussian model for a laser light on a slant surface to improve accuracy and incorporate a mathematical interpretation of the physical cause of the skewed laser distribution. The experiments will verify the proposed model and show that it is able to extract the accurate center of a laser light on the object surface at a slant angle.

The paper is organized as follows: Related works are presented in Section II. Section III presents the modified Gaussian model for laser beam intensity to include the slant angle of a target object. The experiment and results are discussed in Section IV and concluded in Section V.

II. RELATED WORKS

Various ever-evolving methods exist to detect the center of a laser beam in an image. A common method is to detect the edges of the laser using a Sobel operator [12] and compute their geometric center. The gray gravity method is widely used [22, 28], though this method is sensitive to laser saturation [4]. When derivative filters are convolved with the image, the zero-crossing of the result gives the peak location [29, 30]. The Otsu image segmentation method can be used to isolate the laser stripe from the background, so that center detection algorithms can more accurately calculate the center of laser beam [31, 32]. Automatic algorithms utilizing neural networks are useful for industrial applications [26, 33]. The optical properties of the experiment surface can be used to select a derivative filter for increased accuracy [3]. These methods have various demerits, such as complexity, processing time, and costly memory requirements. Figure 1 shows the common issues in the laser imaging.

Figure 1. Common issues in laser imaging. (a) Noise, (b) saturation, and (c) asymmetry.

Because the physical nature of laser intensity follows a Gaussian distribution, the Gaussian curve fitting is the most appropriate for modeling the laser beam in an image. The sum of multiple Gaussian curves of varying variances and standard deviations is adopted for the curve fitting to produce more accurate results than a single Gaussian curve [4]. In [34], the optimal laser center was estimated using weighted Gaussian signals of varying expectations and variances. While the effects of asymmetry due to object curvature on the gray distribution model were studied in [35], the reasoning behind it was not discussed.

This study proposes a modified mathematical model for Gaussian curve fitting of a laser beam in an image. This modification incorporates the slant angle of a target object to enhance accuracy and explain for the physical meaning behind the asymmetry of the beam distribution.

III. SLANT GAUSSIAN MODEL

3.1. Gaussian Distribution of Laser Intensity

The intensity of a Gaussian laser beam in the image plane can be expressed as a Gaussian distribution of the form:

Ii=I0e2(iμ)2w(y)2,

where I0 is the laser intensity at the center of the beam, i is the pixel index in the image, µ is the mean, and w(y) represents the beam width as a function of the distance from the laser source [27]. The beam width can be expressed as

w(y)=wo1+DyyR2,

where wo is the beam waist, yR is the Rayleigh length, D is the distance from the laser source to the focal point, and y represents the distance from the laser source to the object. The Rayleigh length, yR, is given by πwo2/λ where λ denotes the wavelength of the laser. For example, if a red laser with 650 nm wavelength is adopted and the beam waist is 0.1 mm, the value of yR is 48.3 mm.

In the region far outside the Rayleigh length, (Dy) >> yR between the focal point and the laser source, the beam width has an approximately linear relationship with the distance to the laser, represented by

w(y)wsDy.

where ws is defined as wo/yR. It is assumed in this study that a target object is placed at the position where Eq. (3) is satisfied.

3.2. Laser Beam Width as a Function of Slant Angle

A simple stripe laser imaging system is shown in Fig. 2, where the laser is illuminated on a curved object. Figure 3 shows the slant Gaussian model where θ is the slant angle at the laser spot on the object. The distance to the object at i = 0 is denoted by d0, and the point on the object surface at i = 0 is represented by P0 = [0 d0]t. The tangential vector on the object surface is A, which is expressed as [cos θ sin θ]t. Thus, a point on the object surface along i can be expressed as

Figure 2. Stripe laser imaging on curved object and image example.

Figure 3. Slant Gaussian model for laser light on object.

P=P0+iA.

Eq. (4) is expanded as

xy=0d0+icosθsinθ.

Because the beam width depends on the distance (y), Eq. (1) can be rewritten as

I=I0e2(iμ)2ws(Dd0isinθ)2.

Computer simulations of this Gaussian model are shown in Fig. 4 for ±50° slant angles. As expected, the slope for the +10° curve is demonstrably steeper on the right side as this side is closer to the focal point. On the contrary, the −10° curve exhibits a contrasting slope: It is steeper on the left side due to its closer distance to the focal point. Eq. (6) explains the skew symmetry of the laser beam distribution in the image. Using this model, extracting the more accurate center of the laser beam is possible.

Figure 4. Simulation graphs of slant Gaussian model. (a) 50°, (b) −50°.

Figures 5 and 6 present simulation results showing how the center points extracted from the laser image by the geometric mean and by the proposed Gaussian model differ with the slant angle of the target object. The center points detected by the geometric mean method are marked with colored circles in Fig. 5. As the slant angle increases, the skew grows, and the geometric mean shifts further from the center point of the slant Gaussian curve.
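This drift can also be reproduced numerically. The sketch below uses the midpoint of the two half-maximum crossings as a simple stand-in for a geometric-mean center (the detector of [22] differs in detail), with the same illustrative model parameters as before; the estimate moves further from the true center μ = 0 as the slant angle grows:

```python
import math

def slant_gaussian(i, theta_deg, I0=1.0, w_s=0.05, D=400.0, d0=200.0, mu=0.0):
    """Slant Gaussian model of Eq. (6) with illustrative parameter values."""
    th = math.radians(theta_deg)
    w = w_s * (D - d0 - i * math.sin(th))
    return I0 * math.exp(-2.0 * (i - mu) ** 2 / w ** 2)

def midpoint_center(theta_deg, half=0.5):
    """Midpoint of the two half-maximum crossings: a simple stand-in for the
    geometric-mean center of the two laser edge positions."""
    grid = [k * 0.001 for k in range(-15000, 15001)]
    vals = [slant_gaussian(i, theta_deg) for i in grid]
    peak = max(vals)
    above = [i for i, v in zip(grid, vals) if v >= half * peak]
    return 0.5 * (above[0] + above[-1])

# The true model center is mu = 0; the midpoint estimate drifts left as the
# (positive) slant angle grows, reproducing the trend of Figs. 5 and 6.
c10, c50 = midpoint_center(10.0), midpoint_center(50.0)
print(c50 < c10 < 0.0)  # True
```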

Figure 5. Intensity distribution skew as a result of slant angle.

Figure 6. Difference in center positions of laser image obtained from geometric mean and the proposed Gaussian model. (a) Positive angles, (b) negative angles.

IV. EXPERIMENTAL RESULTS

4.1. Experiment Setup

A monocular camera and a stripe laser source were used in the experiment to verify the proposed model. Table 1 lists the specifications of the experimental setup of the laser imaging system.

TABLE 1. Specification of experimental setup.

Camera Resolution: 2,048 × 1,536
Image Sensor Size: 1/1.8″
Pixel Size (μm): 3.45
Laser-object Distance (mm): 200
Imaging Area (mm²): 49.3 × 37
Laser Wavelength (nm): 650


The wavelength of an infrared laser source is longer than 780 nm, whereas that of the red laser is 650 nm. The longer wavelength of the infrared laser leads to a shorter Rayleigh length. However, because infrared light is invisible and requires an additional optical filter in the camera, this study adopts the red laser for the convenience of the experiments.
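The claim that a longer wavelength gives a shorter Rayleigh length follows directly from yR = πw0²/λ; a quick check with the two wavelengths mentioned above and an assumed 0.1 mm waist:

```python
import math

def rayleigh_length_mm(w0_mm, wavelength_nm):
    # y_R = pi * w0^2 / lambda, with everything converted to millimetres
    return math.pi * w0_mm**2 / (wavelength_nm * 1e-6)

red = rayleigh_length_mm(0.1, 650)       # red laser used in the experiment
infrared = rayleigh_length_mm(0.1, 780)  # shortest infrared wavelength cited
print(infrared < red)  # True: longer wavelength -> shorter Rayleigh length
```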

Instead of tilting the target object, the laser source was installed at an adjustable slant angle. A flat object covered with smooth, nonreflective white paper was placed beneath the camera. The experiment setup and a sample laser light image are shown in Fig. 7.

Figure 7. Experiment setup and experiment image.

Laser images were obtained for angles from −70° to 70° in 5° steps. At an early stage of image processing, the well-known Gaussian image filter was applied to reduce speckle noise before curve fitting was used to find the center of the laser image. The parameters I0, ws, D, d0, µ, and θ of the proposed model in Eq. (6) were then obtained by the curve fitting algorithm using the gradient descent method.
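The fitting step can be sketched as follows. This is not the authors' implementation: for brevity only the slant angle θ is fitted to a synthetic noiseless profile, with the remaining parameters of Eq. (6) held fixed at assumed values, using plain gradient descent with a finite-difference gradient:

```python
import math

def model(i, theta, I0=1.0, w_s=0.05, D=400.0, d0=200.0, mu=0.8):
    """Slant Gaussian of Eq. (6); all parameters except theta are held fixed
    at assumed values for this sketch."""
    w = w_s * (D - d0 - i * math.sin(theta))
    return I0 * math.exp(-2.0 * (i - mu) ** 2 / w ** 2)

# Synthetic "laser profile" generated with a known slant angle of 30 degrees.
true_theta = math.radians(30.0)
xs = [k * 0.1 for k in range(-120, 121)]
ys = [model(i, true_theta) for i in xs]

def loss(theta):
    # Sum of squared residuals between the model and the observed profile
    return sum((model(i, theta) - y) ** 2 for i, y in zip(xs, ys))

# Plain gradient descent with a finite-difference gradient; the paper fits
# all six parameters of Eq. (6) the same way.
theta, lr, eps = math.radians(20.0), 1.0, 1e-6
for _ in range(500):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(abs(math.degrees(theta) - 30.0) < 0.1)  # True: recovered slant angle
```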

4.2. Results

Figure 8 shows the curve fitting of the proposed model for ±10° and ±70°. For positive angles, the left side of the Gaussian distribution has a less steep slope than the right side because the left side of the target object is closer to the laser source as shown in the figures. The opposite is true for negative angles.

Figure 8. Curve fitting of the proposed Gaussian model. (a) ±10°, (b) ±70°.

The center positions of the laser light in the image extracted by the proposed method were compared with those from the conventional geometric mean of the two peak positions of the laser pixels [22]. Figure 9 shows the difference between the center positions obtained by the two methods at each slant angle. As the slant angle increases, the difference in pixels between the two center extraction methods increases: a larger slant angle causes a larger difference in beam width between the two ends of the laser spot, so the Gaussian distribution becomes increasingly skewed toward one side. The growing skew appears as a growing gap between the simple geometric mean of the two laser peaks and the peak of the slant Gaussian curve. This result accords with the simulation results explained in Fig. 5.

Figure 9. Comparison of center extraction methods. (a) Positive angles, (b) negative angles.

Table 2 shows the value of the slant angle obtained from the proposed slant Gaussian model with respect to the ground truth value of the slant angles of the target object. The average error between the ground truth values and those of the proposed Gaussian model is 0.3487° for the positive angles and −0.2803° for the negative angles.

TABLE 2. Ground truth vs. experimental results of slant angle.

Positive Angles (deg)
Ground Truth: 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70
Result of Experiments: 10.0073, 14.9970, 19.8064, 24.9512, 29.6117, 34.9572, 39.3906, 44.9498, 49.4236, 54.9543, 59.1255, 64.9804, 68.3118
Negative Angles (deg)
Ground Truth: −10, −15, −20, −25, −30, −35, −40, −45, −50, −55, −60, −65, −70
Result of Experiments: −10.0599, −14.9817, −19.7952, −24.9643, −29.7594, −34.9519, −39.7336, −44.9516, −49.5814, −54.9581, −59.1232, −64.9686, −68.5273
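The average errors quoted above can be reproduced directly from the Table 2 data, taking "error" as the signed mean of (ground truth − experimental result):

```python
# Table 2 data: ground-truth slant angles and the fitted results (degrees).
pos_truth = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70]
pos_exp = [10.0073, 14.9970, 19.8064, 24.9512, 29.6117, 34.9572, 39.3906,
           44.9498, 49.4236, 54.9543, 59.1255, 64.9804, 68.3118]
neg_truth = [-t for t in pos_truth]
neg_exp = [-10.0599, -14.9817, -19.7952, -24.9643, -29.7594, -34.9519,
           -39.7336, -44.9516, -49.5814, -54.9581, -59.1232, -64.9686, -68.5273]

def mean_error(truth, exp):
    # Signed mean of (ground truth - experimental result)
    return sum(t - e for t, e in zip(truth, exp)) / len(truth)

print(round(mean_error(pos_truth, pos_exp), 4))  # 0.3487
print(round(mean_error(neg_truth, neg_exp), 4))  # -0.2803
```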

V. CONCLUSION

This study proposed an imaging model of laser light for extracting the accurate center of the laser image. The width of a laser beam is a function of the distance from the laser source. When a laser source illuminates a slanted surface of a target object, the distance from the laser source to the surface varies from one side of the beam to the other, resulting in a varying width of the laser image and a skewed intensity distribution.

Based on the physical properties of the laser light, the proposed model accounts for the difference in slope on each side of the intensity distribution by incorporating the slant angle of the object as a variable of the beam width. Thus, a more accurate Gaussian model is achieved. Experiments showed a clear difference between the center pixel extracted by curve fitting the proposed model and that obtained by the geometric mean method. In addition, the proposed model was able to accurately estimate the slant angle of a surface.

FUNDING

Research program funded by Seoul National University of Science and Technology.

DISCLOSURES

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

Figure 1. Common issues in laser imaging. (a) Noise, (b) saturation, and (c) asymmetry.


References

  1. Y. Wang and B. Geng, “Edge detection method used for red laser stripe located on microscope images,” in Proc. IEEE International Conference on Mechatronics and Automation (Changchun, China, Aug. 5-8, 2018), pp. 2346-2351.
  2. Q. Wang, Y. Qian, J. Yang, and Y. Wang, “A novel image evaluation method for three-dimensional laser scanning in full degrees of freedom,” in Proc. 2nd International Conference on Future Computer and Communication (Wuhan, China, May 21-24, 2010), pp. V3-517.
  3. J. Forest, J. Salvi, E. Cabruja, and C. Pous, “Laser stripe peak detector for 3D scanners. A FIR filter approach,” in Proc. 17th International Conference on Pattern Recognition (Cambridge, UK, Aug. 26, 2004), pp. 646-649.
  4. I. Besic and Z. Avdagic, “Laser stripe sub-pixel peak detection in real-time 3D scanning using power modulation,” in Proc. 42nd Annual Conference of the IEEE Industrial Electronics Society (Florence, Italy, Oct. 23-26, 2016), pp. 951-956.
  5. L. H. Pham, D. N.-N. Tran, C. H. Rhie, and J. W. Jeon, “Analysis of the smartphone camera exposure effect on laser extraction,” in Proc. International Conference on Electronics, Information, and Communication-ICEIC (Jeju, Korea, Jan. 31-Feb. 3, 2021), pp. 1-4.
  6. H. Zhao, R. Dai, and C. Xiao, “A machine vision system for stacked substrates counting with a robust stripe detection algorithm,” IEEE Trans. Syst. Man Cybern.: Syst. 49, 2352-2361 (2019).
  7. S. Yi and S. Min, “A practical calibration method for stripe laser imaging system,” IEEE Trans. Instrum. Meas. 70, 5003307 (2021).
  8. X. Gao, S. Shen, L. Zhu, T. Shi, Z. Wang, and Z. Hu, “Complete scene reconstruction by merging images and laser scans,” IEEE Trans. Circ. Syst. Video Technol. 30, 3688-3701 (2020).
  9. J. Liu, J. Zhang, and J. Xu, “Cultural relic 3D reconstruction from digital images and laser point clouds,” in Proc. Congress on Image and Signal Processing (Sanya, China, May 27-30, 2008), pp. 349-353.
  10. X. Liu, “Image processing in weld seam tracking with laser vision based on Radon transform and FCM clustering segmentation,” in Proc. International Conference on Intelligent Computation Technology and Automation (Changsha, China, May 11-12, 2010), pp. 470-473.
  11. Y. Zou, S. Cai, P. Li, and K. Zuo, “Features extraction of butt joint for tailored blank laser welding based on three-line stripe laser vision sensor,” in Proc. 29th Chinese Control and Decision Conference (Chongqing, China, May 28-30, 2017), pp. 7736-7739.
  12. C. Li and G. Gong, “Research on the adaptive recognition and location of the weld based on the characteristics of laser stripes,” in Proc. 10th International Conference on Intelligent Human-Machine Systems and Cybernetics (Hangzhou, China, Aug. 25-26, 2018), pp. 163-166.
  13. L. Zhang, Q. Ye, W. Yang, and J. Jiao, “Weld line detection and tracking via spatial-temporal cascaded hidden Markov models and cross structured light,” IEEE Trans. Instrum. Meas. 63, 742-753 (2014).
  14. Y. Yang, B. Zheng, H. Zheng, Z. Wang, G. Wu, and J. Wang, “3D reconstruction for underwater laser line scanning,” in Proc. MTS/IEEE OCEANS (Bergen, Norway, Jun. 10-14, 2013), pp. 1-3.
  15. J. Liu, “Research on sparse code shrinkage denoise in underwater 3D laser scanning images,” in Proc. IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (Chengdu, China, Dec. 20-22, 2019), pp. 140-144.
  16. J. Liu, “Research on laser stripe extraction in underwater 3D laser scanning,” in Proc. IEEE International Conference on Information and Automation (Ningbo, China, Aug. 1-3, 2016), pp. 159-165.
  17. C. Aytekin, Y. Rezaeitabar, S. Dogru, and I. Ulusoy, “Railway fastener inspection by real-time machine vision,” IEEE Trans. Syst. Man Cybern.: Syst. 45, 1101-1107 (2015).
  18. R. T. McKeon and P. J. Flynn, “Three-dimensional facial imaging using a static light screen (SLS) and a dynamic subject,” IEEE Trans. Instrum. Meas. 59, 774-783 (2010).
  19. D. Zhang, G. Lu, W. Li, L. Zhang, and N. Luo, “Palmprint recognition using 3-D information,” IEEE Trans. Syst. Man Cybern. Part C 39, 505-519 (2009).
  20. B. Zhang, W. Li, P. Qing, and D. Zhang, “Palm-print classification by global features,” IEEE Trans. Syst. Man Cybern. Syst. 43, 370-378 (2013).
  21. J. Hattuniemi and A. Makynen, “A calibration method of triangulation sensors for thickness measurement,” in Proc. IEEE Instrumentation and Measurement Technology Conference (Singapore, May 5-7, 2009), pp. 566-569.
  22. H. Wang, Y. Wang, J. Zhang, and J. Cao, “Laser stripe center detection under the condition of uneven scattering metal surface for geometric measurement,” IEEE Trans. Instrum. Meas. 69, 2182-2192 (2019).
  23. P. Fasogbon, L. Duvieubourg, and L. Macaire, “Fast laser stripe extraction for 3D metallic object measurement,” in Proc. 42nd Annual Conference of the IEEE Industrial Electronics Society (Florence, Italy, Oct. 23-26, 2016), pp. 923-927.
  24. A. Molder, O. Martens, T. Saar, and R. Land, “Laser line detection with sub-pixel accuracy,” Elektron. Elektrotech. 20, 132-135 (2014).
  25. R. B. Fisher and D. K. Naidu, “A comparison of algorithms for subpixel peak detection,” in Image Technology, J. L. C. Sanz, Ed. (Springer, Berlin, Germany, 1996), Chapter 15, pp. 384-404.
  26. W. Liu, H. Di, Y. Zhang, Y. Lu, X. Cheng, J. Cui, and Z. Jia, “Automatic detection and segmentation of laser stripes for industrial measurement,” IEEE Trans. Instrum. Meas. 69, 4507-4515 (2020).
  27. R. Paschotta, “Laser beams,” in Field Guide to Lasers (SPIE Press, USA, 2008), pp. 18-19.
  28. Y. Li, J. Zhou, F. Huang, and L. Liu, “Sub-pixel extraction of laser stripe center using an improved gray-gravity method,” Sensors 17, 814 (2017).
  29. F. Blais and M. Rioux, “Real-time numerical peak detector,” Signal Process. 11, 145-155 (1986).
  30. J. Forest, J. Teixidor, J. Salvi, and E. Cabruja, “A proposal for laser scanners sub-pixel accuracy peak detector,” in Proc. Workshop on European Scientific and Industrial Collaboration (2003), pp. 525-532.
  31. L. Zhang, Y. Xu, and C. Wu, “Features extraction for structured light stripe image based on OTSU threshold,” in Proc. International Symposium on Knowledge Acquisition and Modeling (Sanya, China, Oct. 8-9, 2011), pp. 92-95.
  32. G. Rao and W. Zhang, “Laser stripe center extraction based on ridgeline tracking algorithm,” in Proc. 2nd International Conference on Mechanical Engineering, Intelligent Manufacturing and Automation Technology (Guilin, China, Jan. 7-9, 2022), pp. 1-4.
  33. L. Yang, J. Fan, B. Huo, E. Li, and Y. Liu, “Image denoising of seam images with deep learning for laser vision seam tracking,” IEEE Sensors J. 22, 6098-6107 (2022).
  34. C. Li, X. Ye, Y. Gong, and T. Wang, “A center-line extraction algorithm of laser stripes based on multi-Gaussian signals fitting,” in Proc. IEEE International Conference on Information and Automation-ICIA (Ningbo, China, Aug. 1-3, 2016), pp. 189-194.
  35. B. Liu, Q. Xue, and P. Sun, “Research on the gray distribution model of stripe in structured light 3D measurement,” in Proc. 8th International Symposium on Next Generation Electronics-ISNE (Zhengzhou, China, Oct. 9-10, 2019), pp. 1-4.