
Research Paper

Curr. Opt. Photon. 2024; 8(2): 170-182

Published online April 25, 2024 https://doi.org/10.3807/COPP.2024.8.2.170

Copyright © Optical Society of Korea.

Point Cloud Measurement Using Improved Variance Focus Measure Operator

Yeni Li1,2, Liang Hou1 , Yun Chen1 , Shaoqi Huang3

1Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen 361102, China
2School of Mechanical and Automotive Engineering, Xiamen University of Technology, Xiamen 361024, China
3Aero Engine Corporation of China (AECC) Guizhou Liyang Aviation Power Co., Ltd., Guiyang 550014, China

Corresponding author: *hliang@xmu.edu.cn, ORCID 0000-0002-8271-9208
**yun.chen@xmu.edu.cn, ORCID 0000-0001-5548-7256

Received: November 24, 2023; Revised: January 23, 2024; Accepted: February 27, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

The dimensional accuracy and consistency of a dual oil circuit centrifugal fuel nozzle are important for fuel distribution and combustion efficiency in an engine combustion chamber. A point cloud measurement method was proposed to solve the geometric accuracy detection problem for the fuel nozzle. An improved variance focus measure operator was used to extract the depth point cloud. Compared with other traditional sharpness evaluation functions, the improved operator can generate the best evaluation curve, and has the least noise and the shortest calculation time. The experimental results of point cloud slicing measurement show that the best window size is 24 × 24 pixels. In the height measurement experiment of the standard sample block, the relative error is 2.32%, and in the fuel nozzle cone angle measurement experiment, the relative error is 2.46%, which can meet the high precision requirements of a dual oil circuit centrifugal fuel nozzle.

Keywords: Focus measure operator, Fuel nozzle, Out swirl injector, Point cloud measurement, Shape from focus

OCIS codes: (100.3010) Image reconstruction techniques; (110.2960) Image analysis; (120.3940) Metrology; (120.4820) Optical systems; (180.6900) Three-dimensional microscopy

The spray performance of a fuel nozzle has a great impact on the efficiency of the engine. In the combustion chamber of an aircraft engine, droplets are mixed with air to form an air-mixed fuel, which can provide energy for full combustion. Common engine fuel nozzles in the combustion chamber include the swirl atomizer, pneumatic atomizer, and evaporative pipe atomizer. The swirl atomizer includes a single-orifice swirl atomizer and a dual-orifice swirl atomizer. The dual-orifice swirl atomizer’s structure is relatively complex; it mainly comprises an outer swirl injector and an inner swirl injector, which supply the main oil circuit and the vice oil circuit. The dimensions of the entrance of the outer swirl injector orifice and the quality of the inner orifice surface directly affect combustion efficiency and emissions.

A number of methods have been studied to achieve high-accuracy detection of the fuel nozzle: Peiner et al. [1] developed a tactile sensor to detect the shape and roughness of the diesel nozzle hole. Huang and Ye [2] presented a method based on the gray level co-occurrence matrix (GLCM) and a circle inspection algorithm to detect the internal defects of a micro-spray nozzle, then applied a backpropagation neural network classifier to accurately identify nozzle defects, with an accuracy rate of 90.71%. Li et al. [3] used synchrotron X-ray micro-CT technology to evaluate wall surface characteristics. This technology can create a three-dimensional digital model of the fuel nozzle tip and represent the nozzle’s internal surface waviness with a spatial resolution of 3.7 µm.

However, these methods have limitations and cannot simultaneously achieve fast, non-destructive, high-precision comprehensive measurements. In order to obtain the high-precision depth of the outer swirl injector, several studies have proposed passive optical methods to obtain an accurate depth of the fuel nozzle.

Several 3D measurement methods, including contact and non-contact measurement, were studied: The stylus measurement method is a typical contact measurement that uses a needle to contact the sample surface under a certain pressure and controls the needle to move slowly along the surface structure to obtain depth data. This method has the characteristics of stable measurement data, simple operation, and extensive measurement range, but may damage the surface structure and has limitations for certain complex structures.

In non-contact methods, electron microscopes and atomic force microscopes have super-resolution imaging characteristics, but their complex operation is unsuitable for industrial measurements. Shape from focus (SFF) technology, one of the non-contact methods, is a kind of passive optical method that uses focus position to reconstruct a three-dimensional object from image sequences. It can obtain the depth information of each pixel or pixel window by searching for the best focus position with the maximum focus volume (FV). This technology has the advantages of a simple principle, high accuracy, good real-time performance, and low scene requirements, which meet the requirements of 3D measurement of the fuel nozzle.

Hou et al. [4] proposed an improved fast SFF method to realize the precise measurement of the key features of a dual-orifice pressure-swirl nozzle, using an approximation in which the peak region of the central pixel replaces the peak regions of the other pixels during detection. Li and Chen [5] verified that non-contact measurement based on shape from focus has the advantages of low cost and fast detection speed, and can achieve cross-scale size and morphology measurement.

There are two main kinds of research to improve the accuracy of 3D point cloud measurement: improving the performance of the focus measure operator (FMO), and improving the quality of the image sequences. FMO performance directly affects the accuracy of the depth point cloud, so in the former approach improved FMOs have been proposed to obtain accurate depth positions.

Martisek [6] presented a precise method for both 2D and 3D reconstruction, based on the Fourier transform. This method can produce a better result than reconstructions from confocal microscopes or 3D scanners, and the instruments are cheaper and easier to get. Yah et al. [7] proposed a novel multidirectional modified Laplacian operator that can directly identify the depth map and fused image simultaneously. The experimental results show that the proposed method performs better and is suitable for the field of quality inspection for micromanufacturing processes. Billiot et al. [8] provided an FMO based on the variance of Tenengrad to obtain an accuracy enabling the characterization of the number of grains per wheat ear to evaluate yield at an early stage. Helmy and Choi [9] proposed a fuzzy-based focus measure to handle imprecise data. The robust operator can accurately estimate the focus level for high-magnification astronomical images with high blue colors. Pertuz et al. [10] presented and applied a methodology to compare the performance of different FMOs for SFF. The performance of the different operators was assessed in experiments.

Several factors can affect the quality of depth information extracted by the FMO, such as object texture, window size, surface roughness, illumination, and object material. Therefore some methods try to improve depth accuracy by optimizing these factors. Jang et al. [11] proposed a new FMO based on the adaptive sum of weighted modified Laplacian. The adaptive window size selection algorithm is based on the variance of gray level intensity in the image window, which is robust against image noise. Lee et al. [12] introduced an adaptive window to compute and enhance the focus measure. The proposed method was tested using image sequences of synthetic and real objects. Lee et al. [13] proposed a pixel-by-pixel semi-variogram to determine the optimum window size. This technique can improve the quality of focus measurement in comparison to previous methods based on a fixed square window. Ali and Mahmood [14] found that the quality of a depth map is mainly dependent on the accuracy level of the image focus volume. They optimized the focus volume with energy minimization. Lee et al. [15] proposed a method to optimize focus measurement for SFF based on a genetic algorithm (GA). They segmented the cell background to optimize focus measurement, and applied the GA to the variance components with a small window. Fu et al. [16] proposed a robust SFF method in the process of finding the best-focused position. They calculated the gradient of the focus measure curve with an adaptive derivative step. The zero point of the gradient curve and the derivative step are used to find the best focused position, and the best focus position directly determines the accuracy of the 3D point cloud.

In the latter approach to improving the accuracy of 3D point cloud measurement, the image quality is improved or the initial depth map is enhanced. These methods have strong robustness to noise, obtain more accurate focus values, and produce depth maps with higher precision.

For instance, Gladines et al. [17] proposed phase correlation (PC) and shifted phase correlation (SPC) methods that outperform traditional methods in terms of measurement accuracy and robustness to noise. Ali et al. [18] proposed a structural prior that helps to maintain the structural details in the recovered depth map. By exploiting guided filtering, they improve the initial depth map with weighted least squares (WLS)-based regularization. Fan and Yu [19] developed a novel shape from focus method combining a 3D steerable filter for improved performance in treating texture-less regions. The edge response and the axial imaging blur degree were used in this method. The results showed that more robust and accurate identification of the focused location can be achieved. Surh et al. [20] used both local and nonlocal characteristics. The structure of this new FMO makes the focus measure more robust against noise. Wang et al. [21] built a depth map denoising model to improve the depth map. The noise in the depth map is divided into anomalous noise and minor noise, and spatial clustering and bilateral filtering are used to process them separately. The noise reduction effect shows good results. Li et al. [22] proposed adaptive weighted guided image filtering for depth enhancement in the shape from focus.

This paper is organized as follows: Section 1 briefly introduces the related works of shape from focus technology. Section 2 briefly introduces the principle of this method and shows a 3D point cloud of a fuel nozzle extracted by the preprocessing image sequences based on saturated highlight removal. The new FMO is presented in Section 3. The measurement method for the fuel nozzle point cloud is presented in Section 4. In this section, the point cloud slicing method was applied to obtain the center coordinate and the radius of the inlet hole at different positions. Then two slices of different depths were combined to obtain the inner hole conical degree. Section 5 analyzes the experimental results, and the conclusion is provided in Section 6.

In the main oil circuit and the vice oil circuit, the key dimensions and orifice quality of the outer swirl injector directly affect the effectiveness and uniformity of fuel atomization. Therefore, it is necessary to research injector performance, and high-precision three-dimensional measurement of the key dimensions of fuel nozzles can provide a reference for optimizing their design and processing.

The material of the swirl injector is martensitic stainless steel 9Cr18. Due to the reflection characteristics of a metal surface, it is prone to specular reflection. At the same time, the illumination and reflection conditions in the assembly environment are complex, so there may be highlights on the surface of the fuel nozzle. Therefore, when SFF technology is used to obtain image sequences and extract depth information from them, the problem of inaccurate data will be encountered.

According to [23], in specular images the highlight areas contain some saturated pixels and some unsaturated pixels. In the HSV color space, pixels are classified as saturated highlights when their brightness value is greater than a certain threshold; the remaining specular highlight pixels are classified as unsaturated.
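
As an illustration of this classification, the following is a minimal Python/OpenCV sketch that flags saturated highlight pixels by thresholding the HSV brightness (V) channel. The threshold value of 240 is an assumption chosen for illustration, not the value used in the paper, and the paper's own implementation is in MATLAB/LabVIEW.

```python
import cv2
import numpy as np

def saturated_highlight_mask(image_bgr, v_threshold=240):
    """Flag pixels whose HSV brightness (V channel) exceeds a threshold.

    v_threshold is an illustrative value, not the threshold used in the paper.
    Returns a binary mask where 255 marks saturated highlight pixels.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    return np.where(v > v_threshold, 255, 0).astype(np.uint8)

# Example usage on one frame of the sequence (hypothetical file name):
# frame = cv2.imread("nozzle_frame_062.png")
# mask = saturated_highlight_mask(frame)
# print("saturated highlight pixels:", int(np.count_nonzero(mask)))
```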

The saturated highlights will directly affect the image quality, especially when the depth data is extracted. So, it is necessary to remove the saturated highlights and repair the surface texture information. The more saturated the highlights, the greater the deviation of the point cloud. The fuel nozzle parts are mainly processed on Bumotec S191, which is a kind of turning and milling composite processing equipment.

The machining requirements of the outer swirl injector are as follows:

  • Turning the outlet injector.

  • Milling the outer cone.

  • Drilling the outlet hole.

  • Milling the swirl.

  • Intermediate inspection, heat treatment, polishing, and grinding of the large end and inner cone are conducted.

  • Clean the inner hole and complete the last inspection.

The roughness of the inner surface of the fuel nozzle directly affects the uniformity of fuel atomization. Therefore, the roughness of the inner surface of the fuel nozzle is required not to exceed 0.1 mm. The minimum orifice aperture is approximately Ф 0.42 mm. The dimensional accuracy and surface morphology of the out-swirl injector will determine the degree of fuel atomization. As shown in Fig. 1, the 3D point cloud extraction method can be explained as follows:

Figure 1. Schematic diagram of 3D point cloud extraction.

Step 1: Put the fuel nozzle on the measurement platform, which is composed of an industrial camera, a microscope lens, and a light source. Move the camera from bottom to top along the optical axis with a certain step to change the distance between the object and the camera. At a given height of the fuel nozzle, the image goes from blurred to focused and then to blurred again as the camera moves.

Step 2: Due to the equipment and environment limitation during the process of image sequence acquisition, there may be low image quality. In particular, when collecting images of inner holes, it is necessary to provide sufficient brightness of the light sources, as there may be highlights in some areas on the metal surface. Unsaturated highlights do not affect the extraction of point clouds, while saturated highlights can lead to deviations in point cloud extraction. Therefore, before performing point cloud extraction, image preprocessing is necessary. If there is an image with saturated highlights, it is necessary to remove the saturated highlights and repair the image.

Step 3: The sequence image of each frame is segmented according to a suitable window size, and an improved variance FMO is used to evaluate the image sharpness. The best focus position corresponds to the frame with the maximum focus volume in the image sequence, and the initial point cloud is then extracted. Calibrated data are acquired from the image resolution and the real height of the z-axis.

Step 4: A point cloud of a certain thickness is obtained as a tangent plane. The point cloud slice is transformed into 2D grayscale values, and the Hough transform is then used to obtain the fitted circle’s position and radius, from which the conical degree and cone angle of the fuel nozzle can be calculated (a minimal sketch of the depth search in Step 3 is given below).
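
As a concrete illustration of the depth search in Step 3, the following minimal NumPy sketch evaluates a focus measure for each window over all frames and keeps the frame index with the maximum focus volume. The function and parameter names are illustrative assumptions, and the focus_measure callable stands for any of the operators discussed in Section 3.

```python
import numpy as np

def depth_from_focus(stack, window, focus_measure):
    """Window-based shape-from-focus depth search (sketch).

    stack: (F, H, W) grayscale image sequence with F frames.
    window: side length of the square evaluation window (e.g. 24).
    focus_measure: callable mapping a 2D window to a scalar focus volume.
    Returns an integer map of shape (H // window, W // window) whose entries
    are the frame indices with the maximum focus volume.
    """
    frames, height, width = stack.shape
    rows, cols = height // window, width // window
    depth_index = np.zeros((rows, cols), dtype=np.int32)
    for r in range(rows):
        for c in range(cols):
            patch = stack[:, r*window:(r+1)*window, c*window:(c+1)*window]
            fv = [focus_measure(patch[f]) for f in range(frames)]
            depth_index[r, c] = int(np.argmax(fv))  # best-focused frame
    return depth_index
```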

As shown in Fig. 2, a transparent calibration ruler is used to calculate the image pixel equivalent. The minimum scale of the ruler is 0.1 mm and the width of each milled line is 0.05 mm. The horizontal field of view is 2.77 mm, the vertical field of view is 1.85 mm, and the image resolution is 5,496 × 3,672. The pixel equivalent of the x-axis is 2.77 mm/5,496 pixels and that of the y-axis is 1.85 mm/3,672 pixels, so the average pixel equivalent of the camera field of view is 0.504 μm. The screw pitch of the z-axis is 4 mm and the stepper motor has 200 steps per revolution; with the driver subdivision set to 16, one revolution corresponds to 3,200 pulses. When the z-axis sends 1,600 pulses, the motor rotates half a revolution and the travel is 2 mm. The number of captured images is 201 frames, so the frame distance is 10 μm.

Figure 2. Field of view: (a) x-axis, (b) y-axis.

X_c = 0.504 \times \omega \times X_i,  (1)
Y_c = 0.504 \times \omega \times Y_i,  (2)
Z_c = 10 \times Z_i.  (3)

In Eqs. (1)–(3), Xi is the x-axis initial depth data, Yi is the y-axis initial depth data, Zi is the z-axis depth data, and ω is the window size of the FMO. Xc, Yc and Zc are the actual dimensions of each point. The calibrated coordinate unit of the x-, y-, and z-axes is μm.
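
Continuing the sketch above, the calibration in Eqs. (1)–(3) can be written as follows. The 0.504 μm pixel equivalent, the 10 μm frame distance, and the 24-pixel window come from the text, while the function name and array layout are illustrative assumptions.

```python
import numpy as np

def calibrate_point_cloud(depth_index, window=24,
                          pixel_equiv_um=0.504, frame_dist_um=10.0):
    """Convert window and frame indices to micrometres, per Eqs. (1)-(3).

    depth_index: (rows, cols) array of best-focus frame indices.
    Returns an (N, 3) array of calibrated (Xc, Yc, Zc) points in um.
    """
    rows, cols = depth_index.shape
    yi, xi = np.mgrid[0:rows, 0:cols]      # window indices along y and x
    xc = pixel_equiv_um * window * xi      # Eq. (1)
    yc = pixel_equiv_um * window * yi      # Eq. (2)
    zc = frame_dist_um * depth_index       # Eq. (3)
    return np.stack([xc.ravel(), yc.ravel(), zc.ravel()], axis=1)
```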

3.1. Focus Measure Operator

The focus evaluation criterion is the basis for evaluating image sharpness. The performance of the FMO directly affects the reconstruction accuracy of the fuel nozzle. When the object is accurately focused, the edges of the image are clear and the gray contrast of the image is very strong. In the frequency domain, there are more high-frequency components of the image [24]. An ideal focus measure function should have good unimodal properties with a unique maximum at the all-in-focus position. High sensitivity, good stability, and low computational complexity are also desirable [25]. However, traditional focus evaluation functions cannot fully meet these requirements for high-precision measurement.

The FMO can assess the sharpness of each pixel, and FMOs can be divided into two main families: one based on the spatial domain, and the other based on the frequency domain [26]. Traditional image evaluation functions include gray gradient functions, gray entropy functions, and frequency domain functions. The calculation times of the gray entropy function and the frequency domain function are too long for real-time measurement [27]. Typical spatial evaluation functions include the energy of gradient function, the Roberts function, the Tenengrad function, the Brenner function, the variance function, the Laplacian function, and the Vollath function [28].

The energy of the image gradient (EOG) function finds the sum of squared directional gradients [29]. It is similar to the Tenengrad function, which uses the difference between adjacent points to calculate the gradient value of a point [30]. It is defined as:

F_{EOG} = \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \left[ G_x(x,y)^2 + G_y(x,y)^2 \right],  (4)
G_x(x,y) = I(x+1,y) - I(x,y), \quad G_y(x,y) = I(x,y+1) - I(x,y).  (5)
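
A minimal NumPy sketch of the EOG operator in Eqs. (4) and (5) is shown below; it can be passed as the focus_measure callable in the earlier depth-from-focus sketch.

```python
import numpy as np

def eog_focus_measure(window_img):
    """Energy of the image gradient (EOG), Eqs. (4)-(5).

    window_img: 2D array holding one image window (gray levels).
    Returns the sum of squared forward differences along x and y.
    """
    img = window_img.astype(np.float64)
    gx = img[1:, :-1] - img[:-1, :-1]   # I(x+1, y) - I(x, y)
    gy = img[:-1, 1:] - img[:-1, :-1]   # I(x, y+1) - I(x, y)
    return float(np.sum(gx**2 + gy**2))
```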

The variance function represents the statistical dispersion of the image gray distribution; the gray value variation range of a defocused image is small and its statistical dispersion is low [31]. The larger the variance function, the greater the statistical dispersion of the image gray distribution. It is defined as:

F_{Variance} = \sum_{x} \sum_{y} \left[ I(x,y) - \mu \right]^2,  (6)
\mu = \frac{1}{M \times N} \sum_{x} \sum_{y} I(x,y).  (7)

In Eq. (7), μ is the mean gray value of the image. M and N are the total number of pixels with x-dimensions and y-dimensions, respectively.

3.2. The Improved Focus Measure Operator

The fuel nozzle topography presents the surface roughness and quality of the process. The sharpness function for extracting the 3D point cloud of the fuel nozzle should satisfy requirements such as good unimodality, high efficiency, and strong robustness. When depth point clouds are extracted with the variance function, the unimodality of the evaluation curve deteriorates, and two relatively close peaks appear, as shown in Fig. 3(b).

Figure 3. Focus evaluation function curve: (a) Energy of the image gradient (EOG) function, (b) variance function, (c) variance EOG function.

Although the EOG function has a certain degree of volatility, it has good unimodal performance. By combining the advantages of the two functions, a new function is created. In experiments, the weights of the energy function and variance function are determined to be 0.3 and 0.7, respectively. As shown in Fig. 3(c), the newly created function has good unimodality and its secondary peak is lower than that of the variance function. So, the improved sharpness function based on the variance function and the EOG function is created. It is defined as:

F_{VarianceEOG} = \sum_{x} \sum_{y} \left\{ 0.7 \times \left[ I(x,y) - \mu \right]^2 + 0.3 \times \left[ I(x+1,y) - I(x,y) \right]^2 \right\}, \quad \mu = \frac{1}{M \times N} \sum_{x} \sum_{y} I(x,y).  (8)

In experiments, it has been confirmed that the point cloud has the least noise when the weights are 0.7 and 0.3 in Eq. (8). Therefore, the combination of these weights is used for the creation of a new clarity evaluation function.
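
A minimal NumPy sketch of the improved operator in Eq. (8) with the 0.7/0.3 weights is given below. The exact summation range at the window boundary is a small implementation choice, and the function is again usable as the focus_measure callable in the earlier depth-from-focus sketch.

```python
import numpy as np

def variance_eog_focus_measure(window_img, w_var=0.7, w_eog=0.3):
    """Improved variance-EOG focus measure, Eq. (8).

    Combines a variance term (weight 0.7) with a squared forward-difference
    term along x (weight 0.3), the weights determined experimentally in the
    paper.
    """
    img = window_img.astype(np.float64)
    mu = img.mean()                                   # mean gray value, Eq. (7)
    var_sum = np.sum((img - mu) ** 2)                 # variance contribution
    eog_sum = np.sum((img[1:, :] - img[:-1, :]) ** 2) # I(x+1, y) - I(x, y)
    return float(w_var * var_sum + w_eog * eog_sum)
```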

The performance of the new FMO will be evaluated by the sharpness evaluation curve and the accuracy of the 3D point cloud. As shown in Fig. 4, a window size of 24 × 24 at image position (2029, 2515) is used to extract the 3D point cloud of the fuel nozzle [32]. The EOG function in Fig. 3(a) has good unimodal performance, but there are two closely spaced points at the highest position. The variance function in Fig. 3(b) has worse unimodal performance than the EOG function, and its highest position also has two closely spaced points. The new improved variance EOG in Fig. 3(c) has the best unimodal performance, with no closely spaced points at the highest position. In conclusion, as shown in Fig. 3, the best focus frame is 62.

Figure 4. The best focus image frame.

3.3. Experiment of the Focus Measure Operator

The newly created sharpness evaluation function variance EOG was used to extract the point cloud from the inlet fuel nozzle image sequences with different window sizes.

The resolution of the fuel nozzle image is 5,496 × 3,672, and it is advisable to choose window sizes that divide the image resolution evenly for validation. Window sizes of 6, 12, and 24 divide both dimensions evenly, while 18, 30, and 36 do not.

The calibrated point clouds shown in Fig. 5 indicate that the extraction of the 3D point cloud is very sensitive to window size. The noise is minimal when the window size is 24, and the number of points is 229 × 153, which is sufficient and does not require the processing of scattered points. The noise is extensive when the window size is 6 and then decreases as the window size increases. The point cloud is denser when the window size is 12. The noise continues to decrease and the number of points is moderate when the window size is 18 or 24. The noise is not high, but some depth information is lost when the window size is 30 or 36. Window sizes of 18 and 24 differ mainly in the amount of noise, and the difference between them is small. Therefore, a 24 × 24 window size is selected.

Figure 5. 3D reconstruction results of different window sizes based on the new focus measure operator. (a) 6 × 6 size, (b) 12 × 12 size, (c) 18 × 18 size, (d) 24 × 24 size, (e) 30 × 30 size, and (f) 36 × 36 size.

To further compare the performance of the sharpness evaluation functions in point cloud extraction, the different sharpness evaluation functions were used to extract point clouds from the No. 1 fuel nozzle image sequences. A window size of 24 × 24 was adopted to extract the depth data. This window size extracts enough points with less noise, as shown in Fig. 6.

Figure 6. Point cloud extraction with the window size of 24 × 24 based on the different focus measure operators: (a) Energy of the image gradient (EOG) function, (b) variance function, and (c) variance EOG function.

As shown in Fig. 6, the 3D point cloud was extracted by the different FMOs. The newly created variance EOG function produces the best point data, while the EOG function produces the worst; the other functions leave some noise in the point cloud.

The depth data extracted by the EOG function has the most noise points, as shown in Fig. 6(a), while the point cloud in Fig. 6(b) is less noisy. The best point cloud is the one in Fig. 6(c), which was extracted by the new sharpness function.

The extraction times for the point clouds are shown in Table 1. Combining the analysis of the point clouds and the calculation times of these functions, the improved focus evaluation function is more suitable for extracting the point cloud of the fuel nozzle.

TABLE 1 Calculation time

No.   Focus Measure Operator   Time (s)
1     Energy of Gradient       160.48
2     Variance                 186.08
3     Variance EOG             171.19

4.1. Image Processing and Saturated Highlight Removal

As shown in Fig. 7, heat-treated fuel nozzle No. 1 was used to measure the geometric parameters. The inlet circle diameter and conical degree of the fuel nozzle were measured with the improved focus measure operator. Due to the high smoothness and reflectivity of the inner hole surface, there were some saturated highlights on the surface. Saturated highlights can lead to errors or deviations when the 3D point cloud is extracted. So, the saturated highlight images should be repaired for correct depth extraction.

Figure 7. Dimensions of the fuel nozzle.

The highlight areas in focused frames of the image sequences should be inpainted, while defocused highlight frames can be ignored. A method based on MRF patch-match copies patches from highlight-free areas to the highlight area within the same focus region [33]. First, the image is segmented into two main regions, the inlet hole and the entrance annulus. Second, the offset between two similar patches in each subregion is computed with a patch size of 12 × 12. Third, after obtaining the optimized image offset map, the approximate offset value is calculated using the nearest-neighbor field (NNF) algorithm [34]. Finally, according to the size and distribution of the energy labels, the best matching patch of the subregion is copied to the highlight area of the same subregion [35].
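
The sketch below illustrates this preprocessing step in a much simplified form: saturated pixels are masked by the HSV brightness threshold and then filled with OpenCV's Telea diffusion inpainting. This is a stand-in for illustration only, not the MRF patch-match repair used in the paper, and the threshold and radius values are assumptions.

```python
import cv2
import numpy as np

def repair_saturated_highlights(image_bgr, v_threshold=240, inpaint_radius=5):
    """Detect saturated highlights in HSV and fill them by diffusion inpainting.

    A simplified stand-in for the MRF patch-match repair described in the
    paper; v_threshold and inpaint_radius are illustrative values.
    Returns the repaired image and the highlight mask.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = (hsv[:, :, 2] > v_threshold).astype(np.uint8) * 255
    # Slightly dilate the mask so highlight rims are repaired as well.
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    repaired = cv2.inpaint(image_bgr, mask, inpaint_radius, cv2.INPAINT_TELEA)
    return repaired, mask
```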

The improved variance EOG focus measure operator was used for point cloud extraction. As shown in Fig. 8(a), there are some noisy points in the 3D point cloud. After brightening the image, the noise of the point cloud decreases, as shown in Fig. 8(b). After darkening the image, the noise of the point cloud increases, as shown in Fig. 8(c).

Figure 8. Point cloud extraction from different images: (a) Original image, (b) enhancement image, and (c) darkened image.

It can be concluded that the changes in image contrast and brightness have a very small impact on point cloud extraction. If the sharpness of the image is sufficient and there are no saturated highlights during the image acquisition process, the texture of the fuel nozzle inner hole image can be preserved. Therefore, regardless of the changes in image brightness and contrast, there is little difference when the 3D point cloud is extracted by the sharpness function. Consequently, if a stable light source was used in a sequence image acquisition system, the image sequences would be relatively stable, but the saturated highlights would lead to some texture loss. So, it is necessary to remove the saturated highlights, but the dark or bright areas that have clear textures do not need inpainting.

4.2. Calculation of Inner Hole Taper

There are two main kinds of methods for measuring 3D point clouds: One is the projection method, and the other is the slicing method [36]. The projection method projects the triangle mesh of the point cloud on the designated projection plane, which requires checking the filling of holes and the topology. The slicing method converts complex 3D models into 2D models, which greatly reduces the complexity of space without affecting measurement accuracy [37].

The point cloud of the tangent plane with a certain thickness was obtained as a new point cloud set. The thickness of the point cloud slices should be moderate. If the thickness of the tangent is too large, the measurement deviation of the fitted circle will be large. If the thickness is too small, the number of points will be insufficient.

The point cloud measurement of the fuel nozzle based on the improved sharpness function is shown in Fig. 9:

Figure 9. The flow of point cloud measurement.

Step 1: The sequence image data were acquired at equal distances and processed on the MATLAB 2017 platform.

Step 2: Saturated highlights were removed and the image texture was inpainted by copying the similar patch algorithm.

Step 3: An improved sharpness evaluation operator was used to calculate the focus volume of the patch.

Step 4: The initial depth data were extracted by this new FMO with a suitable window size after traversing each pixel block.

Step 5: The initial depth points were calibrated to the true depth data of the fuel nozzle.

Step 6: Two different heights of the point cloud were selected, and two slicing point clouds were obtained as new point sets. The slicing point should have a certain thickness.

Step 7: The 3D slice was converted into a 2D gray image, and the Hough transform was used to obtain the circle center and radius. Then the taper and the cone angle were calculated according to Eqs. (9)–(11).

A suitable thickness is chosen for the tangent plane. Point cloud slices are obtained along the z-axis of the initial point cloud as shown in Fig. 10, and the direction of the normal vector is consistent with the center axis of the nozzle. The intersecting 3D point cloud is converted into a 2D plane, and circle fitting is then performed to obtain the center and radius of the sliced point cloud after the two-dimensional transformation. In Fig. 10, d1 and d2 are the diameters of the two cross-section circles, the angle of the fuel nozzle cone is 2α, and the distance between the two slice point clouds is h. The center and diameter of each cross section can be obtained by the Hough transform algorithm, so that the cone angle and the conical degree can be calculated by Eqs. (9)–(11).
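
A minimal sketch of this slicing and circle-fitting step is given below: points within a thin z band are rasterized into a 2D grayscale image and a circle is fitted with OpenCV's Hough transform. The grid resolution and the HoughCircles parameters are illustrative assumptions rather than the values used in the paper.

```python
import cv2
import numpy as np

def fit_slice_circle(points_um, z_center, z_half_thickness, grid_um=2.0):
    """Fit a circle to a thin point cloud slice taken along the z-axis.

    points_um: (N, 3) calibrated point cloud in micrometres.
    z_center, z_half_thickness: slice position and half thickness in um.
    Returns (cx_um, cy_um, radius_um) of the strongest Hough circle, or None.
    """
    z = points_um[:, 2]
    band = points_um[np.abs(z - z_center) <= z_half_thickness]
    if band.shape[0] == 0:
        return None
    # Rasterize the slice into a 2D grayscale image (grid_um per pixel).
    xy = band[:, :2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / grid_um).astype(int)
    img = np.zeros((ij[:, 1].max() + 1, ij[:, 0].max() + 1), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 255
    img = cv2.GaussianBlur(img, (5, 5), 0)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=img.shape[0], param1=100, param2=15,
                               minRadius=5, maxRadius=max(img.shape))
    if circles is None:
        return None
    cx_px, cy_px, r_px = circles[0, 0]
    cx_um, cy_um = origin + np.array([cx_px, cy_px]) * grid_um
    return float(cx_um), float(cy_um), float(r_px * grid_um)
```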

Figure 10. Cross-section circle and cone angle of inner hole.

The conical degree K is given by Eq. (9), and the angle of the fuel nozzle cone is given by Eqs. (10) and (11).

K = \frac{d_2 - d_1}{h}.  (9)

Then the cone angle is defined as:

\tan\alpha = \frac{d_2 - d_1}{2h} = \frac{K}{2},  (10)
2\alpha = 2\arctan\frac{K}{2}.  (11)
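
A short sketch of Eqs. (9)–(11) follows; the example values reproduce the measurement reported in Section 5.2 (r1 = 212.2 μm, r2 = 397.8 μm, h = 195.3 μm), giving K ≈ 1.90 and a cone angle of about 87°.

```python
import math

def cone_parameters(r1_um, r2_um, h_um):
    """Conical degree K and full cone angle 2*alpha from two slice radii.

    Implements Eqs. (9)-(11): K = (d2 - d1) / h, 2*alpha = 2*arctan(K / 2).
    """
    d1, d2 = 2.0 * r1_um, 2.0 * r2_um
    k = (d2 - d1) / h_um                                       # Eq. (9)
    full_angle_deg = 2.0 * math.degrees(math.atan(k / 2.0))    # Eqs. (10)-(11)
    return k, full_angle_deg

# Values from the cone angle measurement in Section 5.2.
k, angle = cone_parameters(212.2, 397.8, 195.3)
print(f"K = {k:.2f}, cone angle = {angle:.1f} deg")   # K = 1.90, about 87 deg
```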

As shown in Fig. 11, a fuel nozzle geometric parameter measurement experimental system was established to measure the diameter and cone angle of the fuel nozzle inner hole. The experimental platform mainly consists of a z-axis stepper motor, an industrial camera, a microscope lens, and a bowl-shaped light source. All experiments were run on a PC with an Intel Core i7 2.9 GHz and 16 GB RAM. The camera model was MER-2000-19U3C with a resolution of 5,496 × 3,672. In order to obtain better image sequences, a microscope lens with a depth of field range of 120–840 μm was used for testing. The lens model is OPTEM-304310, with an object distance of 89 mm. The microscope has multiple magnifications of 0.7x, 1.0x, 1.5x, 2.0x, 3.0x, 4.0x, and 4.5x. The fuel nozzle image sequences collected in this paper were obtained with a magnification of 4.5x. The numerical aperture (NA) of the lens is 0.026–0.070, corresponding to a depth of field of 0.84–0.12 mm.

Figure 11. The fuel nozzle measurement system.

The number of frames is 201. Software implementation and validation were done in LabVIEW and MATLAB on image sequences of the fuel nozzle. The main equipment parameters in the experiment are shown in Table 2.

TABLE 2 Experimental parameters

Experimental Conditions                Parameter
Resolution of Step Motor (μm)          0.1
Pixel Size of CMOS Camera (μm)         2.4 × 2.4
DOF of Microscopic Lens (μm)           840–120
Magnification of Microscopic Lens      0.7x–4.5x
Luminous Diameter of Bowl Light (mm)   65.6


5.1. Measurement of the Standard Gauge Block

In order to verify the effectiveness of our method, a standard gauge block with a height of 1,150 μm was used to extract the point cloud and measure its true height. A 10 μm step distance was used to collect image sequences of the standard gauge block, and there were 200 frames of images. Frames 15, 52, and 167 are shown in Fig. 12(a). 3D point cloud extraction was performed on the image sequences to obtain the initial point cloud, and then the initial point cloud was calibrated. The pixel equivalent was 1.85 μm on the x- and y-axes. The window size for sharpness evaluation was 24 × 24, and the pixel equivalent on the z-axis was 10 μm. The calibrated point cloud is shown in Fig. 12(b). Two cross sections in the middle position of the point cloud were obtained, and the height distance between the two point clouds was calculated as hz. The average height of the standard sample is 1,123.3 μm, the error value is 26.7 μm, and the relative error is therefore 2.32%.

Figure 12. Point cloud of standard sample blocks extracted with a window size of 24 × 24. (a) Sequence images, (b) 3D-point cloud.

5.2. Measuring the Cone Angle and Conical Degree

The No. 1 heat-treated fuel nozzle was used to verify the measurement algorithm. The improved variance EOG with a window size of 24 × 24 provides sufficient depth points with less noise.

The point cloud was extracted by the improved FMO with a window size of 24 × 24. A point cloud slice with a thickness of 4 frames was taken at the 36-frame position of the initial point cloud, corresponding to a calibrated height of 413.5 μm, as shown in Fig. 13(a). After the two-dimensional transformation of the 3D slice point cloud, the fitted circle radius r1 is 212.2 μm. Another slice with a thickness of 4 frames was taken at the 53-frame position of the initial point cloud, as shown in Fig. 13(b); the calibrated height is 608.8 μm and the fitted circle radius r2 is 397.8 μm. The height distance h between the two slices is 195.3 μm. The cone angle 2α is calculated as 87° and the conical degree K is 1.90 according to Eqs. (9)–(11). The measurement result with a Keyence microscope is 89.2°, so the absolute error is 2.2° and the relative error is 2.46%.

Figure 13. The point cloud extracted by a window size of 24 × 24. (a) The first point slice and fitting circle, (b) the second point slice and fitting circle.

In this paper, the improved FMO variance EOG is proposed to solve the high-precision measurement problem for the geometric parameters of fuel nozzles. The proposed algorithm can extract the depth point cloud with high accuracy and has the best sharpness evaluation curve. In the image processing step, it was concluded that a dark or bright image does not affect the depth positions of the point cloud, except in the presence of saturated highlights. So, if there are saturated highlights, the image sequences need to be repaired. By comparing and analyzing the performance of the traditional sharpness functions and the new FMO, a window size of 24 × 24 was found to be optimal. The improved function has good sharpness and a single peak.

The experimental results showed that this new operator has stronger robustness and higher accuracy. The relative error of the standard sample block depth data reached 2.32%. In the measurement of the fuel nozzle cone angle, the relative error is 2.46%. This method can meet the measurement requirements of the fuel nozzle and other complex structure objects.

National Natural Science Foundation of China (Grant no. 51975495); the Fujian Province Science and Technology Innovation Platform Project (Grant no. 2022-P-022); the Guiding Funds of the Central Government to Support the Development of Local Science and Technology (Grant no. 2022L3049).

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

  1. E. Peiner, M. Balke, and L. Doering, “Form measurement inside fuel injector nozzle spray holes,” Microelectron. Eng. 86, 984-986 (2009).
  2. K.-Y. Huang and Y.-T. Ye, “Machine vision system for the inspection of micro-spray nozzle,” Sensors 15, 15326-15338 (2015).
  3. Z. Li, W. Zhao, Z. Wu, H. Gong, Z. Hu, J. Deng, and L. Li, “The measurement of internal surface characteristics of fuel nozzle orifices using the synchrotron X-ray micro-CT technology,” Sci. China Technol. Sci. 61, 1621-1627 (2018).
  4. L. Hou, J. Zou, W. Zhang, Y. Chen, W. Shao, Y. Li, and S. Chen, “An improved shape from focus method for measurement of three-dimensional features of fuel nozzles,” Sensors 23, 265 (2023).
  5. Y. Li, L. Hou, and Y. Chen, “Fractal analysis of fuel nozzle surface morphology based on the 3D-sandbox method,” Micromachines 14, 904 (2023).
  6. D. Martisek, “Fast shape from focus method for 3D object reconstruction,” Optik 169, 16-26 (2018).
  7. T. Yah, Z. Hu, Y. Qian, Z. Qiao, and L. Zhang, “3D shape reconstruction from multifocus image fusion using a multidirectional modified Laplacian operator,” Pattern Recognit. 98, 107065 (2020).
  8. B. Billiot, F. Cointault, L. Journaux, J.-C. Simon, and P. Gouton, “3D image acquisition system based on shape from focus technique,” Sensors 13, 5040-5053 (2013).
  9. I. Helmy and W. Choi, “Machine learning-based automatic focusing for high magnification system,” Eng. Appl. Artif. Intell. 113, 105648 (2023).
  10. S. Pertuz, D. Puig, and M. A. Garcia, “Analysis of focus measure operators for shape from focus,” Pattern Recognit. 46, 1415-1432 (2013).
  11. H.-S. Jang, G. Yun, H. Mutahira, and M. S. Muhammad, “A new focus measure operator for enhancing image focus in 3D shape recovery,” Microsc. Res. Tech. 84, 2483-2493 (2021).
  12. I. Lee, M. T. Mahmood, and T.-S. Choi, “Adaptive window selection for 3D shape recovery from image focus,” Opt. Laser Tech. 45, 21-31 (2013).
  13. I.-H. Lee, S.-O. Shim, and T.-S. Choi, “Improving focus measurement via variable window shape on surface radiance distribution for 3D shape reconstruction,” Opt. Lasers Eng. 51, 520-526 (2013).
  14. U. Ali and M. T. Mahmood, “Energy minimization for image focus volume in shape from focus,” Pattern Recognit. 126, 108559 (2022).
  15. I.-H. Lee, M. T. Mahmood, S.-O. Shim, and T.-S. Choi, “Optimizing image focus for 3D shape recovery through genetic algorithm,” Multimed. Tools Appl. 71, 247-262 (2013).
  16. B. Fu, R. He, Y. Yuan, W. Jia, S. Yang, and F. Liu, “Shape from focus using gradient of focus measure curve,” Opt. Lasers Eng. 160, 107320 (2023).
  17. J. Gladines, S. Sels, De. Boi, and S. Vanlanduit, “A phase correlation based peak detection method for accurate shape from focus measurement,” Measurement 213, 112726 (2023).
  18. U. Ali, I. H. Lee, and M. T. Mahmood, “Incorporating structural prior for depth regularization in shape from focus,” Comput. Vis. Image Underst. 227, 103619 (2023).
  19. T. Fan and H. Yu, “A novel shape from focus method based on 3D steerable filters for improved performance on treating texture less region,” Opt. Commun. 410, 254-261 (2018).
  20. J. Surh, H.-G. Jeon, Y. Park, S. Im, H. Ha, and I. S. Kweon, “Noise robust depth from focus using a ring difference filter,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Honolulu, HI, USA, Jul. 21-26, 2017), pp. 6328-6337.
  21. Y. Wang, H. Jia, P. Jia, K. Chen, and X. Zhang, “A novel algorithm for three-dimensional shape reconstruction for microscopic objects based on shape from focus,” Opt. Laser Tech. 168, 109931 (2024).
  22. Y. Li, Z. Li, C. Zheng, and S. Wu, “Adaptive weighted guided image filtering for depth enhancement in shape-from-focus,” Pattern Recognit. 131, 108900 (2022).
  23. W. Feng, X. Cheng, X. Li, Q. Liu, and Z. Zhai, “Specular highlight removal based on dichromatic reflection model and priority-based adaptive direction with light field camera,” Opt. Lasers Eng. 172, 107856 (2024).
  24. Z. Li, J. Dong, W. Zhong, G. Wang, X. Liu, Q. Liu, and X. Song, “Motionless shape-from-focus depth measurement via high-speed axial optical scanning,” Opt. Commun. 546, 129756 (2023).
  25. Z. Ma, D. Kim, and Y.-G. Shin, “Shape from focus reconstruction using nonlocal matting Laplacian prior followed by MRF-based refinement,” Pattern Recognit. 103, 107302 (2020).
  26. Z. Zhang, F. Liu, Z. Zhou, Y. He, and H. Fang, “Roughness measurement of leaf surface based on shape from focus,” Plant Methods 17, 72 (2021).
  27. K. Xie, D. Lei, W. Du, P. Bai, F. Zhu, and F. Liu, “A new operator based on edge detection for monitoring the cable under different illumination,” Mech. Syst. Signal Process. 187, 109926 (2023).
  28. Y. Wang, K. Chen, H. Jia, P. Jia, and X. Zhang, “Shape from focus reconstruction using block processing followed by local heat-diffusion-based refinement,” Opt. Lasers Eng. 170, 107754 (2023).
  29. S. K. Nayar and Y. Nakagawa, “Shape from focus: An effective approach for rough surfaces,” in Proc. IEEE International Conference on Robotics and Automation (Cincinnati, OH, USA, May 13-18, 1990), pp. 218-225.
  30. M. G. Chun and S. G. Kong, “Focusing in thermal imagery using morphological gradient operator,” Pattern Recognit. Lett. 38, 20-25 (2014).
  31. Y. Tian, H. Cui, Z. Pan, J. Liu, S. Yang, L. Liu, W. Wang, and L. Li, “Improved three-dimensional reconstruction algorithm from a multifocus microscopic image sequence based on a nonsubsampled wavelet transform,” Appl. Opt. 57, 3864-3872 (2018).
  32. Y. Li, L. Hou, and Y. Chen, “3D measurement method for saturated highlight characteristics on surface of fuel nozzle,” Sensors 22, 5661 (2022).
  33. C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman, “PatchMatch: A randomized correspondence algorithm for structural image editing,” ACM Trans. Graph. 28, 24 (2009).
  34. K. He and J. Sun, “Image completion approaches using the statistics of similar patches,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 2423-2435 (2014).
  35. J. Cheng and Z. Li, “Markov random field-based image inpainting with direction structure distribution analysis for maintaining structure coherence,” Signal Process. 9, 182-197 (2019).
  36. S. M. I. Zolanvari and D. F. Laefer, “Slicing method for curved façade and window extraction from point clouds,” ISPRS J. Photogramm. Remote Sens. 119, 334-346 (2016).
  37. D. Krawczyk and R. Sitnik, “Segmentation of 3D point cloud data representing full human body geometry: A review,” Pattern Recognit. 139, 109444 (2023).

Article

Research Paper

Curr. Opt. Photon. 2024; 8(2): 170-182

Published online April 25, 2024 https://doi.org/10.3807/COPP.2024.8.2.170

Copyright © Optical Society of Korea.

Point Cloud Measurement Using Improved Variance Focus Measure Operator

Yeni Li1,2, Liang Hou1 , Yun Chen1 , Shaoqi Huang3

1Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen 361102, China
2School of Mechanical and Automotive Engineering, Xiamen University of Technology, Xiamen 361024, China
3Aero Engine Corporation of China (AECC) Guizhou Liyang Aviation Power Co., Ltd., Guiyang 550014, China

Correspondence to:*hliang@xmu.edu.cn, ORCID 0000-0002-8271-9208
**yun.chen@xmu.edu.cn, ORCID 0000-0001-5548-7256

Received: November 24, 2023; Revised: January 23, 2024; Accepted: February 27, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The dimensional accuracy and consistency of a dual oil circuit centrifugal fuel nozzle are important for fuel distribution and combustion efficiency in an engine combustion chamber. A point cloud measurement method was proposed to solve the geometric accuracy detection problem for the fuel nozzle. An improved variance focus measure operator was used to extract the depth point cloud. Compared with other traditional sharpness evaluation functions, the improved operator can generate the best evaluation curve, and has the least noise and the shortest calculation time. The experimental results of point cloud slicing measurement show that the best window size is 24 × 24 pixels. In the height measurement experiment of the standard sample block, the relative error is 2.32%, and in the fuel nozzle cone angle measurement experiment, the relative error is 2.46%, which can meet the high precision requirements of a dual oil circuit centrifugal fuel nozzle.

Keywords: Focus measure operator, Fuel nozzle, Out swirl injector, Point cloud measurement, Shape from focus

I. INTRODUCTION

The spray performance of a fuel nozzle has a great impact on the efficiency of the engine. In the combustion chamber of an aircraft engine, droplets are mixed with air to form an air-mixed fuel, which can provide energy for full combustion. A common engine fuel nozzle in the combustion chamber includes a swirl atomizer, pneumatic pesticide atomizer, and evaporative pipe atomizer. The swirl atomizer includes a single-orifice swirl atomizer and a dual-orifice swirl atomizer. The dual-orifice swirl atomizer’s structure is relatively complex; It mainly comprises an outer swirl injector and an inner swirl injector, which fuel the main oil circuit and the vice oil circuit. The dimensions of the entrance of the outer swirl injector orifice and the quality of the inner orifice surface directly affect combustion efficiency and emissions.

A number of methods have been studied to achieve high-accuracy detection of the fuel nozzle: Peiner et al. [1] researched a tactile sensor to detect the shape and roughness of the diesel nozzle hole to realize its characteristics and roughness. Huang and Ye [2] presented a method based on the gray level co-occurrence matrix (GLCM) and a circle inspection algorithm to detect the internal defects of a micro-spray nozzle, then applied a backpropagation neural network classifier to accurately identify nozzle defects, with an accuracy rate of 90.71%. Li et al. [3] used synchrotron X-ray micro-CT technology to evaluate wall surface characteristics. This technology can create a three-dimensional digital model of the fuel nozzle tip and represent the nozzle’s internal surface waviness with a spatial resolution of 3.7 µm.

However, these methods have limitations and cannot simultaneously achieve fast, non-destructive, high-precision comprehensive measurements. In order to obtain the high-precision depth of the outer swirl injector, several studies have proposed passive optical methods to obtain the accuracy depth of the fuel nozzle.

Several 3D measurement methods, including contact and non-contact measurement, were studied: The stylus measurement method is a typical contact measurement that uses a needle to contact the sample surface under a certain pressure and controls the needle to move slowly along the surface structure to obtain depth data. This method has the characteristics of stable measurement data, simple operation, and extensive measurement range, but may damage the surface structure and has limitations for certain complex structures.

In non-contact methods, electron microscopes and atomic force microscopes have super-resolution imaging characteristics, but their complex operation is unsuitable for industrial measurements. Shape from focus (SFF) technology, one of the non-contact methods, is a kind of passive optical method that uses focus position to reconstruct a three-dimensional object from image sequences. It can obtain the depth information of each pixel or pixel window by searching for the best focus position with the maximum focus volume (FV). This technology has the advantages of a simple principle, high accuracy, good real-time performance, and low scene requirements, which meet the requirements of 3D measurement of the fuel nozzle.

Hou et al. [4] proposed an improved fast SFF method to realize the precise measurement of the key features of a dual-orifice pressure-swirl nozzle, an approximate method that uses the peak region of the central pixel to replace the peak region of other pixels applied to detection. Li and Chen [5] verified that non-contact measurement based on shape from focus has the advantages of low cost and fast detection speed, and can achieve cross-scale size and morphology measurement.

There are two main kinds of research to improve the accuracy of 3D point cloud measurement; Improving the performance of the focus measure operator (FMO), and improving the quality of the image sequences. FMO performance directly affects the accuracy of the depth point cloud. So improved FMOs have been proposed to obtain the accurate depth position in the former method.

Martisek [6] presented a precise method for both 2D and 3D reconstruction, based on the Fourier transform. This method can produce a better result than reconstructions from confocal microscopes or 3D scanners, and the instruments are cheaper and easier to get. Yah et al. [7] proposed a novel multidirectional modified Laplacian operator that can directly identify the depth map and fused image simultaneously. The experimental results show that the proposed method performs better and is suitable for the field of quality inspection for micromanufacturing processes. Billiot et al. [8] provided a FMO based on the variance of Tenengrad to obtain an accuracy enabling the characterization of the number of grains per wheat ear to evaluate yield at an early stage. Heilmy and Choi [9] proposed a fuzzy-based focus measure to handle imprecise data. The robust operator can accurately estimate the focus level for high-magnification astronomical images with high blue colors. Pertuz et al. [10] presented and applied a methodology to compare the performance of different FMOs for SFF. The performance of the different operators was assessed in experiments.

Several factors can affect the quality of depth information extracted by the FMO, such as object texture, window size, surface roughness, illumination, and object material. Therefore some methods try to improve depth accuracy by optimizing these factors. Jang et al. [11] proposed a new FMO based on the adaptive sum of weighted modified Laplacian. The adaptive window size selection algorithm is based on the variance of gray level intensity in the image window, which is robust against image noise. Lee et al. [12] introduced an adaptive window to compute and enhance the focus measure. The proposed method was experimented using image sequences of synthetic and real objects. Lee et al. [13] proposed a pixel-by-pixel semi-variogram to determine the optimum window size. This technique can improve the quality of focus measurement in comparison to previous methods based on a fixed square window. Ali and Mahmood [14] found that the quality of a depth map is mainly dependent on the accuracy level of the image focus volume. They optimized the focus volume with energy minimization. Lee et al. [15] proposed a method to optimize focus measurement for SFF based on a genetic algorithm (GA). They segmented the cell background to optimize focus measurement, and applied the GA to the variance components with a small window. Fu et al. [16] proposed a robust SFF method in the process of finding the best-focused position. They calculated the gradient of the focus measure curve with an adaptive derivative step. The zero point of the gradient curve and the derivative step are used to find the best focused position, and the best focus position directly determine the accuracy of the 3D point cloud.

In the latter method of improving the accuracy of 3D point cloud measurement, improving the image quality or enhancing the initial depth map are used. These methods have strong robustness to noise, obtain more accurate focus values, and have higher precision in depth maps.

For instance, Gladines et al. [17] proposed phase correlation (PC) and shifted phase correlation (SPC) methods that outperform traditional methods in terms of measurement accuracy and robustness to noise. Ali et al. [18] proposed a structural prior that helps to maintain the structural details in the recovered depth map. By exploiting guided filtering, they improve the initial depth map with weighted least squares (WLS)-based regularization. Fan and Yu [19] developed a novel shape from a focus method combining a 3D steerable filter for improved performance on treating texture-less regions. The edge response and the axial imaging blur degree were used in this method. The results showed that more robust and accurate identification for the focused location can be achieved. Surh et al. [20] used both local and nonlocal characteristics. The structure of this new FMO makes the focus measure more robust against noise. Wang et al. [21] built a depth map denoising model to improve the depth map. The noise in the depth map is divided into anomalous noise and minor noise, and spatial clustering and bilateral filtering are used to process them separately. The noise reduction effect shows good results. Li et al. [22] proposed adaptive weighted guided image filtering for depth enhancement in the shape from focus.

This paper is organized as follows: Section 1 briefly introduces the related works of shape from focus technology. Section 2 briefly introduces the principle of this method and shows a 3D point cloud of a fuel nozzle extracted by the preprocessing image sequences based on saturated highlight removal. The new FMO is presented in Section 3. The measurement method for the fuel nozzle point cloud is presented in Section 4. In this section, the point cloud slicing method was applied to obtain the center coordinate and the radius of the inlet hole at different positions. Then two slices of different depths were combined to obtain the inner hole conical degree. Section 5 analyzes the experimental results, and the conclusion is provided in Section 6.

II. METHOD

In the main oil circuit and the vice oil circuit, the key dimensions and orifice quality of the outer swirl injector directly affect the effectiveness and uniformity of fuel atomization Therefore, it is necessary to research injector performance. High precision three-dimensiaonal measurement of key dimensions of fuel nozzles The results can provide a reference for its optimization for design and processing.

The material of the swirl injector is martensitic stainless steel 9Cr18. Due to the reflection characteristics of a metal surface, it is easily prone to mirror reflection. At the same time, the illumination and reflection conditions in the assembly environment are complex, so there may be highlights on the surface of the fuel nozzle. Therefore, when using SFF technology to obtain image sequences and extract depth information from them, the problem of inaccurate data will be encountered.

According to [23], in specular images, the highlight areas contain some saturated pixels and some unsaturated pixels. In the HSV color space, the pixels are set to be saturated highlights when the brightness value is greater than a certain threshold. On the contrary, the remaining specular highlight pixels are set to be unsaturated pixels.

The saturated highlights will directly affect the image quality, especially when the depth data is extracted. So, it is necessary to remove the saturated highlights and repair the surface texture information. The more saturated the highlights, the greater the deviation of the point cloud. The fuel nozzle parts are mainly processed on Bumotec S191, which is a kind of turning and milling composite processing equipment.

The machining requirements of the outer swirl injector are as follows:

  • Turning the outlet injector.

  • Milling the outer cone.

  • Drilling the outlet hole.

  • Milling the swirl.

  • Intermediate inspection, heat treatment, polishing, and grinding of the large end and inner cone are conducted.

  • Clean the inner hole and complete the last inspection.

The roughness of the inner surface of the fuel nozzle directly affects the uniformity of fuel atomization. Therefore, the roughness of the inner surface of the fuel nozzle is require to not exceed 0.1 mm. I The minimum orifice aperture is approximately Ф 0.42 mm. The dimension accuracy and surface morphology of the out-swirl injector will determine the degree of fuel atomization. As shown in Fig. 1, the 3D point cloud extraction method can be explained as follows:

Figure 1. Schematic diagram of 3D point cloud extraction.

Step 1: Place the fuel nozzle on the measurement platform, which is composed of an industrial camera, a microscope lens, and a light source. Move the camera from bottom to top along the optical axis with a fixed step to change the distance between the object and the camera. As the camera moves, each region of the fuel nozzle first appears blurred, then comes into focus, and then becomes blurred again.

Step 2: Because of equipment and environmental limitations during image sequence acquisition, image quality may be low. In particular, when collecting images of inner holes, sufficient light-source brightness must be provided, and highlights may appear in some areas of the metal surface. Unsaturated highlights do not affect the extraction of point clouds, while saturated highlights lead to deviations in point cloud extraction. Therefore, image preprocessing is necessary before point cloud extraction: if an image contains saturated highlights, the highlights must be removed and the image repaired.

Step 3: Each frame of the image sequence is segmented according to a suitable window size, and the improved variance FMO is used to evaluate the image sharpness. The best focus position of each window is the frame at which its focus volume is maximal (a minimal sketch of this step follows the step list). The initial point cloud is then extracted, and calibrated data are obtained from the image resolution and the real travel of the z-axis.

Step 4: A tangent plane with a certain thickness is taken from the point cloud. The point cloud slice is converted into 2D grayscale values, the Hough transform is used to obtain the position and radius of the fitted circle, and the conical degree and cone angle of the fuel nozzle are calculated from these.
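The sketch referenced in Step 3 is given below. It evaluates a focus measure on non-overlapping windows of every frame and, for each window, takes the frame of maximum focus volume as the depth index. The function and array names are illustrative assumptions, not the authors' MATLAB code.

```python
import numpy as np

def focus_volume(frames, focus_measure, win=24):
    """Evaluate a focus measure on non-overlapping win x win windows of each frame.
    `frames` is a list of 2-D grayscale arrays; `focus_measure` is any sharpness
    function operating on a window (e.g. the improved variance EOG of Section III).
    Returns an array of shape (n_frames, H // win, W // win)."""
    n = len(frames)
    h, w = frames[0].shape
    vol = np.zeros((n, h // win, w // win))
    for k, img in enumerate(frames):
        for i in range(h // win):
            for j in range(w // win):
                block = img[i * win:(i + 1) * win, j * win:(j + 1) * win]
                vol[k, i, j] = focus_measure(block)
    return vol

def depth_index_map(vol):
    """Step 3: the best-focus frame for each window is where the focus volume
    is maximal along the frame axis."""
    return np.argmax(vol, axis=0)
```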

As shown in Fig. 2, a transparent calibration ruler is used to calculate the image pixel equivalent. The minimum scale of the ruler is 0.1 mm and the width of each milled line is 0.05 mm. The horizontal field of view is 2.77 mm, the vertical field of view is 1.85 mm, and the image resolution is 5,496 × 3,672. The pixel equivalent of the x-axis is therefore 2.77 mm/5,496 pixels and that of the y-axis is 1.85 mm/3,672 pixels, so the average pixel equivalent of the camera field of view is 0.504 μm. The lead of the screw is 4 mm and the stepper motor has 200 steps per revolution; with the driver subdivision set to 16, one revolution corresponds to 3,200 pulses. When the z-axis sends 1,600 pulses, the motor turns half a revolution and the stage travels 2 mm along the screw. The number of captured images over this travel is 201 frames, so the frame distance is 10 μm.

Figure 2. Field of view: (a) x-axis, (b) y-axis.

$X_c = 0.504 \times \omega \times X_i$,  (1)
$Y_c = 0.504 \times \omega \times Y_i$,  (2)
$Z_c = 10 \times Z_i$.  (3)

In Eqs. (1)–(3), Xi is the initial depth-data index along the x-axis, Yi is the initial depth-data index along the y-axis, Zi is the z-axis depth index (frame number), and ω is the window size of the FMO. Xc, Yc, and Zc are the actual coordinates of each point; the calibrated coordinates of the x-, y-, and z-axes are in μm.
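The calibration of Eqs. (1)–(3) can be written compactly as follows. This is a sketch, not the authors' code, assuming the 0.504 μm pixel equivalent, ω = 24, and the 10 μm frame distance described above.

```python
import numpy as np

def calibrate(xi, yi, zi, omega=24, pixel_um=0.504, frame_um=10.0):
    """Apply Eqs. (1)-(3): convert window indices (xi, yi) and frame indices zi
    of the initial depth map into micrometres."""
    xc = pixel_um * omega * np.asarray(xi, dtype=float)   # Eq. (1)
    yc = pixel_um * omega * np.asarray(yi, dtype=float)   # Eq. (2)
    zc = frame_um * np.asarray(zi, dtype=float)           # Eq. (3)
    return xc, yc, zc
```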

III. IMPROVED FOCUS MEASURE OPERATOR

3.1. Focus Measure Operator

The focus evaluation criterion is the basis for evaluating image sharpness. The performance of the FMO directly affects the reconstruction accuracy of the fuel nozzle. When the object is accurately focused, the edges of the image are clear and the gray contrast of the image is strong; in the frequency domain, the image contains more high-frequency components [24]. An ideal focus measure function should have good unimodal properties, with a unique maximum at the in-focus position; high sensitivity, good stability, and low computational complexity are also desirable [25]. The traditional focus evaluation functions cannot fully meet these high-precision requirements.

The FMO assesses the sharpness of each pixel, and FMOs can be divided into two main families: one based on the spatial domain and the other based on the frequency domain [26]. Traditional image evaluation functions include gray gradient functions, gray entropy functions, and frequency domain functions. The calculation times of the gray entropy and frequency domain functions are too long for real-time measurement [27]. Typical spatial evaluation functions include the energy of gradient function, Robert function, Tenengrad function, Brenner function, variance function, Laplacian function, and Vollath function [28].

The energy of the image gradient (EOG) function is the sum of squared directional gradients [29]. It is similar to the Tenengrad function in that it uses the difference between adjacent points to calculate the gradient at a point [30]. It is defined as:

$F_{EOG} = \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \left[ G_x(x,y)^2 + G_y(x,y)^2 \right]$,  (4)
$G_x(x,y) = I(x+1,y) - I(x,y)$,  $G_y(x,y) = I(x,y+1) - I(x,y)$.  (5)

The variance function represents the statistical dispersion of the image gray-level distribution; the gray-value range of a defocused image is small, so its statistical dispersion is low [31]. The larger the variance function value, the greater the statistical dispersion of the image gray-level distribution. It is defined as:

$F_{Variance} = \sum_x \sum_y \left[ I(x,y) - \mu \right]^2$,  (6)
$\mu = \frac{1}{M \times N} \sum_x \sum_y I(x,y)$.  (7)

In Eq. (7), μ is the mean gray value of the image, and M and N are the numbers of pixels in the x- and y-directions, respectively.
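For reference, minimal implementations of the two operators just defined, Eqs. (4)–(5) and Eqs. (6)–(7), on a single image window might look as follows. These are sketches rather than the authors' code; the handling of the window border follows the summation limits of Eq. (4).

```python
import numpy as np

def eog(window):
    """Energy of image gradient, Eqs. (4)-(5): sum of squared forward
    differences along x and y over the (M-1) x (N-1) interior."""
    win = window.astype(float)
    gx = win[1:, :-1] - win[:-1, :-1]   # Gx(x, y) = I(x+1, y) - I(x, y)
    gy = win[:-1, 1:] - win[:-1, :-1]   # Gy(x, y) = I(x, y+1) - I(x, y)
    return np.sum(gx ** 2 + gy ** 2)

def variance_fm(window):
    """Variance focus measure, Eqs. (6)-(7): squared deviation of the gray
    levels from the window mean."""
    win = window.astype(float)
    return np.sum((win - win.mean()) ** 2)
```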

3.2. The Improved Focus Measure Operator

The fuel nozzle topography reflects the surface roughness and the quality of the machining process. The sharpness function used to extract the 3D point cloud of the fuel nozzle should satisfy requirements such as good unimodality, high efficiency, and strong robustness. When depth point clouds are extracted with the variance function alone, the unimodality of the evaluation curve deteriorates: two relatively close peaks appear, as shown in Fig. 3(b).

Figure 3. Focus evaluation function curve: (a) Energy of the image gradient (EOG) function, (b) variance function, (c) variance EOG function.

Although the EOG function exhibits a certain degree of volatility, it has good unimodal performance. By combining the advantages of the two functions, a new function is created. In experiments, the weights of the EOG function and the variance function were determined to be 0.3 and 0.7, respectively. As shown in Fig. 3(c), the new function has good unimodality and its secondary peak is lower than that of the variance function. The improved sharpness function, based on the variance function and the EOG function, is therefore defined as:

$F_{Variance-EOG} = \sum_x \sum_y \left\{ 0.7 \times \left[ I(x,y) - \mu \right]^2 + 0.3 \times \left[ I(x+1,y) - I(x,y) \right]^2 \right\}$, where $\mu = \frac{1}{M \times N} \sum_x \sum_y I(x,y)$.  (8)

In experiments, it has been confirmed that the point cloud has the least noise when the weights are 0.7 and 0.3 in Eq. (8). Therefore, the combination of these weights is used for the creation of a new clarity evaluation function.
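A corresponding sketch of the improved variance EOG operator of Eq. (8) is given below; how the last row of the window (where I(x+1, y) is undefined) is treated is an implementation assumption. In the pipeline of Section II, this function could be passed as the focus_measure argument of the focus-volume sketch.

```python
import numpy as np

def variance_eog(window, w_var=0.7, w_eog=0.3):
    """Improved focus measure of Eq. (8): weighted sum of the variance term
    and the squared forward difference along x (weights 0.7 and 0.3)."""
    win = window.astype(float)
    mu = win.mean()
    var_term = (win[:-1, :] - mu) ** 2            # 0.7 * (I(x,y) - mu)^2
    eog_term = (win[1:, :] - win[:-1, :]) ** 2    # 0.3 * (I(x+1,y) - I(x,y))^2
    return np.sum(w_var * var_term + w_eog * eog_term)
```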

The performance of the new FMO is evaluated by the sharpness evaluation curve and the accuracy of the 3D point cloud. As shown in Fig. 4, a window of 24 × 24 pixels at image position (2029, 2515) is used to extract the 3D point cloud of the fuel nozzle [32]. The EOG function in Fig. 3(a) has good unimodal performance, but there are two closely spaced points near its maximum. The variance function in Fig. 3(b) has worse unimodal performance than the EOG function, and its maximum also has two closely spaced points. The improved variance EOG function in Fig. 3(c) has the best unimodal performance, with no competing points near the maximum. In conclusion, as shown in Fig. 3, the best focus frame is 62.

Figure 4. The best focus image frame.

3.3. Experiment of the Focus Measure Operator

The newly created sharpness evaluation function variance EOG was used to extract the point cloud from the inlet fuel nozzle image sequences with different window sizes.

The resolution of the fuel nozzle image is 5,496 × 3,672 pixels, so it is advisable to choose window sizes that divide the image dimensions evenly for validation. Window sizes of 6, 12, and 24 divide both dimensions evenly, while 18, 30, and 36 do not, as the following check shows.
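A quick check of the divisibility claim (an illustrative snippet, not part of the measurement software):

```python
# Check which candidate window sizes tile the 5,496 x 3,672 image exactly.
width, height = 5496, 3672
for win in (6, 12, 18, 24, 30, 36):
    divides = (width % win == 0) and (height % win == 0)
    print(f"{win:>2} x {win:<2}: {'tiles evenly' if divides else 'does not tile evenly'}")
```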

The calibrated point clouds shown in Fig. 5 indicate that the extracted 3D point cloud is very sensitive to window size. The noise is minimal when the window size is 24, and the number of points is 229 × 153, which is sufficient and does not require removal of scattered points. The noise is extensive when the window size is 6, and it decreases as the window size increases. The point cloud is denser when the window size is 12. The noise continues to decrease and the number of points is moderate when the window size is 18 or 24. The noise is not high, but some depth information is lost, when the window size is 30 or 36. The difference between window sizes 18 and 24 lies only in the amount of noise, and there is little difference between them. Therefore, a 24 × 24 window size is selected.

Figure 5. 3D reconstruction results of different window sizes based on the new focus measure operator. (a) 6 × 6 size, (b) 12 × 12 size, (c) 18 × 18 size, (d) 24 × 24 size, (e) 30 × 30 size, and (f) 36 × 36 size.

To further compare the performance of the sharpness evaluation functions in point cloud extraction, point clouds were extracted from the No. 1 fuel nozzle image sequences with the different functions. A window size of 24 × 24 was adopted to extract the depth data, since this window size extracts enough points with little noise, as shown in Fig. 6.

Figure 6. Point cloud extraction with the window size of 24 × 24 based on the different focus measure operators: (a) Energy of the image gradient (EOG) function, (b) variance function, and (c) variance EOG function.

The newly created variance EOG function produces the best point data, while the EOG function produces the worst; the other functions introduce some noise into the point cloud. As shown in Fig. 6, the 3D point cloud is extracted by the different FMOs.

The depth data extracted by the EOG function contain the most noise points, as shown in Fig. 6(a), while the point cloud extracted by the variance function, shown in Fig. 6(b), contains fewer noise points. The best point cloud is the one in Fig. 6(c), which was extracted by the new sharpness function.

The extraction times for the point clouds are shown in Table 1. Combining the analysis of the point clouds and the calculation times of the functions, the improved focus evaluation function is the most suitable for extracting the point cloud of the fuel nozzle.

TABLE 1. Calculation time.

No. | Focus Measure Operator | Time (s)
1 | Energy of Gradient | 160.48
2 | Variance | 186.08
3 | Variance EOG | 171.19

IV. POINT CLOUD MEASUREMENT

4.1. Image Processing and Saturated Highlight Removal

As shown in Fig. 7, heat-treated fuel nozzle No. 1 was used to measure the geometric parameters. The inlet circle diameter and the conical degree of the fuel nozzle were measured using the improved focus measure operator. Owing to the high smoothness and reflectivity of the inner hole surface, there were some saturated highlights on the surface. Saturated highlights can lead to errors or deviations when the 3D point cloud is extracted, so images with saturated highlights should be repaired before depth extraction.

Figure 7. Dimensions of the fuel nozzle.

The highlight areas in focused frames of the image sequences should be inpainted, while defocused highlight frames can be ignored. A method based on MRF patch-match copies patches from highlight-free areas to the highlight area within the same focus region [33]. First, the image is segmented into two main regions, the inlet hole and the entrance annulus. Second, the offset between the two most similar patches in each subregion is computed with a patch size of 12 × 12. Third, after the optimized image offset map is obtained, the approximate offset value is calculated using the nearest-neighbor field (NNF) algorithm [34]. Finally, according to the size and distribution of the energy labels, the best-matching patch of the subregion is copied to the highlight area of the same subregion [35].
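The MRF patch-match repair itself is too involved for a short listing. As a simplified stand-in, the sketch below fills the saturated-highlight mask (e.g. from the snippet in Section II) with OpenCV's diffusion-based inpainting, which replaces the patch-copying step described above and is intended only to show where the repair sits in the pipeline.

```python
import cv2

def repair_saturated_highlights(image_bgr, highlight_mask, radius=5):
    """Simplified stand-in for the MRF patch-match repair: fill the masked
    saturated-highlight pixels with OpenCV's Telea inpainting.  highlight_mask
    is a uint8 image in which highlight pixels are 255."""
    return cv2.inpaint(image_bgr, highlight_mask, radius, cv2.INPAINT_TELEA)
```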

The improved variance EOG focus measure operator was used for point cloud extraction. As shown in Fig. 8(a), there are some noisy points in the 3D point cloud. After brightening the image, the noise of the point cloud decreases, as shown in Fig. 8(b). After darkening the image, the noise of the point cloud increases, as shown in Fig. 8(c).

Figure 8. Point cloud extraction from different images: (a) Original image, (b) enhancement image, and (c) darkened image.

It can be concluded that the changes in image contrast and brightness have a very small impact on point cloud extraction. If the sharpness of the image is sufficient and there are no saturated highlights during the image acquisition process, the texture of the fuel nozzle inner hole image can be preserved. Therefore, regardless of the changes in image brightness and contrast, there is little difference when the 3D point cloud is extracted by the sharpness function. Consequently, if a stable light source was used in a sequence image acquisition system, the image sequences would be relatively stable, but the saturated highlights would lead to some texture loss. So, it is necessary to remove the saturated highlights, but the dark or bright areas that have clear textures do not need inpainting.

4.2. Calculation of Inner Hole Taper

There are two main kinds of methods for measuring 3D point clouds: One is the projection method and the other is the slicing method [36]. The projection method projects the triangular mesh of the point cloud onto a designated projection plane, which requires checking hole filling and topology. The slicing method converts a complex 3D model into 2D models, which greatly reduces the spatial complexity without affecting measurement accuracy [37].

The point cloud of the tangent plane with a certain thickness was obtained as a new point cloud set. The thickness of the point cloud slices should be moderate. If the thickness of the tangent is too large, the measurement deviation of the fitted circle will be large. If the thickness is too small, the number of points will be insufficient.

The point cloud measurement of the fuel nozzle based on the improved sharpness function is shown in Fig. 9:

Figure 9. The flow of point cloud measurement.

Step 1: The sequence image data were acquired at equal distance and processed on the MATLAB 2017 platform.

Step 2: Saturated highlights were removed and the image texture was inpainted by copying the similar patch algorithm.

Step 3: An improved sharpness evaluation operator was used to calculate the focus volume of the patch.

Step 4: The initial depth data were extracted by this new FMO with a suitable window size after traversing each pixel block.

Step 5: The initial depth data were calibrated to the true dimensions of the fuel nozzle.

Step 6: Two different heights of the point cloud were selected, and two slicing point clouds were obtained as new point sets. The slicing point should have a certain thickness.

Step 7: The 3D slice was converted into a 2D gray image, and the Hough transform was used to obtain the circle center and radius. Then the taper and the cone angle were calculated according to Eqs. (9)–(11).

A suitable thickness is chosen for the tangent plane. Point cloud slices are obtained along the z-axis of the initial point cloud as shown in Fig. 10, and the direction of the normal vector is consistent with the center axis of the nozzle. The intersected 3D point cloud is converted into a 2D plane, and circle fitting is then performed to obtain the center and radius of the sliced point cloud after the two-dimensional transformation. In Fig. 10, d2 is the diameter of the larger cross-sectional circle and d1 is the diameter of the smaller one, the full angle of the fuel nozzle cone is 2α, and the distance between the two slice point clouds is h. The center and diameter of each intercepted cross section can be obtained by the Hough transform algorithm, so that the cone angle and the conical degree can be calculated by Eqs. (9)–(11).
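A minimal sketch of Steps 6 and 7 is shown below: points within a band around a chosen height are kept, rasterized onto a 2D image, and a circle is detected with the Hough transform. The rasterization grid and the HoughCircles parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2

def slice_points(points, z_center, thickness):
    """Step 6: keep the points whose z-coordinate lies within +/- thickness/2
    of z_center.  `points` is an (N, 3) array of calibrated coordinates."""
    keep = np.abs(points[:, 2] - z_center) <= thickness / 2.0
    return points[keep]

def fit_circle_hough(slice_pts, pixel_um=1.0):
    """Step 7: rasterize the slice onto a 2D grayscale image and detect the
    circle with the Hough transform.  The parameters are assumptions."""
    xy = slice_pts[:, :2] / pixel_um
    xy = xy - xy.min(axis=0)
    h = int(xy[:, 1].max()) + 3
    w = int(xy[:, 0].max()) + 3
    img = np.zeros((h, w), np.uint8)
    img[xy[:, 1].astype(int), xy[:, 0].astype(int)] = 255
    img = cv2.GaussianBlur(img, (5, 5), 0)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=50, param2=15, minRadius=0, maxRadius=0)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]
    return cx * pixel_um, cy * pixel_um, r * pixel_um   # center and radius
```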

Figure 10. Cross-section circle and cone angle of inner hole.

The conical degree K is given by Eq. (9), and the cone angle is given by Eqs. (10) and (11).

$K = \frac{d_2 - d_1}{h}$.  (9)

The cone angle is then defined as:

$\tan\alpha = \frac{d_2 - d_1}{2h} = \frac{K}{2}$,  (10)
$2\alpha = 2\arctan\frac{K}{2}$.  (11)
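In code, Eqs. (9)–(11) amount to the following sketch (diameters and height must be in the same length unit):

```python
import math

def cone_from_slices(d1, d2, h):
    """Eqs. (9)-(11): conical degree K and full cone angle 2*alpha (in degrees)
    from the fitted slice diameters d1, d2 and their height separation h."""
    K = (d2 - d1) / h                        # Eq. (9)
    alpha = math.atan(K / 2.0)               # Eq. (10)
    return K, 2.0 * math.degrees(alpha)      # Eq. (11)
```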

V. EXPERIMENT

As shown in Fig. 11, a fuel nozzle geometric parameter measurement experimental system was established to measure the diameter and cone angle of the fuel nozzle inner hole. The experimental platform mainly consists of a z-axis stepper motor, an industrial camera, a microscope lens, and a bowl-shaped light source. All experiments were run on a PC with an Intel Core i7 2.9 GHz CPU and 16 GB RAM. The camera model was MER-2000-19U3C, with a resolution of 5,496 × 3,672. In order to obtain better image sequences, a microscope lens with a depth-of-field range of 120–840 μm was used for testing. The lens model is OPTEM-304310, with an object distance of 89 mm. The microscope offers magnifications of 0.7×, 1.0×, 1.5×, 2.0×, 3.0×, 4.0×, and 4.5×. The fuel nozzle image sequences collected in this paper were obtained with the 4.5× magnification. The numerical aperture (NA) of the lens is 0.026–0.070, corresponding to a depth of field of 0.84–0.12 mm.

Figure 11. The fuel nozzle measurement system.

The number of frames is 201. Software implementation and validation were done in LabVIEW and MATLAB on image sequences of the fuel nozzle. The main equipment parameters in the experiment are shown in Table 2.

TABLE 2. Experimental parameters.

Experimental Conditions | Parameter
Resolution of Step Motor (μm) | 0.1
Image Size of CMOS Camera (μm) | 2.4 × 2.4
DOF of Microscopic Lens (μm) | 840–120
Magnification of Microscopic Lens | 0.7×–4.5×
Luminous Diameter of Bowl Light (mm) | 65.6


5.1. Measurement of the Standard Gauge Block

In order to verify the effectiveness of our method, a standard gauge block with a height of 1,150 μm was used to extract the point cloud and measure its true height. A 10 μm step distance was used to collect the image sequence of the standard gauge block, comprising 200 frames. Frames 15, 52, and 167 are shown in Fig. 12(a). 3D point cloud extraction was performed on the image sequence to obtain the initial point cloud, which was then calibrated. The pixel equivalent was 1.85 μm on the x- and y-axes, the window size for sharpness evaluation was 24 × 24, and the pixel equivalent on the z-axis was 10 μm. The calibrated point cloud is shown in Fig. 12(b). Two cross sections in the middle of the point cloud were obtained, and the height distance between the two point clouds was calculated as hz. The measured average height of the standard sample is 1,123.3 μm, giving an error of 26.7 μm and a relative error of 2.32%.

Figure 12. Point cloud of standard sample blocks extracted with a window size of 24 × 24. (a) Sequence images, (b) 3D-point cloud.

5.2. Measuring the Cone Angle and Conical Degree

Heat-treated fuel nozzle No. 1 was used to verify the measurement algorithm. The improved variance EOG with a window size of 24 × 24 provides sufficient depth points with little noise.

The point cloud was extracted by the improved FMO with a window size of 24 × 24. A point cloud slice with a thickness of 4 frames was taken at the 36-frame position of the initial point cloud, corresponding to a height of 413.5 μm in the calibrated point cloud, as shown in Fig. 13(a). After two-dimensional transformation of the 3D slice point cloud, the radius of the fitted circle r1 is 212.2 μm. A second slice with a thickness of 4 frames was taken at the 53-frame position of the initial point cloud, as shown in Fig. 13(b); the calibrated height is 608.8 μm and the radius of the fitted circle r2 is 397.8 μm. The height distance h between the two slices is 195.3 μm. The cone angle 2α is calculated as 87° and the conical degree K is 1.90 according to Eqs. (9)–(11). The measurement result with a Keyence microscope is 89.2°, so the absolute error is 2.2° and the relative error is 2.46%.
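Plugging the slice radii and height separation quoted above into Eqs. (9)–(11) reproduces the reported values within rounding (a quick check using only numbers stated in this section):

```python
import math

# Slice measurements reported above (micrometres).
r1, r2, h = 212.2, 397.8, 195.3
d1, d2 = 2 * r1, 2 * r2
K = (d2 - d1) / h                                  # Eq. (9)  -> 1.90
cone_angle = 2 * math.degrees(math.atan(K / 2))    # Eqs. (10)-(11) -> 87.1 deg
print(f"K = {K:.2f}, 2*alpha = {cone_angle:.1f} deg")
```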

Figure 13. The point cloud extracted by a window size of 24 × 24. (a) The first point slice and fitting circle, (b) the second point slice and fitting circle.

VI. CONCLUSION

In this paper, the improved FMO variance EOG is proposed to solve the high-precision measurement problem for the geometric parameters of fuel nozzles. The proposed operator can extract the depth point cloud with high accuracy and produces the best sharpness evaluation curve. In the image processing step, it was concluded that darkening or brightening the image does not affect the depth positions of the point cloud, whereas saturated highlights do; therefore, if saturated highlights are present, the image sequences need to be repaired. By comparing and analyzing the performance of the traditional sharpness functions and the new FMO, a window size of 24 × 24 was found to be optimal. The improved function has good sharpness evaluation and a single peak.

The experimental results showed that the new operator has stronger robustness and higher accuracy. The relative error of the standard sample block height measurement is 2.32%, and the relative error of the fuel nozzle cone angle measurement is 2.46%. This method can meet the measurement requirements of the fuel nozzle and other objects with complex structures.

FUNDING

National Natural Science Foundation of China (Grant no. 51975495); the Fujian Province Science and Technology Innovation Platform Project (Grant no. 2022-P-022); the Guiding Funds of the Central Government to Support the Development of Local Science and Technology (Grant no. 2022L3049).

DISCLOSURES

The authors declare no conflicts of interest.

DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.


References

  1. E. Peiner, M. Balke, and L. Doering, “Form measurement inside fuel injector nozzle spray holes,” Microelectron. Eng. 86, 984-986 (2009).
  2. K.-Y. Huang and Y.-T. Ye, “Machine vision system for the inspection of micro-spray nozzle,” Sensors 15, 15326-15338 (2015).
  3. Z. Li, W. Zhao, Z. Wu, H. Gong, Z. Hu, J. Deng, and L. Li, “The measurement of internal surface characteristics of fuel nozzle orifices using the synchrotron X-ray micro-CT technology,” Sci. China Technol. Sci. 61, 1621-1627 (2018).
  4. L. Hou, J. Zou, W. Zhang, Y. Chen, W. Shao, Y. Li, and S. Chen, “An improved shape from focus method for measurement of three-dimensional features of fuel nozzles,” Sensors 23, 265 (2023).
  5. Y. Li, L. Hou, and Y. Chen, “Fractal analysis of fuel nozzle surface morphology based on the 3D-sandbox method,” Micromachines 14, 904 (2023).
  6. D. Martisek, “Fast shape from focus method for 3D object reconstruction,” Optik 169, 16-26 (2018).
  7. T. Yah, Z. Hu, Y. Qian, Z. Qiao, and L. Zhang, “3D shape reconstruction from multifocus image fusion using a multidirectional modified Laplacian operator,” Pattern Recognit. 98, 107065 (2020).
  8. B. Billiot, F. Cointault, L. Journaux, J.-C. Simon, and P. Gouton, “3D image acquisition system based on shape from focus technique,” Sensors 13, 5040-5053 (2013).
  9. I. Helmy and W. Choi, “Machine learning-based automatic focusing for high magnification system,” Eng. Appl. Arti. Intell. 113, 105648 (2023).
  10. S. Pertuz, D. Puig, and M. A. Garcia, “Analysis of focus measure operators for shape from focus,” Pattern Recognit. 46, 1415-1432 (2013).
  11. H.-S. Jang, G. Yun, H. Mutahira, and M. S. Muhammad, “A new focus measure operator for enhancing image focus in 3D shape recovery,” Microsc. Res. Tech. 84, 2483-2493 (2021).
  12. I. Lee, M. T. Mahmood, and T.-S. Choi, “Adaptive window selection for 3D shape recovery from image focus,” Opt. Laser Tech. 45, 21-31 (2013).
  13. I.-H. Lee, S.-O. Shim, and T.-S. Choi, “Improving focus measurement via variable window shape on surface radiance distribution for 3D shape reconstruction,” Opt. Lasers Eng. 51, 520-526 (2013).
  14. U. Ali and M. T. Mahmood, “Energy minimization for image focus volume in shape from focus,” Pattern Recognit. 126, 108559 (2022).
  15. I.-H. Lee, M. T. Mahmood, S.-O. Shim, and T.-S. Choi, “Optimizing image focus for 3D shape recovery through genetic algorithm,” Multimed. Tools Appl. 71, 247-262 (2013).
  16. B. Fu, R. He, Y. Yuan, W. Jia, S. Yang, and F. Liu, “Shape from focus using gradient of focus measure curve,” Opt. Lasers Eng. 160, 107320 (2023).
  17. J. Gladines, S. Sels, De. Boi, and S. Vanlanduit, “A phase correlation based peak detection method for accurate shape from focus measurement,” Measurement 213, 112726 (2023).
  18. U. Ali, I. H. Lee, and M. T. Mahmood, “Incorporating structural prior for depth regularization in shape from focus,” Comput. Vis. Image Underst. 227, 103619 (2023).
  19. T. Fan and H. Yu, “A novel shape from focus method based on 3D steerable filters for improved performance on treating texture less region,” Opt. Commun. 410, 254-261 (2018).
  20. J. Surh, H.-G. Jeon, Y. Park, S. Im, H. Ha, and I. S. Kweon, “Noise robust depth from focus using a ring difference filter,” in Proc. 2017 IEEE Conference on computer vision and pattern Recognition (CVPR) (Honolulu, Hawaii, USA, Jul. 21-26), pp. 6328-6337.
  21. Y. Wang, H. Jia, P. Jia, K. Chen, and X. Zhang, “A novel algorithm for three-dimensional shape reconstruction for microscopic objects based on shape from focus,” Opt. Laser Tech. 168, 109931 (2024).
  22. Y. Li, Z. Li, C. Zheng, and S. Wu, “Adaptive weighted guided image filtering for depth enhancement in shape-from-focus,” Pattern Recognit. 131, 108900 (2022).
  23. W. Feng, X. Cheng, X. Li, Q. Liu, and Z. Zhai, “Specular highlight removal based on dichromatic reflection model and priority-based adaptive direction with light field camera,” Opt. Lasers Eng. 172, 107856 (2024).
  24. Z. Li, J. Dong, W. Zhong, G. Wang, X. Liu, Q. Liu, and X. Song, “Motionless shape-from-focus depth measurement via high-speed axial optical scanning,” Opt. Commun. 546, 129756 (2023).
  25. Z. Ma, D. Kim, and Y.-G. Shin, “Shape from focus reconstruction using nonlocal matting Laplacian prior followed by MRF-based refinement,” Pattern Recognit. 103, 107302 (2020).
  26. Z. Zhang, F. Liu, Z. Zhou, Y. He, and H. Fang, “Roughness measurement of leaf surface based on shape from focus,” Plant Methods 17, 72 (2021).
  27. K. Xie, D. Lei, W. Du, P. Bai, F. Zhu, and F. Liu, “A new operator based on edge detection for monitoring the cable under different illumination,” Mech. Syst. Signal Process. 187, 109926 (2023).
  28. Y. Wang, K. Chen, H. Jia, P. Jia, and X. Zhang, “Shape from focus reconstruction using block processing followed by local heat-diffusion-based refinement,” Opt. Lasers Eng. 170, 107754 (2023).
  29. S. K. Nayar and Y. Nakagawa, “Shape from focus: An effective approach for rough surfaces,” in Proc. IEEE International Conference on Robotics Automation (Cincinnati, OH, USA, May. 13-18, 1990), pp. 218-225.
  30. M. G. Chun and S. G. Kong, “Focusing in thermal imagery using morphological gradient operator,” Pattern Recognit. Lett. 38, 20-25 (2014).
  31. Y. Tian, H. Cui, Z. Pan, J. Liu, S. Yang, L. Liu, W. Wang, and L. Li, “Improved three-dimensional reconstruction algorithm from a multifocus microscopic image sequence based on a nonsubsampled wavelet transform,” Appl. Opt. 57, 3864-3872 (2018).
  32. Y. Li, L. Hou, and Y. Chen, “3D measurement method for saturated highlight characteristics on surface of fuel nozzle,” Sensors 22, 5661 (2022).
  33. C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman, “Patch match: A randomized correspondence algorithm for structural image editing,” ACM Trans. Graph. 28, 24 (2009).
  34. K. He and J. Sun, “Image completion approaches using the statistics of similar patches,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 2423-2435 (2014).
  35. J. Cheng and Z. Li, “Markov random field-based image inpainting with direction structure distribution analysis for maintaining structure coherence,” Signal Process. 9, 182-197 (2019).
  36. S. M. I. Zolanvari and D. F. Laefer, “Slicing method for curved façade and window extraction from point clouds,” ISPRS J. Photogramm. Remote Sens. 119, 334-346 (2016).
  37. D. Krawczyk and R. Sitnik, “Segmentation of 3D point cloud data representing full human body geometry: A review,” Pattern Recognit. 139, 109444 (2023).