## Article

Curr. Opt. Photon. 2022; 6(5): 473-478

Published online October 25, 2022 https://doi.org/10.3807/COPP.2022.6.5.473

Copyright © Optical Society of Korea.

## Improving the Capture-range Problem in Phase-diversity Phase Retrieval for Laser-wavefront Measurement Using Geometrical-optics Initial Estimates

Li Jie Li1, Wen Bo Jing1, Wen Shen2, Yue Weng3, Bing Kun Huang1, Xuan Feng1

1School of Opto-electronic Engineering, Changchun University of Science and Technology, Jilin 130022, China
2Department of Management Engineering, Jilin Communications Polytechnic, Jilin 130012, China
3Chengdu Branch of Software Platform R&D Department, Dahua Technology, Sichuan 610095, China

Corresponding author: *wenbojing@cust.edu.cn, ORCID 0000-0001-9088-3895

Received: April 26, 2022; Revised: July 21, 2022; Accepted: July 18, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

To overcome the capture-range problem in phase-diversity phase retrieval (PDPR), a geometrical-optics initial-estimate method is proposed to avoid local minima and to improve the accuracy of laser-wavefront measurement. We calculate the low-order aberrations through a geometrical-optics model based on two spot images in the propagation path of the laser, and provide them as the starting guess for the PDPR algorithm. Simulations show that this improves the accuracy of wavefront recovery by 62.17% compared to other initial values, and reduces the iteration time by 28.96%. That is, this approach can solve the capture-range problem.

Keywords: Capture range, Geometric optics, Laser wavefront, Phase diversity phase retrieval

OCIS codes: (080.1010) Aberrations (global); (100.5070) Phase retrieval; (120.5050) Phase measurement

### I. INTRODUCTION

Laser-wavefront measurement is essential in the quality analysis of laser beams [1]. In practical measurements, the laser wavefront is generally dominated by tilt, defocus, and astigmatism. The phase-diversity phase retrieval (PDPR) [2, 3] algorithm is one of the critical methods of wavefront detection; it uses a nonlinear optimization algorithm to match the collected laser-spot image to an image calculated by scalar diffraction propagation.

Nonlinear optimization requires reasonable starting guesses, and an excellent initial estimate improves the accuracy of the PDPR algorithm. Currently, the starting guesses are usually set to zero or random values [4, 5]. However, this leads to an issue known as the “capture-range problem”: The nonlinear optimization algorithm falls into a local minimum during iteration when the starting guesses for the wavefront are too far from the ground-truth wavefront [4, 6]. Thurman [7] proposed a geometrical-optics method to obtain wavefront-tilt data for the phase-retrieval (PR) algorithm, and it was modified by Jurling [4, 8, 9] and Carlisle [10]; the method improves the capture range when segment tilt errors are large. Paine [6, 11], Cao et al. [12], and Zhao et al. [13] proposed methods to overcome the capture-range problem by calculating the piston for segmented systems. However, these approaches all ignore the influence of other low-order aberrations. Recently proposed particle-swarm optimization (PSO) [14] may yield an incorrect solution, due to its stagnation problem, and machine-learning methods require large, representative datasets and cumbersome model training [15–19]. Therefore, the capture-range problem in the PDPR algorithm remains to be solved with high accuracy and low computation time.

In this paper, we propose a geometrical-optics initial-estimate method to mitigate the capture-range problem of the standard PDPR algorithm for laser-wavefront detection. Two spot images in the laser’s optical path are collected to build geometrical-optics theoretical models. We calculate the low-order aberrations from these models and set them as the initial estimate for the PDPR algorithm. The proposed method provides a practical way to select starting guesses for laser-wavefront restoration with PDPR.

### Ⅱ. METHOD

We use the Fringe Zernike polynomials [20] to represent the laser wavefront; the low-order Zernike terms dominate the wavefront aberration. In this section, we present two geometrical-optics theoretical models to calculate the low-order aberrations of the laser spot: x-tilt, y-tilt, defocus, 0° astigmatism, and 45° astigmatism. We analyze the propagation path of the laser in each model and derive the theoretical formula for the aberration calculation. Note that each derivation considers only a single aberration and ignores all others.

We take the x-tilt aberration as an example, since the derivation for the y-tilt aberration is identical. As shown in Fig. 1, the light spot propagates from Fig. 1(a) to Fig. 1(b). Due to wavefront distortion, the position of the light spot in Fig. 1(b) is shifted from its position in Fig. 1(a). According to geometrical optics and the geometric relationships in Fig. 1,

Figure 1.Schematic of the x-tilt term. (a) and (b) laser spot images. l, image width; Tx, x-tilt aberration; WF, the line formed by the intersection of the wavefront and the xz-plane; ∆z, propagation distance; ∆d, the deviation of the centroids of the two light spots on the x-axis.

$$\tan\theta = \frac{\Delta d}{\Delta z} = \frac{T_x}{l/2}, \tag{1}$$

where θ is both the angle between the ray and the z-axis and the angle between WF (the line formed by the intersection of the wavefront and the xz-plane in Fig. 1) and the x-axis, ∆d is the deviation of the centroids of the two light spots on the x-axis, ∆z is the propagation distance, $T_x$ is the x-tilt aberration, and l is the image width. We can obtain

$$T_x = \frac{\Delta d \times l}{2\Delta z}, \tag{2}$$

and $T_x$ is also the x-tilt term’s coefficient in the Zernike polynomial.
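The tilt estimate above reduces to a centroid comparison between the two spot images. A minimal sketch follows; the function names, the unit conventions, and the choice of an intensity-weighted centroid are our own assumptions, not taken from the paper:

```python
import numpy as np

def centroid_x(img):
    """Intensity-weighted centroid of a spot image along the x-axis (pixels)."""
    col_sums = img.sum(axis=0)                       # collapse rows
    xs = np.arange(img.shape[1])
    return (xs * col_sums).sum() / col_sums.sum()

def x_tilt(img1, img2, pixel_size, dz, width):
    """Eq. (2): T_x = dd * l / (2 dz), with dd taken from the centroid
    shift between the two spot images (all lengths in the same units)."""
    dd = (centroid_x(img2) - centroid_x(img1)) * pixel_size
    return dd * width / (2.0 * dz)
```

Applying the same function to the transposed images (or to row sums) would give the y-tilt coefficient, per the identical derivation noted above.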

The defocus and astigmatism terms share a geometrical-optics model for derivation and calculation. The astigmatism term is divided into separate terms for 0° astigmatism and 45° astigmatism. The term of highest degree in the Zernike polynomial for defocus and astigmatism is quadratic. So, we use the quadratic model.

We take the derivation of the defocus term as an example to explain the model, shown in Fig. 2. The light spot propagates from Fig. 2(a) to Fig. 2(b), and the spot radii in Figs. 2(a) and 2(b) differ due to wavefront distortion. A light ray travels from point A to point B in the xz-plane. We set point O of Fig. 2(a) as the origin of the optical system. The term of highest degree in the Zernike polynomial for defocus is quadratic and there is no first-order term, so WF in Fig. 2 is symmetric about the z-axis. We express WF in Fig. 2 as

Figure 2.Schematic of the defocus and astigmatism terms. WF, the curve formed by the intersection of the wavefront and xz-plane.

$$w = ax^2 + c, \tag{3}$$

where a and c are coefficients. We set $F(w, x) = ax^2 + c - w$ and calculate the partial derivatives

$$\frac{\partial F}{\partial x} = 2ax, \qquad \frac{\partial F}{\partial w} = -1. \tag{4}$$

According to Eq. 4, the normal vector at point ($r_{x1}$, $ar_{x1}^2 + c$) is **n** = ($2ar_{x1}$, −1), where $r_{x1}$ is the radius of the light spot on the x-axis in Fig. 2(a). Point ($r_{x1}$, $ar_{x1}^2 + c$) is not necessarily point A, but the propagation direction of the light ray AB is the same as the direction of **n**. The slope of line AB is expressed by k,

$$k = -\frac{1}{2ar_{x1}}. \tag{5}$$

According to geometrical optics and the geometric relationships in Fig. 2,

$$k = \frac{\Delta z}{\Delta r_x}, \qquad \Delta r_x = r_{x2} - r_{x1}, \tag{6}$$

where ∆z in Fig. 2 is the propagation distance and $r_{x2}$ is the radius of the light spot on the x-axis in Fig. 2(b). According to Eqs. 5 and 6, we get

$$a = -\frac{\Delta r_x}{2 r_{x1} \Delta z}, \qquad c = -a r_{x1}^2. \tag{7}$$

So, we obtain $w_{\max}$ and $w_{\min}$. The defocus aberration on the xz-plane is $D_x = (w_{\max} - w_{\min})/2$. Similarly, the defocus aberration on the yz-plane can be calculated, and is expressed by $D_y$. We denote the defocus-term coefficient as D, and

$$D = \frac{D_x + D_y}{2}. \tag{8}$$
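The defocus derivation can be turned into a direct computation from the measured spot radii on each axis. A sketch under our own naming; it samples the quadratic wavefront over the spot to find the extreme values, which for a pure quadratic simply recovers $|a| r_1^2 / 2$ per axis:

```python
import numpy as np

def defocus_coeff(rx1, rx2, ry1, ry2, dz):
    """Defocus coefficient D from spot radii before (r_1) and after (r_2)
    propagation by dz: a = -dr/(2 r1 dz), c = -a r1^2 (Eq. 7),
    D_axis = (w_max - w_min)/2, and D = (D_x + D_y)/2 (Eq. 8)."""
    def axis_defocus(r1, r2):
        a = -(r2 - r1) / (2.0 * r1 * dz)
        c = -a * r1 ** 2
        x = np.linspace(-r1, r1, 1001)     # sample w = a x^2 + c over the spot
        w = a * x ** 2 + c
        return (w.max() - w.min()) / 2.0
    return (axis_defocus(rx1, rx2) + axis_defocus(ry1, ry2)) / 2.0
```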

We denote the 0°-astigmatism term as $A_0$ and calculate $A_{0x}$ and $A_{0y}$ following the calculation method for $D_x$ and $D_y$. Astigmatism causes the difference in points of convergence between the meridional and sagittal beams, so

$$A_0 = A_{0x} - A_{0y}. \tag{9}$$

The calculation of the 45°-astigmatism term has one extra step compared to that of the 0°-astigmatism term: the two light-spot images must first be rotated counter-clockwise by 45°, after which the calculation steps are the same.
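The 45° pre-rotation can be performed with any image-rotation routine; as a self-contained illustration we sketch a nearest-neighbor inverse-mapping rotation (a library resampler with bilinear interpolation would normally be preferred; the function name and interpolation choice are ours):

```python
import numpy as np

def rotate_ccw_45(img):
    """Rotate an image by 45 degrees about its center using nearest-neighbor
    inverse mapping; source samples outside the frame are clamped to the edge."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(img.shape)
    t = -np.pi / 4.0                        # inverse map rotates by -45 degrees
    sx = cx + (xs - cx) * np.cos(t) - (ys - cy) * np.sin(t)
    sy = cy + (xs - cx) * np.sin(t) + (ys - cy) * np.cos(t)
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    return img[sy, sx]
```

Rotating both spot images this way and then applying the 0°-astigmatism procedure to the rotated pair yields the 45°-astigmatism coefficient, as the text prescribes.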

### Ⅲ. SIMULATION AND ANALYSIS

In this section, we conduct simulation experiments on the initial estimates, to verify the accuracy of the theory presented in the previous section.

The principle of the simulation experiment is shown in Fig. 3. We simulate a Gaussian laser spot’s intensity image I1 with a resolution of 256 × 256 pixels. The intensity image I2 is obtained from W1 and I1 by angular spectrum propagation (ASP). We apply the proposed algorithm to I1 and I2 to obtain the predicted wavefront map W2.
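The ASP step from I1 and W1 to I2 can be sketched with the standard angular-spectrum transfer function (the grid parameters match the paper; the function and variable names are ours):

```python
import numpy as np

def asp(u0, wavelength, dz, pixel_size):
    """Angular-spectrum propagation of a complex field u0 over distance dz.
    At this sampling no spatial frequency exceeds 1/wavelength, so the
    clamp below never bites; evanescent handling is otherwise omitted."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)            # cycles per unit length
    fxx, fyy = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - fxx**2 - fyy**2, 0.0)
    h = np.exp(2j * np.pi * dz * np.sqrt(arg))      # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Field from simulated intensity I1 and wavefront W1 (in waves):
#   u1 = np.sqrt(I1) * np.exp(2j * np.pi * W1)
#   I2 = np.abs(asp(u1, 632.8e-9, 5e-3, 6.45e-6)) ** 2
```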

Figure 3.Schematic of the aberration model’s validation. W1, wavefront map of simulation; I1, intensity image of simulation; ASP, angular spectrum propagation; I2, intensity image after ASP; W2, predicted wavefront map after the proposed method; λ, wavelength; ∆z, propagation distance; ps, pixel size.

The radius of the I1 laser spot is a random value in the range 30–80 pixels. Six-term Zernike polynomials are used to construct the wavefront W1, with piston set to zero. The second through sixth terms are randomly distributed in the range −0.5λ to 0.5λ, a range measured for the GY-10 He-Ne laser. Since the purpose of this experiment is to verify the theoretical correctness of the algorithm, we do not add noise to I1 and I2. In addition, λ = 632.8 nm and ∆z = 5 mm, and the pixel size is set to 6.45 μm × 6.45 μm. We run the same simulation experiment 100 times and record the difference between each group’s Zernike coefficients for W1 and W2. Figure 4 presents the mean absolute value of the difference for each term.
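The simulated wavefront W1 can be built from the first six Fringe-Zernike terms with random coefficients. A sketch; the basis expressions are the standard Fringe terms, while the grid construction and random seeding are our own choices:

```python
import numpy as np

def fringe_wavefront(coeffs, n=256):
    """Wavefront map on a unit pupil from the first six Fringe-Zernike
    coefficients: [piston, x-tilt, y-tilt, defocus, 0-deg astig, 45-deg astig]."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho2 = x**2 + y**2
    basis = [np.ones_like(x), x, y, 2 * rho2 - 1, x**2 - y**2, 2 * x * y]
    w = sum(c * b for c, b in zip(coeffs, basis))
    w[rho2 > 1.0] = 0.0                    # restrict to the unit pupil
    return w

rng = np.random.default_rng(0)
coeffs = np.concatenate(([0.0], rng.uniform(-0.5, 0.5, size=5)))  # piston = 0
w1 = fringe_wavefront(coeffs)              # W1, in units of wavelength
```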

Figure 4.The errors between the low-order aberrations calculated by the proposed method and the true values.

As shown in Fig. 4, the maximum error did not exceed 0.03λ. The calculated low-order aberrations were close to the true values, verifying the proposed method’s correctness. The relatively large defocus error may arise because the shape of the laser wavefront is approximated as spherical when calculating the defocus aberration.

Next, we apply the initial value predicted by the proposed method to the PDPR algorithm. The measurement error of available wavefront sensors is no less than 0.01λ, so it is inappropriate to use the value detected by a sensor as the ground truth. Therefore, simulation experiments are carried out to verify whether the proposed method can improve the PDPR algorithm.

As shown in Fig. 5, W3 could be restored with I1 and I2 by the PDPR algorithm. We calculated the root mean square (RMS) error of W1 and W3 to show the accuracy of PDPR wavefront recovery. We used the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm as the nonlinear optimization algorithm of the PDPR algorithm.
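The optimization wiring can be sketched with SciPy’s L-BFGS-B solver. Here `forward` is a deliberately simple, hypothetical model standing in for the ASP-based PDPR forward model; the sketch shows only how a near-truth (geometrical-optics) starting point is handed to the optimizer versus a zero or random start:

```python
import numpy as np
from scipy.optimize import minimize

def forward(coeffs, grid):
    """Toy stand-in for the diffraction forward model (not the paper's)."""
    return np.cos(np.outer(coeffs, grid)).sum(axis=0)

grid = np.linspace(0.0, 1.0, 64)
true_coeffs = np.array([0.30, -0.20, 0.10])
measured = forward(true_coeffs, grid)      # plays the role of I1, I2

def objective(coeffs):
    resid = forward(coeffs, grid) - measured
    return float(resid @ resid)            # sum-of-squares data-fit metric

x0_ip = true_coeffs + 0.01                 # IP: near-truth initial estimate
res = minimize(objective, x0_ip, method="L-BFGS-B")
```

With IP the optimizer starts inside the basin of the global minimum; IZ and IR would correspond to `x0 = np.zeros(3)` or a random vector, which for the real PDPR objective can land in a different basin (the capture-range problem).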

Figure 5.Schematic of the proposed method applied to phase-diversity phase retrieval (PDPR). I1 and I2, intensity map before and after angular spectrum propagation (ASP); W1, wavefront map before ASP; W3, wavefront map recovered by PDPR; IP, initial iteration value of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm [21] is predicted by the proposed method; IZ, initial iteration value of the LBFGS algorithm is zero; IR, initial iteration value of the LBFGS algorithm is random.

Traditionally, zeros or random numbers are used as the initial values for the PDPR algorithm. In this article we use the values calculated by the proposed method as the initial values for PDPR.

The values for λ, ∆z, and ps are consistent with the initial-estimate experiment. Fifteen-term Zernike polynomials are used to construct the wavefront W1, with piston set to zero. We measure the laser wavefront of the GY-10 laser at different positions using a Beam Wave 500 sensor (PhaseView, Verrieres Le Buisson, France) to obtain the ranges of the various aberrations. According to these measurements, the second through sixth terms are randomly distributed in the range −0.5λ to 0.5λ, and the remaining terms in −0.05λ to 0.05λ. The background noise of the Beam Wave 500 camera is added to I1 and I2, to be closer to reality. The simulation experiment is conducted 100 times. All experiments are carried out on a computer with an Intel i7-6700 CPU (3.40 GHz) and 16 GB of RAM. We record the difference between the Zernike coefficients of W1 and W2; Fig. 6 presents the mean absolute value of the difference.

Figure 6. Comparison of Zernike-coefficient differences. y-axis: The difference between the true Zernike coefficients and the restored Zernike coefficients from the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm with different initial values.

As shown in Fig. 6, the error for IP (the LBFGS initial value predicted by the proposed method) is closest to 0 in the noisy environment. To verify the influence of the different initial values, we compare the RMS error between the ground-truth wavefront and the wavefronts restored from the different initial values. The Zernike coefficients from the 2nd to the 15th terms are combined to yield the RMS value. Table 1 shows the mean RMS error over 100 experiments. For a more intuitive comparison, Fig. 7 shows the wavefront maps and residual maps from one of the experiments under the background noise of the PhaseView Beam Wave 500 camera.
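Combining the coefficient differences into a single RMS figure can be done as below; we assume a quadratic combination of the term-wise differences over terms 2–15, since the paper does not spell out the formula (note that Fringe terms are not orthonormal, so a norm-weighted combination would also be defensible):

```python
import numpy as np

def rms_residual(c_true, c_restored):
    """RMS residual from two 15-element Zernike coefficient vectors,
    using terms 2-15 (zero-based indices 1..14; piston excluded)."""
    d = np.asarray(c_true)[1:15] - np.asarray(c_restored)[1:15]
    return float(np.sqrt(np.mean(d ** 2)))
```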

TABLE 1. System performance for different initial values in a noisy environment

| Initial Value | Max RMS Residual Error (λ) | Min RMS Residual Error (λ) | Mean RMS Residual Error (λ) | Total Time (s) |
|---|---|---|---|---|
| IP | 0.0588 | 0.0065 | 0.0304 | 19.657 |
| IZ | 0.1319 | 0.0129 | 0.0516 | 24.143 |
| IR | 0.1802 | 0.0057 | 0.0470 | 26.557 |

Figure 7. Maps of wavefronts and residuals. (a) Ground-truth wavefront map. (b)–(d) Restored wavefront maps for IP, IZ, and IR. (e)–(g) Maps of the residuals between (a) and (b)–(d). IP, initial iteration value of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm predicted by the proposed method; IZ, initial iteration value of the LBFGS algorithm is zero; IR, initial iteration value of the LBFGS algorithm is random.

From Table 1, the RMS error for IP was smaller than those for IZ (zero initial value) and IR (random initial value). The iteration time for IP was reduced by about 28.96% (5.69 seconds) compared to the other initial values. As shown in Fig. 7, the wavefront maps generated from the different initial values in Figs. 7(b)–7(d) are all similar to the ground-truth wavefront map in Fig. 7(a). However, comparing Figs. 7(e)–7(g) shows that our method achieves smaller residuals than the others. Therefore, the proposed method clearly improves the accuracy of wavefront reconstruction.

### Ⅳ. CONCLUSION

In this paper, two geometrical-optics models were constructed to calculate the low-order aberrations of a laser beam’s wavefront, providing a high-accuracy initial value for the PDPR method. In noise-free simulations, the maximum error between the calculated low-order aberrations and the true values was less than 0.03λ. In a noisy environment, the wavefront restored from the initial value calculated by our method was closer to the ground-truth wavefront than those restored from zero or random initial values, and the iteration time was reduced by 28.96%. The proposed method was thus shown to solve the capture-range problem.

The authors declare no conflict of interest.

### DATA AVAILABILITY

All of the data have been listed in this work; no extra data were generated in the presented research.

### FUNDING

111 Project of China (D21009); Ministry of Science and Technology of the People’s Republic of China (2018YFB1107600); Jilin Scientific and Technological Development Program (20160204009GX, 20170204014GX).

### REFERENCES

1. V. Y. Zavalova and A. V. Kudryashov, “Shack-Hartmann wavefront sensor for laser beam analysis,” Proc. SPIE 4493, 277-284 (2002).
2. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072-1085 (1992).
3. L. M. Mugnier, A. Blanc, and J. Idier, “Phase diversity: a technique for wave-front sensing and for diffraction-limited imaging,” Adv. Imaging Electron Phys. 141, 1-76 (2006).
4. A. S. Jurling and J. R. Fienup, “Improved method for solving the capture range problem in focus-diverse phase retrieval for segmented systems,” in Frontiers in Optics 2010/Laser Science XXVI (Optica Publishing Group, 2010), paper FWV4.
5. D. B. Moore and J. R. Fienup, “Extending the capture range of phase retrieval through random starting parameters,” in Frontiers in Optics 2014 (Optica Publishing Group, 2014), paper FTu2C.2.
6. S. W. Paine and J. R. Fienup, “Extending capture range for piston retrieval in segmented systems,” Appl. Opt. 56, 9186-9192 (2017).
7. S. T. Thurman, “Method of obtaining wavefront slope data from through-focus point spread function measurements,” J. Opt. Soc. Am. A 28, 1-7 (2011).
8. A. S. Jurling, “Advances in algorithms for image based wavefront sensing,” Ph. D. Thesis, University of Rochester, NY (2015).
9. A. S. Jurling and J. R. Fienup, “Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics,” J. Opt. Soc. Am. A 31, 661-666 (2014).
10. R. E. Carlisle and D. S. Acton, “Demonstration of extended capture range for James Webb Space Telescope phase retrieval,” Appl. Opt. 54, 6454-6460 (2015).
11. S. W. Paine and J. R. Fienup, “Overcoming large piston capture range problems in segmented systems using broadband light,” in Imaging and Applied Optics 2015 (Optica Publishing Group, 2015), paper AOTh1D.2.
12. H. Cao, J. Zhang, F. Yang, Q. An, and Y. Wang, “Extending capture range for piston error in segmented primary mirror telescopes based on wavelet support vector machine with improved particle swarm optimization,” IEEE Access 8, 111585-111597 (2020).
13. W. Zhao, L. Zhang, Y. Zhao, L. Dong, and M. Hui, “High-accuracy piston error measurement with a large capture range based on coherent diffraction,” Proc. SPIE 11056, 110563B (2019).
14. P. G. Zhang, C. L. Yang, Z. H. Xu, Z. L. Cao, Q. Q. Mu, and L. Xuan, “Hybrid particle swarm global optimization algorithm for phase diversity phase retrieval,” Opt. Express 24, 25704-25717 (2016).
15. G. Ju, X. Qi, H. Ma, and C. Yan, “Feature-based phase retrieval wavefront sensing approach using machine learning,” Opt. Express 26, 31767-31783 (2018).
16. S. W. Paine and J. R. Fienup, “Machine learning for improved image-based wavefront sensing,” Opt. Lett. 43, 1235-1238 (2018).
17. S. W. Paine, “Expanding the capture range of image-based wavefront sensing problems,” Ph. D. Thesis, University of Rochester, NY (2019), p. 151.
18. C. Weinberger, F. Guzman, and E. Vera, “Improved training for the deep learning wavefront sensor,” Proc. SPIE 11448, 114484G (2020).
19. S. W. Paine and J. R. Fienup, “Machine learning for avoiding stagnation in image-based wavefront sensing,” Proc. SPIE 10980, 109800T (2019).
20. J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” in Applied Optics and Optical Engineering, R. R. Shannon and J. C. Wyant, Eds. (Academic Press, USA, 1992), Vol. 11, p. 28.
21. D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Math. Program. 45, 503-528 (1989).

### Article

#### Article

Curr. Opt. Photon. 2022; 6(5): 473-478

Published online October 25, 2022 https://doi.org/10.3807/COPP.2022.6.5.473

Copyright © Optical Society of Korea.

## Improving the Capture-range Problem in Phase-diversity Phase Retrieval for Laser-wavefront Measurement Using Geometrical-optics Initial Estimates

Li Jie Li1, Wen Bo Jing1 , Wen Shen2, Yue Weng3, Bing Kun Huang1, Xuan Feng1

1School of Opto-electronic Engineering, Changchun University of Science and Technology, Jilin 130022, China
2Department of Management Engineering, Jilin Communications Polytechnic, Jilin 130012, China
3Chengdu Branch of Software Platform R&D Department, Dahua Technology, Sichuan 610095, China

Correspondence to:*wenbojing@cust.edu.cn, ORCID 0000-0001-9088-3895

Received: April 26, 2022; Revised: July 21, 2022; Accepted: July 18, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

To overcome the capture-range problem in phase-diversity phase retrieval (PDPR), a geometricaloptics initial-estimate method is proposed to avoid a local minimum and to improve the accuracy of laser-wavefront measurement. We calculate the low-order aberrations through the geometrical-optics model, which is based on the two spot images in the propagation path of the laser, and provide it as a starting guess for the PDPR algorithm. Simulations show that this improves the accuracy of wavefront recovery by 62.17% compared to other initial values, and the iteration time with our method is reduced by 28.96%. That is, this approach can solve the capture-range problem.

Keywords: Capture range, Geometric optics, Laser wavefront, Phase diversity phase retrieval

### I. INTRODUCTION

Laser-wavefront measurement is essential in the quality analysis of laser beams [1]. According to the actual measurement, the laser wavefront in general is highly influenced by tilt, defocus, and astigmatism. The phase-diversity phase retrieval (PDPR) [2, 3] algorithm is one of the critical methods of wavefront detection, using a nonlinear optimization algorithm to try to match the collected laser-spot image to a calculated image based on scalar diffraction propagation.

Nonlinear optimization requires reasonable starting guesses. An excellent initial estimate improves the accuracy of the PDPR algorithm. Currently, the starting guesses are usually set to zero or random values [4, 5]. However, this leads to an issue known as the “Capture-range problem”: The nonlinear optimization algorithm falls into a local minimum during iteration when the starting guesses for the wavefront are too far from the ground-truth wavefront [4, 6]. Turman [7] proposed a geometrical-optics method to obtain wavefront tilt data for the phase retrieval (PR) algorithm, and it was modified by Jurling [4, 8, 9] and Carlisle [10]. The method improves the capture range when segment tilt errors are large. Paine [6, 11], Cao et al. [12], and Zhao et al. [13] proposed methods to overcome the capture-range problem by calculating a piston for segmented systems. However, they all ignored the influence of other low-order aberrations. The application of the currently proposed particle-swarm optimization (PSO) [14] may result in an incorrect solution, due to the stagnation problem. Machine-learning methods require large representative datasets for cumbersome model training [1519]. Therefore, the “Capture-range problem” with the PDPR algorithm remains to be solved with high accuracy and less time.

In this paper, we propose a geometrical-optics initial-estimate method to improve the capture-range problem in the normal PDPR algorithm for laser-wavefront detection. Two spot images in the laser’s optical path are collected to build geometrical-optics theoretical models. We calculate the low-order aberration based on the models and set it as the initial estimate for the PDPR algorithm. The proposed method provides a good idea for selecting the starting guesses in the laser-wavefront restoration using the PDPR method, and has strong practicability.

### Ⅱ. METHOD

We use the Zernike polynomials of Fringe [20] to represent the laser wavefront. The low-order Zernike terms are extremely important in wavefront aberration. In this section, we present two geometrical-optics theoretical models to calculate the low-order aberrations of the laser spot. These aberrations include x-tilt, y-tilt, defocus, 0° astigmatism, and 45° astigmatism. We analyze the propagation path of the laser in the model and carefully derive the theoretical formula for aberration calculations. It should be noted that we only consider a single aberration and ignore all others in each derivation process.

We take x-tilt aberration as an example for analysis, since the derivation process for x-tilt and y-tilt aberrations are the same. As shown in Fig. 1, the light spot propagates from Fig. 1(a) to Fig. 1(b). Due to wavefront distortion, the position of the light spot in Fig. 1(b) is shifted from its position in Fig. 1(a). According to geometrical optics and the geometric relationships in Fig. 1,

Figure 1. Schematic of the x-tilt term. (a) and (b) laser spot images. l, image width; Tx, x-tilt aberration; WF, the line formed by the intersection of the wavefront and the xz-plane; ∆z, propagation distance; ∆d, the deviation of the centroids of the two light spots on the x-axis.

$tanθ=Δd/ΔzTx/ l/2 ,$

where θ is the angle between the ray and the z-axis and the angle between the WF (the line formed by the intersection of the wavefront and the xz-plane in Fig. 1) and the x axis, ∆d is the deviation of the centroids of the two light spots on the x axis, ∆z is the propagation distance, Tx is the x-tilt aberration, and l is the image width. We can obtain

$Tx=Δd×l2Δz,$

and Tx is also the x-tilt term’s coefficient in the Zernike polynomial.

The defocus and astigmatism terms share a geometrical-optics model for derivation and calculation. The astigmatism term is divided into separate terms for 0° astigmatism and 45° astigmatism. The term of highest degree in the Zernike polynomial for defocus and astigmatism is quadratic. So, we use the quadratic model.

We take the derivation of the defocus term as an example to explain the model shown in Fig. 2. The light spot propagates from Fig. 2(a) to Fig. 2(b). The spot radii in Fig. 2(a) and Fig. 2(b) are different due to wavefront distortion. The light is emitted from point A to point B on the xz plane. We set point O of Fig. 2(a) as the origin of the optical system. The term of highest degree in the Zernike polynomial for defocus is quadratic, and there is no first-order term, so WF of Fig. 2 is symmetrical about the z-axis. We set WF of Fig. 2 as

Figure 2. Schematic of the defocus and astigmatism terms. WF, the curve formed by the intersection of the wavefront and xz-plane.

$w=ax2+c,$

where a and c are coefficients. We set F (w, x) = ax2 + cw and calculate the partial derivative

$∂F∂x=2ax,∂F∂w=−1.$

According to Eq. 4, the normal vector at point (rx1, $arx12$ + c) is n = (2arx1, − 1), where rx1 is the radius of the light spot on the x axis in Fig. 2(a). Point (rx1, $arx12$ + c) is not necessarily point A, but the propagation direction of the light AB is the same as the direction of n. The slope of line AB is expressed by k,

$k=−12arx1.$

According to geometrical optics and the geometric relationships in Fig. 2,

$k=ΔzΔrx,Δrx=rx2−rx1,$

where ∆z in Fig. 2 is the propagation distance and rx2 is the radius of the light spot on the x axis in Fig. 2(b). According to Eqs. 5 and 6, we get

$a=−Δrx2rx1Δz,c=−arx12.$

So, we obtain wmax and wmin. The defocus aberration on the xz-plane is Dx = (wmaxwmin) / 2. Similarly, the defocus aberration of the yz-plane can be calculated, and is expressed by Dy. We denote the defocus term coefficient as D, and

$D=Dx+Dy2.$

We denote the 0°-astigmatism term as A0 and calculate A0x and A0y according to the calculation method of Dx and Dy. Astigmatism causes the difference in points of convergence between meridional and sagittal beams.

$A0=A0x−A0y.$

The calculation method for the 45°-astigmatism term has one more step than the calculation method for the 0°-astigmatism term. We need to rotate the two light-spot images counter-clockwise by 45 degrees in advance when calculating the 45°-astigmatism term, and the subsequent calculation steps are the same.

### Ⅲ. SIMULATION AND ANALYSIS

In this section, we experiment with initial estimates for the simulation. The experiment aims to verify the accuracy of the theory presented in the second section.

The principle of the simulation experiment is shown in Fig. 3. We simulate a Gaussian laser spot’s intensity image I1 with resolution of 256 × 256 pixels. The intensity image I2 can be obtained with W1 and I1 by angular spectrum propagation (ASP). We apply the proposed algorithm to I1 and I2 to obtain the predicted wavefront map W2.

Figure 3. Schematic of the aberration model’s validation. W1, wavefront map of simulation; I1, intensity image of simulation; ASP, angular spectrum propagation; I2, intensity image after ASP; W2, predicted wavefront map after the proposed method; λ, wavelength; ∆z, propagation distance; ps, pixel size.

The radius of the I1 laser spot is a random value in the range 30–80 pixels. Six-term Zernike polynomials are used for reconstructing the wavefront W1, with piston set to zero. The second through sixth terms are randomly distributed in the range −0.5 λ~0.5 λ that is detected from the GY-10 He-Ne laser. The purpose of this experiment is to verify the theoretical correctness of the algorithm, so we do not add noise to I1 and I2. In addition, λ = 632.8 nm and ∆z = 5 mm. The pixel size is set to 6.45 μm × 6.45 μm. We take 100 runs of the same simulation experiment and record the difference between each group’s Zernike coefficients W1 and W2. Figure 4 presents the mean absolute value of the difference for each term.

Figure 4. The errors between the low-order aberrations calculated by the proposed method and the true values.

As shown in Fig. 4, the maximum error did not exceed 0.03 λ. The calculated low-order aberrations were closed to the true value, verifying the proposed method’s correctness. In addition, the reason for the large defocus aberration may be that the shape of the laser wavefront is approximated as spherical when calculating the defocus aberration.

After that, we apply the initial value predicted by the proposed method to the PDPR algorithm. Additionally, the accuracy of the wavefront sensors is no less than 0.01, so it is inappropriate to use the value detected by the sensors as the true value. Therefore, simulation experiments are carried out to verify whether the proposed method can improve the PDPR algorithm.

As shown in Fig. 5, W3 could be restored with I1 and I2 by the PDPR algorithm. We calculated the root mean square (RMS) error of W1 and W3 to show the accuracy of PDPR wavefront recovery. We used the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm as the nonlinear optimization algorithm of the PDPR algorithm.

Figure 5. Schematic of the proposed method applied to phase-diversity phase retrieval (PDPR). I1 and I2, intensity map before and after angular spectrum propagation (ASP); W1, wavefront map before ASP; W3, wavefront map recovered by PDPR; IP, initial iteration value of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm [21] is predicted by the proposed method; IZ, initial iteration value of the LBFGS algorithm is zero; IR, initial iteration value of the LBFGS algorithm is random.

Traditionally, zeros or random numbers are generally used as the initial values for the PDPR algorithm. In this article, we use the values calculated by the method as the initial values for PDPR.

The values of λ, ∆z, and ps are consistent with the initial-estimate experiment. Fifteen Zernike terms are used to reconstruct the wavefront W1, with piston set to zero. We measure the wavefront of the GY-10 laser at different positions with a Beam Wave 500 wavefront sensor (PhaseView, Verrieres Le Buisson, France) to obtain the range of the various aberrations: the second through sixth terms are randomly distributed in the range of −0.5 λ to 0.5 λ, and the remaining terms in −0.05 λ to 0.05 λ. The background noise of the Beam Wave 500's camera is added to I1 and I2, to bring the simulation closer to reality. The simulation is run 100 times on a computer with an Intel i7-6700 CPU (3.40 GHz) and 16 GB of RAM. We record the differences between the Zernike coefficients of W1 and those of the restored wavefront W3; Fig. 6 presents the mean absolute value of the difference for each term.

Figure 6. Comparison of Zernike-coefficient differences. y-axis: The difference between the true Zernike coefficients and the restored Zernike coefficients from the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm with different initial values.

As shown in Fig. 6, the errors for IP (the LBFGS initial value predicted by the proposed method) are closest to zero in the noisy environment. To quantify the influence of the initial value on the recovered wavefront, we compare the RMS error between the ground-truth wavefront and the wavefronts restored from the different initial values; the residuals of the 2nd through 15th Zernike terms are combined to yield the RMS value. Table 1 lists the maximum, minimum, and mean RMS errors over the 100 experiments. For a more intuitive comparison, Fig. 7 shows the wavefront and residual maps from one of the experiments under the background noise of the PhaseView Beam Wave 500's camera.
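The RMS figure can be computed directly from the coefficient residuals. A minimal sketch, assuming an orthonormal Zernike normalization (for a non-normalized convention each term would need its own weight); the helper name and sample values are illustrative:

```python
import numpy as np

def rms_from_zernike(c_true, c_rec):
    """Wavefront RMS residual from Zernike coefficient residuals, assuming an
    orthonormal basis (RMS = root-sum-square of the coefficient differences)."""
    d = np.asarray(c_true, float) - np.asarray(c_rec, float)
    return float(np.sqrt(np.sum(d**2)))

# Illustrative values: terms 2-15 (14 coefficients, in waves).
rng = np.random.default_rng(1)
c_true = rng.uniform(-0.5, 0.5, 14)
c_rec = c_true + rng.normal(0.0, 0.01, 14)   # stand-in for a PDPR estimate
err = rms_from_zernike(c_true, c_rec)
```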

TABLE 1. System performance for different initial values in a noisy environment.

| Initial Value | RMS Max (λ) | RMS Min (λ) | RMS Mean (λ) | Total Time (s) |
|---------------|-------------|-------------|--------------|----------------|
| IP            | 0.0588      | 0.0065      | 0.0304       | 19.657         |
| IZ            | 0.1319      | 0.0129      | 0.0516       | 24.143         |
| IR            | 0.1802      | 0.0057      | 0.0470       | 26.557         |

Figure 7. Maps of wavefronts and residuals. (a) Ground-truth wavefront map. (b)–(d) are the restored wavefront maps for IP, IZ, and IR. (e)–(g) are the maps of the residuals between (a) and (b)–(d). IP, initial iteration value of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm predicted by the proposed method; IZ, initial iteration value of the LBFGS algorithm is zero; IR, initial iteration value of the LBFGS algorithm is random.

From Table 1, it can be seen that the RMS error for IP is smaller than those for the zero initial value (IZ) and the random initial value (IR). The iteration time for IP is reduced by about 28.96% (5.69 seconds) compared to the other initial values. As shown in Fig. 7, the wavefront maps generated from the different initial values in Figs. 7(b)–7(d) are all similar to the ground-truth wavefront map in Fig. 7(a); however, comparing Figs. 7(e)–7(g) shows that our method achieves smaller residuals than the others. Therefore, the proposed method clearly improves the accuracy of wavefront reconstruction.

### IV. CONCLUSION

In this paper, two geometric models were constructed to calculate the low-order aberrations of a laser beam's wavefront, providing a high-accuracy initial value for the PDPR method. In noise-free simulations, the maximum error between the calculated low-order aberrations and the true values was less than 0.03 λ. In a noisy environment, the wavefront recovered from the initial value calculated by our method was closer to the ground-truth wavefront than those recovered from zero or random initial values, and the iteration time was reduced by 28.96%. These results show that the proposed method effectively addresses the capture-range problem.

### DISCLOSURES

The authors declare no conflict of interest.

### DATA AVAILABILITY

All of the data have been listed in this work; no extra data were generated in the presented research.

### FUNDING

111 project of China (D21009); Ministry of Science and Technology of the People's Republic of China (2018YFB1107600); Jilin Scientific and Technological Development Program (20160204009GX, 20170204014GX).

### Fig 1.

Figure 1. Schematic of the x-tilt term. (a) and (b) laser spot images. l, image width; Tx, x-tilt aberration; WF, the line formed by the intersection of the wavefront and the xz-plane; ∆z, propagation distance; ∆d, the deviation of the centroids of the two light spots on the x-axis.

### Fig 2.

Figure 2. Schematic of the defocus and astigmatism terms. WF, the curve formed by the intersection of the wavefront and xz-plane.

### Fig 3.

Figure 3. Schematic of the aberration model's validation. W1, wavefront map of simulation; I1, intensity image of simulation; ASP, angular spectrum propagation; I2, intensity image after ASP; W2, predicted wavefront map after the proposed method; λ, wavelength; ∆z, propagation distance; ps, pixel size.


### References

1. V. Y. Zavalova and A. V. Kudryashov, “Shack-Hartmann wavefront sensor for laser beam analysis,” Proc. SPIE 4493, 277-284 (2002).
2. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072-1085 (1992).
3. L. M. Mugnier, A. Blanc, and J. Idier, “Phase diversity: a technique for wave-front sensing and for diffraction-limited imaging,” Adv. Imaging Electron Phys. 141, 1-76 (2006).
4. A. S. Jurling and J. R. Fienup, “Improved method for solving the capture range problem in focus-diverse phase retrieval for segmented systems,” in Frontiers in Optics 2010/Laser Science XXVI (Optica Publishing Group, 2010), paper FWV4.
5. D. B. Moore and J. R. Fienup, “Extending the capture range of phase retrieval through random starting parameters,” in Frontiers in Optics 2014 (Optica Publishing Group, 2014), paper FTu2C.2.
6. S. W. Paine and J. R. Fienup, “Extending capture range for piston retrieval in segmented systems,” Appl. Opt. 56, 9186-9192 (2017).
7. S. T. Thurman, “Method of obtaining wavefront slope data from through-focus point spread function measurements,” J. Opt. Soc. Am. A 28, 1-7 (2011).
8. A. S. Jurling, “Advances in algorithms for image based wavefront sensing,” Ph.D. thesis, University of Rochester, NY (2015).
9. A. S. Jurling and J. R. Fienup, “Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics,” J. Opt. Soc. Am. A 31, 661-666 (2014).
10. R. E. Carlisle and D. S. Acton, “Demonstration of extended capture range for James Webb Space Telescope phase retrieval,” Appl. Opt. 54, 6454-6460 (2015).
11. S. W. Paine and J. R. Fienup, “Overcoming large piston capture range problems in segmented systems using broadband light,” in Imaging and Applied Optics 2015 (Optica Publishing Group, 2015), paper AOTh1D.2.
12. H. Cao, J. Zhang, F. Yang, Q. An, and Y. Wang, “Extending capture range for piston error in segmented primary mirror telescopes based on wavelet support vector machine with improved particle swarm optimization,” IEEE Access 8, 111585-111597 (2020).
13. W. Zhao, L. Zhang, Y. Zhao, L. Dong, and M. Hui, “High-accuracy piston error measurement with a large capture range based on coherent diffraction,” Proc. SPIE 11056, 110563B (2019).
14. P. G. Zhang, C. L. Yang, Z. H. Xu, Z. L. Cao, Q. Q. Mu, and L. Xuan, “Hybrid particle swarm global optimization algorithm for phase diversity phase retrieval,” Opt. Express 24, 25704-25717 (2016).
15. G. Ju, X. Qi, H. Ma, and C. Yan, “Feature-based phase retrieval wavefront sensing approach using machine learning,” Opt. Express 26, 31767-31783 (2018).
16. S. W. Paine and J. R. Fienup, “Machine learning for improved image-based wavefront sensing,” Opt. Lett. 43, 1235-1238 (2018).
17. S. W. Paine, “Expanding the capture range of image-based wavefront sensing problems,” Ph.D. thesis, University of Rochester, NY (2019), p. 151.
18. C. Weinberger, F. Guzman, and E. Vera, “Improved training for the deep learning wavefront sensor,” Proc. SPIE 11448, 114484G (2020).
19. S. W. Paine and J. R. Fienup, “Machine learning for avoiding stagnation in image-based wavefront sensing,” Proc. SPIE 10980, 109800T (2019).
20. J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” in Applied Optics and Optical Engineering Series, R. R. Shannon and J. C. Wyant, Eds. (Academic Press, USA, 1992), Vol. 11, p. 28.
21. D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Math. Program. 45, 503-528 (1989).
