Research Paper

Curr. Opt. Photon. 2024; 8(3): 300-306

Published online June 25, 2024 https://doi.org/10.3807/COPP.2024.8.3.300

Copyright © Optical Society of Korea.

Nozzle Swing Angle Measurement Involving Weighted Uncertainty of Feature Points Based on Rotation Parameters

Liang Wei1, Ju Huo1,2, and Chen Cai3

1School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China
2National Key Laboratory of Modeling and Simulation for Complex Systems, Harbin 150001, China
3Signal and Communication Research Institute, China Academy of Railway Sciences, Beijing 100081, China

Correspondence to: huoju_ee@126.com, ORCID 0009-0006-4734-9295

Received: March 11, 2024; Revised: May 12, 2024; Accepted: May 13, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

To solve the problem of non-contact measurement of the nozzle swing angle, we present a nozzle pose estimation algorithm involving weighted measurement uncertainty based on rotation parameters. First, the instantaneous axis of the rocket nozzle is constructed and used to model the pivot point and the nozzle coordinate system. Then, the rotation matrix and translation vector are parameterized by Cayley-Gibbs-Rodriguez parameters, and a novel object space collinearity error equation involving the weighted measurement uncertainty of feature points is constructed; the nozzle pose is obtained at this step by the Gröbner basis method. Finally, the swing angle is calculated from the conversion relationship between the nozzle static coordinate system and the nozzle dynamic coordinate system. Experimental results demonstrate the high accuracy and robustness of the proposed method: in a working volume of 1.5 m × 1.5 m × 1.5 m, the maximum nozzle swing angle error is 0.103°.

Keywords: Pose estimation, Rocket nozzle, Swing angle, Vision measurement

I. INTRODUCTION

In order to meet the multi-tasking requirements of modern warfare, air defense missiles generally use high-performance solid rocket engines with swing nozzles to provide thrust vector control. However, due to the influence of machining accuracy and assembly error, the control accuracy of the nozzle swing angle cannot meet the actual requirements, which ultimately affects the attitude of the missile. Therefore, accurate measurement of the nozzle swing angle is of great significance to rocket flight control [1-4].

Vision measurement enables non-contact motion parameter estimation and has broad application prospects in the measurement field [5-7]. Nozzle swing angle measurement depends on the accuracy of the nozzle pose estimation, so a high-precision, robust pose estimation method is needed. Nozzle pose estimation is also referred to as the Perspective-n-Point (PnP) problem, whose goal is to estimate the nozzle’s position and orientation relative to the world coordinate system [8, 9]. Lu et al. [10] proposed an iterative pose estimation algorithm that minimizes the collinearity error in object space; it ensures orthogonality of the rotation and global convergence during iteration. Lepetit et al. [11] expressed each 3D point as a weighted combination of four virtual control points, reducing computation and increasing convergence speed; however, this method may yield lower accuracy and unstable results. Li et al. [12] clustered feature points and adjusted the coordinate system to find the optimal target pose from a seventh-order polynomial. Although effective, this method lacks global optimality. To address this issue, Zheng et al. [13] used non-unit quaternions to represent rotation, leveraging the Gröbner basis technique. While accurate, the quaternion’s sign ambiguity may lead to higher computational complexity. In addition, existing methods assume that all feature points share the same perturbance error model.

This research presents a method for measuring the rocket nozzle swing angle based on discrete feature points. The primary contribution of this study lies in creating a nozzle swing angle measurement system and establishing the relationship for converting coordinate systems. The depth factor is removed by creating an error function for object space collinearity, and a weighted matrix is developed to enable the pose estimation method to handle uncertainty in perturbance error. Moreover, parameterization of rotation and translation is conducted, transforming the estimation of nozzle pose into a problem of solving rotation parameters. The swing angle is calculated based on the quaternion.

II. ESTABLISHMENT OF COORDINATE SYSTEMS IN NOZZLE SWING ANGLE MEASUREMENT SYSTEM

Figure 1 shows the rocket nozzle swing angle measurement system, including the camera coordinate systems OclXclYclZcl and OcrXcrYcrZcr, the nozzle static coordinate system OmXmYmZm, and the nozzle dynamic coordinate system OnXnYnZn. Circular feature points are arranged on two cross-section circles that are parallel to the bottom of the nozzle.

Figure 1. Diagram of nozzle swing angle measurement system.

At the initial moment, the nozzle dynamic coordinate system OnXnYnZn coincides with the static coordinate system OmXmYmZm, and its origin is located at the pivot point. The z-axis coincides with the instantaneous axis, the x-axis is parallel to the direction from the center of one cross-section circle to a feature point, and the y-axis is determined by the right-hand rule. The nozzle dynamic coordinate system moves with the nozzle swing.

In this system, the left camera coordinate system is set as the world coordinate system, and the coordinates of the feature points are obtained by the stereo cameras [14]. The center of each cross-section circle is fitted from the feature points belonging to that circle, and the instantaneous axis is the line connecting the two centers. The pivot point is the least-squares intersection point of the instantaneous axes at different times.

Let the feature points on the same cross-section circle be P1(x1, y1, z1), P2(x2, y2, z2), ..., Pn(xn, yn, zn), and let the intercepts of the circle's plane on the world coordinate axes be h, k, p. The center of the cross-section circle can be calculated as follows:

$$
O = (B^{T}B)^{-1}B^{T}
\begin{bmatrix}
1 \\
\left(x_1^2+y_1^2+z_1^2-x_2^2-y_2^2-z_2^2\right)/2 \\
\left(x_1^2+y_1^2+z_1^2-x_3^2-y_3^2-z_3^2\right)/2 \\
\vdots \\
\left(x_1^2+y_1^2+z_1^2-x_n^2-y_n^2-z_n^2\right)/2
\end{bmatrix},
$$

where

$$
B = \begin{bmatrix}
1/h & 1/k & 1/p \\
x_1-x_2 & y_1-y_2 & z_1-z_2 \\
x_1-x_3 & y_1-y_3 & z_1-z_3 \\
\vdots & \vdots & \vdots \\
x_1-x_n & y_1-y_n & z_1-z_n
\end{bmatrix}.
$$
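As a concrete illustration, the following Python sketch (a hypothetical helper, not code from the paper) solves the same linear system with NumPy; the intercept row (1/h, 1/k, 1/p) is replaced by an equivalent plane constraint fitted from the points themselves, which avoids degenerate intercepts:

```python
import numpy as np

def circle_center_3d(pts):
    """Least-squares center O of a circle fitted to n >= 3 points on it.

    The first row of B constrains O to the circle's plane; each remaining
    row encodes |O - P1| = |O - Pi|, i.e. (P1 - Pi) . O = (|P1|^2 - |Pi|^2) / 2.
    """
    pts = np.asarray(pts, dtype=float)
    p1, rest = pts[0], pts[1:]
    # Plane through the points, fitted by SVD (stands in for the intercept row).
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid)[2][-1]
    B = np.vstack([normal, p1 - rest])
    rhs = np.concatenate([[normal @ centroid],
                          (p1 @ p1 - np.sum(rest**2, axis=1)) / 2.0])
    center, *_ = np.linalg.lstsq(B, rhs, rcond=None)
    return center
```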

Assume that O1(xo1, yo1, zo1), O2(xo2, yo2, zo2) and O1t(xo1t, yo1t, zo1t), O2t(xo2t, yo2t, zo2t) are the centers of the two parallel cross-section circles at the initial time and at time t, respectively. The direction vectors of the instantaneous axes are:

$$
\begin{aligned}
s_0 &= (l_0, m_0, n_0) = (x_{o1}-x_{o2},\; y_{o1}-y_{o2},\; z_{o1}-z_{o2}), \\
s_t &= (l_t, m_t, n_t) = (x_{o1t}-x_{o2t},\; y_{o1t}-y_{o2t},\; z_{o1t}-z_{o2t}).
\end{aligned}
$$

Then, the vector of the common vertical line between the two instantaneous axes is s0t = s0 × st. The normal vectors of the plane determined by the common vertical line and instantaneous axes are s00t = s0 × s0t = (l00t, m00t, n00t) and st0t = st × s0t = (lt0t, mt0t, nt0t), respectively. Therefore, the common vertical line can be expressed as:

$$
\begin{cases}
l_{00t}(x-x_{o1}) + m_{00t}(y-y_{o1}) + n_{00t}(z-z_{o1}) = 0, \\
l_{t0t}(x-x_{o1t}) + m_{t0t}(y-y_{o1t}) + n_{t0t}(z-z_{o1t}) = 0.
\end{cases}
$$

Setting z = 0 yields a point on the common vertical line, and the pivot point is taken as the midpoint of the intersection points of the common vertical line with the two instantaneous axes.
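A minimal NumPy sketch of this construction (the function name and argument layout are our own; o0 and ot are points on each axis, e.g. the fitted circle centers, while s0 and st are the axis directions):

```python
import numpy as np

def pivot_point(o0, s0, ot, st):
    """Midpoint of the common perpendicular between two instantaneous axes."""
    o0, s0 = np.asarray(o0, float), np.asarray(s0, float)
    ot, st = np.asarray(ot, float), np.asarray(st, float)
    w = o0 - ot
    a, b, c = s0 @ s0, s0 @ st, st @ st
    d, e = s0 @ w, st @ w
    denom = a * c - b * b              # near zero when the axes are parallel
    u = (b * e - c * d) / denom        # foot-point parameter on the initial axis
    v = (a * e - b * d) / denom        # foot-point parameter on the axis at time t
    return (o0 + u * s0 + ot + v * st) / 2.0
```

The midpoint of the two foot points is exactly the least-squares intersection of the two axes.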

III. NOZZLE SWING ANGLE MEASUREMENT BASED ON ROTATION PARAMETERS

To measure the nozzle swing angle, it is essential to determine the rigid body transformation Tmn between the nozzle dynamic coordinate system and the nozzle static coordinate system:

$$
T_{mn} = T_{wn}T_{wm}^{-1},
$$

where Twm is the pose relationship between the nozzle static coordinate system and the world coordinate system, and Twn represents the pose relationship between the nozzle dynamic coordinate system and the world coordinate system.

Figure 2 displays the object space collinearity error of the feature points. Assume that the coordinates of the n points in the nozzle coordinate system and the world frame are Pi = (xi, yi, zi)T and Qi, respectively:

Figure 2. The object space collinearity error of feature points.

$$
Q_i = RP_i + t,
$$

where

$$
R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}
$$

and $t = (t_x\; t_y\; t_z)^T$ are the rotation and translation between the nozzle frame and the world frame.

pi, the projection of Qi on the normalized image plane, is obtained by dividing Qi by its depth (its third component). Ideally, Qi lies on the line of sight Oclpi, so the orthogonal projection OclQi⊥ of OclQi onto the direction Oclpi equals OclQi itself:

$$
RP_i + t = V_i(RP_i + t),
$$

where $V_i = p_i p_i^T / (p_i^T p_i)$ is the line-of-sight projection matrix.

Affected by lens distortion, there is a deviation di between OclQi⊥ and OclQi, where di represents the error in object space collinearity. Thus, the estimation of nozzle pose can be described as the minimization of the following function:

$$
E = \sum_{i=1}^{n} \lVert d_i \rVert^2 = \sum_{i=1}^{n} \left\lVert (I - V_i)(RP_i + t) \right\rVert^2.
$$
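For concreteness, a minimal NumPy sketch of this objective (our own helper; P is assumed to hold the nozzle-frame points and p the corresponding normalized image points (u, v, 1)):

```python
import numpy as np

def collinearity_error(R, t, P, p):
    """Object-space collinearity error E = sum ||(I - V_i)(R P_i + t)||^2."""
    E = 0.0
    for Pi, pi in zip(P, p):
        Vi = np.outer(pi, pi) / (pi @ pi)     # line-of-sight projection matrix
        di = (np.eye(3) - Vi) @ (R @ Pi + t)  # residual orthogonal to the ray
        E += di @ di
    return E
```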

In the actual measurement, every feature point exhibits a unique model related to perturbance error [15], which stems from the anisotropic and correlated grayscale distribution. Neglecting the uncertainty of perturbance errors may lead to a significant discrepancy between the result and the true value. To address this issue, the perturbance error uncertainty can be characterized using the inverse covariance matrix, as depicted below:

$$
A_i^{-1} = \sum_{(u,v)\in\aleph} \omega(u,v)
\begin{bmatrix}
I_uI_u & I_vI_u & 0 \\
I_uI_v & I_vI_v & 0 \\
0 & 0 & 1
\end{bmatrix},
$$

where Ai is the covariance matrix of the i-th feature point; ℵ is the region centered on the feature point; ω represents the sum of grayscale values in this region; and Iu and Iv are the gradient values in the u and v directions.

Since the covariance matrix is a symmetric positive semi-definite matrix, Ai−1 can be decomposed as:

$$
A_i^{-1} = U_i\,\mathrm{diag}\!\left(1/\sigma_{i1}^2,\; 1/\sigma_{i2}^2,\; 1\right)U_i^T.
$$

The matrix Ai−1 defines an elliptical region centered at the feature point; the orientations and lengths of its semi-major axis a and semi-minor axis b indicate the direction and magnitude of the perturbance error.

An affine matrix Wi converts the raw data into an uncertainty-weighted data space in which the perturbance errors are isotropic and uncorrelated:

$$
W_i = \mathrm{diag}\!\left(1/\sigma_{i1},\; 1/\sigma_{i2},\; 1\right)U_i^T.
$$
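A sketch of how Wi might be computed from a small gray-level patch around the feature point (the gradient operator and the normalization by the patch's gray-level sum are our reading of the construction above, not the authors' code):

```python
import numpy as np

def weight_matrix(patch):
    """Affine weighting W_i from the gray-level structure tensor of a patch."""
    Iv, Iu = np.gradient(patch.astype(float))   # derivatives along v (rows), u (cols)
    Ainv = np.zeros((3, 3))
    Ainv[0, 0] = (Iu * Iu).sum()
    Ainv[0, 1] = Ainv[1, 0] = (Iu * Iv).sum()
    Ainv[1, 1] = (Iv * Iv).sum()
    Ainv[2, 2] = 1.0
    Ainv[:2, :2] /= max(patch.sum(), 1e-12)     # grayscale-sum normalization (our assumption)
    # A_i^-1 = U diag(1/s1^2, 1/s2^2, 1) U^T, so W_i = diag(sqrt(eigvals)) U^T
    vals, U = np.linalg.eigh(Ainv)
    return np.diag(np.sqrt(np.abs(vals))) @ U.T  # satisfies W^T W = A_i^-1
```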

The affine matrix is then substituted into Eq. (7). The new weighted objective function that takes into account the uncertainty of perturbance errors is:

$$
E = \sum_{i=1}^{n} \left\lVert (I - W_iV_i)(RP_i + t) \right\rVert^2.
$$

The rotation matrix is parameterized by Cayley-Gibbs-Rodriguez parameters:

$$
R = \frac{1}{K}
\begin{bmatrix}
1+s_1^2-s_2^2-s_3^2 & 2s_1s_2-2s_3 & 2s_1s_3+2s_2 \\
2s_1s_2+2s_3 & 1-s_1^2+s_2^2-s_3^2 & 2s_2s_3-2s_1 \\
2s_1s_3-2s_2 & 2s_2s_3+2s_1 & 1-s_1^2-s_2^2+s_3^2
\end{bmatrix},
$$

where $K = 1 + s_1^2 + s_2^2 + s_3^2$ and s1, s2, s3 are the three unknown rotation parameters.
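In code, the parameterization is a direct transcription (a small sketch; for any real s this yields a proper rotation matrix, with a singularity only at 180° rotations, far outside the nozzle's swing range):

```python
import numpy as np

def cgr_to_rotation(s):
    """Rotation matrix from Cayley-Gibbs-Rodriguez parameters (s1, s2, s3)."""
    s1, s2, s3 = s
    K = 1.0 + s1**2 + s2**2 + s3**2
    return np.array([
        [1 + s1**2 - s2**2 - s3**2, 2*s1*s2 - 2*s3,            2*s1*s3 + 2*s2],
        [2*s1*s2 + 2*s3,            1 - s1**2 + s2**2 - s3**2, 2*s2*s3 - 2*s1],
        [2*s1*s3 - 2*s2,            2*s2*s3 + 2*s1,            1 - s1**2 - s2**2 + s3**2],
    ]) / K
```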

By introducing the Kronecker product ⊗ and the vectorization function Vec(∙), we have:

$$
RP_i = \left(I_{3\times3} \otimes P_i^T\right)\mathrm{Vec}(R) = M_i\,\mathrm{Vec}(R),
$$

where $\mathrm{Vec}(R) = (r_{11}, r_{12}, \ldots, r_{33})^T$ stacks the rows of R, and

$$
M_i = \begin{bmatrix}
x_i & y_i & z_i & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & x_i & y_i & z_i & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & x_i & y_i & z_i
\end{bmatrix}.
$$
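A quick numerical check of this vectorization identity (our own test snippet; NumPy's row-major reshape gives exactly the row-stacked Vec(R) used here):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]    # a random orthogonal matrix
Pi = rng.normal(size=3)
Mi = np.kron(np.eye(3), Pi)                     # block-diagonal [Pi^T, Pi^T, Pi^T]
assert np.allclose(R @ Pi, Mi @ R.reshape(-1))  # R Pi == M_i Vec(R)
```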

When the rotation is known, the translation can be determined:

$$
t = \frac{1}{n}\left(I - \frac{1}{n}\sum_{i=1}^{n} W_iV_i\right)^{-1} \sum_{i=1}^{n} \left(W_iV_i - I\right)RP_i.
$$
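In code, the closed form for t given R reads (a sketch; WV is assumed to hold the precomputed 3×3 products WiVi for every feature point):

```python
import numpy as np

def translation_given_rotation(R, P, WV):
    """t = (1/n) (I - (1/n) sum W_iV_i)^(-1) sum (W_iV_i - I) R P_i."""
    n = len(P)
    S = sum(WV) / n
    rhs = sum((WVi - np.eye(3)) @ (R @ Pi) for WVi, Pi in zip(WV, P))
    return np.linalg.solve(np.eye(3) - S, rhs) / n
```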

Substituting Eq. (13) and Eq. (14) into Eq. (11):

$$
E = \mathrm{Vec}(R)^T\, G\, \mathrm{Vec}(R),
$$

where

$$
G = \sum_{i=1}^{n} (M_i + B)^T (I - W_iV_i)^T (I - W_iV_i)(M_i + B),
$$

$$
B = \frac{1}{n}\left(I - \frac{1}{n}\sum_{i=1}^{n} W_iV_i\right)^{-1} \sum_{i=1}^{n}\left(W_iV_i - I\right)M_i.
$$

To obtain the optimal solution, the partial derivatives of the objective function are set to zero, and the resulting polynomial system is solved with the Gröbner basis technique [16]:

$$
\frac{\partial E}{\partial s_i} = 0, \qquad i \in \{1, 2, 3\}.
$$
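The paper solves this polynomial system exactly via a Gröbner basis. As a simple numerical stand-in (not the authors' method, and only locally optimal), one can minimize E(s) over the three parameters directly, using the equivalent Cayley form of the CGR rotation:

```python
import numpy as np
from scipy.optimize import minimize

def solve_rotation(G, s0=np.zeros(3)):
    """Minimize E(s) = Vec(R(s))^T G Vec(R(s)) over the CGR parameters."""
    def E(s):
        S = np.array([[0, -s[2], s[1]], [s[2], 0, -s[0]], [-s[1], s[0], 0.0]])
        R = (np.eye(3) + S) @ np.linalg.inv(np.eye(3) - S)  # Cayley form of the CGR rotation
        v = R.reshape(-1)                                   # row-stacked Vec(R)
        return v @ G @ v
    res = minimize(E, s0, method="Nelder-Mead")
    return res.x, res.fun
```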

The recovered rotation parameters are then substituted into Eq. (12) and Eq. (14) to obtain:

$$
T_{wm} = \begin{bmatrix} R & t \\ \mathbf{0}^T & 1 \end{bmatrix}.
$$

When the nozzle swings, Twn is calculated in the same way, and the quaternion is used to decompose Tmn to obtain the swing angle [13].
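As an illustration of this last step, the overall rotation angle of Tmn can be read off the scalar part of its quaternion (a sketch of one reasonable reading; the decomposition into component swing angles follows [13]):

```python
import numpy as np

def rotation_angle_deg(T_mn):
    """Overall rotation angle of a 4x4 transform, via its quaternion.

    For a unit quaternion q = (qw, qx, qy, qz): trace(R) = 4*qw^2 - 1,
    and the rotation angle is 2 * arccos(|qw|).
    """
    R = T_mn[:3, :3]
    qw = 0.5 * np.sqrt(max(0.0, 1.0 + np.trace(R)))
    return np.degrees(2.0 * np.arccos(np.clip(qw, 0.0, 1.0)))
```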

IV. EXPERIMENTAL RESULTS AND ANALYSIS

Synthetic experiments were conducted to verify the performance of the proposed pose estimation method, comparing it with the LHM [10], EPnP [11], RPnP [12], and OPnP [13] approaches. In practical experiments, the pose estimation algorithm was used to measure the nozzle swing angle.

4.1. Synthetic Experiments

The focal length of the synthetic camera was f = 800 pixels, and the image size was 640 × 480 pixels, with the principal point located at the center of the image. We conducted multiple independent tests for each experiment and then averaged the results. Feature points in the camera frame were generated randomly within the range [−2, 2] × [−2, 2] × [4, 8] (unit: m). Subsequently, these points were transformed into the nozzle coordinate system through a combination of randomly generated rotation Rtrue and translation ttrue. Finally, points were projected onto the image plane. The errors associated with rotation and translation were specifically defined as:

$$
\begin{aligned}
e_{\mathrm{rot}}(\text{degrees}) &= \max_{k\in\{1,2,3\}} \arccos\!\left(r_{\mathrm{true},k}^{T}\, r_k\right) \times 180/\pi, \\
e_{\mathrm{trans}}(\%) &= \lVert t_{\mathrm{true}} - t \rVert / \lVert t \rVert \times 100,
\end{aligned}
$$

where R and t are the estimated values, and rtrue,k and rk denote the k-th columns of Rtrue and R, respectively.
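These metrics translate directly into a few lines of NumPy (our own helper; the column-wise dot products give the terms r_true,k^T r_k):

```python
import numpy as np

def pose_errors(R_true, R_est, t_true, t_est):
    """e_rot in degrees (worst column angle) and e_trans in percent."""
    cosines = np.clip(np.sum(R_true * R_est, axis=0), -1.0, 1.0)  # r_true,k^T r_k
    e_rot = np.degrees(np.arccos(cosines)).max()
    e_trans = 100.0 * np.linalg.norm(t_true - t_est) / np.linalg.norm(t_est)
    return e_rot, e_trans
```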

The number of feature points was set to 10, and the perturbation error uncertainty was characterized by the ellipticity r = σmax/σmin, with σmin = 0.01 fixed while σmax ranged from 0.01 to 0.2. The direction of each ellipse was drawn randomly from 0° to 180°.

According to Fig. 3, as the ellipticity increases, the errors of the LHM, EPnP, OPnP, and RPnP methods increase significantly. LHM, EPnP, and OPnP show similar levels of accuracy, while RPnP demonstrates the largest error. The error of the proposed algorithm grows only slightly, even under large uncertainty. This is because the proposed method takes the collinearity error of the feature points as the objective function and introduces a weighting matrix: the larger a point's perturbance error, the smaller its weight, and hence its impact on the objective function.

Figure 3. Pose estimation errors with varying ellipticity: (a) Mean rotation error and (b) mean translation error.

The second experiment examined the influence of the number of points on the accuracy of the different methods, with the count raised incrementally from 4 to 20. Zero-mean Gaussian noise with a standard deviation of σ = 2 pixels was added; the results are shown in Fig. 4. With few points, LHM and EPnP are noticeably less accurate; as the number of points grows, all algorithms deliver highly accurate results.

Figure 4. Pose estimation errors with varying point number: (a) Mean rotation error and (b) mean translation error.

To examine noise robustness, the number of points was set to 10, and zero-mean Gaussian noise with standard deviation σ ranging from 0.5 to 5 pixels was added. The average errors are shown in Fig. 5.

Figure 5. Pose estimation errors with varying noise level: (a) Mean rotation error and (b) mean translation error.

As the standard deviation of noise increases, the estimation errors of the five methods increase linearly. The proposed method demonstrates superior performance compared to the others.

4.2. Experiments with Real Images

In practical experiments, a binocular vision measurement system was constructed covering a working volume of 1.5 m × 1.5 m × 1.5 m. The system used 4M140MCX digital cameras (Multipix Imaging, Hampshire, UK) with a resolution of 2,048 × 2,048 pixels, a pixel size of dx = dy = 5.5 μm, and a focal length of f = 35 mm. The height of the nozzle was h = 236 mm, and the diameter of its bottom circle was d = 200 mm. The swing angle of the nozzle varied over [−12°, +12°], with angle and location accuracies of 0.010° and 0.1 mm, as shown in Fig. 6.

Figure 6. Nozzle motion simulation device.

We controlled the nozzle to swing and pause every 2°. The measured errors are shown in Table 1. The error rises visibly as the swing angle increases, and the method presented in this paper limits the maximum swing angle error to 0.103°.

TABLE 1. Nozzle swing angle measurement.

Actual Value (°)    Measured Data (°)    Absolute Swing Error (°)
−12                 −11.906              0.094
−10                 −10.075              0.075
−8                  −8.06                0.06
−6                  −6.052               0.052
−4                  −3.973               0.027
−2                  −1.963               0.037
2                   2.062                0.062
4                   3.948                0.052
6                   5.967                0.033
8                   8.082                0.082
10                  9.916                0.084
12                  11.897               0.103

V. CONCLUSION

In this paper, we present a novel non-iterative method for measuring the nozzle swing angle by introducing the weighted measurement uncertainty of feature points based on the Cayley-Gibbs-Rodriguez parameterization. The uncertainty ellipse models of the perturbation errors are established, and a weighting matrix is introduced into the object space collinearity error function. The rotation parameters are solved with the Gröbner basis technique, and the swing angle is obtained from the transformation between the nozzle coordinate systems. The accuracy and noise robustness of the developed approach are validated by the experimental results, demonstrating its capability to fulfill the requirements for measuring the rocket nozzle swing angle.

FUNDING

The scientific research project of the China Academy of Railway Sciences Group Co., Ltd. (Grant no. 2022YJ135).

DISCLOSURES

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.


References

1. K. W. Zhu, J. Fu, C. Fang, and B. L. Ji, “Study on a new power by wire thrust vector control system with high reliability,” IEEE Aerosp. Electron. Syst. Mag. 32, 18-27 (2017).
2. H.-W. Qin and H. Wang, “Dynamic inverse control of feedback linearization in ballistic correction based on nose cone swinging,” J. Cent. South Univ. 20, 2447-2453 (2013).
3. C. G. Wang, G. Y. Xu, and J. Gong, “Buckling instability of flexible joint under high pressure in solid rocket motor,” Int. J. Aerosp. Eng. 2020, 8503194 (2020).
4. D. Swain, S. K. Biswal, B. P. Thomas, S. S. Babu, and J. Philip, “Performance characterization of a flexible nozzle system (FNS) of a large solid rocket booster using 3-D DIC,” Exp. Tech. 43, 429-443 (2019).
5. W. He, J. Wang, and Y. Fu, “Creepage distance measurement using binocular stereo vision on hot-line for high voltage insulator,” Curr. Opt. Photonics 2, 348-355 (2018).
6. Y. Guo, G. Chen, D. Ye, X. Yu, and F. Yuan, “2-DOF angle measurement of rocket nozzle with multivision,” Adv. Mech. Eng. 5, 942580 (2013).
7. Y. F. Qu and H. J. Yang, “High-speed measurement of nozzle swing angle of rocket engine based on monocular vision,” Proc. SPIE 9446, 944647 (2015).
8. J. Cui, D. Feng, C. Min, and Q. Tian, “Novel method of rocket nozzle motion parameters non-contact consistency measurement based on stereo vision,” Optik 195, 163049 (2019).
9. J. Huo, G. Zhang, and M. Yang, “Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface,” Appl. Opt. 57, 3306-3315 (2018).
10. C. P. Lu, G. D. Hager, and E. Mjolsness, “Fast and globally convergent pose estimation from video images,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 610-622 (2000).
11. V. Lepetit, F. Moreno-Noguer, and P. Fua, “EPnP: An accurate O(n) solution to the PnP problem,” Int. J. Comput. Vis. 81, 155-166 (2009).
12. S. Li, C. Xu, and M. Xie, “A robust O(n) solution to the Perspective-n-Point problem,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 1444-1450 (2012).
13. Y. Zheng, Y. Kuang, S. Sugimoto, K. Åström, and M. Okutomi, “Revisiting the PnP problem: A fast, general and optimal solution,” in Proc. 2013 IEEE International Conference on Computer Vision (Sydney, Australia, Dec. 1-8, 2013), pp. 2344-2351.
14. L. Wei, G. Zhang, J. Huo, and M. Xue, “Novel camera calibration method based on invariance of collinear points and pole-polar constraint,” J. Syst. Eng. Electron. 34, 744-753 (2023).
15. L. Wei and J. Huo, “Camera pose estimation algorithm involving weighted measurement uncertainty of feature points based on rotation parameters,” Appl. Opt. 62, 2200-2206 (2023).
16. Z. Kukelova, M. Bujnak, and T. Pajdla, “Automatic generator of minimal problem solvers,” in Proc. 10th European Conference on Computer Vision (Marseille, France, Oct. 12-18, 2008), pp. 302-315.