Current Optics and Photonics
Curr. Opt. Photon. 2024; 8(3): 300-306
Published online June 25, 2024 https://doi.org/10.3807/COPP.2024.8.3.300
Copyright © Optical Society of Korea.
Liang Wei^{1}, Ju Huo^{1,2}, Chen Cai^{3}

^{1}School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China
^{2}National Key Laboratory of Modeling and Simulation for Complex Systems, Harbin 150001, China
^{3}Signal and Communication Research Institute, China Academy of Railway Sciences, Beijing 100081, China
Corresponding author: ^{*}huoju_ee@126.com, ORCID 0009-0006-4734-9295
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
To solve the problem of non-contact measurement of the nozzle swing angle, we present a nozzle pose estimation algorithm that incorporates weighted measurement uncertainty based on rotation parameters. First, the instantaneous axis of the rocket nozzle is constructed and used to model the pivot point and the nozzle coordinate system. Then, the rotation matrix and translation vector are parameterized by Cayley-Gibbs-Rodriguez parameters, and a novel object-space collinearity error equation that incorporates the weighted measurement uncertainty of the feature points is constructed. The nozzle pose is then obtained by the Gröbner basis method. Finally, the swing angle is calculated from the conversion relationship between the nozzle static coordinate system and the nozzle dynamic coordinate system. Experimental results demonstrate the high accuracy and robustness of the proposed method: in a measurement volume of 1.5 m × 1.5 m × 1.5 m, the maximum nozzle swing angle error is 0.103°.
Keywords: Pose estimation, Rocket nozzle, Swing angle, Vision measurement
OCIS codes: (110.0110) Imaging systems; (120.0120) Instrumentation, measurement, and metrology; (150.0150) Machine vision
In order to meet the multi-tasking requirements of modern warfare, air defense missiles generally use high-performance solid rocket engines with swing nozzles to provide thrust vector control. However, due to the influence of machining accuracy and assembly error, the control accuracy of the nozzle swing angle cannot meet the actual requirements, which will eventually affect the attitude of the missile. Therefore, the accurate measurement of nozzle swing angle is of great significance to rocket flight control [1–4].
Vision measurement enables non-contact estimation of motion parameters and has broad application prospects in the measurement field [5–7]. Nozzle swing angle measurement depends on the accuracy of nozzle pose estimation, so a high-precision, robust pose estimation method is needed. Nozzle pose estimation is also referred to as the perspective-n-point (PnP) problem, whose goal is to estimate the nozzle's position and orientation relative to the world coordinate system [8, 9]. Lu et al. [10] proposed an iterative pose estimation algorithm that minimizes the collinearity error in object space; the algorithm guarantees an orthogonal rotation and global convergence of the iteration. The EPnP method [11] represents each 3D point as a weighted combination of four virtual control points, reducing computation and increasing convergence speed; however, it can yield lower accuracy and unstable results. Li et al. [12] clustered the feature points and adjusted the coordinate system to find the optimal target pose from a seventh-order polynomial. Although effective, this method lacks global optimality. To address this issue, Zheng et al. [13] used non-unit quaternions to represent rotation, leveraging the Gröbner basis technique. While accurate, the sign ambiguity of the quaternion may lead to higher computational complexity. In addition, existing methods assume that all feature points share the same perturbation error model.
This research presents a method for measuring the rocket nozzle swing angle based on discrete feature points. The primary contribution of this study lies in building a nozzle swing angle measurement system and establishing the coordinate-system conversion relationships. The depth factor is eliminated by constructing an object-space collinearity error function, and a weighted matrix is developed so that the pose estimation method can handle the uncertainty of the perturbation errors. Moreover, the rotation and translation are parameterized, transforming nozzle pose estimation into a problem of solving for the rotation parameters. The swing angle is then calculated from the quaternion.
Figure 1 shows the rocket nozzle swing angle measurement system, including camera coordinate systems O_{cl} − X_{cl}Y_{cl}Z_{cl}, O_{cr} − X_{cr}Y_{cr}Z_{cr}, and nozzle coordinate systems O_{m} − X_{m}Y_{m}Z_{m} and O_{m} − X_{n}Y_{n}Z_{n}. Circular feature points are arranged on two cross-section circles that are parallel to the bottom of the nozzle.
At the initial moment, the nozzle dynamic coordinate system O_{m} − X_{n}Y_{n}Z_{n} coincides with the static coordinate system O_{m} − X_{m}Y_{m}Z_{m}, and its origin is located at the pivot point. The z-axis coincides with the instantaneous axis, the x-axis is parallel to the direction from the center of one cross-section circle to a feature point, and the y-axis is determined by the right-hand rule. The nozzle dynamic coordinate system moves with the nozzle as it swings.
In this system, the left camera coordinate system is set as the world coordinate system, and the coordinates of the feature points are obtained by the stereo cameras [14]. The center of each cross-section circle is fitted from the feature points that belong to that circle, and the instantaneous axis is the line connecting the two centers. The pivot point is the least-squares intersection point of the instantaneous axes at different times.
Let the feature points on the same cross-section circle be P_{1}(x_{1}, y_{1}, z_{1}), P_{2}(x_{2}, y_{2}, z_{2}), ..., P_{n}(x_{n}, y_{n}, z_{n}), and let the intercepts of the circle plane on the world coordinate axes be h, k, p. The center of the cross-section circle can be calculated as follows:
where,
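The center-fitting step can be sketched numerically. The sketch below is not the paper's intercept-form solution: it fits the plane of the section by SVD and the in-plane circle by the linear Kasa method, and the function name is an illustrative assumption.

```python
import numpy as np

def circle_center_3d(points):
    """Least-squares center of a circle from 3D points on one cross section.

    points: (n, 3) array of feature-point coordinates in the world frame.
    """
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    # Plane fit: the normal is the right singular vector associated with
    # the smallest singular value of the centered point set.
    _, _, vt = np.linalg.svd(P - centroid)
    u, v = vt[0], vt[1]                     # in-plane orthonormal basis
    xy = np.column_stack(((P - centroid) @ u, (P - centroid) @ v))
    # 2D circle fit (Kasa method): x^2 + y^2 = 2ax + 2by + c is linear
    # in the unknown center (a, b) and constant c.
    A = np.column_stack((2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))))
    rhs = (xy ** 2).sum(axis=1)
    (a, b, _), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return centroid + a * u + b * v
```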
Assume that O_{1}(x_{o1}, y_{o1}, z_{o1}), O_{2}(x_{o2}, y_{o2}, z_{o2}) and O_{1t}(x_{o1t}, y_{o1t}, z_{o1t}), O_{2t}(x_{o2t}, y_{o2t}, z_{o2t}) are the centers of the two parallel cross-section circles at the initial time and at time t, respectively. The direction vectors of the instantaneous axes are as follows:
Then, the vector of the common vertical line between the two instantaneous axes is
When z = 0, a point on the common perpendicular can be obtained, and the pivot point is located at the midpoint of the segment where the common perpendicular intersects the two instantaneous axes.
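The pivot-point computation can be sketched as follows, assuming each instantaneous axis is given by a fitted circle center and a direction vector; here the midpoint of the common perpendicular between two axes stands in for the least-squares intersection over many times, and `pivot_point` is a hypothetical helper name.

```python
import numpy as np

def pivot_point(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two instantaneous axes.

    Each axis is defined by a point on the line (a fitted circle center)
    and a direction vector (the line of centers).
    """
    p1, d1, p2, d2 = (np.asarray(a, dtype=float) for a in (p1, d1, p2, d2))
    # The closest points p1 + s*d1 and p2 + t*d2 make the connecting
    # segment perpendicular to both axes; solve the 2x2 normal equations.
    r = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b                  # zero only for parallel axes
    s = (c * (d1 @ r) - b * (d2 @ r)) / denom
    t = (b * (d1 @ r) - a * (d2 @ r)) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```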
To measure the nozzle swing angle, it is essential to determine the rigid body transformation T_{mn} between the nozzle dynamic coordinate system and the nozzle static coordinate system:
where T_{wm} is the pose relationship between the nozzle static coordinate system and the world coordinate system, and T_{wn} represents the pose relationship between the nozzle dynamic coordinate system and the world coordinate system.
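In terms of these two world-referenced poses, the transformation is presumably the usual composition of homogeneous rigid-body transforms:

```latex
T_{mn} = T_{wm}^{-1}\, T_{wn},
\qquad
T = \begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix}
```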
Figure 2 displays the object space collinearity error of feature points. Assume the coordinates of n points in the world frame and the nozzle coordinate system are P_{i} = (x_{i}, y_{i}, z_{i})^{T} and Q_{i} = (x′_{i}, y′_{i}, z′_{i})^{T}:
where
p_{i} = Q_{i} / z′_{i} is the projection of P_{i} on the normalized image plane. Ideally, the orthogonal projection O_{cl}Q_{i}^{⊥} of O_{cl}Q_{i} onto the direction of O_{cl}p_{i} is equal to O_{cl}Q_{i} itself:
where
Affected by lens distortion, there is a deviation d_{i} between O_{cl}Q_{i}^{⊥} and O_{cl}Q_{i}; d_{i} represents the object-space collinearity error. Thus, nozzle pose estimation can be formulated as the minimization of the following function:
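Following the object-space formulation of Lu et al. [10] cited above, the minimized function plausibly takes the form (a reconstruction consistent with the definitions of p_{i} and Q_{i} above, where V̂_{i} is the line-of-sight projection operator):

```latex
E(R, t) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)\,(R\,Q_i + t) \right\|^{2},
\qquad
\hat{V}_i = \frac{p_i\, p_i^{T}}{p_i^{T} p_i}
```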
In actual measurement, every feature point has its own perturbation error model [15], which stems from the anisotropic and correlated grayscale distribution. Neglecting the uncertainty of the perturbation errors may lead to a significant discrepancy between the result and the true value. To address this issue, the perturbation error uncertainty can be characterized by the inverse covariance matrix, as follows:
where A_{i} refers to the covariance matrix of the i-th feature point; ℵ is the area centered on the feature point, ω represents the sum of grayscale values in this area, and I_{u} and I_{v} stand for the gradient values in u and v directions.
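This gradient-based covariance model can be sketched as follows; the window size and the exact normalization are assumptions made for illustration:

```python
import numpy as np

def feature_covariance(patch):
    """Covariance model of a feature point's perturbation error.

    patch: small grayscale window centered on the detected point.
    The inverse covariance is built from the image gradients in the
    window, normalized by the sum of grayscale values, as in the text.
    """
    patch = np.asarray(patch, dtype=float)
    iu, iv = np.gradient(patch)          # gradients along the two axes
    omega = patch.sum()                  # sum of grayscale values
    inv_cov = np.array([[np.sum(iu * iu), np.sum(iu * iv)],
                        [np.sum(iu * iv), np.sum(iv * iv)]]) / omega
    return np.linalg.inv(inv_cov)
```

A feature whose grayscale varies gently along one direction gets a larger variance along that direction, which is exactly the anisotropy the weighting is meant to capture.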
Since the covariance matrix is a positive semi-definite symmetric matrix,
The elliptical region
An affine matrix W_{i} is then obtained that converts the raw data into a whitened data space in which the perturbation errors are isotropic and uncorrelated:
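A minimal sketch of computing such a whitening matrix from a 2 × 2 covariance, assuming the standard inverse-square-root construction:

```python
import numpy as np

def whitening_matrix(cov):
    """Affine matrix W mapping correlated, anisotropic errors to isotropic,
    uncorrelated ones: if e has covariance cov, then W @ e has covariance I.

    Computed as the inverse matrix square root of the covariance via its
    eigendecomposition (cov is symmetric positive definite here).
    """
    vals, vecs = np.linalg.eigh(np.asarray(cov, dtype=float))
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
```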
The affine matrix is then substituted into Eq. (7). The new weighted objective function, which accounts for the uncertainty of the perturbation errors, is:
The rotation matrix is parameterized by Cayley-Gibbs-Rodriguez parameters:
where K = 1 + s^{T}s, and s = (s_{1}, s_{2}, s_{3})^{T} denotes the Cayley-Gibbs-Rodriguez parameter vector.
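The parameterization can be illustrated as follows; this is the standard closed form of the Cayley transform, assumed here to match the paper's Eq. (12):

```python
import numpy as np

def cgr_rotation(s):
    """Rotation matrix from Cayley-Gibbs-Rodriguez parameters s = (s1, s2, s3).

    Closed form of the Cayley transform R = (I - [s]x)^{-1} (I + [s]x):
    a polynomial in s divided by K = 1 + s^T s.  This rational form is
    what makes the pose objective a polynomial in three unknowns.
    """
    s = np.asarray(s, dtype=float)
    sx = np.array([[0.0, -s[2], s[1]],
                   [s[2], 0.0, -s[0]],
                   [-s[1], s[0], 0.0]])   # skew-symmetric matrix [s]x
    K = 1.0 + s @ s
    return ((1.0 - s @ s) * np.eye(3) + 2.0 * np.outer(s, s) + 2.0 * sx) / K
```

For a rotation by angle θ about a unit axis a, the parameters are s = tan(θ/2)·a, so all rotations except θ = 180° are reachable without constraints on s.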
By introducing the Kronecker product ⊗ and the vectorization function Vec(∙), we have:
where,
When the rotation is known, the translation can be determined:
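Under the collinearity objective, the optimal translation for a fixed rotation follows from linear normal equations; the sketch below assumes that form (the paper's exact Eq. (14) is not reproduced here, and the function name is illustrative).

```python
import numpy as np

def translation_given_rotation(R, Q, V):
    """Optimal translation for fixed rotation under the object-space
    objective  sum_i || (I - V_i) (R Q_i + t) ||^2.

    R: (3, 3) rotation; Q: (n, 3) points in the nozzle frame;
    V: (n, 3, 3) line-of-sight projections V_i = p p^T / (p^T p).
    Setting the gradient with respect to t to zero gives a 3x3 system.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for Qi, Vi in zip(Q, V):
        M = np.eye(3) - Vi            # residual operator for this point
        A += M.T @ M
        b -= M.T @ M @ (R @ Qi)
    return np.linalg.solve(A, b)
```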
Substituting Eq. (13) and Eq. (14) into Eq. (11):
where,
To obtain the optimal solution, the Gröbner basis technique [16] is used to solve the polynomial system obtained by setting the partial derivatives of the objective function to zero:
The resulting rotation parameters are then substituted into Eq. (12) and Eq. (14):
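The role of the Gröbner basis can be illustrated on a toy polynomial system (a stand-in, not the paper's actual stationarity equations): a lexicographic basis triangularizes the system, leaving a univariate polynomial whose roots enumerate all candidate solutions.

```python
import sympy as sp

# Hypothetical stationarity system standing in for the partial derivatives
# of the pose objective with respect to the rotation parameters.
s1, s2 = sp.symbols('s1 s2')
eqs = [s1**2 + s2 - 1, s1 + s2**2 - 1]

# Lexicographic Groebner basis with s2 > s1 eliminates s2.
G = sp.groebner(eqs, s2, s1, order='lex')
univariate = [g for g in G.exprs if s2 not in g.free_symbols][0]
roots = sp.solve(univariate, s1)   # all candidate values of s1
```

Each root of the univariate polynomial is back-substituted through the triangular basis to recover the remaining variables; the candidate minimizing the objective is kept.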
When the nozzle swings, T_{wn} is calculated and the quaternion is used to decompose T_{mn} to obtain the angle [13].
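Extracting the angle from the quaternion can be sketched as follows, assuming the swing angle is read off as the total rotation angle of T_{mn} and that the angle stays well below 180° (true for the ±12° range here):

```python
import numpy as np

def swing_angle_deg(R):
    """Swing angle from the rotation part of T_mn via the quaternion.

    Converts R to a quaternion (w, x, y, z); the rotation angle is
    2 * atan2(||(x, y, z)||, w), reported in degrees.
    """
    w = 0.5 * np.sqrt(max(1.0 + np.trace(R), 0.0))   # scalar part
    v = np.array([R[2, 1] - R[1, 2],                 # vector part
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (4.0 * w)
    return np.degrees(2.0 * np.arctan2(np.linalg.norm(v), w))
```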
Synthetic experiments were conducted to verify the performance of the proposed pose estimation method and compared with LHM [10], EPnP [11], RPnP [12], and OPnP [13] approaches. In practical experiments, the pose estimation algorithm was used to measure the nozzle swing angle.
The focal length of the synthetic camera was f = 800 pixels, and the image size was 640 × 480 pixels, with the principal point located at the center of the image. We conducted multiple independent tests for each experiment and then averaged the results. Feature points in the camera frame were generated randomly within the range [−2, 2] × [−2, 2] × [4, 8] (unit: m). Subsequently, these points were transformed into the nozzle coordinate system through a combination of randomly generated rotation R_{true} and translation t_{true}. Finally, points were projected onto the image plane. The errors associated with rotation and translation were specifically defined as:
where R and t are the estimated values, and r_{true,k} and r_{k} denote the k-th columns of R_{true} and R, respectively.
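The error definitions themselves are presumably the metrics standard in PnP benchmarks (e.g. [13]); a sketch under that assumption:

```python
import numpy as np

def rotation_error_deg(R_true, R_est):
    """Maximum angle (degrees) between corresponding columns of the
    true and estimated rotation matrices."""
    cos_angles = np.clip(np.sum(R_true * R_est, axis=0), -1.0, 1.0)
    return np.degrees(np.max(np.arccos(cos_angles)))

def translation_error_pct(t_true, t_est):
    """Relative translation error in percent."""
    return 100.0 * np.linalg.norm(t_true - t_est) / np.linalg.norm(t_true)
```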
The number of feature points was set to 10, and the uncertainty of the perturbation error was characterized by the ellipticity of the uncertainty ellipse.
According to Fig. 3, as the ellipticity increases, the errors of the LHM, EPnP, OPnP, and RPnP methods increase significantly. The LHM, EPnP, and OPnP methods show similar levels of accuracy, while the RPnP method exhibits the largest error. The error of the proposed algorithm increases only slightly, even under large uncertainty. This is because the proposed method takes the collinearity error of the feature points as the objective function and introduces a weighted matrix: the larger the perturbation error of a point, the smaller its impact on the objective function.
The second experiment examined the influence of the number of points on the accuracy of the different methods; the number of points was increased from four to 20, and zero-mean Gaussian noise with a standard deviation of σ = 2 pixels was added, as illustrated in Fig. 4. With few points, LHM and EPnP are noticeably less accurate; as the number of points increases, all algorithms deliver highly accurate results.
To evaluate robustness to noise, the number of points was set to 10, and zero-mean Gaussian noise with standard deviation σ ranging from 0.5 pixels to 5 pixels was added. The average errors are shown in Fig. 5.
As the standard deviation of noise increases, the estimation errors of the five methods increase linearly. The proposed method demonstrates superior performance compared to the others.
In the practical experiments, a binocular vision measurement system was constructed covering a volume of 1.5 m × 1.5 m × 1.5 m. The system used 4M140MCX digital cameras (Multipix Imaging, Hampshire, UK) with a resolution of 2,048 × 2,048 pixels, a pixel size of dx = dy = 5.5 μm, and a focal length of f = 35 mm. The height of the nozzle was h = 236 mm, and the diameter of the bottom circle was d = 200 mm. The swing angle of the nozzle varied over [−12°, +12°], and the angular and positional accuracies were 0.010° and 0.1 mm, as shown in Fig. 6.
We controlled the nozzle to swing and pause every 2°. The measured errors are shown in Table 1. The rise in error is evident as the swing angle increases, and the method presented in this paper effectively limits the maximum error of the swing angle to 0.103°.
TABLE 1. Nozzle swing angle measurement.
Actual Value (°) | Measured Data (°) | Absolute Swing Error (°) |
---|---|---|
−12 | −11.906 | 0.094 |
−10 | −10.075 | 0.075 |
−8 | −8.06 | 0.06 |
−6 | −6.052 | 0.052 |
−4 | −3.973 | 0.027 |
−2 | −1.963 | 0.037 |
2 | 2.062 | 0.062 |
4 | 3.948 | 0.052 |
6 | 5.967 | 0.033 |
8 | 8.082 | 0.082 |
10 | 9.916 | 0.084 |
12 | 11.897 | 0.103 |
In this paper, we present a novel non-iterative method for measuring the nozzle swing angle by introducing the weighted measurement uncertainty of feature points based on Cayley-Gibbs-Rodriguez parameterization. The uncertainty ellipse models of perturbation errors are established, and a weighted matrix is introduced into the object space collinearity error function. By employing the Gröbner basis technique, the rotation parameters are solved, and the swing angle is obtained in the transformation of the nozzle coordinate system. The accuracy and anti-noise ability of the developed approach are validated by the experimental results, demonstrating its capability to fulfill the requirements for measuring the rocket nozzle swing angle.
This work was supported by a scientific research project of the China Academy of Railway Sciences Group Co., Ltd. (Grant no. 2022YJ135).
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The data that support the findings of this study are available from the corresponding author upon reasonable request.