
Research Paper

Curr. Opt. Photon. 2023; 7(5): 545-556

Published online October 25, 2023 https://doi.org/10.3807/COPP.2023.7.5.545

Copyright © Optical Society of Korea.

Volume-sharing Multi-aperture Imaging (VMAI): A Potential Approach for Volume Reduction for Space-borne Imagers

Jun Ho Lee1,2 , Seok Gi Han1, Do Hee Kim1, Seokyoung Ju1, Tae Kyung Lee3, Chang Hoon Song3, Myoungjoo Kang3, Seonghui Kim4, Seohyun Seong4

1Department of Optical Engineering, Kongju National University, Cheonan 31080, Korea
2Institute of Application and Fusion for Light, Kongju National University, Cheonan 31080, Korea
3Department of Mathematical Sciences, Seoul National University, Seoul 08826, Korea
4Telepix Ltd., Daejeon 34013, Korea

Corresponding author: *jhlsat@kongju.ac.kr, ORCID 0000-0002-4075-3504

Received: August 1, 2023; Revised: August 31, 2023; Accepted: September 2, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper introduces volume-sharing multi-aperture imaging (VMAI), a potential approach proposed for volume reduction in space-borne imagers, with the aim of achieving high-resolution ground spatial imagery using deep learning methods, with reduced volume compared to conventional approaches. As an intermediate step in the VMAI payload development, we present a phase-1 design targeting a 1-meter ground sampling distance (GSD) at 500 km altitude. Although its optical imaging capability does not surpass conventional approaches, it remains attractive for specific applications on small satellite platforms, particularly surveillance missions. The design integrates one wide-field and three narrow-field cameras with volume sharing and no optical interference. Capturing independent images from the four cameras, the payload emulates a large circular aperture to address diffraction and synthesizes high-resolution images using deep learning. Computational simulations validated the VMAI approach, while addressing challenges like lower signal-to-noise (SNR) values resulting from aperture segmentation. Future work will focus on further reducing the volume and refining SNR management.

Keywords: Deep-learning, Earth observation, Image fusion, Volume-sharing multi-aperture imaging

OCIS codes: (110.3010) Image reconstruction techniques; (120.3620) Lens system design; (120.4640) Optical instruments; (220.4830) Systems design; (220.4991) Passive remote sensing

Space-borne optical payloads play a critical role in Earth observation and remote sensing applications, providing valuable information for environmental monitoring, disaster management, resource management, and national security [1]. However, the diffraction limit ultimately determines the achievable resolution of imaging optics, including those of space-borne optical payloads [2]. The Rayleigh criterion defines the diffraction limit of the spatial resolution of an optical sensor as the radius of the Airy disk (R), which is proportional to the wavelength of light (λ), the focal length of the optics (f), and the reciprocal of the aperture diameter (D), as given by Eq. (1) below:

R = 1.22 × λ × f / D.    (1)

To achieve high spatial resolution in space-based imaging, a commonly used approach is to couple single large-aperture optics with one or more high-quality imaging detectors [3]. With this approach it is possible to determine the two key system design parameters of the payload, i.e., the ground sampling distance (GSD) and the swath width (SW) or its angular extent, the field of view (FOV). This is accomplished using the geometrical relationships between the detector pixel pitch (p), the number of pixels (N), the focal length of the payload (f), and the altitude (H) at which the payload operates, as given by Eqs. (2)–(4).

GSD = H × p / f,    (2)
SW = N × GSD,    (3)
R = 1.22 × λ × f / D.    (4)

Here, the ground sampling distance (GSD) is the distance between the centers of adjacent pixels on the ground in an image, and the swath width (SW) indicates the width of the imaged area on the ground covered by a single optics or payload.
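As a quick numerical illustration of Eqs. (2) and (3), the sketch below (Python) plugs in the 2020 row of Table 1; the pixel count N = 12,000 is an assumed value for illustration only, not a figure from the paper.

```python
# First-order check of Eqs. (2)-(3) using the 2020 row of Table 1.
# N (pixels across the swath) is an assumed value for illustration.
H = 500e3    # orbit altitude (m)
p = 6.5e-6   # detector pixel pitch (m)
f = 3.25     # focal length (m)
N = 12_000   # number of detector pixels across-track (assumption)

GSD = H * p / f   # ground sampling distance (m); here 1.0 m
SW = N * GSD      # swath width (m); here 12 km
```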

Although GSD represents a critical aspect of the payload’s imaging performance, it differs somewhat from the smallest resolvable distance between two separate ground features in an image, which is often referred to as the ground resolving distance (GRD). Practitioners often make GSD equivalent to GRD by matching the diffraction limit (R) to the detector pitch (p). Using this approach, the primary optical design properties of an Earth-observation payload can be easily determined, including the focal length (f), minimum aperture diameter (D), maximum f-number (f/#), and the FOV, which is the angular extent of the swath width (SW), as in Eqs. (5)–(7) below.

f = H × p / GSD,    (5)
D = 1.22 × λ × H / GSD,    (6)
f/# = p / (1.22 × λ).    (7)
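Eqs. (5)–(7) can be checked in a few lines of Python. The design wavelength is not stated in the paper; λ = 0.6 μm is our assumption, chosen because it reproduces the 0.37 m aperture diameter quoted in Table 1.

```python
def payload_design(gsd, altitude=500e3, pitch=4.5e-6, wavelength=0.6e-6):
    """First-order payload parameters from Eqs. (5)-(7).

    The design wavelength is an assumption (0.6 um reproduces the
    0.37 m aperture diameter quoted in Table 1)."""
    f = altitude * pitch / gsd               # focal length (m), Eq. (5)
    D = 1.22 * wavelength * altitude / gsd   # minimum aperture diameter (m), Eq. (6)
    f_number = pitch / (1.22 * wavelength)   # maximum f-number, Eq. (7)
    return f, D, f_number

# 1 m GSD at 500 km with 4.5 um pixels (the 2030 row of Table 1)
f, D, f_number = payload_design(gsd=1.0)
```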

Given the constraints of space-proven imaging detectors and operating altitudes, there is limited flexibility when determining the parameters presented above, particularly in the selection of optics types. Table 1 provides illustrative system specifications of an imaging payload targeting a 1 m GSD at 500 km. It is important to highlight the gradual reduction in pixel pitch, which leads to a decrease in the required focal length [4, 5]; the values for 2025 and 2030 are projected from this ongoing trend. This trend toward smaller pixels necessitates careful investigation of its potential impact on imaging performance, notably in maintaining sufficient full-well capacity to capture varying light conditions effectively.


Table 1. System specifications of an optical payload for 1 m GSDa)

Year     Pixel (μm)   Focal Length (m)   Aperture Diameter (m)
2000     15.0         7.50               0.37
2005     12.5         6.25               0.37
2010     10.0         5.00               0.37
2015      7.5         3.75               0.37
2020      6.5         3.25               0.37
2025b)    5.4         2.70               0.37
2030b)    4.5         2.25               0.37

a)At 500 km altitude. b)Projected from the ongoing trend.



Once the system parameters are determined, optical designers have limited freedom in selecting among three available types of optics: Cassegrain, which includes the Ritchey-Chrétien (RC), three-mirror anastigmat (TMA), and Korsch [6–13], as depicted in Fig. 1. When it comes to compactness or shortness, the Cassegrain and Korsch configurations are highly favored, with the Korsch providing the shortest configuration. Regarding the telephoto ratio (TR), defined as the ratio of the total track length (T) to the focal length (f), the Cassegrain offers a range of 0.4 to 0.6, while the Korsch provides a narrower range of 0.15 to 0.2. Notably, achieving the shortest TR value in the Korsch design involves utilizing a complex configuration comprising off-axis aspherical mirrors following a fast Cassegrain telescope.

Figure 1. The three most used optics types in remote sensing: (a) Cassegrain, (b) TMA, and (c) Korsch.

Recently the use of small satellites has increased, especially for remote sensing. As a result, there have been growing demands for miniaturized or more compact optical payloads for high-resolution remote sensing. Researchers have proposed various methods to address these issues, including the Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) [14], multi-aperture or rotating-aperture imaging [15–18], meta-surface optics [19, 20], and super-resolution algorithms [21]. However, these methods, on their own, have limited value for long-distance imaging, or exhibit potential errors or artifacts. Accordingly, alternative methods are needed to provide compact imaging for space-borne payloads.

To tackle this challenge, we present a novel concept called volume-sharing multi-aperture imaging (VMAI), which leverages conventional optics fabrication and assembly techniques to achieve high ground spatial resolution with reduced volume compared to conventional approaches. Our approach combines advances in multi-camera image fusion [22–24], rotating multi-aperture imaging [16, 17], and folded optics. With this approach, we aim to reach a telephoto ratio of approximately 0.1, surpassing the conventionally achievable limit of 0.15–0.2. As an intermediate step in this VMAI payload development, we are currently developing a phase-1 design that targets a 1 m GSD at 500 km altitude with a telephoto ratio of 0.15, equivalent to the shortest value attainable with the Korsch configuration.
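To get a feel for what these telephoto ratios mean physically, the sketch below converts them into total track lengths for the phase-1 narrow-field focal length of 2.25 m; the mapping T = TR × f follows directly from the definition of TR above.

```python
# Total track length T = TR x f implied by the telephoto ratios (TR)
# quoted in the text, for the phase-1 narrow-field focal length f = 2.25 m.
f = 2.25  # focal length (m)
track_m = {name: (lo * f, hi * f) for name, (lo, hi) in {
    "Cassegrain": (0.4, 0.6),    # -> about 0.90-1.35 m
    "Korsch": (0.15, 0.2),       # -> about 0.34-0.45 m
    "VMAI target": (0.1, 0.1),   # -> about 0.23 m
}.items()}
```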

In this paper, we first present the phase-1 design. Subsequently, we present the synthetic image generation process, which uses deep learning, and present computational simulation results that confirm the effectiveness of the proposed method. Finally, we address challenges like lower SNR values.

2.1. Concept

When considering optical payloads, the principles of Fourier optics [2, 25, 26] remind us that the amount of information an optical system can capture is dictated by the size and shape of its aperture, often referred to as the entrance pupil or Fourier plane. As seen in Table 1, the minimum aperture required to achieve a 1 m GSD at 500 km altitude remains constant, regardless of the focal length.

This insight leads us to propose VMAI, a novel approach that aims to make the imaging payload substantially more compact and shorter than current limitations allow. Leveraging recent advances in multi-camera fusion and deep-learning algorithms, VMAI partitions a single circular or annular aperture into multiple smaller apertures, each equipped with dedicated imaging optics and a separate imaging detector. Deep learning is then harnessed to combine all the individual images into a high-resolution composite. By folding these individual apertures while ensuring no optical interference, VMAI achieves a remarkable level of compactness, paving the way for more compact space-borne imagers.

In our current implementation, referred to as the level-1 design, the VMAI payload targets a 1 m GSD at 500 km altitude. Optical payloads of around 1 m GSD are extensively used in various remote sensing applications, including urban-area mapping and agricultural studies [27]. The design integrates one annular-aperture wide-field camera (or wide-field optics module) along with three narrow-field cameras (or narrow-field optics modules) equipped with rectangular apertures. These narrow-field cameras are strategically positioned around the centered annular aperture. To keep each image line aligned with the centered wide-field camera in response to satellite movement, we have employed line-of-sight rotating prisms, specifically Risley prisms, in each narrow-field camera. Figure 2 provides a visual representation of the VMAI system, (a) depicting the unfolded configuration and (b) illustrating the folded configuration, showcasing the compactness achieved through this design.

Figure 2. Illustration of volume-sharing multi-aperture imaging (VMAI) with one wide-field and three narrow-field cameras: (a) Unfolded and (b) compactly folded.

The wide-field camera in our proposed system is equipped with conventional telescope optics, which captures a broad but less detailed view of the scene. It plays a crucial role as the foundation for the subsequent image fusion process. On the other hand, the three narrow-field cameras (or narrow-field optics modules) operate collectively as rotating rectangular aperture imaging (RRAI) optics with three rotations [16, 17]. To overcome the resolution limit imposed by diffraction, the narrow-field optics in our system employ an over-sampling approach [28]. We can represent the information from the payload by two-dimensional modulation transfer function (MTF) as shown in Fig. 3. Figure 4 shows exemplary images taken by the four optical modules.
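The Fourier-domain picture of Fig. 3 can be sketched numerically: the MTF of each module is the normalized autocorrelation of its pupil. The aperture sizes below are arbitrary pixel units chosen only to mimic the geometry (an annulus with a 0.3 obstruction ratio plus three rectangles at 120-degree rotations), not the real aperture dimensions.

```python
import numpy as np

# Conceptual sketch of the per-module Fourier (MTF) support in Fig. 3.
# Sizes are arbitrary pixel units, not the real aperture dimensions.
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

def rect_pupil(width, height, angle_deg):
    """Binary mask of a rectangular sub-aperture rotated by angle_deg."""
    t = np.deg2rad(angle_deg)
    xr = x * np.cos(t) + y * np.sin(t)
    yr = -x * np.sin(t) + y * np.cos(t)
    return ((np.abs(xr) <= width / 2) & (np.abs(yr) <= height / 2)).astype(float)

def mtf2d(pupil):
    """MTF = normalized magnitude of the pupil autocorrelation."""
    autocorr = np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)
    m = np.abs(np.fft.fftshift(autocorr))
    return m / m.max()

r2 = x ** 2 + y ** 2
annular = ((r2 <= 30 ** 2) & (r2 >= 9 ** 2)).astype(float)  # wide-field module
narrow = [rect_pupil(48, 30, a) for a in (0, 120, 240)]     # three rotations
mtfs = [mtf2d(p) for p in [annular] + narrow]
```

Each narrow-field MTF extends furthest along its own long-aperture direction, which is exactly the complementary coverage that the fusion step later exploits.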

Figure 3. Optical modules and the Fourier-domain information captured by each module.

Figure 4. Exemplary images taken by the four optical modules (one wide field + three narrow fields). The target image is a reference 1 m GSD image that would be taken by a conventional optical payload.

2.2. Level-1 Design

In our level-1 study, we designed the wide-field camera to achieve a GSD of 5 m at an altitude of 500 km, employing detectors with a pixel pitch of 4.5 μm. Similarly, we applied a uniform design to the three narrow-field cameras, each positioned with an axis rotation of 120 degrees, enabling them to achieve a GSD of 1 m at the same altitude. As depicted in Fig. 4, it is evident that the imaging resolution of the three narrow-field cameras falls short of that of a conventional optical payload designed for 1 m GSD, due to the diffraction limit induced by their aperture segmentation. Nevertheless, the narrow-field cameras still capture some information at the 1 m GSD level, particularly along each of the long-aperture directions. Table 2 provides an overview of the design parameters for both the wide-field and narrow-field cameras. Additionally, Fig. 5 illustrates the optical layout of the wide-field camera, the unfolded narrow-field camera, and the combined system. As shown in Figs. 6 and 7, each optic was carefully designed within the diffraction limit, ensuring excellent imaging capabilities.

Figure 5. Optical layouts: (a) Wide-field camera, (b) unfolded narrow-field camera, and (c) combined (not to scale).

Figure 6. Design performance of the wide-field optics: (a) Spot diagram, (b) modulation transfer function (MTF).

Figure 7. Design performance of the narrow-field optics: (a) Spot diagram, (b) modulation transfer function (MTF).


Table 2. Parameters of the level-1 volume-sharing multi-aperture imaging (VMAI) payload

Camera        Parameter                 Value
Wide-field    GSD (m)                   5
              Focal Length (m)          0.45
              Aperture Type             Annular
              Aperture (mm)             Φ 140
              F-number                  3.2
              Obstruction Ratio         0.3
Narrow-field  GSD (m)                   1
              Focal Length (m)          2.25
              Aperture Type             Rectangular
              Aperture Dimension (mm)   110 × 70
              F-number                  20.5 × 32.1
              Aspect Ratio              1.6:1
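A quick sanity check (Python) confirms that the narrow-field f-numbers in Table 2 follow from the focal length divided by each side of the rectangular aperture:

```python
# Consistency check: the narrow-field f-numbers in Table 2 are the focal
# length divided by the two sides of the 110 mm x 70 mm aperture.
f = 2.25             # narrow-field focal length (m)
w, h = 0.110, 0.070  # rectangular aperture sides (m)
f_number_w = f / w   # -> 20.5 (long side)
f_number_h = f / h   # -> 32.1 (short side)
aspect = w / h       # -> about 1.57, quoted as 1.6:1 in Table 2
```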


In addition to these design considerations, we sought to utilize optical components that are readily available and straightforward to fabricate and align; in particular, we used mostly spherical surfaces, which are easier to manufacture and align. Employing these easily accessible components simplifies the overall manufacturing and alignment processes, making the proposed system more practical and feasible to implement. Figure 8 provides a three-dimensional view of the VMAI payload and illustrates how it fits within the satellite structure.

Figure 8. 3D models of the proposed design: (a) The volume-sharing multi-aperture imaging (VMAI) payload, and (b) a satellite with the VMAI payload.

In this study, the telephoto ratio offers key insights into the concept of miniaturization and its broader implications. While commonly associated with size reduction, miniaturization also encompasses volume and weight considerations. The telephoto ratio allows us to address not only physical size but also the comprehensive optimization of space and weight; this holistic approach is crucial for efficient space-borne imaging systems. The level-1 design aims for a small-satellite volume of under 80 U and a mass below 50 kg, with 1 U representing a standard micro-satellite unit volume (1,000 cm³). Figure 9 illustrates this compactness in comparison to some reported satellites.

Figure 9. Ground sampling distance (GSD) vs. satellite volume in units of 1 U (1,000 cm³).

2.3. Image Fusion/Reconstruction

Our main objective is to merge the four independently captured images from the respective cameras into a single reconstructed or fused image with the same ground sampling distance (GSD) as the three narrow-field optics. Additionally, we aim to enhance the ground resolving distance (GRD) of the reconstructed images to match the GSD of the narrow-field optics. By employing an image fusion process, we can effectively integrate the information from the different cameras, resulting in a comprehensive and high-resolution image. It is worth noting that the image fusion process considers the unique characteristics of each narrow-field image, including its rectangular aperture and specific rotation angle, thereby combining the more detailed information from each image.

The image reconstruction process can be accomplished using various techniques, such as a Fourier inversion of the image formation, or using specific methods like rectangular-aperture folded-optics imaging (RAFOI) [16] and rotating rectangular aperture imaging (RRAI) [17]. However, in our specific case, even though we assumed ideal optics in this study, we chose to employ deep learning approaches for the fusion process [29, 30]. By utilizing deep learning, we can effectively address residual errors and discrepancies that may arise from each optics system, including potential issues like line-of-sight mismatches. Furthermore, deep learning offers the advantage of potential on-chip implementation, allowing for in-situ processing directly on the satellite. This makes deep learning a favorable and practical choice for our application. Figure 10 shows the schematic flowchart of the deep learning process. For a comprehensive mathematical understanding of the process, further details can be found in a separate paper [31].
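For intuition only, the toy sketch below shows a naive frequency-domain fusion that keeps, at each spatial frequency, the strongest spectral component across co-registered module images. This is not the authors' deep-learning method (nor RAFOI/RRAI); it merely illustrates why modules with complementary Fourier support can be combined into one image:

```python
import numpy as np

def naive_fuse(images):
    """Toy frequency-domain fusion: for each spatial frequency, keep the
    spectral component with the largest magnitude across the co-registered
    module images. Illustration only -- not the paper's fusion method."""
    spectra = np.stack([np.fft.fft2(im) for im in images])
    best = np.argmax(np.abs(spectra), axis=0)            # winning module per frequency
    fused = np.take_along_axis(spectra, best[None], axis=0)[0]
    return np.real(np.fft.ifft2(fused))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
fused = naive_fuse([img, 0.5 * img])  # the stronger module wins everywhere
```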

Figure 10. Schematic flowchart of the deep learning process.

3.1. Overall

We are currently in the process of developing the optical hardware to validate the feasibility of VMAI. As of now, all the optical components have been manufactured, and we are progressing with the optical alignment. Once the optical assembly, integration, and testing (AIT) are completed, we will proceed with experimental testing to verify the effectiveness of this approach.

In the meantime, to gain initial insights and assess the viability of VMAI, we have conducted comprehensive computational simulations. These simulations allowed us to explore and evaluate the performance of the proposed approach under various conditions. While the experimental validation will be crucial to confirm practical implementation, the simulation results have been encouraging, providing valuable data to support the ability of VMAI to achieve our objectives.

3.2. Computational Validation

We first trained our deep learning model utilizing simulated images from the URBAN3D dataset [32, 33]. These images were generated by applying rescaling and image degradation techniques, incorporating the respective point spread functions (PSFs).

The process begins by generating simulated images from the 278 original images, each of size 1,024 × 1,024. These training data are then sampled by randomly cropping 128 square patches, rotating them, and applying flipping transformations. By training the model on this diverse range of input data, it can effectively learn and adapt to the characteristics of the wide-field and narrow-field cameras employed in our proposed system. Figure 4 provides representative samples of the simulated images.
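A minimal sketch of this augmentation pipeline, assuming the "128 square patches" are 128 × 128-pixel crops and that the rotations are 90-degree multiples (both our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, patch=128):
    """Randomly crop a patch x patch window, then apply a random
    90-degree rotation and random flips. Patch size 128 x 128 and
    90-degree rotations are our assumptions, not stated in the paper."""
    h, w = img.shape[:2]
    i = rng.integers(0, h - patch + 1)
    j = rng.integers(0, w - patch + 1)
    out = img[i:i + patch, j:j + patch]
    out = np.rot90(out, k=int(rng.integers(4)))
    if rng.integers(2):
        out = np.flipud(out)
    if rng.integers(2):
        out = np.fliplr(out)
    return out

sample = augment(np.zeros((1024, 1024)))  # one training patch
```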

To verify the effectiveness of our proposed image fusion process, we conducted an initial experiment using standard image targets, including ISO 12233 as in Fig. 11. Figure 12 illustrates the results of this experiment, including the original ground truth (GT) image, the camera images, and the reconstructed image obtained using the trained deep learning algorithm described earlier. While the reconstructed image may exhibit some loss of edges, it is noteworthy that the algorithm successfully restores the edges in all directions. Overall, this initial experiment demonstrates the potential of our image fusion approach to effectively combine information from multiple cameras and reconstruct high-resolution images.

Figure 11. Simulation results with the ISO 12233 test chart (horizontal direction).

Figure 12. Simulation images with peak signal-to-noise ratios/structural similarity indices (PSNR/SSIMs) using a Google Earth image.

To evaluate the quality of the restored images using real satellite data, we utilized the Google Earth dataset [34]. For this evaluation, we employed two widely used image quality metrics: peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [35]. These metrics enable us to quantitatively measure the performance of our algorithm in terms of image quality and fidelity by comparing the restored images to their corresponding ground truth images. By utilizing these evaluation metrics, we can comprehensively assess the effectiveness of our algorithm for preserving image details and maintaining similarity to the ground truth images.
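The two metrics can be sketched as follows; psnr follows the standard definition, while ssim_global is a simplified single-window variant of SSIM (the reference metric [35] applies the same formula over local sliding windows instead):

```python
import numpy as np

def psnr(gt, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB (standard definition)."""
    mse = np.mean((gt - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(gt, img, data_range=1.0):
    """Simplified single-window SSIM; the reference metric [35] uses
    local sliding windows rather than one global window."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = gt.mean(), img.mean()
    var_x, var_y = gt.var(), img.var()
    cov = ((gt - mu_x) * (img - mu_y)).mean()
    return (((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

gt = np.linspace(0.0, 0.5, 64 * 64).reshape(64, 64)  # toy ground truth
```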

In our simulation, we assumed the absence of unknown effects on the images, such as noise, residual aberrations, distortion, and other potential factors, which will be investigated further in future studies. Nevertheless, we are confident that our algorithm can effectively handle these effects with sufficient augmented data. The preliminary analysis presented in the subsequent section showcases the algorithm’s functionality even when certain image noises are introduced. This robustness and adaptability underline the potential of our VMAI approach in space-borne imagers. As we continue to refine and optimize the system, we are committed to addressing real-world imaging challenges and expanding its applications. Our work seeks to advance high-resolution imaging technologies, paving the way for novel opportunities in remote sensing and surveillance missions.

4.1. Low SNR

The proposed VMAI method presents some notable challenges, particularly concerning irradiance collection and the resulting low SNR, especially for the narrow-field cameras, due to aperture segmentation. We recognize these drawbacks and are fully committed to overcoming them through extensive research and development efforts.

As an initial step, we have conducted estimations of the SNR for the wide-field cameras based on a first-order approximation using time-delay-and-integration (TDI) image sensors [36]. The results are presented in Table 3, which illustrates the SNR estimation with an increasing number of TDI steps.


Table 3. Signal-to-noise ratio (SNR) estimation for wide-field camerasa)

Number of TDI Steps   Signal-to-noise Ratio (SNR)
1                     8.9
2                     12.5
4                     17.7
8                     25.0
16                    35.4
32                    50.1
64                    70.8
128                   100.2
256                   141.7

a)Standard flux of 85.9 W/m²/μm/sr applied.



We are encouraged by the potential implementation of TDI sensors to significantly improve the SNR in our VMAI system. Specifically, by incorporating TDI technology and performing 128 integration steps, we anticipate achieving an SNR of approximately 100, which is required for many space applications. This promising approach has the potential to mitigate the SNR limitations arising from aperture segmentation in the narrow-field cameras.
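The first-order approximation behind Table 3 is the shot-noise-limited scaling SNR(N) ≈ SNR(1) × √N, which can be verified in a few lines:

```python
import math

# Shot-noise-limited scaling behind Table 3: SNR grows as the square
# root of the number of TDI steps (first-order approximation).
snr_1 = 8.9  # single-step SNR from Table 3
snr = {steps: snr_1 * math.sqrt(steps)
       for steps in (1, 2, 4, 8, 16, 32, 64, 128, 256)}
# e.g. snr[16] = 35.6 (Table 3: 35.4), snr[128] ~ 100.7 (Table 3: 100.2)
```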

As part of our initial validation, we conducted tests to assess how well our deep learning algorithm handles noisy VMAI images. We trained and tested the algorithm using simulated data with added normalized Gaussian noise N(m, σ), specifically two cases: N(0.02, 0.01) and N(0.1, 0.05). Figure 13 illustrates the results obtained from these tests.
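A minimal sketch of this noise injection, assuming images normalized to [0, 1] and clipping back to that range after adding noise (the clipping is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, mean, sigma):
    """Add normalized Gaussian noise N(mean, sigma); images are assumed
    normalized to [0, 1], and clipping afterwards is our assumption."""
    return np.clip(img + rng.normal(mean, sigma, img.shape), 0.0, 1.0)

flat = np.full((64, 64), 0.5)
noisy_low = add_gaussian_noise(flat, 0.02, 0.01)   # first test case
noisy_high = add_gaussian_noise(flat, 0.10, 0.05)  # second test case
```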

Figure 13. Simulation images with peak signal-to-noise ratios/structural similarity indices (PSNR/SSIMs) using a Google Earth image.

Our preliminary investigation demonstrates that the network can be effectively trained under the first Gaussian noise N(0.02, 0.01), which corresponds to the 3% noise level, equivalent to the SNR value achievable with 16 TDI steps. This outcome indicates that the algorithm performs well at handling noise at this level and can accurately reconstruct high-resolution images in such conditions. However, it is important to note that as the noise level in the training data increases, the network’s learning behavior tends to prioritize de-noising rather than de-blurring. Consequently, the network may face challenges when trying to reconstruct high-frequency signals without the use of TDI technology, as indicated in Fig. 13.

While TDI enhances SNR, it is essential to comprehensively investigate potential image degradation factors that may arise due to its application. One significant aspect we intend to explore is the potential impact on the MTF caused by satellite jitter. As the payload is subject to satellite movement and vibrations during operation, TDI can interact with these motion-induced effects, potentially affecting the system’s ability to capture fine spatial details accurately. Our ongoing research aims to analyze and quantify the degradation in MTF resulting from satellite jitter when employing TDI. This investigation will enable us to develop strategies to minimize any adverse effects and optimize the implementation of TDI to achieve the desired high-resolution imaging performance while effectively managing image quality in dynamic operational scenarios.

To overcome this limitation, our future research will focus on refining the network’s training and testing process to make it more robust in handling higher noise levels, and further improving its performance for a broader range of practical scenarios. By continuously exploring innovative approaches and conducting in-depth investigations, we aim to optimize the deep learning algorithm and unlock its full potential to advance the capabilities of VMAI for space-borne applications.

4.2. Optical Assembly and Alignment of Multi-cameras

The optical assembly and alignment of multi-camera systems constitute a critical aspect of the VMAI approach, as integrating multiple cameras within a compact space presents unique challenges. While conventional single-camera optical systems have well-established alignment procedures, the introduction of multiple cameras introduces increased complexities that demand innovative solutions.

Precise alignment of the wide-field camera and three narrow-field cameras in the VMAI system is imperative to ensure optimal performance. Unlike traditional single-camera setups, the alignment process must meticulously account for the interplay of multiple optical paths. As the number of cameras increases, the potential for misalignment and optical interference escalates, heightening the challenges faced during assembly.

These increased difficulties in optical alignment arise from factors such as varying optical characteristics among cameras, potential mechanical deviations, and the need to align multiple apertures coherently. Furthermore, the alignment process must not compromise the payload’s compactness, necessitating the development of specialized alignment tools and techniques. Prioritizing considerations such as achieving consistent focal planes, minimizing parallax errors, and preserving the relative positions of apertures is paramount in this context.

To address these challenges, the optical assembly and alignment process employs a comprehensive approach that combines precision manufacturing, sophisticated calibration methodologies, and advanced simulation tools. This ensures alignment of each camera’s optical axis with predefined precision, mitigating optical interference and preserving the integrity of combined imagery. Computational simulations demonstrate the capability of the proposed image fusion algorithm to manage residual aberrations and optical misalignment. As of the time of writing, optical fabrication has been completed, and the AIT process is underway, to be detailed in a forthcoming publication.

In conclusion, this paper introduces VMAI as a promising and innovative approach for reducing the volume of space-borne imagers. The proposed VMAI system aims to achieve high-resolution ground spatial imagery using deep learning methods while significantly reducing the overall payload volume, compared to conventional approaches.

The presented phase-1 design targets a 1-meter GSD at an altitude of 500 km, making it suitable for specific applications on small satellite platforms, especially surveillance missions. The level-1 design achieved a telephoto ratio equivalent to that of the shortest optics type, the Korsch, while employing conventional on-axis optical elements, primarily spherical surfaces.

As a first step in the validation of this approach, we assessed the VMAI system using computational simulations, highlighting its potential to achieve the desired 1-meter GSD performance while effectively managing challenges related to lower SNR values resulting from aperture segmentation. While the optical imaging capability may not surpass conventional approaches, the VMAI system’s compactness and high-resolution imaging capabilities open new possibilities for future space-borne imagers.

Currently, we are planning an experimental verification using a prototype of the level-1 VMAI payload. Additionally, our ongoing research will focus on further reducing the telephoto ratio while optimizing SNR management to maximize the system’s imaging capabilities. As VMAI represents a promising solution for compact, high-resolution imaging on micro- or small-satellite platforms, future developments hold the potential to advance space imaging technologies and create new opportunities in remote sensing and surveillance missions.

This work was supported by the Challengeable Future Defense Technology Research and Development Program through the Agency for Defense Development (ADD), funded by the Defense Acquisition Program Administration in 2021 (No. 915020201).

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

1. Q. Zhao, L. Yu, Z. Du, D. Peng, P. Hao, Y. Zhang, and P. Gong, “An overview of the applications of Earth observation satellite data: Impacts and future trends,” Remote Sens. 14, 1863 (2022).
2. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Co., USA, 2005).
3. S. E. Qian, Optical Payloads for Space Missions (Wiley, USA, 2015).
4. C. Toth and G. Jóźków, “Remote sensing platforms and sensors: A survey,” ISPRS J. Photogramm. Remote Sens. 115, 22-36 (2016).
5. R. Roy and J. Miller, “Miniaturization of image sensors: The role of innovations in complementary technologies in overcoming technological trade-offs associated with product innovation,” J. Eng. Technol. Manag. 44, 58-69 (2017).
6. R. N. Wilson, Reflecting Telescope Optics I: Basic Design Theory and Its Historical Development, 2nd ed. (Springer, Berlin, Germany, 2004).
7. D. Korsch, Reflective Optics (Academic Press, USA, 1991).
8. V. Costes, G. Cassar, and L. Escarrat, “Optical design of a compact telescope for the next generation Earth observation system,” Proc. SPIE 10564, 1056516 (2017).
9. S. Grabarnik, M. Taccola, L. Maresi, V. Moreau, L. de Vos, J. Versluys, and G. Gubbels, “Compact multispectral and hyperspectral imagers based on a wide field of view TMA,” Proc. SPIE 10565, 105605 (2017).
10. B. Fan, W.-J. Cai, and Y. Huang, “Design and test of a high performance off-axis TMA telescope,” Proc. SPIE 10564, 1056417 (2017).
11. S.-T. Chang, Y.-C. Lin, C.-C. Lien, T.-M. Huang, H.-L. Tsay, and J.-J. Miau, “The design and assembly of a long-focal-length telescope with aluminum mirrors,” Proc. SPIE 11180, 111806U (2019).
12. M. Metwally, T. M. Bazan, and F. Eltehamy, “Design of very high-resolution satellite telescopes part I: Optical system design,” IEEE Trans. Aerosp. Electron. Syst. 56, 1202-1208 (2020).
13. J.-I. Bae, H.-B. Lee, J.-W. Kim, and M.-W. Kim, “Design of all-SiC lightweight secondary and tertiary mirrors for use in spaceborne telescopes,” Curr. Opt. Photonics 6, 60-68 (2022).
14. R. L. Kendrick, A. Duncan, C. Ogden, J. Wilm, and S. T. Thurman, “Segmented planar imaging detector for EO reconnaissance,” in Computational Optical Sensing and Imaging 2013 (Optica Publishing Group, 2013), paper CM4C.1.
15. G. Carles, G. Muyo, N. Bustin, A. Wood, and A. R. Harvey, “Compact multi-aperture imaging with high angular resolution,” J. Opt. Soc. Am. A 32, 411-419 (2015).
16. G. Carles and A. R. Harvey, “Multi-aperture imaging for flat cameras,” Opt. Lett. 45, 6182-6185 (2020).
17. G. Lv, H. Xu, H. Feng, Z. Xu, H. Zhou, Q. Li, and Y. Chen, “A full-aperture imaging synthesis method for the rotating rectangular aperture system using Fourier spectrum restoration,” Photonics 8, 522 (2021).
18. D. J. Brady, W. Pang, H. Li, Z. Ma, Y. Tao, and X. Cao, “Parallel cameras,” Optica 5, 127-137 (2018).
19. E. Tseng, S. Colburn, J. Whitehead, L. Huang, S.-H. Baek, A. Majumdar, and F. Heide, “Neural nano-optics for high-quality thin lens imaging,” Nat. Commun. 12, 6493 (2021).
20. X. Liu, J. Deng, K. F. Li, M. Jin, Y. Tang, X. Zhang, X. Cheng, H. Wang, W. Liu, and G. Li, “Optical telescope with Cassegrain metasurfaces,” Nanophotonics 9, 3263-3269 (2020).
21. K. Zhang, C. Yang, X. Li, C. Zhou, and R. Zhong, “High-efficiency microsatellite-using super-resolution algorithm based on the multi-modality super-CMOS sensor,” Sensors 20, 4019 (2020).
22. L. Ma, Y. Liu, X. Zhang, Y. Ye, G. Yin, and B. A. Johnson, “Deep learning in remote sensing applications: A meta-analysis and review,” ISPRS J. Photogramm. Remote Sens. 152, 166-177 (2019).
23. S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: A survey of the state of the art,” Inform. Fusion 33, 100-112 (2017).
24. H. Kaur, D. Koundal, and V. Kadyan, “Image fusion techniques: A survey,” Arch. Comput. Methods Eng. 28, 4425-4447 (2021).
25. R. D. Fiete and B. D. Paul, “Modeling the optical transfer function in the imaging chain,” Opt. Eng. 53, 083013 (2014).
26. A. M. John, K. Khanna, R. R. Prasad, and L. G. Pillai, “A review on application of Fourier transform in image restoration,” in Proc. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) (Palladam, India, 2020), pp. 389-397.
  27. J. Su, B. Xu, and H. Yin, “A survey of deep learning approaches to image restoration,” Neurocomputing 487, 46-65 (2022).
    CrossRef
  28. H.-W. Chen, “Spatial resolution enhancement of oversmapled images using regression decomposition and systhesis,” ISPRS Archives XLVI-4/W3-2021, 71-77 (2021).
    CrossRef
  29. J. Benediksson, J. Chanussot, and W. Moon, “Advances in very-high-resolution remote sensing,” Proc. IEEE. 101, 566-569 (2013).
    CrossRef
  30. J. Liang, G. Sun, K. Zhang, L. van Gool, and R. Timofte, “Mutual affine network for spatially variant kernel estimation in blind image super-resolution,” in Proc. IEEE/CVF International Conference on Computer Vision (Montréal, Canada, Oct. 11-17, 2021), pp. 4096-4105.
    CrossRef
  31. G. Hwang, C. Song, T. Lee, H. Na, and M. Kang, “Multi-aperture image processing using deep learning,” J. Korean Soc. Indust. Appl. Math. 27, 56-74 (2023).
  32. L. Lin, Y. Liu, Y. Hu, X. Yan, K. Xie, and H. Huang, “Capturing, reconstructing, and simulating: The urbanscene3D dataset,” (Visual Computing Research Center, Shenzhen University, 2022), https://vcc.tech/UrbanScene3D (Accessed Date: May. 1, 2022).
  33. L. Lin, U. Liu, Y. Hu, X. Yan, K. Xie, and H. Huang, “Capturing, reconstructing, and simulating: The urbanscene3D dataset,” in Proc. Computer Vision - ECCV 2022: 17th European Conference (Tel Aviv, Israel, October 23-27, 2022), Part VIII, pp. 93-109.
    CrossRef
  34. Earth Engine Data Catalog, “A planetary-scale platform for Earth science data & analysis,” (Google), https://developers.google.com/earth-engine/datasets (Accessed Date: May. 1, 2022).
  35. A. Horé and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in Proc. 20th International Conference on Pattern Recognition (Istanbul, Turkey, Aug. 23-26, 2010), pp. 2366-2369.
    CrossRef
  36. Teledyne Imaging, “TDI imagers for space,” (Teledyne Technologies Inc.), https://www.teledyneimaging.com/en/aerospace-and-defense/products/tdi-imagers-for-space/ (Accessed Date: Jul. 1, 2023).


Keywords: Deep-learning, Earth observation, Image fusion, Volume-sharing multi-aperture imaging

I. INTRODUCTION

Space-borne optical payloads play a critical role in Earth observation and remote sensing applications, providing valuable information for environmental monitoring, disaster management, resource management, and national security [1]. However, the diffraction limit ultimately determines the achievable resolution of imaging optics, including those of space-borne optical payloads [2]. The Rayleigh criterion defines the diffraction limit of the spatial resolution of an optical sensor as the radius of the Airy disk (R), which is proportional to the wavelength of light (λ) and the focal length of the optics (f), and inversely proportional to the aperture diameter (D), as given by Eq. (1) below:

R = 1.22 × λ × f / D.    (1)

To achieve high spatial resolution in space-based imaging, a commonly used approach is to couple single large-aperture optics with one or more high-quality imaging detectors [3]. With this approach, the two key system design parameters of the payload, the ground sampling distance (GSD) and the swath width (SW), or its angular extent, the field of view (FOV), can be determined from the geometrical relationships between the detector pixel pitch (p), the number of pixels (N), the focal length of the payload (f), and the altitude (H) at which the payload operates, as given by Eqs. (2)-(4).

GSD = (H × p) / f,    (2)
SW = N × GSD,    (3)
FOV = SW / H = (N × p) / f.    (4)

Here, the ground sampling distance (GSD) is the distance between the centers of adjacent pixels on the ground in an image, and the swath width (SW) indicates the width of the imaged area on the ground covered by a single optics or payload.

Although GSD represents a critical aspect of the payload’s imaging performance, it differs somewhat from the smallest resolvable distance between two separate ground features in an image, often referred to as the ground resolving distance (GRD). Practitioners often make GSD equivalent to GRD by matching the diffraction limit (R) to the detector pitch (p). Using this approach, the primary optics design properties of an optical payload for Earth observation can be easily determined, including the focal length (f), the minimum aperture diameter (D), the maximum f-number (f/#), and the FOV, which is the angular extent of the swath width (SW), as in Eqs. (5)-(7) below.

f = (H × p) / GSD,    (5)
D = (1.22 × λ × H) / GSD,    (6)
f/# = p / (1.22 × λ).    (7)
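Eqs. (5)-(7) can be evaluated directly. In the sketch below, the 0.6 μm wavelength and the 7.5 μm pixel pitch are assumptions for illustration (the wavelength is chosen because it reproduces the 0.37 m aperture listed in Table 1, and the pixel pitch matches the 2015 row):

```python
def payload_design(gsd, altitude, pixel_pitch, wavelength):
    """First-order payload parameters from Eqs. (5)-(7)."""
    focal_length = altitude * pixel_pitch / gsd    # Eq. (5)
    aperture = 1.22 * wavelength * altitude / gsd  # Eq. (6), minimum diameter
    f_number = pixel_pitch / (1.22 * wavelength)   # Eq. (7)
    return focal_length, aperture, f_number

# 1 m GSD at 500 km altitude; wavelength and pixel pitch are assumed values.
f, d, f_num = payload_design(gsd=1.0, altitude=500e3, pixel_pitch=7.5e-6,
                             wavelength=0.6e-6)
```

Note that Eq. (7) is simply Eq. (5) divided by Eq. (6), so the f-number is fixed by the pixel pitch and wavelength alone.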

Given the constraints of space-proven imaging detectors and operating altitudes, there is limited flexibility when determining the parameters presented above, particularly in the selection of optics types. Table 1 provides illustrative system specifications of an imaging payload targeting a 1 m GSD at 500 km. It is important to highlight the gradual reduction in pixel pitch, which decreases the required focal length [4, 5]; the values for 2025 and 2030 are projected from this ongoing trend. This continuing miniaturization of pixel size necessitates careful investigation of its potential impact on imaging performance, notably in maintaining sufficient full-well capacity to capture varying light conditions effectively.


System specifications of an optical payload for 1 m GSDa).

Year | Pixel (μm) | Focal Length (m) | Aperture Diameter (m)
2000 | 15.0 | 7.50 | 0.37
2005 | 12.5 | 6.25 | 0.37
2010 | 10.0 | 5.00 | 0.37
2015 | 7.5 | 3.75 | 0.37
2020 | 6.5 | 3.25 | 0.37
2025b) | 5.4 | 2.70 | 0.37
2030b) | 4.5 | 2.25 | 0.37

a)At 500 km altitude. b)Projected from the ongoing trend.



Once the system parameters are determined, optical designers have limited freedom in selecting among three available optics types: Cassegrain, which includes the Ritchey-Chrétien (RC); the three-mirror anastigmat (TMA); and Korsch [6-13], as depicted in Fig. 1. When it comes to compactness, the Cassegrain and Korsch configurations are highly favored, with Korsch providing the shortest configuration. Regarding the telephoto ratio (TR), defined as the ratio of total track length (T) to focal length (f), Cassegrain offers a range of 0.4 to 0.6, while Korsch provides a narrower range of 0.15 to 0.2. Notably, achieving the shortest TR value in the Korsch design requires a complex configuration comprising off-axis aspherical mirrors following a fast Cassegrain telescope.

Figure 1. The three optics types most used in remote sensing: (a) Cassegrain, (b) TMA, and (c) Korsch.
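To put these telephoto ratios in perspective, they translate into total track lengths as follows; the 3.75 m focal length is an assumed example taken from the 2015 row of Table 1:

```python
# Total track length T = TR x f, illustrated for an assumed 3.75 m focal length.
f = 3.75
track_m = {
    "Cassegrain": (0.4 * f, 0.6 * f),   # TR 0.4-0.6
    "Korsch": (0.15 * f, 0.2 * f),      # TR 0.15-0.2
    "VMAI target": (0.1 * f, 0.1 * f),  # TR ~0.1, the goal introduced below
}
```

Even the best conventional (Korsch) configuration still needs a track length of over half a meter for this focal length, which motivates the search for alternatives.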

Recently, the use of small satellites has increased, especially for remote sensing. As a result, there is growing demand for miniaturized, more compact optical payloads for high-resolution remote sensing. Researchers have proposed various methods to address these issues, including the Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) [14], multi-aperture or rotating-aperture imaging [15-18], metasurface optics [19, 20], and super-resolution algorithms [21]. However, these methods, on their own, have limited value for long-distance imaging, or exhibit potential errors or artifacts. Accordingly, alternative methods are needed to provide compact imaging for space-borne payloads.

To tackle this challenge, we present a novel concept called volume-sharing multi-aperture imaging (VMAI), which leverages conventional optics fabrication and assembly techniques to achieve high ground spatial resolution in a smaller volume than conventional approaches. Our approach combines advances in multi-camera image fusion [22-24], rotating multi-aperture imaging [16, 17], and folded optics. With this approach, we aim to reach a telephoto ratio of approximately 0.1, surpassing the conventionally achievable limit of 0.15-0.2. As an intermediate step in this VMAI payload development, we are currently developing a phase-1 design that targets a 1-meter GSD at 500 km altitude with a telephoto ratio of 0.15, equivalent to the shortest value attainable with the Korsch configuration.

In this paper, we first present the phase-1 design. We then describe the synthetic image generation process, which uses deep learning, and report computational simulation results that confirm the effectiveness of the proposed method. Finally, we address challenges such as lower SNR values.

II. METHOD

2.1. Concept

When considering optical payloads, the principles of Fourier optics [2, 25, 26] remind us that the amount of information an optical system can capture is dictated by the size and shape of its aperture, often referred to as the entrance pupil or Fourier plane. As seen in Table 1, the minimum aperture required to achieve 1-meter GSD at 500 km altitude remains constant, regardless of the focal length.

This insight leads us to propose VMAI, a novel approach that aims to shrink the imaging payload beyond current limitations. Leveraging recent advances in multi-camera fusion and deep-learning algorithms, VMAI partitions a single circular or annular aperture into multiple smaller apertures, each equipped with dedicated imaging optics and a separate imaging detector. Deep learning is then harnessed to combine all the individual images into a high-resolution composite. By folding these individual apertures while ensuring no optical interference, VMAI achieves a remarkable level of compactness, paving the way for more compact space-borne imagers.

In our current implementation, referred to as the level-1 design, the VMAI payload targets a 1 m GSD at 500 km altitude. Optical payloads of around 1 m GSD are extensively used in various remote sensing applications, including urban area mapping and agricultural studies [27]. The design integrates one annular-aperture wide-field camera (or wide-field optics module) with three narrow-field cameras (or narrow-field optics modules) equipped with rectangular apertures. These narrow-field cameras are strategically positioned around the centered annular aperture. To keep each image line aligned with the centered wide-field camera in response to satellite movement, we employ line-of-sight rotating prisms, specifically Risley prisms, in each narrow-field camera. Figure 2 provides a visual representation of the VMAI system, (a) depicting the unfolded configuration and (b) illustrating the folded configuration, showcasing the compactness achieved through this design.

Figure 2. Illustration of volume-sharing multi-aperture imaging (VMAI) with one wide-field and three narrow-field cameras: (a) Unfolded and (b) compactly folded.

The wide-field camera in our proposed system is equipped with conventional telescope optics, which capture a broad but less detailed view of the scene. It serves as the foundation for the subsequent image fusion process. The three narrow-field cameras (or narrow-field optics modules), on the other hand, operate collectively as rotating rectangular aperture imaging (RRAI) optics with three rotations [16, 17]. To overcome the resolution limit imposed by diffraction, the narrow-field optics in our system employ an over-sampling approach [28]. The information captured by the payload can be represented by the two-dimensional modulation transfer function (MTF), as shown in Fig. 3. Figure 4 shows exemplary images taken by the four optical modules.

Figure 3. Optical modules and the Fourier-domain information captured by each module.

Figure 4. Exemplary images taken by the four optical modules (one wide field + three narrow fields). The target image is a reference 1 m GSD image that would be taken by a conventional optical payload.
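The Fourier-domain view in Fig. 3 can be sketched numerically: for an incoherent system, the MTF is the normalized autocorrelation of the pupil. The sketch below builds the annular pupil and one rectangular pupil from the Table 2 dimensions, with an assumed sampling of 1 mm per pixel:

```python
import numpy as np

def mtf_from_pupil(pupil):
    """Incoherent-system MTF: the normalized autocorrelation of the pupil,
    computed via the Fourier (Wiener-Khinchin) route."""
    ac = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    return ac / ac.max()

n = 512  # grid size; 1 pixel = 1 mm is an assumed sampling
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# Annular wide-field pupil: 140 mm diameter, 0.3 obstruction ratio (Table 2).
r = np.hypot(x, y)
annular = ((r <= 70) & (r >= 0.3 * 70)).astype(float)

# One 110 mm x 70 mm rectangular narrow-field pupil (Table 2).
rect = ((np.abs(x) <= 55) & (np.abs(y) <= 35)).astype(float)

mtf_ann = mtf_from_pupil(annular)
mtf_rect = mtf_from_pupil(rect)
```

The rectangular pupil's MTF support is wider along the long-aperture axis than along the short one, which is why three rotations are needed to emulate the coverage of a circular aperture.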

2.2. Level-1 Design

In our level-1 study, we designed the wide-field camera to achieve a GSD of 5 m at an altitude of 500 km, employing detectors with a pixel pitch of 4.5 μm. Similarly, we applied a uniform design to the three narrow-field cameras, each positioned with an axis rotation of 120 degrees, enabling them to achieve a GSD of 1 m at the same altitude. As depicted in Fig. 4, the imaging resolution of the three narrow-field cameras falls short of a conventional optical payload designed for 1 m GSD because of the diffraction induced by their aperture segmentation. Nevertheless, the narrow-field cameras still capture some information at the 1 m GSD level, particularly along each of the long-aperture directions. Table 2 provides an overview of the design parameters for both the wide-field and narrow-field cameras. Additionally, Fig. 5 illustrates the optical layouts of the wide-field camera, the unfolded narrow-field camera, and the combined system. As shown in Figs. 6 and 7, each optic was carefully designed within the diffraction limit, ensuring excellent imaging capabilities.

Figure 5. Optical layouts: (a) Wide-field camera, (b) unfolded narrow-field camera, and (c) combined (not in scale).

Figure 6. Design performances of the wide-field optics design: (a) Spot diagram, (b) modulation transfer function (MTF).

Figure 7. Design performances of the narrow-field optics design: (a) Spot diagram, (b) modulation transfer function (MTF).


Parameters of the level-1 volume-sharing multi-aperture imaging (VMAI) payload.

Camera | Parameter | Value
Wide-field | GSD (m) | 5
Wide-field | Focal Length (m) | 0.45
Wide-field | Aperture Type | Annular
Wide-field | Aperture (mm) | Φ 140
Wide-field | F-number | 3.2
Wide-field | Obstruction Ratio | 0.3
Narrow-field | GSD (m) | 1
Narrow-field | Focal Length (m) | 2.25
Narrow-field | Aperture Type | Rectangular
Narrow-field | Aperture Dimension (mm) | 110 × 70
Narrow-field | F-number | 20.5 × 32.1
Narrow-field | Aspect Ratio | 1.6:1


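As a consistency check, the Table 2 values can be verified against Eq. (2) and the definition f/# = f/D, assuming the 4.5 μm pixel pitch and 500 km altitude stated in Section 2.2:

```python
# Cross-check of Table 2 against Eq. (2) (GSD = H x p / f) and f/# = f / D,
# assuming the 4.5 um pixel pitch and 500 km altitude of Section 2.2.
H, p = 500e3, 4.5e-6

gsd_wide = H * p / 0.45       # wide-field focal length 0.45 m  -> 5 m GSD
gsd_narrow = H * p / 2.25     # narrow-field focal length 2.25 m -> 1 m GSD

fnum_wide = 0.45 / 0.140                    # annular aperture, 140 mm
fnum_narrow = (2.25 / 0.110, 2.25 / 0.070)  # rectangular aperture, 110 x 70 mm
```

Both GSD values and all three f-numbers round to the tabulated figures, confirming that the table is internally consistent with the first-order geometry.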
In addition to these design considerations, we sought to use optical components that are readily available and straightforward to fabricate and align; in particular, we used mostly spherical surfaces. Employing such easily accessible components simplifies the overall manufacturing and alignment processes, making the proposed system more practical and feasible to implement. Figure 8 provides a three-dimensional view of the VMAI payload and illustrates how it fits within the satellite structure.

Figure 8. 3D models of the proposed design: (a) The volume-sharing multi-aperture imaging (VMAI) payload, and (b) a satellite with the VMAI payload.

In this study, the telephoto ratio offers key insights into miniaturization and its broader implications. While commonly associated with size reduction, miniaturization also encompasses volume and weight considerations. The telephoto ratio allows us to address not only physical size but also the comprehensive optimization of space and weight, a holistic approach that is crucial for efficient space-borne imaging systems. The level-1 design aims for a small satellite volume of under 80 U and a mass below 50 kg, with 1 U representing a standard micro-satellite volume unit (1,000 cm³). Figure 9 illustrates this compactness in comparison to some reported satellites.

Figure 9. Ground sampling distance (GSD) vs. satellite volume in units of 1 U (1,000 cm³).

2.3. Image Fusion/Reconstruction

Our main objective is to merge the four independently captured images from the respective cameras into a single reconstructed or fused image with the same ground sampling distance (GSD) as the three narrow-field optics. Additionally, we aim to enhance the ground resolution distance (GRD) of the reconstructed images to match the GSD of the narrow-field optics. By employing an image fusion process, we can effectively integrate the information from the different cameras, resulting in a comprehensive and high-resolution image. It is worth noting that the image fusion process considers the unique characteristics of each narrow-field image, including its rectangular aperture and specific rotation angles, thereby combining the more detailed information from each image.

The image reconstruction can be accomplished using various techniques, such as a Fourier inversion of the image formation, or specific methods like rectangular-aperture folded-optics imaging (RAFOI) [16] and rotating rectangular aperture imaging (RRAI) [17]. In our case, even though we assumed ideal optics in this study, we chose to employ deep learning for the fusion process [29, 30]. Deep learning can effectively absorb residual errors and discrepancies that may arise from each optics system, including potential issues like line-of-sight mismatches. Furthermore, it offers the advantage of potential on-chip implementation, allowing for in-situ processing directly on the satellite. This makes deep learning a favorable and practical choice for our application. Figure 10 shows a schematic flowchart of the deep learning process; for a comprehensive mathematical treatment, further details can be found in a separate paper [31].
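To make the Fourier-inversion alternative concrete, the sketch below performs a per-frequency least-squares (Wiener-style) synthesis of three synthetically blurred channels with rotated rectangular-aperture MTFs. This is an illustrative stand-in, not the deep learning fusion actually used, and the aperture geometry and cutoff frequencies are simplified assumptions:

```python
import numpy as np

def rect_mtf(n, half_u, half_v, angle_deg):
    """Triangle-profile MTF of a rectangular aperture, rotated in the frequency
    plane. half_u/half_v are cutoffs (cycles/sample) along the aperture axes."""
    f = np.fft.fftfreq(n)
    u, v = np.meshgrid(f, f, indexing="xy")
    t = np.deg2rad(angle_deg)
    ur = u * np.cos(t) + v * np.sin(t)
    vr = -u * np.sin(t) + v * np.cos(t)
    return (np.clip(1 - np.abs(ur) / half_u, 0, None)
            * np.clip(1 - np.abs(vr) / half_v, 0, None))

rng = np.random.default_rng(0)
truth = rng.random((256, 256))  # stand-in scene with a broad spectrum

# Three channels rotated by 0/120/240 degrees, mimicking the rotated apertures.
otfs = [rect_mtf(256, 0.5, 0.18, a) for a in (0, 120, 240)]
blurred = [np.real(np.fft.ifft2(np.fft.fft2(truth) * h)) for h in otfs]

# Per-frequency least-squares (Wiener-style) synthesis over all channels.
eps = 1e-3  # regularization where no channel carries signal
numer = sum(h * np.fft.fft2(b) for h, b in zip(otfs, blurred))
fused = np.real(np.fft.ifft2(numer / (sum(h ** 2 for h in otfs) + eps)))

def mse(a, b):
    return float(np.mean((a - b) ** 2))
```

Because the three rotated passbands jointly cover most of the frequency plane, the fused image recovers detail that no single channel retains; the deep learning approach additionally absorbs the residual optical errors such an inversion cannot model.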

Figure 10. Schematic flowchart of the deep learning process.

III. COMPUTATIONAL VALIDATION

3.1. Overall

We are currently in the process of developing the optical hardware to validate the feasibility of VMAI. As of now, all the optical components have been manufactured, and we are progressing with the optical alignment. Once the optical assembly, integration, and testing (AIT) are completed, we will proceed with experimental testing to verify the effectiveness of this approach.

In the meantime, to gain initial insights and assess the viability of VMAI, we have conducted comprehensive computational simulations. These simulations allowed us to explore and evaluate the performance of the proposed approach under various conditions. While the experimental validation will be crucial to confirm practical implementation, the simulation results have been encouraging, providing valuable data to support the ability of VMAI to achieve our objectives.

3.2. Computational Validation

We first trained our deep learning model utilizing simulated images from the UrbanScene3D dataset [32, 33]. These images were generated by applying rescaling and image degradation techniques, incorporating the respective point spread functions (PSFs).

The process begins by generating simulated images from the 278 original images, each of size 1,024 × 1,024 pixels. The training data are then sampled by randomly cropping 128 square patches, rotating them, and applying flipping transformations. By training the model on this diverse range of input data, it can effectively learn and adapt to the characteristics of the wide-field and narrow-field cameras employed in our proposed system. Figure 4 provides representative samples of the simulated images.
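The crop/rotate/flip sampling above can be sketched as follows; reading the "128" as the square patch side length in pixels is our assumption:

```python
import numpy as np

def augment(image, patch=128, rng=None):
    """One training sample: random square crop, random 90-degree rotation,
    and random horizontal/vertical flips."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    top = int(rng.integers(0, h - patch + 1))
    left = int(rng.integers(0, w - patch + 1))
    out = image[top:top + patch, left:left + patch]
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    if rng.integers(0, 2):
        out = np.flip(out, axis=0)
    if rng.integers(0, 2):
        out = np.flip(out, axis=1)
    return out

# Demo on a synthetic 1,024 x 1,024 image, matching the stated source size.
demo = np.arange(1024 * 1024, dtype=float).reshape(1024, 1024)
sample = augment(demo, rng=np.random.default_rng(1))
```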

To verify the effectiveness of our proposed image fusion process, we conducted an initial experiment using standard image targets, including the ISO 12233 chart shown in Fig. 11. Figure 12 illustrates the results of this experiment, including the original ground truth (GT) image, the camera images, and the reconstructed image obtained using the trained deep learning algorithm described earlier. While the reconstructed image may exhibit some loss of edges, it is noteworthy that the algorithm restores edges in all directions. Overall, this initial experiment demonstrates the potential of our image fusion approach to combine information from multiple cameras and reconstruct high-resolution images.

Figure 11. Simulation results with ISO 12233 test chart (horizontal direction).

Figure 12. Simulation images with peak signal-to-noise ratio/structural similarity index (PSNR/SSIM) values using a Google Earth image.

To evaluate the quality of the restored images using real satellite data, we utilized the Google Earth dataset [34]. For this evaluation, we employed two widely used image quality metrics: peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [35]. These metrics enable us to quantitatively measure the performance of our algorithm in terms of image quality and fidelity by comparing the restored images to their corresponding ground truth images. By utilizing these evaluation metrics, we can comprehensively assess the effectiveness of our algorithm for preserving image details and maintaining similarity to the ground truth images.
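For reference, minimal forms of the two metrics can be written directly. Note that the SSIM below is computed over a single global window, a simplification of the sliding-window form used by standard implementations:

```python
import math
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = float(np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2))
    return math.inf if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Single-window (global) SSIM; a simplification of the usual local form."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For identical images, PSNR diverges to infinity and SSIM equals 1; both fall as the restored image departs from the ground truth.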

In our simulation, we assumed the absence of unknown effects on the images, such as noise, residual aberrations, distortion, and other potential factors, which will be investigated further in future studies. Nevertheless, we are confident that our algorithm can effectively handle these effects with sufficient augmented data. The preliminary analysis presented in the subsequent section showcases the algorithm’s functionality even when certain image noises are introduced. This robustness and adaptability underline the potential of our VMAI approach in space-borne imagers. As we continue to refine and optimize the system, we are committed to addressing real-world imaging challenges and expanding its applications. Our work seeks to advance high-resolution imaging technologies, paving the way for novel opportunities in remote sensing and surveillance missions.

IV. ISSUES

4.1. Low SNR

The proposed VMAI method presents some notable challenges, particularly concerning irradiance collection and the resulting low SNR, especially for the narrow-field cameras, due to aperture segmentation. We recognize these drawbacks and are fully committed to overcoming them through extensive research and development efforts.

As an initial step, we estimated the SNR of the wide-field cameras with a first-order approximation using time-delay-and-integration (TDI) image sensors [36]. The results are presented in Table 3, which shows the estimated SNR as the number of TDI steps increases.


Signal-to-noise ratio (SNR) estimation for wide-field camerasa).

Number of TDI Steps | Signal-to-noise Ratio (SNR)
1 | 8.9
2 | 12.5
4 | 17.7
8 | 25.0
16 | 35.4
32 | 50.1
64 | 70.8
128 | 100.2
256 | 141.7

a)Standard flux of 85.9 W/m²/μm/sr applied.



We are encouraged by the potential of TDI sensors to significantly improve the SNR in our VMAI system. Specifically, by incorporating TDI technology and performing 128 integration steps, we anticipate achieving an SNR of approximately 100, which is required for many space applications. This approach has the potential to mitigate the SNR limitations arising from aperture segmentation in the narrow-field cameras.
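The values in Table 3 are consistent, to within about 1%, with shot-noise-limited scaling, under which the SNR grows as the square root of the number of integration steps; this scaling is our assumption:

```python
import math

# Table 3's values, assuming shot-noise-limited TDI so that SNR scales as sqrt(N).
table3 = {1: 8.9, 2: 12.5, 4: 17.7, 8: 25.0, 16: 35.4,
          32: 50.1, 64: 70.8, 128: 100.2, 256: 141.7}

def snr_tdi(n_steps, snr_single=8.9):
    """SNR after n TDI steps under the sqrt(N) shot-noise scaling assumption."""
    return snr_single * math.sqrt(n_steps)
```

Under this scaling, each doubling of the TDI step count buys a factor of √2 ≈ 1.41 in SNR, which is why 128 steps are needed to reach the ~100 target from a single-step SNR near 9.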

As part of our initial validation, we conducted tests to assess how well our deep learning algorithm handles noisy VMAI images. We trained and tested the algorithm using simulated data with added normalized Gaussian noise N(m, σ), specifically two cases: N(0.02, 0.01) and N(0.1, 0.05). Figure 13 illustrates the results obtained from these tests.
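The noise injection used for these tests can be sketched as follows, treating N(m, σ) as the mean and standard deviation of additive noise on a normalized image:

```python
import numpy as np

def add_gaussian_noise(image, mean, sigma, rng=None):
    """Additive Gaussian noise N(mean, sigma) on a normalized image."""
    rng = rng or np.random.default_rng()
    return image + rng.normal(mean, sigma, size=image.shape)

rng = np.random.default_rng(0)
clean = np.full((256, 256), 0.5)  # flat stand-in image for illustration
case_a = add_gaussian_noise(clean, 0.02, 0.01, rng)  # ~3% level, ~16 TDI steps
case_b = add_gaussian_noise(clean, 0.10, 0.05, rng)
```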

Figure 13. Simulation images with peak signal-to-noise ratio/structural similarity index (PSNR/SSIM) values under added Gaussian noise, using a Google Earth image.

Our preliminary investigation demonstrates that the network can be trained effectively under the first Gaussian noise case, N(0.02, 0.01), which corresponds to a 3% noise level, equivalent to the SNR achievable with 16 TDI steps. This outcome indicates that the algorithm handles noise well at this level and can accurately reconstruct high-resolution images under such conditions. However, it is important to note that as the noise level in the training data increases, the network's learning behavior tends to prioritize de-noising over de-blurring. Consequently, the network may struggle to reconstruct high-frequency signals without the use of TDI technology, as indicated in Fig. 13.

While TDI enhances SNR, it is essential to comprehensively investigate potential image degradation factors that may arise due to its application. One significant aspect we intend to explore is the potential impact on the MTF caused by satellite jitter. As the payload is subject to satellite movement and vibrations during operation, TDI can interact with these motion-induced effects, potentially affecting the system’s ability to capture fine spatial details accurately. Our ongoing research aims to analyze and quantify the degradation in MTF resulting from satellite jitter when employing TDI. This investigation will enable us to develop strategies to minimize any adverse effects and optimize the implementation of TDI to achieve the desired high-resolution imaging performance while effectively managing image quality in dynamic operational scenarios.

To overcome this limitation, our future research will focus on refining the network’s training and testing process to make it more robust in handling higher noise levels, and further improving its performance for a broader range of practical scenarios. By continuously exploring innovative approaches and conducting in-depth investigations, we aim to optimize the deep learning algorithm and unlock its full potential to advance the capabilities of VMAI for space-borne applications.

4.2. Optical Assembly and Alignment of Multi-cameras

The optical assembly and alignment of multi-camera systems constitute a critical aspect of the VMAI approach, as integrating multiple cameras within a compact space presents unique challenges. While conventional single-camera optical systems have well-established alignment procedures, the introduction of multiple cameras introduces increased complexities that demand innovative solutions.

Precise alignment of the wide-field camera and three narrow-field cameras in the VMAI system is imperative to ensure optimal performance. Unlike traditional single-camera setups, the alignment process must meticulously account for the interplay of multiple optical paths. As the number of cameras increases, the potential for misalignment and optical interference escalates, heightening the challenges faced during assembly.

These increased difficulties in optical alignment arise from factors such as varying optical characteristics among cameras, potential mechanical deviations, and the need to align multiple apertures coherently. Furthermore, the alignment process must not compromise the payload’s compactness, necessitating the development of specialized alignment tools and techniques. Prioritizing considerations such as achieving consistent focal planes, minimizing parallax errors, and preserving the relative positions of apertures is paramount in this context.

To address these challenges, the optical assembly and alignment process employs a comprehensive approach that combines precision manufacturing, sophisticated calibration methodologies, and advanced simulation tools. This ensures alignment of each camera’s optical axis with predefined precision, mitigating optical interference and preserving the integrity of combined imagery. Computational simulations demonstrate the capability of the proposed image fusion algorithm to manage residual aberrations and optical misalignment. As of the time of writing, optical fabrication has been completed, and the AIT process is underway, to be detailed in a forthcoming publication.

V. CONCLUSIONS

In conclusion, this paper introduces VMAI as a promising and innovative approach for reducing the volume of space-borne imagers. The proposed VMAI system aims to achieve high-resolution ground spatial imagery using deep learning methods while significantly reducing the overall payload volume, compared to conventional approaches.

The presented phase-1 design targets a 1-meter GSD at an altitude of 500 km, making it suitable for specific applications on small satellite platforms, especially surveillance missions. The design achieves a telephoto ratio equivalent to that of the shortest conventional optics type, the Korsch, while employing on-axis optical elements with primarily spherical surfaces.

As a first step in validating this approach, we assessed the VMAI system through computational simulations, highlighting its potential to achieve the desired 1-meter GSD performance while effectively managing the lower SNR values that result from aperture segmentation. While its optical imaging capability may not surpass conventional approaches, the VMAI system’s compactness and high-resolution imaging open new possibilities for future space-borne imagers.

Currently, we are planning an experimental verification using a prototype of the level-1 VMAI payload. Our ongoing research will also focus on further reducing the telephoto ratio while optimizing SNR management to maximize the system’s imaging capabilities. As VMAI represents a promising solution for compact, high-resolution imaging on micro- and small-satellite platforms, future developments hold the potential to advance space imaging technologies and to create new opportunities in remote sensing and surveillance missions.

FUNDING

This work was supported by the Challengeable Future Defense Technology Research and Development Program through the Agency for Defense Development (ADD), funded by the Defense Acquisition Program Administration in 2021 (No. 915020201).

DISCLOSURES

The authors declare no conflicts of interest.

DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figure 1. Three most used optics types in remote sensing: (a) Cassegrain, (b) TMA, and (c) Korsch.
Current Optics and Photonics 2023; 7: 545-556. https://doi.org/10.3807/COPP.2023.7.5.545

Figure 2. Illustration of volume-sharing multi-aperture imaging (VMAI) with one wide-field and three narrow-field cameras: (a) Unfolded and (b) compactly folded.

Figure 3. Optical modules and the Fourier-domain information contributed by each module.

Figure 4. Exemplary images taken by the four optical modules (one wide-field + three narrow-field). The target image is a reference 1 m GSD image that would be taken by a conventional optical payload.

Figure 5. Optical layouts: (a) Wide-field camera, (b) unfolded narrow-field camera, and (c) combined (not to scale).

Figure 6. Design performance of the wide-field optics: (a) Spot diagram, (b) modulation transfer function (MTF).

Figure 7. Design performance of the narrow-field optics: (a) Spot diagram, (b) modulation transfer function (MTF).

Figure 8. 3D models of the proposed design: (a) The volume-sharing multi-aperture imaging (VMAI) payload, and (b) a satellite with the VMAI payload.

Figure 9. Ground sampling distance (GSD) vs. satellite volume in units of U (1 U = 1,000 cm³).

Figure 10. Schematic flowchart of the deep learning process.

Figure 11. Simulation results with the ISO 12233 test chart (horizontal direction).

Figure 12. Simulation images with peak signal-to-noise ratio/structural similarity index (PSNR/SSIM) values, using a Google Earth image.

Figure 13. Simulation images with peak signal-to-noise ratio/structural similarity index (PSNR/SSIM) values, using a Google Earth image.

System specifications of an optical payload for 1 m GSD^a)

Year    | Pixel (μm) | Focal Length (m) | Aperture Diameter (m)
2000    | 15.0       | 7.50             | 0.37
2005    | 12.5       | 6.25             | 0.37
2010    | 10.0       | 5.00             | 0.37
2015    | 7.5        | 3.75             | 0.37
2020    | 6.5        | 3.25             | 0.37
2025^b) | 5.4        | 2.70             | 0.37
2030^b) | 4.5        | 2.25             | 0.37

a) At 500 km altitude. b) Projected from the ongoing trend.
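As a cross-check (our own arithmetic, not part of the paper), the focal lengths in the table above follow directly from the pinhole imaging relation GSD = p·H/f, while the fixed 0.37 m aperture sits just above the diffraction-limited diameter D ≈ 1.22·λ·H/GSD (≈0.34 m at λ = 0.55 μm). A minimal sketch:

```python
# Reproducing the 1 m GSD table: required focal length vs. pixel pitch.
# GSD = p * H / f  =>  f = p * H / GSD  (simple pinhole geometry).

H = 500e3    # orbit altitude (m)
GSD = 1.0    # target ground sampling distance (m)

def focal_length_m(pixel_um: float) -> float:
    """Focal length (m) that yields the target GSD for a given pixel pitch."""
    return pixel_um * 1e-6 * H / GSD

for year, p in [(2000, 15.0), (2005, 12.5), (2010, 10.0),
                (2015, 7.5), (2020, 6.5), (2025, 5.4), (2030, 4.5)]:
    print(year, focal_length_m(p))   # 7.50, 6.25, 5.00, 3.75, 3.25, 2.70, 2.25
```

Each computed focal length matches the corresponding table entry, confirming that the table is simply this relation evaluated along the pixel-pitch trend.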



Parameters of the level-1 volume-sharing multi-aperture imaging (VMAI) payload

Camera       | Parameter               | Value
Wide-field   | GSD (m)                 | 5
Wide-field   | Focal Length (m)        | 0.45
Wide-field   | Aperture Type           | Annular
Wide-field   | Aperture (mm)           | Φ 140
Wide-field   | F-number                | 3.2
Wide-field   | Obstruction Ratio       | 0.3
Narrow-field | GSD (m)                 | 1
Narrow-field | Focal Length (m)        | 2.25
Narrow-field | Aperture Type           | Rectangular
Narrow-field | Aperture Dimension (mm) | 110 × 70
Narrow-field | F-number                | 20.5 × 32.1
Narrow-field | Aspect Ratio            | 1.6:1

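The tabulated f-numbers are internally consistent with F# = f/D, and (assuming the same pinhole relation GSD = p·H/f used above) both cameras imply a common pixel pitch of about 4.5 μm; that pitch is our inference and is not stated in the table. A brief sketch of the cross-check:

```python
# Cross-checking the level-1 VMAI parameters (our arithmetic, not the paper's).
# F# = f / D; a rectangular aperture gives two working f-numbers.

H = 500e3                                  # orbit altitude (m)

# Wide-field camera
f_wide, D_wide, gsd_wide = 0.45, 0.140, 5.0
F_wide = f_wide / D_wide                   # ~3.2

# Narrow-field camera
f_nar, (a_nar, b_nar), gsd_nar = 2.25, (0.110, 0.070), 1.0
F_nar = (f_nar / a_nar, f_nar / b_nar)     # ~20.5 x 32.1

# Implied pixel pitch p = GSD * f / H: ~4.5 um for both cameras (inferred)
p_wide = gsd_wide * f_wide / H
p_nar = gsd_nar * f_nar / H
```

The aspect ratio also checks out: 110/70 ≈ 1.57, which the table rounds to 1.6:1.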

Signal-to-noise ratio (SNR) estimation for wide-field cameras^a)

Number of TDI Steps | SNR
1                   | 8.9
2                   | 12.5
4                   | 17.7
8                   | 25.0
16                  | 35.4
32                  | 50.1
64                  | 70.8
128                 | 100.2
256                 | 141.7

a) Standard flux of 85.9 W/m²/μm/sr applied.
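The table values are consistent with shot-noise-limited time-delay integration (TDI): with N integration steps the collected signal grows roughly N-fold, so SNR grows as √N. The √N model is our reading of the numbers, not a statement from the paper; a minimal check in Python:

```python
import math

# Shot-noise-limited TDI: SNR(N) ~ SNR(1) * sqrt(N).
SNR_1 = 8.9   # single-step SNR from the table

def snr(n_steps: int) -> float:
    """Predicted SNR after n_steps TDI stages under sqrt(N) scaling."""
    return SNR_1 * math.sqrt(n_steps)

# The tabulated values agree with this scaling to within ~1% rounding.
table = {1: 8.9, 2: 12.5, 4: 17.7, 8: 25.0, 16: 35.4,
         32: 50.1, 64: 70.8, 128: 100.2, 256: 141.7}
for n, v in table.items():
    assert abs(snr(n) - v) / v < 0.02
```

This is why increasing the TDI step count is the natural lever for recovering the SNR lost to aperture segmentation.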

