Research Paper

Curr. Opt. Photon. 2025; 9(1): 19-28

Published online February 25, 2025 https://doi.org/10.3807/COPP.2025.9.1.19

Copyright © Optical Society of Korea.

Analysis and Evaluation of the Target Detection Range of Infrared Electro-optical Tracking Systems

Kwang-Woo Park1, Sun Ho Kim1, Chi-Yeon Kim1, Sung-Chan Park2

1Agency for Defense Development, Daejeon 34060, Korea
2Department of Physics, Dankook University, Cheonan 31116, Korea

Corresponding author: *scpark@dankook.ac.kr, ORCID 0000-0003-1932-5086

Received: October 18, 2024; Revised: November 28, 2024; Accepted: December 10, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We analyze and evaluate the detection range of infrared electro-optical tracking systems (EOTS), focusing on four key factors: The target, the atmospheric conditions, the sensor, and the optical system. Using a small jet engine as our target, we derive its precise radiant intensity from an infrared camera’s output values. Additionally, we study the infrared transmission properties of the atmosphere under various weather conditions. The proposed system yields an F-number of 1.55 and an optimized field of view (FOV) of 9° × 7° by carefully balancing critical design parameters, such as FOV and signal-to-noise ratio (SNR), to achieve a detection range exceeding 2 km. The manufactured optical system demonstrates a good modulation transfer function (MTF) performance level of 14.3%. To validate our detection-range analysis, we mount a square target (5 mrad) with dimensions corresponding to 2 km on the collimator, and confirm successful image acquisition. The detection-range performance exceeding 2 km meets the operational requirements of the anti-unmanned aerial vehicle defense system (AUDS). Conventional electro-optical/infrared (EO/IR) systems have been designed and analyzed using Johnson’s Criteria, considering only the spatial resolution of the target through the lens. In contrast, this study derives and verifies the detection range by incorporating the SNR of a point target received by the sensor, providing valuable results. Experimental validation confirms that the system meets the required conditions. It enables the use of UAVs for surveillance and reconnaissance, swarm drone reconnaissance, and the integration of drone and robot combat systems for the army, navy, and air force.

Keywords: Electro-optical tracking system (EOTS), Infrared sensor, Radiant intensity, Range equation, Signal-to-noise ratio (SNR)

OCIS codes: (010.1320) Atmospheric transmittance; (110.4999) Pattern recognition, target tracking; (120.5630) Radiometry; (260.3090) Infrared, far; (330.1880) Detection

I. INTRODUCTION

Electro-optical/infrared (EO/IR) systems provide significant advantages over traditional radar systems, particularly in applications requiring high resolution and precise target tracking. Radar systems, while effective for detection under diverse weather conditions, are inherently limited in spatial resolution due to their reliance on radio-frequency signals [1]. In contrast, EO/IR systems utilize optical imaging to achieve superior clarity and accuracy. These capabilities make EO/IR systems particularly effective in detecting small, slow-moving targets such as drones, which are challenging for radar systems to distinguish [1].

Despite these advantages, EO/IR systems encounter performance limitations under adverse weather conditions, such as fog, heavy rain, and low visibility. According to a study by Sweden’s FOI (Swedish Defence Research Agency), EO/IR systems experience significant performance degradation under low-visibility conditions, such as fog, heavy rain, and clouds, when visibility decreases below 5 km [2]. Under such conditions, atmospheric moisture particles and aerosols scatter infrared signals, leading to signal attenuation and limiting the system’s detection performance to less than 2 km. Specifically, clouds with high moisture content, such as stratus clouds [3], can reduce visibility to as low as 30 m, making tracking and detection difficult for EO/IR systems [2, 4]. That Swedish study evaluated EO/IR sensors across various wavelength ranges, including visible (VIS), short-wavelength infrared (SWIR), mid-wavelength infrared (MWIR), and long-wavelength infrared (LWIR), under different weather conditions. It was found that under conditions of heavy fog and rain, MWIR and LWIR sensors performed better than shorter-wavelength sensors, but their ability to detect targets at distances greater than 1 km significantly deteriorated [2]. However, advancements in high-resolution sensors and high-quantum-efficiency technologies across diverse wavelength bands, including VIS, SWIR, MWIR, and LWIR, have significantly improved the reliability of EO/IR systems for various atmospheric conditions [5-7]. For example, SWIR sensors are particularly effective under conditions with larger atmospheric particles, where transmission at wavelengths greater than 1 µm improves. Also, integrating sensors across diverse wavelengths enables precise spectral separation, even in low-visibility scenarios, while advanced algorithms optimize sensor responses under varying atmospheric conditions.

Therefore, EO/IR systems are increasingly adopted in applications requiring reliable detection and classification, such as military surveillance, reconnaissance, and counter-drone operations [8]. Recent studies have also highlighted the use of automatic target-detection algorithms using deep-learning technologies, such as convolutional neural networks (CNNs), to enhance detection accuracy and reduce false positives, demonstrating EO/IR systems’ potential in dynamic operational environments [9, 10].

An infrared EOTS can detect the heat energy emitted by a target and extract information about the target’s position or movement. This paper builds upon these advancements, focusing on the design, development, and evaluation of an EOTS optimized for target detection. The maximum detection range for a target is the most important criterion for evaluating the performance of a tracking system. Range performance is influenced by four main factors: (1) The radiance characteristics of the target, (2) atmospheric transmittance (i.e., the transmission characteristics of radiant energy under atmospheric weather conditions), (3) SNR, and (4) optical-system specifications. The range performance of an infrared optical tracking system can be derived from these variables, or the optical system’s specifications for the desired detection distance can be derived from the range performance.

This paper analyzes the characteristics of the four main factors (target, atmospheric properties, sensor, and optics) and designs infrared optical tracking cameras. The results are used to derive detection distance with sufficient accuracy for the prediction of system performance. The detection ranges obtained from this approach are used to define the design specifications for an optical system. The design and imaging performance are then verified experimentally to confirm the validity of the analysis.

II. PRINCIPAL ELEMENTS AFFECTING DETECTION PERFORMANCE

This section presents the calculation of the maximum range of an infrared electro-optical tracking system. The range depends on four factors: (1) The optical system, (2) SNR, (3) the radiant intensity emitted by the target in a specific wavelength band, and (4) the atmospheric transmittance and scattered-light intensity due to the atmosphere between the target and the device. In operational terms, optical tracking is defined as the detection of an unresolved point target on a screen, rather than identifying the target type (e.g., sedan or SUV) [11, 12]. The primary objective in terms of system performance is to determine the distance at which the target can be detected, which is calculated using the SNR. The signal is the infrared-band energy emitted by a distant target that passes through the atmosphere and reaches the sensor. The noise is an inherent characteristic of the sensor, determined by its design parameters and by system specifications such as the transmittance of the optical system. The SNR is calculated as the signal received from the target divided by the noise. Detection at a certain distance is considered possible if the SNR exceeds the system’s SNR requirement [13, 14]. The required system SNR is calculated considering the operational characteristics and the probability of detection. In this study, the required system SNR is defined to be 10 [15].

2.1. Signal-to-noise Ratio (SNR)

The operating environment of an infrared optical tracking camera is shown in Fig. 1.

Figure 1. Environment for calculating the signal-to-noise ratio (SNR) of an infrared tracking camera.

The radiant intensity of a distant point target is quantified in watts per steradian (W/sr). The radiant-energy difference detected by a sensor is the difference between the radiant energy emitted from the target and the radiant energy of the background. As the signal traverses the atmosphere, the radiant energy difference is attenuated. Furthermore, scattering and radiation occur along the path between the target and the sensor.

In the long-wavelength infrared band, up to approximately 10 km, the energy scattered by the atmosphere is negligible and thus excluded from the calculation. To assess the transmission of a selected wavelength band, it is necessary to consider only those atmospheric variables that affect transmission in that band. The energy resulting from atmospheric radiation along the signal path between the target and the sensor (path radiance) is added uniformly to both the target and the background. This simply raises the offset of the sensor output without changing the difference between the target and background signals. Consequently, the contrast remains unaltered, and there is no additional increase in photon noise, because the same path-radiance energy is added to the target and background pixels alike. Thus, it can be concluded that the radiance difference between the target and the background is attenuated only by atmospheric transmission.

The energy is further attenuated by the transmittance of the optical system as the signal reaches the sensor. Image blur in the optical system causes the target energy to be scattered instead of hitting the sensor at a geometric point, resulting in some energy loss. The remaining energy is then incident upon the sensor, and converted to an electrical signal.

In the context of Fig. 2, the amount of flux reaching the detector can be expressed as shown in Eq. (1). We can also write an expression for the signal voltage Vsig produced by the detector by multiplying Eq. (1) by the responsivity Rv, as shown in Eq. (2). Dividing each side of Eq. (2) by the root mean square (RMS) noise Vn gives the SNR, as expressed in Eq. (3). For a search system that detects a pulse signal Φp representing a target, the SNR is taken to be the ratio of the peak signal to the RMS noise. Applying the definition of noise-equivalent power (NEP), this can be expressed as shown in Eq. (4), and then as shown in Eq. (5a). Continuing from Eq. (5a), expressing NEP in terms of the detector detectivity D*, the detector area Ad, and the bandwidth Δf yields Eq. (5b). From Eq. (5b), the SNR produced by the system increases with the entrance pupil’s area A0, the target intensity ΔI, and the detector detectivity D*. On the other hand, the SNR decreases as the target distance R, the bandwidth Δf, and the detector area Ad increase. The detector area can be calculated from the solid angle Ωd between the optics and the detector, as shown in Eqs. (6a) and (6b). Applying this and rearranging for the distance R gives Eq. (7). From Eq. (7), R can be grouped into factors for the target, detector, optics, and signal processing; Eq. (7) is referred to as the range equation [8, 9]. A numerical sketch of Eqs. (5b) and (7) is given after the list of symbols below.

Figure 2. Point-target detection schematic.

\Phi_d = \Delta I\,\tau_a \times \Omega_0\,\tau_0 = \frac{\Delta I \times \tau_a \times A_0 \times \tau_0 \cdot \int_y \int_x \mathrm{PSF}(x - x_0,\, y - y_0)\,dx\,dy}{R^2}, \tag{1}

V_{sig} = R_v \times \Phi_d, \tag{2}

\mathrm{SNR} = \frac{V_{sig}}{V_n} = \frac{R_v}{V_n} \times \Phi_d, \tag{3}

\mathrm{NEP} = \frac{\Phi_p}{\mathrm{SNR}} = \frac{V_n}{R_v}, \tag{4}

\mathrm{SNR} = \frac{V_{sig}}{V_n} = \frac{1}{\mathrm{NEP}} \times \Phi_d, \tag{5a}

\mathrm{SNR} = \frac{V_{sig}}{V_n} = \frac{D^*}{\sqrt{A_d\,\Delta f}} \times \Phi_d, \tag{5b}

\Omega_d = \frac{A_d}{f^2}, \tag{6a}

A_d = \Omega_d \times f^2, \tag{6b}

R = \left[\Delta I \cdot \tau_a\right]^{1/2}_{\mathrm{Target}} \times \left[D^*\right]^{1/2}_{\mathrm{Detector}} \times \left[A_0 \cdot \tau_0 \cdot \int_y \int_x \mathrm{PSF}(x - x_0,\, y - y_0)\,dx\,dy\right]^{1/2}_{\mathrm{Optics}} \times \left[\frac{1}{\mathrm{SNR} \cdot \sqrt{A_D \cdot \Delta f}}\right]^{1/2}_{\mathrm{Signal\ Processing}}, \tag{7}

where

Φd: The amount of flux reaching the detector,

Φp: Pulse signal of the target,

Ω0: Solid angle (sr) from target to lens,

Ωd: Solid angle (sr) from lens to detector,

ΔI: Point-target radiant intensity (W/sr),

R: Sensor-to-target range (cm),

τa: Atmospheric transmittance,

τ0: Optical transmittance,

A0: Entrance pupil’s area (cm²),

AD: Effective area of the sensor element (cm²),

Vsig: Signal voltage,

Vn: RMS noise,

Rv: Detector responsivity,

NEP: Noise-equivalent power,

PSF: Point-spread function in the array plane,

Δf: Equivalent bandwidth of noise (Hz),

D*: Specific detectivity (cm·Hz^1/2/W),

f: Effective focal length of the optics (cm).
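To make Eqs. (5b) and (7) concrete, the following Python sketch evaluates the SNR of a point target at a given range and the maximum range at a required SNR of 10. It is a minimal illustration of the range equation only: all numerical inputs are placeholders, not the measured parameters of this system.

```python
import math

def snr_point_target(delta_I, tau_a, A0, tau_0, psf_fraction, R, D_star, A_d, delta_f):
    """SNR of a point target following Eq. (5b):
    SNR = (ΔI·τa·A0·τ0·∫∫PSF dx dy / R²) · D* / sqrt(Ad·Δf).
    Units: ΔI in W/sr, areas in cm², R in cm, Δf in Hz, D* in cm·Hz^1/2/W."""
    flux = delta_I * tau_a * A0 * tau_0 * psf_fraction / R**2      # Eq. (1)
    return flux * D_star / math.sqrt(A_d * delta_f)                # Eq. (5b)

def max_range(delta_I, tau_a, A0, tau_0, psf_fraction, D_star, A_d, delta_f, snr_req=10.0):
    """Maximum detection range from the range equation, Eq. (7) (returns cm).
    Note: Eq. (7) treats tau_a as fixed; in practice tau_a depends on R (Section 2.2.2)."""
    return math.sqrt(delta_I * tau_a * D_star * A0 * tau_0 * psf_fraction
                     / (snr_req * math.sqrt(A_d * delta_f)))

# Placeholder inputs for illustration only (not this paper's system values).
params = dict(delta_I=1.0, tau_a=0.5, A0=20.0, tau_0=0.6,
              psf_fraction=0.85, D_star=3.1e11, A_d=1.44e-6, delta_f=6.25e3)
print(f"SNR at 2 km: {snr_point_target(R=2.0e5, **params):.1f}")
print(f"Range at SNR = 10: {max_range(**params) / 1.0e5:.1f} km")
```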

2.2. SNR Component Analysis

2.2.1. Target Radiant Intensity ΔI

Accurate target data is crucial for range calculations. This study focuses on small airborne targets, specifically examining the infrared radiant energy emitted from the heated surfaces of small aircraft engines and rockets. The engine’s power is varied at three levels: 30%, 50%, and 80% of maximum power. Figure 3 illustrates the measurement method employed in this study.

Figure 3. Calculation of a jet engine’s radiant intensity, using the values derived from images from the FLIR camera.

First, an LWIR image of the engine is acquired using a FLIR A6751 SLS camera (Teledyne FLIR LLC., OR, USA). A frame rate of 60 Hz, an exposure time of 0.05 ms, and a lens focal length of 50 mm are selected for the LWIR camera. Second, a radiometric calibration experiment is performed in the laboratory to determine the infrared signal value of the target, which is considered to be a blackbody. Radiometric calibration consists of determining the relationship between the input infrared radiation and the output signal, enabling the estimation of the input infrared radiation when a given output signal is provided. To obtain the input-output relationship, the system output for a blackbody is used. Since a blackbody emits infrared radiation equal to the Planck radiance multiplied by its emissivity, knowing the emissivity allows the input infrared radiation to be determined. In practice, the energy incident upon the system includes not only the blackbody radiation, but also the ambient energy reflected from the blackbody. For a system operating in the infrared wavelength range of λ1 to λ2, the infrared radiation per unit area and per unit solid angle emitted from a blackbody at temperature Tb is given by Eq. (8). Here the blackbody is assumed to be very close to the system, so atmospheric attenuation is neglected, and the emissivity (ϵB) of the blackbody is assumed to be constant with respect to wavelength and blackbody temperature. The Planck blackbody radiance at the blackbody temperature, B(Tb), is expressed as shown in Eq. (9) [16].

Q_{in} = \epsilon_B\,B(T_b) + (1 - \epsilon_B)\,E(T_{bs}), \tag{8}

B(T_b) = \frac{1}{\pi} \int_{\lambda_1}^{\lambda_2} \frac{C_1}{\lambda^5 \left(e^{C_2 / \lambda T_b} - 1\right)}\,d\lambda, \tag{9}

where

Qin: Infrared radiance input into the system (W/sr/cm²),

ϵB: Blackbody emissivity,

B(Tb): Planck’s blackbody radiance (λ1–λ2) at a blackbody temperature of Tb (W/sr/cm²),

E(Tbs): Infrared radiance emitted by a blackbody at an ambient temperature of Tbs (W/sr/cm²),

Tb: Blackbody temperature (K),

Tbs: Blackbody ambient temperature (K),

C1: First constant of blackbody radiation (3.7418 × 10⁴ W·μm⁴/cm²),

C2: Second constant of blackbody radiation (1.4388 × 10⁴ μm·K).
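As a numerical illustration of Eq. (9), the short sketch below integrates the Planck spectral radiance over the 8–10 μm band using the constants listed above; the band limits and temperatures are examples, not the calibration points of this paper.

```python
import numpy as np

# Blackbody radiation constants in the units listed above.
C1 = 3.7418e4   # W·µm^4/cm^2
C2 = 1.4388e4   # µm·K

def band_radiance(T_b, lam1=8.0, lam2=10.0, n=2000):
    """Band-integrated Planck radiance B(T_b) of Eq. (9), in W/sr/cm².
    Wavelengths are in µm; a simple trapezoidal sum does the integral."""
    lam = np.linspace(lam1, lam2, n)
    spectral = C1 / (lam**5 * (np.exp(C2 / (lam * T_b)) - 1.0))   # W/cm²/µm
    d_lam = lam[1] - lam[0]
    return float(np.sum(0.5 * (spectral[:-1] + spectral[1:])) * d_lam / np.pi)

for T in (300.0, 400.0, 600.0):
    print(f"B({T:.0f} K), 8-10 um band: {band_radiance(T):.4f} W/sr/cm^2")
```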

Figure 4 shows the basic setup for radiometric calibration of the camera. The blackbody used for calculating radiance in the LWIR band is the SR800R-7D-ET (CI Systems Inc.). It has an aperture size of 7″ × 7″ and an operating temperature range of 0 ℃ to 175 ℃. For standard blackbodies used in radiometric calibration, an emissivity level of up to 0.999 is typically required. The calibrated blackbody used in this experiment has an emissivity of 0.992.

Figure 4. Energy reaching the FLIR camera from the blackbody radiator, in the laboratory environment.

The system’s output is directly related to the radiance Qin, enabling input-output modeling. The relationship between the blackbody Planck radiance B(Tb) and the input energy Qin is known, as shown in Eq. (8). To simplify the calculation, the relationship between the Planck radiance B(Tb) and the output signal (grayscale level) is determined, instead of the total input energy. This allows us to establish the relationship between the input energy and the output signal. The input Planck radiance corresponding to an arbitrary output signal (grayscale level) can be determined, and by substituting it into Eq. (8), the total input radiance can be estimated. To achieve this, the radiometric calibration measurement setup is configured as shown in Fig. 5. Note that the radiance of an image formed by an optical system is equal to that of the object, per the law of conservation of radiance [11, 12, 17]. Applying this law to Fig. 5 leads to Eq. (10), as follows:

Figure 5. Radiometric calibration setup used in the laboratory. A′d: Area of the blackbody (cm²), Ω′: solid angle (sr) (blackbody → FLIR camera), R: distance (blackbody → FLIR camera), A0: aperture area of the FLIR camera (cm²), D: aperture diameter of the FLIR camera, f: focal length of the FLIR camera, Ω: solid angle (sr) (FLIR camera → detector), Ad: area of the detector (cm²).

A'_d\,\Omega' = A_d\,\Omega. \tag{10}

Figure 6 shows the relationship between the measured radiance and the output. Using the measured results, a polynomial equation is derived. Based on this equation, the Planck radiance B(Tb) at output grayscale levels of 1,735, 1,861, and 2,159 is calculated to be 0.038 W/sr/cm², 0.042 W/sr/cm², and 0.052 W/sr/cm², respectively.

Figure 6. The relationship between blackbody radiation and output.
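The grayscale-to-radiance mapping of Fig. 6 can be reproduced with an ordinary polynomial fit, as sketched below. The calibration points here are hypothetical stand-ins chosen only for illustration, since the full measurement set behind Fig. 6 is not tabulated in the text; only the three quoted grayscale levels are from the paper.

```python
import numpy as np

# Hypothetical calibration points (output grayscale level vs. band radiance in
# W/sr/cm²); the actual measured set behind Fig. 6 is not reproduced here.
gray_levels = np.array([1500.0, 1700.0, 1900.0, 2100.0, 2300.0])
radiance    = np.array([0.030, 0.037, 0.043, 0.050, 0.057])

# Fit a low-order polynomial mapping grayscale level -> Planck radiance B(Tb).
to_radiance = np.poly1d(np.polyfit(gray_levels, radiance, deg=2))

# Evaluate at the grayscale levels quoted in the text.
for level in (1735, 1861, 2159):
    print(f"grayscale {level}: B(Tb) ≈ {to_radiance(level):.3f} W/sr/cm²")
```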

By inputting the specific data of the system depicted in Fig. 5 into Eq. (10), the solid angle of incidence Ω at the detector is found to be 1.26 × 10⁻⁵ sr. From the results of the laboratory radiometric calibration, the total radiance incident upon the sensor Qtotal and the radiant power incident on a pixel of the sensor Ptotal can be calculated using Eqs. (11) and (12), as follows:

Q_{total} = \epsilon_B\,\tau_{atm}\,B(T_b) + (1 - \epsilon_B)\,\tau_{atm}\,E(T_{bs}) + (1 - \tau_{atm})\,A_{atm}, \tag{11}

P_{total} = B(T_b)\,\tau_{atm}\,\Omega\,A_d, \tag{12}

where

τatm: Atmospheric transmittance in the laboratory,

Aatm: Path radiance along the atmospheric path in the laboratory,

Ptotal: Radiant power incident on a detector pixel at the measurement point (W).
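A minimal sketch of this laboratory step follows. The emissivity, laboratory transmittance, solid angle, and B(Tb) value are taken from the text and Table 1; the ambient-radiance terms and the detector area are illustrative assumptions (the paper does not tabulate them), so the printed numbers are only indicative.

```python
def q_total_lab(eps_b, tau_atm, B_tb, E_tbs, A_atm):
    """Total radiance reaching the sensor in the laboratory, Eq. (11) (W/sr/cm²)."""
    return (eps_b * tau_atm * B_tb
            + (1.0 - eps_b) * tau_atm * E_tbs
            + (1.0 - tau_atm) * A_atm)

def p_total_pixel(B_tb, tau_atm, omega, A_d):
    """Radiant power on one detector pixel, Eq. (12) (W)."""
    return B_tb * tau_atm * omega * A_d

# Values quoted in the text and Table 1 (30% engine-power column).
eps_b, tau_atm, omega, B_tb = 0.992, 0.9932, 1.26e-5, 0.038

# Illustrative assumptions, not stated in the paper.
E_tbs, A_atm = 1.5e-3, 1.0e-4   # ambient and path radiance terms, W/sr/cm²
A_d = 2.3e-2                    # detector area used in Eq. (12), cm² (assumed)

print(f"Q_total = {q_total_lab(eps_b, tau_atm, B_tb, E_tbs, A_atm):.4f} W/sr/cm²")
print(f"P_total = {p_total_pixel(B_tb, tau_atm, omega, A_d):.2e} W")
```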

The atmospheric transmittance τatm in the laboratory is calculated using the PcModWin software (Ontar Co., MA, USA), based on the MODerate resolution atmospheric TRANsmission (MODTRAN) model [18].

The radiance and radiant output are calculated for each engine-power value. The radiant-output values detected by the FLIR camera are presented in Table 1.

TABLE 1. Radiant-power values detected by the FLIR camera

Values                  | Engine Output 30%  | 50%          | 80%
------------------------|--------------------|--------------|-------------
Image                   | (IR image)         | (IR image)   | (IR image)
Output Grayscale Level  | 1,735              | 1,861        | 2,159
B(Tb) (W/sr/cm²)        | 0.038              | 0.042        | 0.052
τatm                    | 0.9932 (all columns)
Ω′ (sr)                 | 1.26 × 10⁻⁵ (all columns)
Ptotal (W)              | 1.07 × 10⁻⁸        | 1.18 × 10⁻⁸  | 1.46 × 10⁻⁸


Finally, when acquiring outdoor images, it is necessary to consider the atmospheric transmission between the sensor and the target. In this case the target is a flame, so we assume that its emissivity ϵ is 1. Since the reflected ambient radiation scales with (1 − ϵ), it vanishes for an emissivity of 1. The atmospheric transmittance is determined using the PcModWin software to calculate the transmission from sensor to target. Since the distance between the jet engine and the measurement camera is short (20 m), the atmospheric transmittance for the field measurements τ′atm is defined to be 0.99. The radiance analysis is performed according to Eqs. (13) and (14), and the results are shown in Table 2.

TABLE 2. Calculated flame radiant-intensity values

Values                              | Engine Output 30%  | 50%          | 80%
------------------------------------|--------------------|--------------|-------------
Ptotal (W)                          | 1.07 × 10⁻⁸        | 1.18 × 10⁻⁸  | 1.46 × 10⁻⁸
τ′atm                               | 0.99 (all columns)
Ω′ (sr)                             | 1.26 × 10⁻⁵ (all columns)
Wflame·Ad (W/sr)                    | 1.37 × 10⁻²        | 1.52 × 10⁻²  | 1.88 × 10⁻²
Number of Pixels                    | 50                 | 100          | 200
Target Radiant Intensity ΔI (W/sr)  | 0.69               | 1.52         | 3.76


Q_{total} = \epsilon\,\tau'_{atm}\,W_{flame} + (1 - \epsilon)\,\tau'_{atm}\,E(T_{bs}) + (1 - \tau'_{atm})\,A'_{atm}, \tag{13}

P_{total} = W_{flame}\,\tau'_{atm}\,\Omega\,A_d, \tag{14}

where ϵ is the emissivity, Wflame is the radiance of the flame (W/sr/cm²), and A′atm is the path radiance along the atmospheric path in the outdoor environment.
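The last two rows of Table 2 are consistent with a simple per-pixel sum: the flame’s total radiant intensity ΔI is the per-pixel intensity Wflame·Ad multiplied by the number of pixels the flame occupies. Assuming that reading of the table, a short arithmetic check:

```python
# Per-pixel radiant intensity (W_flame * A_d, in W/sr) and flame pixel counts, from Table 2.
per_pixel_intensity = [1.37e-2, 1.52e-2, 1.88e-2]
pixel_counts = [50, 100, 200]

for w, n in zip(per_pixel_intensity, pixel_counts):
    delta_I = w * n   # total radiant intensity ΔI of the flame, W/sr
    print(f"{n:3d} pixels × {w:.2e} W/sr per pixel -> ΔI = {delta_I:.2f} W/sr")
```

The products (0.69, 1.52, and 3.76 W/sr) match the ΔI row of Table 2 to rounding.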

2.2.2. Atmospheric Transmittance τa

Radiant energy from the target is attenuated as it passes through the atmosphere before it reaches the sensor. It is necessary to accurately calculate the amount of attenuation by the atmosphere under different weather conditions, to obtain accurate ranges. When predicting range performance, the variability in atmospheric conditions is the most difficult factor to quantify. Elder and Strong [19] published a study focused on calculating the infrared transmittance of the atmosphere. However, their approach involves only the radiative-energy characteristics of the atmosphere, and atmospheric transmittance is approximated. In this study we use MODTRAN, which provides accurate and fast atmospheric transmittance data for different wavelength bands. MODTRAN is often used as a computational model of atmospheric transmission. It considers the operating conditions of the system, and large amounts of data from different locations and under different conditions are used for verification. By utilizing these various types of data, reliable probability-based prediction results can be obtained.

MODTRAN offers a choice of aerosol (haze) models, which can be broadly categorized as maritime, rural, or urban based on the types of particles in the haze. In Korea, which is a peninsula surrounded by the sea on three sides, it is appropriate to use the maritime model. In the simulation, the optical system looks along a horizontal path at a height of 2 km. Mid-latitude spring-summer and autumn-winter atmospheric conditions are used. Figure 7 shows the results for the atmospheric transmittance τa.

Figure 7. Atmospheric transmittance τa along the horizon at a height of 2 km.

2.2.3. Sensor Properties D*, AD, and ∆f

The sensitivity of the detector is inversely related to the noise-equivalent power (NEP) and directly proportional to the detectivity D*, which normalizes the NEP by the square root of the product of the detector area AD and the bandwidth ∆f. Eq. (15) defines D* and its relationship to NEP.

D^* = \frac{\sqrt{A_D\,\Delta f}}{\mathrm{NEP}}\ \left(\mathrm{cm\cdot Hz^{1/2}/W}\right). \tag{15}

For an exposure time of 80 μs and a noise-equivalent temperature difference (NETD) of 50 mK, Eq. (15) yields a detectivity D* of 3.1 × 10¹¹ cm·Hz^1/2/W [20].
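As a sketch of how the quantities in Eq. (15) relate, the snippet below computes D* from the 12 μm × 12 μm pixel area, a noise bandwidth approximated as 1/(2 × exposure time), and an assumed NEP; the NEP value is a placeholder chosen so the result lands near the detectivity quoted above, not a measured figure from this paper.

```python
import math

pixel_pitch_cm = 12e-4                    # 12 µm pixel pitch, in cm
A_D = pixel_pitch_cm**2                   # detector element area, cm² (1.44e-6)
t_int = 80e-6                             # exposure time, s
delta_f = 1.0 / (2.0 * t_int)             # noise-bandwidth approximation, Hz (6.25 kHz)
NEP = 3.1e-13                             # W, assumed placeholder value

D_star = math.sqrt(A_D * delta_f) / NEP   # Eq. (15)
print(f"D* = {D_star:.2e} cm·Hz^1/2/W")
```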

2.2.4. Optical-system Performance (PSF, τ0, and A0)

The design parameters related to the optical system are the image blur at the focal plane, the optical transmittance, and the area of the circular objective lens that receives energy from the target. When the image of a point target is spread out rather than concentrated at a single point, the blur can be attributed to two different causes. The first is unavoidable, and is caused by the diffraction of light. The second is related to the aberrations of the optical system being used. The blur caused by diffraction is called Airy-disk blur, with 84% of the total energy focused within the corresponding disk area. Eq. (16) gives the Airy-disk diameter.

D_{Airy} = 2.44 \times \lambda_C \times F/\#, \tag{16}

where the central wavelength λC is 9.0 μm and the F-number is 1.55. From this formula, the Airy disk in this study is 34 μm in diameter. Since the size of a single detecting element is 12 μm × 12 μm, if an optical system is designed with an F-number of 1.55 to match the diffraction-limited performance, the image of the point target falls within a 3 × 3-pixel area of the detecting elements. In a real optical system, it is not possible to make all of the focused rays converge at a geometric point.
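A one-line check of the Airy-disk figure quoted above, relative to the 12 μm pixel pitch:

```python
lambda_c_um = 9.0                             # central wavelength, µm
f_number = 1.55

d_airy_um = 2.44 * lambda_c_um * f_number     # Eq. (16)
print(f"Airy disk diameter ≈ {d_airy_um:.1f} µm "
      f"(≈ {d_airy_um / 12.0:.1f} pixels of 12 µm pitch)")
```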

The image of a point target formed by the optical system has a specific energy distribution in the focal plane. The corresponding function is called the point-spread function (PSF). Assuming no image blur, the PSF would be a delta function, which is physically impossible. The ratio of the energy entering a detector element to the total energy is obtained by integrating the PSF over the area of a single element in the sensor plane. The PSF can be calculated during the optical-system design process, but in practice it is obtained by calculating the RMS spot diameters from ray-tracing data, rather than by searching for an analytic function. The ray-tracing data can also be used to calculate the energy contained within a given distance from the center of the image distribution, known as the encircled energy. Considering these factors, the performance target of the optical system is set so that the energy distribution, including diffraction, is at least 85% within a 3 × 3-pixel area. The PSF integral of the designed optical system is found to be 0.85, according to Eq. (17). The designed optical system is described in Section IV.

\int_y \int_x \mathrm{PSF}(x - x_0,\, y - y_0)\,dx\,dy = 0.85. \tag{17}

The next parameter of the optical system is the optical transmittance τ0. This is a crucial parameter in the performance of the sensor, as it is multiplied directly by the incident energy and determines the amount of energy reaching the detector. Optical transmittance accounts for the reflection loss at the surface of each lens and the energy absorbed by the lens material itself. For the LWIR band, a double-sided coating has a transmittance of more than 98%, excluding absorption by the material. Considering the four lenses, IR filter, and cover glass that make up the optical system, the transmittance of the system is estimated to be approximately 60%.
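The ~60% estimate can be sanity-checked as a product of per-element throughputs over the six transmissive elements. The per-element bulk-absorption figure below is an illustrative assumption (only the 98% coating transmittance is from the text):

```python
n_elements = 6          # four lenses, IR filter, cover glass
coating_T = 0.98        # double-sided coating transmittance per element (from the text)
bulk_T = 0.93           # assumed bulk (material absorption) transmittance per element

system_T = (coating_T * bulk_T) ** n_elements
print(f"Estimated system transmittance ≈ {system_T:.2f}")   # ≈ 0.57, near the quoted 60%
```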

The final parameter of the optical system is the entrance-pupil area A0 of the objective lens, which represents the light-collection efficiency of the system. The area of the objective lens is not a value that can be set arbitrarily; it is determined by the FOV, which in turn sets the effective focal length of the lens. Depending on the FOV, the light-collection efficiency and the SNR of the sensor change, as does the maximum detection distance. Therefore, the FOV of the optical system can be adjusted to match the target-detection distance.

The optimum FOV should be selected considering both the FOV of the optical system and the pivoting range of the instrument upon which the optical system is mounted. In this study the ranges are analyzed for five horizontal FOVs: 3°, 6°, 9°, 12°, and 15°. The radiant intensity of the target is based on 50% of the engine power. The atmospheric conditions considered are the mid-latitude spring-summer and autumn-winter seasons. The maximum detection distance is based on a required SNR of 10.
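A sketch of this trade-off under simplifying assumptions: for each horizontal FOV, the focal length follows from the 1,024-pixel, 12-μm-pitch sensor, the aperture area from the fixed F-number of 1.55, and the maximum range from the condition SNR ≥ 10. The target intensity, throughput terms, and the simple exponential atmospheric model are placeholders, so the absolute ranges printed here will not reproduce Figs. 8 and 9 (which use MODTRAN transmittance data); the sketch only illustrates how the FOV propagates to aperture area and range.

```python
import math

PIXELS_H, PITCH_MM = 1024, 0.012     # sensor format and pixel pitch
F_NUMBER = 1.55
DELTA_I = 1.52                       # W/sr, 50% engine power (Table 2)
TAU_0, PSF_FRAC = 0.60, 0.85         # optical transmittance, encircled-energy fraction
D_STAR = 3.1e11                      # cm·Hz^1/2/W
A_D, DELTA_F = 1.44e-6, 6.25e3       # pixel area (cm²), noise bandwidth (Hz)
SNR_REQ = 10.0
ALPHA_PER_KM = 0.35                  # assumed LWIR extinction coefficient (placeholder)

def snr_at_range(R_km, A0_cm2):
    """Eq. (5b) with a Beer-Lambert stand-in for the MODTRAN transmittance."""
    tau_a = math.exp(-ALPHA_PER_KM * R_km)
    R_cm = R_km * 1.0e5
    flux = DELTA_I * tau_a * A0_cm2 * TAU_0 * PSF_FRAC / R_cm**2
    return flux * D_STAR / math.sqrt(A_D * DELTA_F)

for hfov_deg in (3, 6, 9, 12, 15):
    sensor_w_mm = PIXELS_H * PITCH_MM
    efl_mm = (sensor_w_mm / 2.0) / math.tan(math.radians(hfov_deg / 2.0))
    pupil_d_cm = efl_mm / F_NUMBER / 10.0
    A0 = math.pi * (pupil_d_cm / 2.0) ** 2          # entrance-pupil area, cm²
    lo, hi = 0.01, 100.0                            # bracket for the max range (km)
    for _ in range(60):                             # bisection: SNR falls with range
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if snr_at_range(mid, A0) >= SNR_REQ else (lo, mid)
    print(f"HFOV {hfov_deg:2d}°: EFL = {efl_mm:5.1f} mm, "
          f"aperture = {pupil_d_cm * 10:5.1f} mm, max range ≈ {lo:.1f} km")
```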

The maximum detection distance required by the system is about 2 km. The EOTS is mounted on an aircraft and is used to detect and identify unauthorized drones, to protect critical facilities. Target detection is performed using an automatic tracking algorithm, and the detection distance required for its operation is 2 km.

When operating the system, a large FOV reduces the required mechanical range of motion if all other factors are fixed, but it also leads to a reduction in the detection range. Since the optical system is mounted on an aircraft, it should be compact and lightweight. When the FOV is 9° or less, the detection range is satisfactory; however, as the FOV decreases further, the aperture of the optical system increases, and such a small FOV enlarges the space and weight requirements.

The optimum FOV of this optical system is determined from the detection-range analyses in Figs. 8 and 9, which yield an FOV of 9°. When the FOV of the optical system is 9°, we obtain target-detection ranges of 2.1 km and 2.2 km under spring-summer and autumn-winter atmospheric conditions, respectively, which meet the requirement of a detection range of 2 km or greater. The detailed design specifications are given in Table 3.

Figure 8. Analysis of the detection range with the field of view (FOV) of the studied optical system, for spring-summer: ① FOV: 3°, ② FOV: 6°, ③ FOV: 9°, ④ FOV: 12°, ⑤ FOV: 15°.

Figure 9. Analysis of the detection range with the field of view (FOV) of the studied optical system, for fall-winter: ① FOV: 3°, ② FOV: 6°, ③ FOV: 9°, ④ FOV: 12°, ⑤ FOV: 15°.

TABLE 3. Optical specifications of the long-wavelength infrared (LWIR) camera

Parameter                                                    | Specification
-------------------------------------------------------------|------------------------------
Wavelength (μm)                                              | 8–10
F-number (F/#)                                               | 1.55
Image Sensor Pixel Size (μm)                                 | 12 × 12
Image Sensor Pixel Number                                    | 1,024 × 768
FOV (°)                                                      | 9.0 × 7.0
Transmittance (%)                                            | 60
MTF at Nyquist Frequency (%), Designed                       | More than 20 at center field
MTF at Nyquist Frequency (%), Athermalized (−30 ℃ to 70 ℃)   | More than 20 at center field
Nyquist Frequency (cycles/mm)                                | 42

The sensor uses an uncooled infrared detector with a pixel size of 12 μm, in a 1,024 × 768 array. The optical system has an F-number of 1.55 and operates in the wavelength range of 8–10 μm. To achieve a transmittance of more than 60% with the LWIR camera, the number of lenses is minimized, as shown in Fig. 10. A passive athermalization method is chosen to meet the performance requirements from −30 ℃ to 70 ℃. Chalcogenide IRG26 material (Schott AG, Mainz, Germany) is applied to L2 and L4 to satisfy the athermal performance requirements, and Ti6Al4V material is used for the lens housing. Figure 11 illustrates the athermal design results. The modulation transfer function (MTF) at the center field is maintained between 21.6% and 22.2% over the entire temperature range of −30 ℃ to 70 ℃.

Figure 10. Optical design layout.

Figure 11. Athermal design results: Variation of modulation transfer function (MTF) with temperature.

By evaluating the encircled energy of the designed optical system, it is confirmed that more than 85% of the light reaching the detector falls within a 3 × 3-pixel area, as shown in Fig. 12. The MTF, an optical parameter representing the performance of an optical system, indicates the decrease in contrast ratio as the spatial frequency increases. MTF values over all fields are more than 27% at the Nyquist frequency of 42 cycles/mm, confirming that the design is very close to diffraction-limited performance, as shown in Fig. 13.

Figure 12. Encircled energy of the designed camera.

Figure 13. Modulation transfer function (MTF) of the designed camera.
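The 42 cycles/mm Nyquist frequency used in Table 3 and above follows directly from the 12 μm pixel pitch:

```python
pixel_pitch_mm = 0.012                                  # 12 µm detector pixel pitch
nyquist = 1.0 / (2.0 * pixel_pitch_mm)                  # cycles/mm
print(f"Nyquist frequency = {nyquist:.1f} cycles/mm")   # ≈ 41.7, rounded to 42
```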

The optical system designed in the previous section is evaluated in terms of imaging performance and detection range. The MTF is an important criterion for evaluating the performance of imaging systems [21]. The MTF of a manufactured optical system should be 10% or higher. Figure 14 shows a schematic setup for measuring the MTF of the manufactured camera system. The infrared beam emitted from the half-moon target on the collimator is directed to the camera and focused. The blackbody temperature is set to 300 K, and the collimator focal length is 1,700 mm. Figure 15(a) shows the image of the half-moon target acquired through the camera. The measurement results in Fig. 15(b) show that the MTF is 14.3% at the Nyquist frequency of 42 cycles/mm.

Figure 14. Experimental setup for modulation transfer function (MTF) measurement and detection-range assessment.

Figure 15. Modulation transfer function (MTF) results: (a) Half-moon target, and (b) MTF graph (14.3% at the Nyquist frequency of 42 cycles/mm).
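The half-moon (knife-edge) target yields an edge-spread function from which the MTF can be computed: differentiate the edge profile to obtain the line-spread function, then take the magnitude of its Fourier transform. The sketch below illustrates that processing chain on a synthetic, oversampled blurred edge; it is a generic edge-method outline under assumed sampling and blur values, not the specific algorithm or data used for Fig. 15.

```python
import numpy as np
from math import erf

pixel_pitch_mm = 0.012                        # detector pixel pitch (mm)
dx = pixel_pitch_mm / 4.0                     # oversampled ESF spacing, as in edge methods

# Synthetic edge-spread function (ESF): an ideal edge blurred by a Gaussian,
# standing in for the measured profile across the half-moon target image.
sigma_mm = 0.007                              # assumed blur width (placeholder)
x = np.arange(-256, 256) * dx
esf = np.array([0.5 * (1.0 + erf(v / (sigma_mm * np.sqrt(2.0)))) for v in x])

# ESF -> LSF (derivative), LSF -> MTF (normalized magnitude of the FFT).
lsf = np.gradient(esf, dx)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(lsf), d=dx)       # cycles/mm

nyquist = 1.0 / (2.0 * pixel_pitch_mm)        # 41.7 cycles/mm for a 12 µm pitch
mtf_at_nyquist = float(np.interp(nyquist, freqs, mtf))
print(f"MTF at Nyquist ({nyquist:.1f} cycles/mm) ≈ {mtf_at_nyquist:.2f}")
```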

In actual system operation, detection is defined as achieving a detection probability of 95% or higher on plateau-enhanced images, by running the automatic detection algorithm after accumulating 50 frame images acquired from the target. In this process, the precise detection distance is derived from the algorithm.

However, image tests conducted in the laboratory aim to determine whether the camera can resolve images of a target with defined size and temperature corresponding to the desired detection range, using a collimator. If the images are clearly acquired, this is evidence that the manufactured camera is capable of achieving the target detection range during actual operation.

We use a square target with a size of 5 mrad, equivalent to the target size at 2 km. The blackbody mounted behind the square target is set to simulate the emission temperature of the target.

The blackbody temperature is determined from the radiometric calibration results of the camera in Section 2.2.1. Based on the measured relationship between the Planck radiance and the output grayscale level, the radiant intensity of a 5-mrad object at a distance of 2 km corresponds to blackbody temperatures of 320 K, 373 K, and 400 K for radiant intensities of 0.69 W/sr, 1.52 W/sr, and 3.76 W/sr, respectively, at the three engine-power levels.

The measured results, listed in Table 4, show that the image is clearly acquired. The validity of the detection distance analysis proposed in this study is experimentally confirmed.

TABLE 4. Results for the detection range, for conditions equivalent to the flight laboratory environment

Value                            | Engine Output 30%  | 50%      | 80%
---------------------------------|--------------------|----------|---------
Target Radiant Intensity (W/sr)  | 0.69               | 1.52     | 3.76
Blackbody Temperature (K)        | 320                | 373      | 400
Acquired Raw Image               | (image)            | (image)  | (image)

This study has presented a novel approach to optimizing the maximum detection range of an infrared electro-optical tracking system. To derive the detection range, factors such as target characteristics, atmospheric transmittance, sensor performance, and optical-system specifications were considered. The designed optical system, with an optimized FOV of 9.0° × 7.0° and an F-number of 1.55, was experimentally validated to achieve a detection range exceeding 2 km, satisfying the performance requirements. The detection-range performance exceeding 2 km meets the operational requirements of the anti-unmanned aerial vehicle defense system (AUDS). Also, laboratory measurements demonstrated that the system achieved a measured MTF of 14.3% at the Nyquist frequency, exceeding the 10% requirement.

Although the proposed system’s detection range was successfully validated under controlled conditions, the study relied on specific atmospheric conditions and target characteristics, which may not fully represent practical scenarios.

For example, the degradation of performance under adverse weather conditions, such as fog and rain, presents a significant obstacle to long-range target detection. Additionally, variability in atmospheric conditions and sensor calibration remains a critical issue. To improve the reliability of detection distance in infrared electro-optical tracking systems, future research should focus on advanced experimental setups, including long-term field tests under diverse environmental conditions, as well as the development of more accurate atmospheric attenuation models.

This study will enhance the reliability and operational efficiency of unmanned aerial vehicles for surveillance, swarm drone reconnaissance, and the integration of drone-robot combat systems for military applications.

The data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.

1. T. Müller, H. Widak, M. Kollmann, A. Buller, L. W. Sommer, R. Spraul, A. Kröker, I. Kaufmann, A. Zube, F. Segor, T. Perschke, A. Lindner, and I. Tchouchenkov, “Drone detection, recognition, and assistance system for counter-UAV with VIS, radar, and radio sensors,” Proc. SPIE 12096, 12096A (2022).
2. O. Steinvall, D. Svedbrand, and T. Svensson, “Optical sensing during low visibility conditions,” Proc. SPIE 11160, 111600J (2019).
3. F.-Y. Song, Y. Lu, Y. Qiao, H.-F. Tao, C. Tang, and Y.-S. Ling, “Simulations of infrared atmospheric transmittance based on measured data,” Proc. SPIE 10157, 10157E (2016).
4. O. Steinvall, R. Persson, F. Berglund, O. Gustafsson, J. Öhgren, and F. Gustafsson, “Using an eye-safe laser rangefinder to assist active and passive electro-optical sensor performance prediction in low visibility conditions,” Opt. Eng. 54, 074103 (2015).
5. P. Raphael and W. F. John, “Lens requirements for sub-10 μm pixel pitch uncooled microbolometers,” Proc. SPIE 12533, 125330O (2023).
6. M. L. Pieper, R. Lockwood, and M. Chrisp, “Hyperspectral SWIR sensor parameterization for optimal methane detection,” Proc. SPIE 12688, 126880F (2023).
7. S. A. Sánchez-Maes, J. Ho, I. Anderson, E. Aguirre-Contreras, C. Barcroft, J. Barstow, D. Caldwell, B. D'Aquino, G. Dubinsky, T. M. Gauron, J. Hong, A. T. Kenter, C. S. Moore, R. Nere, R. Pandohie, and C. Suarez, “SSAXI-Rocket delta-doped CMOS sensors,” Proc. SPIE 13103, 131030N (2024).
8. K. W. Park, J.-Y. Han, J. Bae, S.-W. Kim, and C.-W. Kim, “Novel compact dual-band LOROP camera with telecentricity,” Opt. Express 20, 10921-10932 (2012).
9. J. E. Ball, D. T. Anderson, and C. S. Chan, “Comprehensive survey of deep learning in remote sensing: Theories, tools, and challenges for the community,” J. Appl. Remote Sens. 11, 042609 (2017).
10. H. Kim and C. Lee, “Upcycling adversarial attacks for infrared object detection,” Neurocomputing 482, 1-13 (2012).
11. R. Hartmann and W. J. Smith, Infrared Optical Design and Fabrication: Critical Reviews of Optical Science & Technology, 1st ed. (SPIE Press, USA, 1991), pp. 44-54.
12. M. Schlessinger, Infrared Technology Fundamentals, 2nd ed. (Routledge, USA, 1995), pp. 12-25.
13. M. C. Dudzik, The Infrared and Electro-Optical Systems Handbook: Electro-Optical Systems Design, Analysis and Testing (Infrared Information Analysis Center and SPIE Press, USA, 1993), Vol. 4, pp. 63-66.
14. J. Yoon, D. Ryu, S. Kim, S. Seong, W. Yoon, J. Kim, and S.-W. Kim, “Long-distance flame detection simulation for a new MWIR camera,” Korean J. Opt. Photon. 25, 245-253 (2014).
15. L. LaCroix and S. Kurzius, “Peeling the onion: A heuristic overview of hit-to-kill missile defense in the 21st century,” Proc. SPIE 5732, 583369 (2005).
16. K. W. Park and S. H. Kim, “Analysis of flame radiant intensity using image output values of infrared cameras,” J. Korean Inst. Illum. Electr. Install. Eng. 36, 1-7 (2022).
17. W. J. Smith, Modern Optical Engineering, 4th ed. (McGraw-Hill, USA, 2008), pp. 259-263.
18. MODTRAN4 Ver. 03 User's Manual, Spectral Sciences Inc., MA, USA (2003).
19. T. Elder and J. Strong, “The infrared transmission of atmospheric windows,” J. Franklin Inst. 255, 189-208 (1953).
20. A. Rogalski, Infrared Detectors, 2nd ed. (CRC Press, USA, 2011), pp. 321-324.
21. “Optics and photonics - Optical transfer function - Principles of measurement of modulation transfer function (MTF) of sampled imaging systems,” ISO 15529:2007 (2010).

Article

Research Paper

Curr. Opt. Photon. 2025; 9(1): 19-28

Published online February 25, 2025 https://doi.org/10.3807/COPP.2025.9.1.19

Copyright © Optical Society of Korea.

Analysis and Evaluation of the Target Detection Range of Infrared Electro-optical Tracking Systems

Kwang-Woo Park1, Sun Ho Kim1, Chi-Yeon Kim1, Sung-Chan Park2

1Agency for Defense Development, Daejeon 34060, Korea
2Department of Physics, Dankook University, Cheonan 31116, Korea

Correspondence to:*scpark@dankook.ac.kr, ORCID 0000-0003-1932-5086

Received: October 18, 2024; Revised: November 28, 2024; Accepted: December 10, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We analyze and evaluate the detection range of infrared electro-optical tracking systems (EOTS), focusing on four key factors: The target, the atmospheric conditions, the sensor, and the optical system. Using a small jet engine as our target, we derive its precise radiant intensity from an infrared camera’s output values. Additionally, we study the infrared transmission properties of the atmosphere under various weather conditions. The proposed system yields an F-number of 1.55 and an optimized field of view (FOV) of 9° × 7° by carefully balancing critical design parameters, such as FOV and signal-to-noise ratio (SNR), to achieve a detection range exceeding 2 km. The manufactured optical system demonstrates a good modulation transfer function (MTF) performance level of 14.3%. To validate our detection-range analysis, we mount a square target (5 mrad) with dimensions corresponding to 2 km on the collimator, and confirm successful image acquisition. The detection-range performance exceeding 2 km meets the operational requirements of the anti-unmanned aerial vehicle defense system (AUDS). Conventional electro-optical/infrared (EO/IR) systems have been designed and analyzed using Johnson’s Criteria, considering only the spatial resolution of the target through the lens. In contrast, this study derives and verifies the detection range by incorporating the SNR of a point target received by the sensor, providing valuable results. Experimental validation confirms that the system meets the required conditions. It enables the use of UAVs for surveillance and reconnaissance, swarm drone reconnaissance, and the integration of drone and robot combat systems for the army, navy, and air force.

Keywords: Electro-optical tracking system (EOTS), Infrared sensor, Radiant intensity, Range equation, Signal-to-noise ratio (SNR)

I. INTRODUCTION

Electro-optical/infrared (EO/IR) systems provide significant advantages over traditional radar systems, particularly in applications requiring high resolution and precise target tracking. Radar systems, while effective for detection under diverse weather conditions, are inherently limited in spatial resolution due to their reliance on radio-frequency signals [1]. In contrast, EO/IR systems utilize optical imaging to achieve superior clarity and accuracy. These capabilities make EO/IR systems particularly effective in detecting small, slow-moving targets such as drones, which are challenging for radar systems to distinguish [1].

Despite these advantages, EO/IR systems encounter performance limitations under adverse weather conditions, such as fog, heavy rain, and low visibility. According to a study by Sweden’s FOI (Swedish defence research agency), EO/IR systems experience significant performance degradation under low-visibility conditions, such as fog, heavy rain, and clouds, when visibility decreases below 5 km [2]. Under such conditions atmospheric moisture particles and aerosols scatter infrared signals, leading to signal attenuation and limiting the system’s detection performance to less than 2 km. Specifically, clouds with high moisture content, such as stratus clouds [3], can reduce visibility to as low as 30 m, making tracking and detection difficult for EO/IR systems [2, 4]. That Swedish study evaluated EO/IR sensors across various wavelength ranges, including visible (VIS), short-wavelength infrared (SWIR), mid-wavelength infrared (MWIR), and long-wavelength infrared (LWIR), under different weather conditions. It was found that under conditions of heavy fog and rain, MWIR and LWIR sensors performed better than shorter-wavelength sensors, but their ability to detect targets at distances greater than 1 km significantly deteriorated [2]. However, advancements in high-resolution sensors and high-quantum-efficiency technologies across diverse wavelength bands, including VIS, SWIR, MWIR, and LWIR, have significantly improved the reliability of EO/IR systems for various atmospheric conditions [57]. For example, SWIR sensors are particularly effective under conditions with larger atmospheric particles, where transmission at wavelengths greater than 1 µm improves. Also, integrating sensors across diverse wavelengths enables precise spectral separation, even in low-visibility scenarios, while advanced algorithms optimize sensor responses under varying atmospheric conditions.

Therefore, EO/IR systems are increasingly adopted in applications requiring reliable detection and classification, such as military surveillance, reconnaissance, and counter-drone operations [8]. Recent studies have also highlighted the use of automatic target-detection algorithms using deep-learning technologies, such as convolutional neural networks (CNNs), to enhance detection accuracy and reduce false positives, demonstrating EO/IR systems’ potential in dynamic operational environments [9, 10].

An infrared EOTS can detect the heat energy emitted by a target and extract information about the target’s position or movement. This paper builds upon these advancements, focusing on the design, development, and evaluation of an EOTS system optimized for target detection. The maximum detection range for a target is the most important criterion for evaluating the performance of a tracking system. Range performance is influenced by four main factors: (1) The radiance characteristics of the target, (2) atmospheric transmittance (i.e., the transmission characteristics of radiant energy under atmospheric weather conditions), (3) SNR, and (4) optical-system specifications. The range performance of an infrared optical tracking system can be derived from these variables, or the optical system’s specifications for the desired detection distance can be derived from the range performance.

This paper analyzes the characteristics of the four main factors (target, atmospheric properties, sensor, and optics) and designs infrared optical tracking cameras. The results are used to derive detection distance with sufficient accuracy for the prediction of system performance. The detection ranges obtained from this approach are used to define the design specifications for an optical system. The design and imaging performance are then verified experimentally to confirm the validity of the analysis.

II. PRINCIPAL ELEMENTS AFFECTING DETECTION PERFORMANCE

This section presents the calculation of the maximum range of an infrared electro-optical tracking system. The range depends on four factors: (1) The optical system, (2) SNR, (3) the radiant intensity emitted by the target in a specific wavelength band, and (4) the atmospheric transmittance and scattered-light intensity due to the atmosphere between the target and the device. In operational terms, optical tracking is defined as the detection of a one-dimensional point target on a screen, rather than identifying the target type (e.g., sedan or SUV) [11, 12]. The primary objective in terms of system performance is to determine the distance of the target. This is calculated using the SNR. The amount of energy in the infrared band emitted by a distant target is calculated after it has passed through the atmosphere to reach the sensor, considering the magnitude of the signal received by a sensor. The amount of noise is an inherent characteristic of the sensor, influenced by its design and specifications, such as the transmittance of the optical system. Infrared sensors are primarily affected by inherent noise, which depends on the design parameters of the sensor. The SNR is calculated as the signal received from the target divided by the noise. Detection at a certain distance is considered possible if the SNR exceeds the system’s SNR requirement [13, 14]. The required system SNR is calculated considering the operational characteristics and the probability of detection. In this study, the required system SNR is defined to be 10 [15].

2.1. Signal-to-noise Ratio (SNR)

The operating environment of an infrared optical tracking camera is shown in Fig. 1.

Figure 1. Environment for calculating the signal to noise ratio (SNR) of an infrared tracking camera.

The radiant intensity of a distant point target is quantified in watts per steradian (W/sr). The radiant-energy difference detected by a sensor is the difference between the radiant energy emitted from the target and the radiant energy of the background. As the signal traverses the atmosphere, the radiant energy difference is attenuated. Furthermore, scattering and radiation occur along the path between the target and the sensor.

In the long-wavelength infrared band, up to approximately 10 km, the energy scattered by the atmosphere is negligible and thus excluded from the calculation. To assess the transmission of a selected wavelength, it is necessary to consider only those atmospheric variables that affect the transmission of the selected wavelength band. The energy resulting from atmospheric radiation along the signal path between the target and the sensor (path radiance) is uniformly added to the target or background. This results in a simple increase of the offset value of the output, regardless of differences in the sensor’s output. Consequently, the contrast ratio remains unaltered, and there is no additional increase in photon noise. This is because the total energy added to both near and far targets is the same, just like the energy from atmospheric transmission and path radiation. Thus, it can be concluded that the difference in radiance between the target and the background is exclusively attenuated by atmospheric transmission.

The energy is further attenuated by the transmittance of the optical system as the signal reaches the sensor. Image blur in the optical system causes the target energy to be scattered instead of hitting the sensor at a geometric point, resulting in some energy loss. The remaining energy is then incident upon the sensor, and converted to an electrical signal.

In the context of Fig. 2, the amount of flux reaching the detector can be expressed as shown in Eq. (1). We can also write an expression for the signal voltage Vsig produced by the detector, by multiplying Eq. (1) by the responsivity Rv, as shown in Eq. (2). We can formally divide each side of Eq. (2) by the root mean square (RMS) noise vn to derive the SNR, as expressed in Eq. (3). For a search system that detects a pulse signal Φp representing a target, SNR is considered to be the ratio of the peak signal to the rms noise. Applying the definition of noise-equivalent power (NEP), this can be expressed as shown in Eq. (4), and it can be further expressed as shown in Eq. (5a). Continuing with Eq. (5a), expressing NEP in terms of the detector detectivity D*, the detector area Ad, and the bandwidth Δf yields Eq. (5b). From Eq. (5b), the SNR produced by the system increases with an increase in the entrance pupil’s area A0, target intensity ΔI, and detector detectivity D*. On the other hand, the SNR decreases as the target distance R, bandwidth Δf, and detector area Ad increases. The detector area can be calculated from the solid angle Ωd of the optics and the detector, as shown in Eq. (6a), Eq. (6b). By applying this and rearranging for the distance R, it can be expressed as shown in Eq. (7). From Eq. (7), R can be categorized according to factors for target, detector, optics, and signal processing. Eq. (7) is referred to as the range equation [8, 9].

Figure 2. Point-target detection schematic.

Φd= ΔIτa×Ω0τ0=ΔI × τa  × A0 × τ0· yxPSFx x0 ,y y0 dxdyR2,
Vsig=Rv×Φd ,
SNR= VsigVn= RvVn×Φd ,
NEP=ΦpSNR=VnRv,
SNR=VsigVn=VsigVn×Φd,
SNR=VsigVn=D*AdΔf×Φd,
Ωd=AdF2,
Ad=Ad×F,
R= [ΔI ·]1/2Target×[D* ]1/2Detector ×A0·τ0· yxPSFx x0 ,y y0 dxdy]Optics× 1 SNR· AD·Δf 1/2Signal Processing,

where

Φd: The amount of flux reaching the detector,

Φp: Pulse signal of the target,

Ω0: Solid angle (sr) from target to lens,

Ωd: Solid angle (sr) from lens to detector,

ΔI: Point-target radiance intensity (W/sr),

R: Sensor-to-target range (cm),

τa: Atmospheric transmittance,

τ0: Optical transmittance,

A0: Entrance pupil’s area (cm2),

AD: Effective area of the sensor element (cm2),

Vsig: Signal voltage,

Vn: RMS noise,

Rv: Detector responsivity,

NEP: Noise-equivalent power,

PSF: Point-spread function in the array plane,

Δf: Equivalent bandwidth of noise (Hz),

D*: Specific detectivity (cmHz/watt).

2.2. SNR Component Analysis

2.2.1. Target Radiant Intensity ΔI

Accurate target data is crucial for range calculations. This study focuses on small airborne targets, specifically examining the infrared radiant energy emitted from the heated surfaces of small aircraft engines and rockets. The engine’s power is varied at three levels: 30%, 50%, and 80% of maximum power. Figure 3 illustrates the measurement method employed in this study.

Figure 3. Calculation of a jet engine’s radiant intensity, using the values derived from images from the FLIR camera.

First, a LWIR image of the engine is acquired using a FLIR A6751 SLS camera (Teledyne FLIR LLC., OR, USA). A frame rate of 60 Hz, an exposure time of 0.05 ms, and a lens focal length of 50 mm are selected for the LWIR camera. Second, a radiometric calibration experiment is performed in the laboratory to determine the infrared signal value of the target, which is considered to be a blackbody. Radiometric calibration includes determining the relationship between the input infrared radiation and the output signal, enabling the estimation of the input infrared radiation when a given output signal is provided. To obtain the input-output relationship, the system output for a blackbody is used. Since a blackbody emits infrared radiation equal to the Planck radiance multiplied by its emissivity, knowing the emissivity allows the input infrared radiation to be determined. In practice, the energy incident upon the system includes not only the blackbody radiation, but also the ambient energy reflected from the blackbody. For a system operating in the infrared wavelength range of λ1 to λ2, the infrared radiation per unit area and per unit solid angle emitted from a blackbody at temperature is given by Eq. (8). Here the blackbody is assumed to be very close to the system, so atmospheric attenuation is neglected, and the emissivity (ϵB) of the blackbody is assumed to be constant with respect to wavelength and blackbody temperature. The Planck blackbody radiance at the blackbody temperature, B(Tb), and is expressed as shown in Eq. (9) [16].

Qin=ϵBBTb+1ϵBETbs,
BTb= 1πλ2λ1 C 1 ( λ5 e c 2/λ T b1 dλ,

where

Qin: Infrared radiance input into the system (W/sr/cm2),

ϵB: Blackbody emissivity,

B(Tb): Planck’s blackbody radiance (λ1λ2) at a blackbody temperature of Tb (W/sr/cm2),

E(Tbs): Infrared radiance emitted by a blackbody at an ambient temperature of Tbs (W/sr/cm2),

Tb: Blackbody temperature (K),

Tbs: Blackbody ambient temperature (K),

C1: First constant of blackbody radiation (3.7418 × 104 W ∙ μm4/cm2),

C2: Second constant of blackbody radiation (1.4388 × 104 W ∙ μm4/cm2).

Figure 4 shows the basic setup for radiometric calibration of the camera. The blackbody used for calculating radiance in the LWIR band is the SR800R-7D-ET (CI Systems Inc.). It has an aperture size of 7″ × 7″ and an operating temperature range of 0 ℃ to 175 ℃. For standard blackbodies used in radiometric calibration, an emissivity level of up to 0.999 is typically required. The calibrated blackbody used in this experiment has an emissivity of 0.992.

Figure 4. Energy reaching the FLIR camera from the blackbody radiator, in the laboratory environment.

The system’s output is directly related to the radiance Qin, enabling input-output modeling. The relationship between the blackbody Planck radiance B(Tb) and the input energy Qin is known, as shown in Eq. (8). To simplify the calculation, the relationship between the Planck radiance B(Tb) and the output signal (grayscale level) is determined, instead of the total input energy. This allows us to establish the relationship between the input energy and the output signal. The input Planck radiance corresponding to an arbitrary output signal (grayscale level) can be determined, and by substituting it into Eq. (8), the total input radiance can be estimated. To achieve this, the radiometric calibration measurement setup is configured as shown in Fig. 5. Note that the radiance of an image formed by an optical system is equal to that of the object, per the law of conservation of radiance [11, 12, 17]. Applying this law to Fig. 5 leads to Eq. (10), as follows:

Figure 5. Radiometric calibration setup used in the laboratory. Ad: Area of the blackbody (cm2), Ω′: solid angle (sr) (blackbody → FLIR camera), R: distance (blackbody → FLIR camera), A0: area of the FLIR camera (cm2), D: diameter of the FLIR camera, f : focal length of the FLIR camera, Ω: solid angle (sr) (FLIR camera → detector), Ad: area of the detector (cm2).

$$A'_d\,\Omega' = A_d\,\Omega. \tag{10}$$

Figure 6 shows the relationship between the measured radiance and the output grayscale level. Using the measured results, a polynomial equation is derived. Based on this equation, the Planck radiance B(Tb) at output grayscale levels of 1,735, 1,861, and 2,159 is calculated to be 0.038 W/sr/cm2, 0.042 W/sr/cm2, and 0.052 W/sr/cm2, respectively.
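
The calibration step behind Fig. 6 can be sketched as a polynomial fit of radiance against grayscale level. In the minimal Python example below, the calibration pairs are hypothetical placeholders (the measured pairs behind Fig. 6 are not reproduced here); only the fitting-and-evaluation procedure mirrors the one described in the text.

```python
import numpy as np

# Hypothetical calibration pairs (grayscale level, Planck band radiance in W/sr/cm^2).
gray = np.array([1500, 1700, 1900, 2100, 2300])
radiance = np.array([0.030, 0.037, 0.044, 0.051, 0.058])

# Fit a low-order polynomial radiance = p(grayscale), as done for Fig. 6
coeffs = np.polyfit(gray, radiance, deg=2)
to_radiance = np.poly1d(coeffs)

# Evaluate at the grayscale levels measured for each engine-power setting
for level in (1735, 1861, 2159):
    print(f"grayscale {level} -> B(Tb) ~ {to_radiance(level):.3f} W/sr/cm^2")
```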

Figure 6. The relationship between blackbody radiation and output.

By inputting the specific data of the system depicted in Fig. 5 into Eq. (10), the solid angle of incidence Ω at the detector is found to be 1.26 × 10⁻⁵ sr. From the results of the laboratory radiometric calibration, the total radiance incident upon the sensor Qtotal and the radiant power incident on a pixel of the sensor Ptotal can be calculated using Eqs. (11) and (12), as follows:

$$Q_{total} = \epsilon_B\,\tau_{atm}\,B(T_b) + (1 - \epsilon_B)\,\tau_{atm}\,E(T_{bs}) + (1 - \tau_{atm})\,A_{atm}, \tag{11}$$

$$P_{total} = B(T_b)\,\tau_{atm}\,\Omega\,A_d, \tag{12}$$

where

τatm: Atmospheric transmittance in the laboratory,

Aatm: Path radiance of the atmosphere in the laboratory,

Ptotal: Radiant power incident on a detector pixel at the measurement point (W).

The atmospheric transmittance τatm in the laboratory is calculated using the PcModWin software (Ontar Co., MA, USA), based on the MODerate resolution atmospheric TRANsmission (MODTRAN) model [18].

The radiance and radiant output are calculated for each engine-power value. The radiant-output values detected by the FLIR camera are presented in Table 1.

TABLE 1. Radiant-power values detected by the FLIR camera.

Engine Output (%)           30              50              80
Image                       (acquired LWIR image for each engine output)
Output Grayscale Level      1,735           1,861           2,159
B(Tb) (W/sr/cm2)            0.038           0.042           0.052
τatm                        0.9932          0.9932          0.9932
Ω′ (sr)                     1.26 × 10−5     1.26 × 10−5     1.26 × 10−5
Ptotal (W)                  1.07 × 10−8     1.18 × 10−8     1.46 × 10−8


Finally, when acquiring outdoor images, it is necessary to consider the atmospheric transmission between the sensor and the target. In this case the target is a flame, so we assume that its emissivity ϵ is 1. Since the reflected ambient radiation scales with (1 − ϵ), it vanishes for a target with an emissivity of 1. The atmospheric transmittance from sensor to target is again determined using the PcModWin software. Since the distance between the jet engine and the measurement camera is short (20 m), the atmospheric transmittance for the field measurements τ′atm is taken to be 0.99. The radiance analysis is performed according to Eqs. (13) and (14), and the results are shown in Table 2.

TABLE 2. Calculated flame radiance-intensity values.

Engine Output (%)                     30              50              80
Ptotal (W)                            1.07 × 10−8     1.18 × 10−8     1.46 × 10−8
τ′atm                                 0.99            0.99            0.99
Ω′ (sr)                               1.26 × 10−5     1.26 × 10−5     1.26 × 10−5
Wflame Ad (W/sr)                      1.37 × 10−2     1.52 × 10−2     1.88 × 10−2
Number of Pixels                      50              100             200
Target Radiant Intensity ΔI (W/sr)    0.69            1.52            3.76


$$Q_{total} = \epsilon\,\tau'_{atm}\,W_{flame} + (1 - \epsilon)\,\tau'_{atm}\,E(T_{bs}) + (1 - \tau'_{atm})\,A'_{atm}, \tag{13}$$

$$P_{total} = W_{flame}\,\tau'_{atm}\,\Omega\,A_d, \tag{14}$$

where ϵ is the emissivity, Wflame is the radiance of the flame (W/sr/cm2), and A′atm is the path radiance of the atmosphere in the outdoor environment.
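
The final step in Table 2, scaling the per-pixel radiant intensity Wflame·Ad by the number of pixels the flame occupies to obtain the target radiant intensity ΔI, can be reproduced directly. A minimal Python sketch using the values listed in Table 2 is given below.

```python
# Per-pixel radiant intensity Wflame*Ad (W/sr) and the number of pixels occupied
# by the flame, taken from Table 2 for 30 / 50 / 80 % engine output.
per_pixel_intensity = [1.37e-2, 1.52e-2, 1.88e-2]   # W/sr
pixel_count = [50, 100, 200]

for w, n in zip(per_pixel_intensity, pixel_count):
    delta_I = w * n   # target radiant intensity Delta-I (W/sr)
    print(f"{n:4d} pixels x {w:.2e} W/sr -> Delta-I = {delta_I:.3f} W/sr")
```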

2.2.2. Atmospheric Transmittance τa

Radiant energy from the target is attenuated as it passes through the atmosphere before reaching the sensor. To obtain accurate ranges, it is necessary to accurately calculate the atmospheric attenuation under different weather conditions. When predicting range performance, the variability of atmospheric conditions is the most difficult factor to quantify. Elder and Strong [19] published a study on calculating the infrared transmittance of the atmosphere; however, their approach considers only the radiative-energy characteristics of the atmosphere, and the atmospheric transmittance is only approximated. In this study we use MODTRAN, which provides accurate and fast atmospheric-transmittance data for different wavelength bands. MODTRAN is widely used as a computational model of atmospheric transmission: It accounts for the operating conditions of the system, and it has been verified against large amounts of data from different locations and under different conditions. By utilizing these various types of data, reliable probability-based prediction results can be obtained.
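
To illustrate how path length affects the band-averaged transmittance, a simplified Beer-Lambert stand-in can be used; the sketch below is illustrative only, with an assumed extinction coefficient, and is not a substitute for the MODTRAN (PcModWin) calculation used in this study.

```python
import numpy as np

# Simplified Beer-Lambert stand-in for band-averaged horizontal-path transmittance.
# The extinction coefficient is an assumed illustrative value, not a MODTRAN output.
sigma = 0.25   # band-averaged extinction coefficient (1/km), assumed

ranges_km = np.array([0.5, 1.0, 1.5, 2.0])
tau = np.exp(-sigma * ranges_km)
for R, t in zip(ranges_km, tau):
    print(f"R = {R:.1f} km -> tau_a ~ {t:.2f}")
```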

MODTRAN offers a choice of aerosol models, which can be broadly categorized as maritime, rural, or urban, based on the types of particles in the haze. In Korea, a peninsula surrounded by the sea on three sides, it is appropriate to use the maritime model. In the simulation, the optical system looks along a horizontal path at a height of 2 km. Mid-latitude spring-summer and autumn-winter atmospheric conditions are used. Figure 7 shows the results for the atmospheric transmittance τa.

Figure 7. Atmospheric transmittance τa along the horizon at a height of 2 km.

2.2.3. Sensor Properties D*, AD, and ∆f

The sensitivity of the detector is inversely related to the noise equivalent power (NEP) and directly proportional to the detectivity D*, which normalizes 1/NEP by the square root of the detector area AD and the noise bandwidth ∆f. Eq. (15) defines D* and its relationship to the NEP:

$$D^{*} = \frac{\sqrt{A_D\,\Delta f}}{NEP}\ \left[\mathrm{cm\cdot\sqrt{Hz}/W}\right]. \tag{15}$$

For an exposure time of 80 μs and a noise-equivalent temperature difference (NETD) of 50 mK, Eq. (15) yields a detectivity D* of 3.1 × 1011 cm·√Hz/W [20].
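
From this detectivity, the corresponding NEP can be estimated by rearranging Eq. (15). In the sketch below the noise bandwidth is assumed to be ∆f ≈ 1/(2 tint) for the 80-μs exposure time; this bandwidth assumption is not stated in the text and is used only for illustration.

```python
import math

# Rearranging Eq. (15): NEP = sqrt(A_D * delta_f) / D*
pixel_pitch_cm = 12e-4                 # 12 um expressed in cm
A_D = pixel_pitch_cm ** 2              # detector (pixel) area, cm^2
t_int = 80e-6                          # exposure time, s
delta_f = 1.0 / (2.0 * t_int)          # noise bandwidth, Hz (assumed: 1/(2*t_int))
D_star = 3.1e11                        # detectivity, cm*sqrt(Hz)/W

NEP = math.sqrt(A_D * delta_f) / D_star
print(f"delta_f ~ {delta_f:.0f} Hz, NEP ~ {NEP:.2e} W")
```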

2.2.4. Optical-system Performance (PSF, τ0, and A0)

The design parameters related to the optical system are the image blur at the focal plane, which depends on the optical aberrations, the optical transmittance, and the area of the circular objective lens that receives energy from the target. When the energy from a point target is spread out rather than concentrated at a single point, the resulting image blur can be classified into two types. The first is unavoidable and is caused by the diffraction of light; the second arises from the aberrations of the optical system being used. The blur caused by diffraction is called the Airy disk, with 84% of the total energy focused within the corresponding disk area. Eq. (16) gives the Airy disk diameter:

$$D_{Airy} = 2.44 \times \lambda_C \times F/\#, \tag{16}$$

where the central wavelength λC is 9.0 μm and the F-number is 1.55. From this formula, the Airy disk in this study is 34 μm in diameter. Since the size of a single detecting element is 12 μm × 12 μm, if an optical system with an F-number of 1.55 is designed to reach the diffraction limit, the blur spot of a point target falls within a 3 × 3-pixel area of the detecting elements. In a real optical system, it is not possible to make all of the focused rays converge to a single geometric point.
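
The numbers quoted above follow directly from Eq. (16); a short Python check is given below for reference.

```python
import math

lambda_c = 9.0      # central wavelength, um
f_number = 1.55
pixel_pitch = 12.0  # detector element size, um

d_airy = 2.44 * lambda_c * f_number          # Airy disk diameter (Eq. 16), um
pixels_across = d_airy / pixel_pitch         # number of pixels spanned by the disk
print(f"Airy disk diameter ~ {d_airy:.1f} um "
      f"(spans ~{math.ceil(pixels_across)} pixels, i.e. within a 3 x 3 pixel area)")
```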

The energy focused from a point target through the optical system has a specific distribution in the focal plane; the corresponding function is called the point-spread function (PSF). Assuming no image blur, the PSF would be a delta function, which is physically impossible. The ratio of the energy entering a detector element to the total energy is obtained by integrating the PSF over the area of a single element in the sensor plane. The PSF can be calculated during the optical-system design process, but in practice it is obtained from the rms spot diameters calculated from ray-tracing data, rather than by searching for an analytic function. The ray-tracing data can also be used to calculate the energy contained within a given distance from the center of the spot distribution, known as the encircled energy. Considering these factors, the performance target of the optical system is set so that the energy distribution, including diffraction, is at least 85% within a 3 × 3-pixel area. The PSF of the designed optical system satisfies Eq. (17) with a value of 0.85. The designed optical system is described in Section IV.

$$\iint_{3\times3\ \mathrm{pixels}} PSF\left(x - x_0,\, y - y_0\right)\, dx\, dy = 0.85. \tag{17}$$
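
As a numerical illustration of Eq. (17), the sketch below integrates a rotationally symmetric Gaussian approximation of the PSF over a 3 × 3-pixel window. The actual design uses ray-traced spot data rather than a Gaussian model, and the blur width σ below is an assumed illustrative value, not the designed one.

```python
import math

sigma = 10.0        # assumed Gaussian blur radius, um (illustrative only)
half_width = 18.0   # half-width of a 3 x 3 pixel window (3 * 12 um / 2), um

def gaussian_fraction(a, s):
    """Fraction of a centered 2-D Gaussian PSF inside a square of half-width a."""
    one_axis = math.erf(a / (s * math.sqrt(2.0)))
    return one_axis ** 2    # the 2-D integral separates into x and y factors

print(f"Encircled-energy fraction over 3x3 pixels ~ {gaussian_fraction(half_width, sigma):.2f}")
```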

The next parameter of the optical system is the optical transmittance τ0. This is a crucial performance parameter, as it multiplies the incident energy directly and thus determines the amount of energy reaching the detector. The optical transmittance accounts for the losses from reflection at each lens surface and from absorption within the lens material itself. For the LWIR band, a double-sided antireflection coating provides a transmittance of more than 98% per element, excluding absorption by the material. Considering the four lenses, IR filter, and cover glass that make up the optical system, the transmittance of the system is estimated to be approximately 60%.
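
A simple throughput budget illustrates how per-element losses combine into the system transmittance. The per-element values below are assumed for illustration (roughly 98% coating transmittance combined with bulk absorption); they are not measured data for this design.

```python
# Rough throughput budget for the LWIR train (4 lenses + IR filter + cover glass).
# Per-element transmittances are assumed illustrative values, not measured data.
elements = {
    "L1": 0.93, "L2": 0.92, "L3": 0.93, "L4": 0.92,
    "IR filter": 0.88, "cover glass": 0.95,
}

tau_0 = 1.0
for name, t in elements.items():
    tau_0 *= t
print(f"Estimated optical transmittance tau_0 ~ {tau_0:.2f}")
```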

The final parameter of the optical system is the entrance-pupil area A0 of the objective lens, which represents the light-collection efficiency of the system. The area of the objective lens cannot be set arbitrarily; it is governed by the FOV, which determines the effective focal length of the lens. Depending on the FOV, the light-collection efficiency and the SNR of the sensor change, and so does the maximum detection distance. Therefore, the FOV of the optical system can be adjusted to meet the required target-detection distance.

III. DETECTION-RANGE PREDICTION AND DESIGN SPECIFICATIONS

The optimum FOV should be selected by considering both the coverage of the optical system and the pivoting range of the instrument upon which it is mounted. In this study the ranges are analyzed for five horizontal FOVs: 3°, 6°, 9°, 12°, and 15°. The radiant intensity of the target corresponds to 50% engine power. The atmospheric conditions considered are the mid-latitude spring-summer and autumn-winter seasons. The maximum detection distance is based on a required SNR of 10.
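
The detection-range sweep can be sketched with a generic point-source SNR model that combines the factors discussed above (ΔI, τa, τ0, A0, PSF fraction, D*, AD, and ∆f) and finds the farthest range at which the SNR is still 10 or higher. All numerical values in the Python sketch below are illustrative assumptions: The entrance-pupil area, noise bandwidth, and band-averaged extinction are not the design values behind Figs. 8 and 9, which use MODTRAN spectral transmittance and the full sensor model.

```python
import numpy as np

# Generic point-source SNR model (illustrative):
# SNR(R) = dI * tau_a(R) * tau_0 * eta_psf * A_0 * D* / (R^2 * sqrt(A_D * df))
dI = 1.52                 # target radiant intensity, W/sr (50% engine output)
tau_0 = 0.60              # optical transmittance
eta_psf = 0.85            # fraction of blur energy within the 3x3-pixel window
A_0 = 20.0                # entrance-pupil area, cm^2 (assumed)
D_star = 3.1e11           # detectivity, cm*sqrt(Hz)/W
A_D = (12e-4) ** 2        # pixel area, cm^2
df = 6250.0               # noise bandwidth, Hz (assumed)
sigma = 1.5               # band-averaged extinction, 1/km (assumed)

R_km = np.linspace(0.2, 5.0, 481)
R_cm = R_km * 1e5
tau_a = np.exp(-sigma * R_km)                       # simplified stand-in for MODTRAN data
snr = dI * tau_a * tau_0 * eta_psf * A_0 * D_star / (R_cm**2 * np.sqrt(A_D * df))

detectable = R_km[snr >= 10.0]                      # ranges meeting the SNR = 10 criterion
if detectable.size:
    print(f"Max range for SNR >= 10 (illustrative): {detectable.max():.2f} km")
else:
    print("SNR < 10 over the whole range grid")
```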

The maximum detection distance required by the system is about 2 km. The EOTS is mounted on an aircraft and is used to detect and identify unauthorized drones, to protect critical facilities. Target detection is performed using an automatic tracking algorithm, and the detection distance required for its operation is 2 km.

When operating the system, a large FOV reduces the required mechanical scanning range if all other factors are fixed, but it also reduces the detection range. Since the optical system is mounted on an aircraft, it should be compact and lightweight. When the FOV is 9° or less, the detection range is satisfactory; however, as the FOV decreases the aperture of the optical system increases, so a small FOV enlarges the space and weight requirements.

The optimum FOV of this optical system is determined from the detection-range analyses in Figs. 8 and 9, which yield an FOV of 9°. When the FOV of the optical system is 9°, we obtain a target-detection range of 2.1 km and 2.2 km under spring-summer and autumn-winter atmospheric conditions, respectively, which meet the requirement of a detection range of 2 km or greater. The detailed design specifications are given in Table 3.

Figure 8. Analysis of the detection range with the field of view (FOV) of the studied optical system, for spring-summer: ① FOV: 3°, ② FOV: 6°, ③ FOV: 9°, ④ FOV: 12°, ⑤ FOV: 15°.

Figure 9. Analysis of the detection range with the field of view (FOV) of the studied optical system, for fall-winter: ① FOV: 3°, ② FOV: 6°, ③ FOV: 9°, ④ FOV: 12°, ⑤ FOV: 15°.

TABLE 3. Optical specifications of the long-wavelength infrared (LWIR) camera.

Parameter                                                    Specification
Wavelength (μm)                                              8–10
F-number (F/#)                                               1.55
Image Sensor Pixel Size (μm)                                 12 × 12
Image Sensor Pixel Number                                    1,024 × 768
FOV (°)                                                      9.0 × 7.0
Transmittance (%)                                            60
Designed MTF at Nyquist Frequency (%)                        More than 20 at Center Field
Athermalized (−30 ℃ to 70 ℃) MTF at Nyquist Frequency (%)    More than 20 at Center Field
Nyquist Frequency (cycles/mm)                                42

IV. OPTICAL-SYSTEM DESIGN

The sensor uses an uncooled infrared detector with a pixel size of 12 μm, in a 1,024 × 768 array. The optical system has an F-number of 1.55 and operates in the wavelength range of 8–10 μm. To achieve a transmittance of more than 60% with the LWIR camera, the number of lenses is minimized, as shown in Fig. 10. A passive athermalization method is chosen to meet the performance requirements from −30 ℃ to 70 ℃. The chalcogenide glass IRG26 (Schott AG, Mainz, Germany) is applied to L2 and L4 to satisfy the athermal performance requirements, and Ti6Al4V is used for the lens housing. Figure 11 illustrates the athermal design results. The modulation transfer function (MTF) at the center field is maintained between 21.6% and 22.2% over the entire temperature range of −30 ℃ to 70 ℃.

Figure 10. Optical design layout.

Figure 11. Athermal design results: Variation of modulation transfer function (MTF) with temperature.

Evaluation of the encircled energy of the designed optical system confirms that more than 85% of the energy from a point target falls within a 3 × 3-pixel area, as described in Fig. 12. The MTF, an optical parameter representing the performance of an optical system, indicates the decrease in contrast ratio as the spatial frequency increases. MTF values over all fields are more than 27% at the Nyquist frequency of 42 cycles/mm, which is very close to diffraction-limited performance, as shown in Fig. 13.

Figure 12. Encircled energy of the designed camera.

Figure 13. Modulation transfer function (MTF) of the designed camera.

V. EVALUATION

The optical system designed in the previous section is evaluated in terms of imaging performance and detection range. The MTF is an important criterion for evaluating the performance of imaging systems [21]; the MTF of the manufactured optical system should be 10% or higher. Figure 14 shows a schematic setup for measuring the MTF of the manufactured camera system. The infrared beam emitted from the half-moon target on the collimator is directed into the camera and focused onto the detector. The blackbody temperature is set to 300 K, and the collimator focal length is 1,700 mm. Figure 15(a) shows the image of the half-moon target acquired through the camera. The measurement results in Fig. 15(b) show that the MTF is 14.3% at the Nyquist frequency of 42 cycles/mm.

Figure 14. Experimental setup for modulation transfer function (MTF) measurement and detection-range assessment.

Figure 15. Modulation transfer function (MTF) results: (a) Half-moon target, and (b) MTF graph (14.3% at Nyquist frequency of 42 cycles/mm).

In actual system operation, detection is defined as achieving a detection probability of 95% or higher on plateau-enhanced images, by running the automatic detection algorithm after accumulating 50 image frames acquired from the target. In this process, the precise detection distance is derived from the algorithm.

However, image tests conducted in the laboratory aim to determine whether the camera can resolve images of a target with defined size and temperature corresponding to the desired detection range, using a collimator. If the images are clearly acquired, this is evidence that the manufactured camera is capable of achieving the target detection range during actual operation.

We use a square target with a size of 5 mrad, equivalent to the target size at 2 km. The blackbody mounted behind the square target is set to simulate the emission temperature of the target.
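
For reference, the geometry of the 5-mrad test target can be checked directly: Its physical extent at the 2-km detection range follows from the angular size, and its physical size on the collimator focal plane follows from the 1,700-mm collimator focal length quoted in the MTF setup. The on-collimator size in the sketch below is derived from that focal length and is not stated explicitly in the text.

```python
# Quick geometry check for the 5-mrad square test target.
angular_size = 5e-3          # rad
detection_range_m = 2000.0   # m
collimator_focal_m = 1.7     # m (1,700-mm collimator from the MTF setup)

size_at_range = angular_size * detection_range_m         # target extent at 2 km
size_on_collimator = angular_size * collimator_focal_m   # physical target on the collimator
print(f"5 mrad at 2 km -> {size_at_range:.0f} m square")
print(f"5 mrad on the 1,700-mm collimator -> {size_on_collimator * 1e3:.1f} mm square")
```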

The blackbody temperature is determined from the radiometric calibration results of the camera described in Section 2.2.1. Based on the measured relationship between the Planck radiance and the output grayscale level, the blackbody temperatures simulating a 5-mrad object at a distance of 2 km are 320 K, 373 K, and 400 K for target radiant intensities of 0.69 W/sr, 1.52 W/sr, and 3.76 W/sr, respectively, at each level of engine power.

The measured results, listed in Table 4, show that the image is clearly acquired. The validity of the detection distance analysis proposed in this study is experimentally confirmed.

TABLE 4. Results for the detection range, for conditions equivalent to the flight laboratory environment.

Engine Output (%)                  30         50         80
Target Radiant Intensity (W/sr)    0.69       1.52       3.76
Blackbody Temperature (K)          320        373        400
Acquired Raw Image                 (acquired image for each engine output)

VI. CONCLUSION

This study has presented a novel approach to optimizing the maximum detection range of an infrared electro-optical tracking system. To derive the detection range, factors such as target characteristics, atmospheric transmittance, sensor performance, and optical-system specifications were considered. The designed optical system, with an optimized FOV of 9.0° × 7.0° and an F-number of 1.55, was experimentally validated to achieve a detection range exceeding 2 km, satisfying the performance requirements and meeting the operational requirements of the anti-unmanned aerial vehicle defense system (AUDS). Laboratory measurements also demonstrated that the system achieves a measured MTF of 14.3% at the Nyquist frequency, exceeding the 10% requirement.

Although the proposed system’s detection range was successfully validated under controlled conditions, the study relied on specific atmospheric conditions and target characteristics, which may not fully represent practical scenarios.

For example, the degradation of performance under adverse weather conditions, such as fog and rain, presents a significant obstacle to long-range target detection. Additionally, variability in atmospheric conditions and sensor calibration remains a critical issue. To improve the reliability of detection distance in infrared electro-optical tracking systems, future research should focus on advanced experimental setups, including long-term field tests under diverse environmental conditions, as well as the development of more accurate atmospheric attenuation models.

This study will enhance the reliability and operational efficiency of unmanned aerial vehicles for surveillance, swarm drone reconnaissance, and the integration of drone-robot combat systems for military applications.

Acknowledgments

We thank Siyoun Choi of LIG Nex1 for support in manufacturing and measurement.

FUNDING

Agency for Defense Development Grant, funded by the Korean Government (912A16701).

DISCLOSURES

The authors declare no conflicts of interest.

DATA AVAILABILITY

The data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.



References

  1. T. Müller, H. Widak, M. Kollmann, A. Buller, L. W. Sommer, R. Spraul, A. Kröker, I. Kaufmann, A. Zube, F. Segor, T. Perschke, A. Lindner, and I. Tchouchenkov, “Drone detection, recognition, and assistance system for counter-UAV with VIS, radar, and radio sensors,” Proc. SPIE 12096, 12096A (2022).
  2. O. Steinvall, D. Svedbrand, and T. Svensson, “Optical sensing during low visibility conditions,” Proc. SPIE 11160, 111600J (2019).
  3. F.-Y. Song, Y. Lu, Y. Qiao, H.-F. Tao, C. Tang, and Y.-S. Ling, “Simulations of infrared atmospheric transmittance based on measured data,” Proc. SPIE 10157, 10157E (2016).
  4. O. Steinvall, R. Persson, F. Berglund, O. Gustafsson, J. Öhgren, and F. Gustafsson, “Using an eye-safe laser rangefinder to assist active and passive electro-optical sensor performance prediction in low visibility conditions,” Opt. Eng. 54, 074103 (2015).
  5. P. Raphael and W. F. John, “Lens requirements for sub-10 μm pixel pitch uncooled microbolometers,” Proc. SPIE 12533, 125330O (2023).
  6. M. L. Pieper, R. Lockwood, and M. Chrisp, “Hyperspectral SWIR sensor parameterization for optimal methane detection,” Proc. SPIE 12688, 126880F (2023).
  7. S. A. Sánchez-Maes, J. Ho, I. Anderson, E. Aguirre-Contreras, C. Barcroft, J. Barstow, D. Caldwell, B. D'Aquino, G. Dubinsky, T. M. Gauron, J. Hong, A. T. Kenter, C. S. Moore, R. Nere, R. Pandohie, and C. Suarez, “SSAXI-Rocket delta-doped CMOS sensors,” Proc. SPIE 13103, 131030N (2024).
  8. K. W. Park, J.-Y. Han, J. Bae, S.-W. Kim, and C.-W. Kim, “Novel compact dual-band LOROP camera with telecentricity,” Opt. Express 20, 10921-10932 (2012).
  9. J. E. Ball, D. T. Anderson, and C. S. Chan, “Comprehensive survey of deep learning in remote sensing: Theories, tools, and challenges for the community,” J. Appl. Remote Sens. 11, 042609 (2017).
  10. H. Kim and C. Lee, “Upcycling adversarial attacks for infrared object detection,” Neurocomputing 482, 1-13 (2022).
  11. R. Hartmann and W. J. Smith, Infrared Optical Design and Fabrication: Critical Reviews of Optical Science & Technology, 1st ed. (SPIE Press, USA, 1991), pp. 44-54.
  12. M. Schlessinger, Infrared Technology Fundamentals, 2nd ed. (Routledge, USA, 1995), pp. 12-25.
  13. M. C. Dudzik, The Infrared and Electro-Optical Systems Handbook: Electro-Optical Systems Design, Analysis and Testing (Infrared Information Analysis Center and SPIE Press, USA, 1993), Vol. 4, pp. 63-66.
  14. J. Yoon, D. Ryu, S. Kim, S. Seong, W. Yoon, J. Kim, and S.-W. Kim, “Long-distance flame detection simulation for a new MWIR camera,” Korean J. Opt. Photon. 25, 245-253 (2014).
  15. L. LaCroix and S. Kurzius, “Peeling the onion: A heuristic overview of hit-to-kill missile defense in the 21st century,” Proc. SPIE 5732, 583369 (2005).
  16. K. W. Park and S. H. Kim, “Analysis of flame radiant intensity using image output values of infrared cameras,” J. Korean Inst. Illum. Electr. Install. Eng. 36, 1-7 (2022).
  17. W. J. Smith, Modern Optical Engineering, 4th ed. (McGraw-Hill, USA, 2008), pp. 259-263.
  18. MODTRAN4 Ver. 3 User's Manual, Spectral Sciences Inc., MA, USA (2003).
  19. T. Elder and J. Strong, “The infrared transmission of atmospheric windows,” J. Franklin Inst. 255, 189-208 (1953).
  20. A. Rogalski, Infrared Detectors, 2nd ed. (CRC Press, USA, 2011), pp. 321-324.
  21. “Optics and photonics-Optical transfer function-Principles of measurement of modulation transfer function (MTF) of sampled imaging systems,” ISO 15529:2007 (2010).