Current Optics and Photonics
Curr. Opt. Photon. 2023; 7(2): 200-206
Published online April 25, 2023 https://doi.org/10.3807/COPP.2023.7.2.200
Copyright © Optical Society of Korea.
Hyunjoon Sung1, Yunkyung Kim1,2
1Department of ICT Integrated Safe Ocean Smart Cities Engineering, Dong-A University, Busan 49315, Korea
2Department of Electronic Engineering, Dong-A University, Busan 49315, Korea
Corresponding author: yunkkim@dau.ac.kr, ORCID 0000-0002-4338-7642
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Near-infrared (NIR) sensing technology using CMOS image sensors is used in many applications, including automobiles, biological inspection, surveillance, and mobile devices. An intuitive way to improve NIR sensitivity is to thicken the light absorption layer (silicon). However, even with thicker silicon, NIR sensitivity remains insufficient, and thick silicon has other disadvantages, such as diminished optical performance (e.g., increased crosstalk) and difficulty in processing. In this paper, a pixel structure for NIR sensing using a stacked CMOS image sensor is introduced. The stacked CMOS image sensor has two photodetection layers: a conventional photodiode and a bottom photodiode. The bottom photodiode is used as the NIR absorption layer, so the suggested pixel structure does not change the thickness of the conventional photodiode. To verify the suggested pixel structure, the sensitivity was simulated using an optical simulator. As a result, the sensitivity was improved by a maximum of 130% and 160% at wavelengths of 850 nm and 940 nm, respectively, with a pixel size of 1.2 μm. Therefore, the proposed pixel structure is useful for NIR sensing without thickening the silicon.
Keywords: CMOS image sensor, High sensitivity, Infrared camera, Optical sensors, Wavelength sensitivity
OCIS codes: (040.5160) Photodetectors; (110.3080) Infrared imaging; (230.5170) Photodiodes; (280.4788) Optical sensing and sensors
Near-infrared (NIR) sensing technology has been extensively applied to automobiles, biological inspection, surveillance, mobile devices, and fiber optic communication [1-4]. NIR is a subset of the infrared band of the electromagnetic spectrum, covering wavelengths ranging from 0.7 μm to 1.4 μm. NIR is just outside the range of what humans can see and sometimes offers clearer details than what is achievable with visible images. NIR sensitivity is one of the important optical parameters for improving the image quality of NIR applications. However, silicon-based image sensors lack NIR sensitivity and have low absorption in the NIR wavelength band due to the limitation of silicon thickness [5].
Many studies are underway to increase NIR sensitivity by increasing the thickness of silicon or adding optical elements. The simplest way to improve NIR sensitivity is to make the silicon, as the light absorption layer, thicker [6, 7]. Increasing the silicon thickness to 6.0 μm results in a 50% sensitivity improvement at a wavelength of 940 nm [6]. However, while thick silicon improves NIR sensitivity, it degrades certain optical characteristics, such as lateral crosstalk, especially with small pixel sizes [8].
Another method to increase NIR sensitivity is to add optical elements such as an inverted pyramid array (IPA) structure, backside scattering technology (BST), black silicon, or gratings to the pixel structure [7, 9-11]. An IPA silicon surface has been proposed and developed to increase the light propagation length and effective silicon thickness through light diffraction [7]. The IPA structure showed an 80% improvement in sensitivity, even though the light absorption layer was only 3.0 μm thick. BST has also been introduced [9]. When incident light strikes the BST pattern, it is scattered; as a result, the light path in the silicon lengthens and light absorption improves. In addition, a black silicon surface, which uses nanostructures smaller than the wavelength on the silicon surface, has been reported to improve NIR sensitivity [10]. Black silicon is formed with random needle-shaped nanostructures on the surface. As the name suggests, black silicon has been applied to c-Si solar cells and image sensors because it absorbs light very efficiently over a wide range of wavelengths. Finally, improvement of NIR sensitivity using a grating structure has also been reported recently. A plasmonic image sensor with a surface metal grating structure improves NIR sensitivity [11]. Under the quasi-resonant condition of the plasmonic sensor, incident light is diffracted and the optical path is increased. Moreover, due to plasmon resonance, most of the light incident on the surface is converted into surface plasmon waves, which suppresses the reflectance of the sensor surface and thereby improves the light utilization efficiency. A silver grating structure improved sensitivity by 5.3 times at a wavelength of 940 nm in a 3.0 μm silicon image sensor. However, all of these pixel structures introduce some optical degradation: NIR sensitivity is improved by refracting, reflecting, and strongly diffracting the incident light, but these same optical effects increase crosstalk, which directly affects image quality. In addition, the improvement in NIR sensitivity is still insufficient compared to visible light.
In this paper, we propose a pixel structure using a stacked CMOS image sensor with a stacked photodiode architecture to improve sensitivity in NIR wavelength bands. Chapter 2 describes the concept of the stacked photodiode pixel structure, and Chapter 3 presents and discusses the simulation results. Conclusions are presented in Chapter 4.
The NIR sensitivity of silicon-based image sensors is significantly lacking compared to visible light sensitivity. As mentioned above, the simplest way to increase NIR sensitivity is to increase the thickness of the silicon. Therefore, we first examined how the sensitivity increases according to the silicon thickness. Figure 1 shows the simulated sensitivity of typical pixel structures with silicon thicknesses of 3.0 μm, 4.0 μm, 5.0 μm, and 6.0 μm. As the silicon thickness increased, the sensitivity increased in the relatively long wavelength range above 600 nm.
Figure 1 shows normalized sensitivities for these silicon thicknesses in the 400–1,000 nm wavelength range. As shown in the graph, the sensitivity of 6.0 μm silicon increased by 42% and 60% at NIR wavelengths of 850 nm and 940 nm, respectively, compared to 3.0 μm silicon. However, although the silicon thickness was doubled, the sensitivities at the 850 nm and 940 nm NIR wavelengths are still only 50% and 30%, respectively, of the sensitivity at the visible wavelength of 650 nm. Therefore, a new pixel structure is required to improve NIR sensitivity rather than simply increasing the silicon thickness.
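For intuition, a single-pass Beer-Lambert estimate reproduces this trend: the absorbed fraction scales as 1 − exp(−αd), where α is the absorption coefficient of silicon and d is the thickness. The sketch below is only illustrative; it uses approximate, publicly tabulated absorption coefficients (not values from this work) and ignores the microlens, reflections, and the rest of the optical stack, so its numbers differ from the full pixel simulation. It mainly shows that at NIR wavelengths the absolute absorption stays small even when the thickness doubles.

```python
import math

# Approximate room-temperature absorption coefficients of crystalline silicon
# (cm^-1). Illustrative values from commonly tabulated data, not from this paper.
ALPHA_SI = {650: 2.8e3, 850: 5.4e2, 940: 1.9e2}

def single_pass_absorption(wavelength_nm: int, thickness_um: float) -> float:
    """Fraction of light absorbed in one pass through silicon (Beer-Lambert)."""
    alpha_per_cm = ALPHA_SI[wavelength_nm]
    thickness_cm = thickness_um * 1e-4
    return 1.0 - math.exp(-alpha_per_cm * thickness_cm)

for wl in (650, 850, 940):
    a3 = single_pass_absorption(wl, 3.0)  # 3.0 um silicon
    a6 = single_pass_absorption(wl, 6.0)  # 6.0 um silicon
    print(f"{wl} nm: 3 um -> {a3:.1%}, 6 um -> {a6:.1%}, relative gain {a6 / a3 - 1:.0%}")
```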
We suggest a back-side-illuminated (BSI) pixel structure with a stacked photodiode architecture to improve NIR sensitivity, as shown in Fig. 2. Since the absorption rate of silicon decreases as the wavelength increases, the penetration depth of photons increases [12]. Therefore, the longer the wavelength, the longer the optical path must be. The proposed stacked photodiode architecture provides a longer optical path than the typical pixel structure, so the sensitivity is expected to improve. The suggested structure is based on three-dimensional (3D) stacked image sensor technology. 3D stacked image sensors that bond three wafers are already applied and commercialized in many industries, and in this paper a photodiode is formed in the pixel transistor layer based on this technology [13]. Figure 2(a) shows a conceptual diagram of the proposed stacked CMOS image sensor. The stacked CMOS image sensor consists of three layers: a photodetection layer, a pixel transistor layer, and a logic layer. The photodetection layer mainly comprises the silicon light absorption layer, the color filter, and the microlens. The pixel transistor layer contains the pixel transistors that read out and amplify the photocurrent. The logic layer includes an analog-to-digital converter (ADC) circuit, which converts the analog signal into a digital signal, and an image signal processing (ISP) circuit, which improves image quality through various signal processing procedures. Figure 2(b) shows the suggested pixel structure, including the photodetection layer and the pixel transistor layer. The suggested structure has two light absorption regions: the top and bottom photodiodes. The bottom photodiode is located in the pixel transistor layer below the top photodiode and absorbs the light transmitted through the top photodiode. The bottom photodiode, like the top photodiode, has a floating diffusion (FD) node and a transfer gate for readout. The transfer gate transfers the charge generated by the photodiode to the FD node, and the charge collected in the FD node is converted into a voltage. The suggested pixel structure has wiring to connect the FD node and the pixel transistors; in Fig. 2 this wiring, called the deep contact, connects the FD nodes of the photodetection layer and the pixel transistor layer. The deep contact is one of the methods for interconnecting wafers in 3D stacked image sensors [14]. It enables a connection between the top photodiode and the pixel transistors, and it is also area-efficient with respect to the bottom photodiode.
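As a structural summary only, the sketch below parameterizes the stacked pixel of Fig. 2 as a small data class. The field names and defaults are illustrative assumptions for this paper's description, not a simulator input format.

```python
from dataclasses import dataclass

@dataclass
class StackedPixel:
    """Illustrative parameterization of the proposed stacked-photodiode pixel.

    Field names are hypothetical; they mirror the structure described for Fig. 2,
    not an actual simulation deck.
    """
    pixel_pitch_um: float          # lateral pixel size
    top_pd_thickness_um: float     # silicon in the photodetection layer
    bottom_pd_thickness_um: float  # extra silicon in the pixel transistor layer
    color_filter: str              # "R" or "W" pixel of the W-RGB array
    dti_width_um: float = 0.085    # deep trench isolation width
    has_deep_contact: bool = True  # FD-node wiring between the two layers

    @property
    def total_silicon_um(self) -> float:
        """Total light-absorbing silicon thickness."""
        return self.top_pd_thickness_um + self.bottom_pd_thickness_um

# Example: the thickest configuration considered here (3.0 + 2.1 um, 1.2 um pitch).
pixel = StackedPixel(1.2, 3.0, 2.1, color_filter="R")
print(f"total silicon: {pixel.total_silicon_um:.1f} um")  # 5.1 um
```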
Table 1 shows the conditions of the simulated pixel structures. The pixel pitches were 1.0 μm and 1.2 μm, and the radius of curvature (ROC) and height of the microlens were both optimized to 0.7 μm. A W-RGB color filter array was used instead of a Bayer color filter array, with a thickness of 0.6 μm [15]. Two pixels of the W-RGB color filter array were used for the sensitivity comparison: a red color filter (RCF) and a white color filter (WCF). These were chosen because the RCF and WCF have high transmittance from the red to the NIR wavelengths, that is, in the 650 nm, 850 nm, and 940 nm bands. To suppress crosstalk between adjacent pixels, deep trench isolation (DTI) with a width of 85 nm was used. The silicon thickness of the top photodiode was fixed at 3.0 μm, and the bottom photodiode was simulated with increasing silicon thickness, in increments of 0.1 μm from a minimum of 1.0 μm to a maximum of 2.1 μm. The simulated wavelengths were 650 nm for visible light and 850 nm and 940 nm for NIR light.
Table 1. Conditions of the simulated pixel structures

| Parameter | Value |
|---|---|
| Pixel Pitch (μm) | 1.0, 1.2 |
| Height and ROC of Microlens (μm) | 0.7 |
| Thickness of Color Filter (W-RGB) (μm) | 0.6 |
| Width of Deep Trench Isolation (DTI) (μm) | 0.085 |
| Thickness of Silicon (μm) | 3.0–5.1 (3.0 + 2.1) |
We investigated the optical properties of the proposed stacked photodiode pixel structure using a 3D optical simulator based on the finite difference time domain (FDTD) method [16]. The FDTD method is generally used for the numerical analysis of CMOS image sensors [17]. To analyze the optical characteristics of the pixel, the absorbed photon density was used as a measure of sensitivity. The absorbed photon density was specified as the absorbed power density divided by photon energy. Here, the absorbed power density was calculated as the time average of the power of light absorbed in a given unit area.
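In other words, the sensitivity metric is the absorbed photon density N = P_abs / E_ph, with photon energy E_ph = hc/λ. A minimal sketch of that conversion follows; the magnitude and units of the absorbed power density passed in are assumptions for illustration, standing in for whatever the FDTD solver reports.

```python
# Convert an absorbed power density (from the FDTD solver) to an absorbed
# photon density by dividing by the photon energy E = h*c/lambda, mirroring
# the definition in the text.
PLANCK_H = 6.626e-34   # Planck constant, J*s
LIGHT_C = 2.998e8      # speed of light, m/s

def absorbed_photon_density(absorbed_power_density: float, wavelength_nm: float) -> float:
    """Absorbed photon density = absorbed power density / photon energy."""
    photon_energy_j = PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)
    return absorbed_power_density / photon_energy_j

# Example: an assumed absorbed power density of 1e-3 (solver units) at 940 nm.
print(f"{absorbed_photon_density(1e-3, 940):.3e} photons/s per unit")
```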
Figure 3 shows the simulated sensitivity results for various bottom photodiode thicknesses at the 1.2 μm pixel pitch. The 650 nm wavelength was used for visible light, while 850 nm and 940 nm were used for the NIR wavelengths. The sensitivities shown in the graphs are the sums of the sensitivities of the top and bottom photodiodes. Figure 3(a) shows the sensitivity of the red pixel, which has the RCF, and Fig. 3(b) shows the sensitivity of the white pixel. The sensitivity shown in Fig. 3(a) (Bot: 2.1 μm) increased by 30%, 70%, and 120% at 650 nm, 850 nm, and 940 nm, respectively, compared to the sensitivity shown in Fig. 3(a) (typical). As shown in Fig. 3, the typical structure has insufficient sensitivity for NIR imaging. In contrast, when the proposed structure is used, not only the red wavelength sensitivity but also the NIR sensitivity increases significantly, because the extended optical path increases the amount of light absorbed. The sensitivity results in Fig. 3 differ from those in Fig. 1. In Fig. 1, the sensitivity generally increases with thickness, whereas the sensitivity of the proposed structure does not increase uniformly as the thickness of the bottom photodiode increases. The reason is that the pixel structure in Fig. 1 simply increases the silicon thickness, whereas the structure in Fig. 3 has two silicon layers. The wiring layer and the silicon thickness of the bottom photodiode therefore both affect the result: incident light can be reflected by the metal wiring, and the thickness of the bottom photodiode introduces interference effects. For this reason, the sensitivities of the typical pixel structure and the suggested pixel structure do not match at the same total thickness. Moreover, similar to DTI, the wiring layer refracts the incident light and suppresses crosstalk. Since the power flux density is strong in the bottom photodiode, the NIR sensitivity is increased.
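Since the reported sensitivity is the sum of the top and bottom photodiode contributions, and the improvements are quoted relative to the typical structure, a minimal helper shows how such percentages are formed. The numbers below are placeholders in arbitrary units, not the simulated values.

```python
def total_sensitivity(top: float, bottom: float = 0.0) -> float:
    """Reported sensitivity is the sum of the top and bottom photodiode signals."""
    return top + bottom

def improvement(stacked: float, typical: float) -> float:
    """Relative sensitivity gain of the stacked pixel over the typical pixel."""
    return stacked / typical - 1.0

# Placeholder values (arbitrary units), chosen only to illustrate the arithmetic:
typical = total_sensitivity(top=0.10)                 # typical pixel, single photodiode
stacked = total_sensitivity(top=0.10, bottom=0.12)    # stacked pixel, both photodiodes
print(f"improvement: {improvement(stacked, typical):.0%}")  # 120%
```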
Figure 4 shows the simulation results for a 1.0 μm pixel pitch. As the pixel size approaches the incident wavelength, a physical limit arises in which the light cannot fully reach the pixel; therefore, the simulated incident wavelengths were changed to 650 nm and 850 nm. As the pixel gets smaller, sensitivity improvement becomes an important parameter not only for NIR wavelengths but also for visible wavelengths. The sensitivities shown in Fig. 4(a) (Bot: 2.1 μm) increased by 20% and 70% at 650 nm and 850 nm, respectively, compared to those shown in Fig. 4(a) (typical). Therefore, it was confirmed that the sensitivities at the red and NIR wavelengths were improved even with small pixels.
To clarify our simulation results, Fig. 5 shows the optically generated beam profile for the 1.2 μm pixel pitch. The absorbed power flux density is shown for the 650 nm, 850 nm, and 940 nm wavelengths incident at an angle of 0°. Compared to the typical structure shown in Fig. 5(a), in the proposed structure presented in Fig. 5(b) the light passing through the top photodiode is absorbed in the bottom photodiode.
Figure 6 shows a crosstalk comparison between the typical pixel structure and the suggested stacked photodiode pixel structure. For the typical pixel structure, the silicon thickness was increased to 5.1 μm, while the stacked photodiode structure has a 3.0 μm top photodiode and a 2.1 μm bottom photodiode. The crosstalk was evaluated with 940 nm light incident at an angle of 10°. The typical pixel structure showed a high crosstalk of 25%, whereas the stacked photodiode pixel structure significantly reduced the crosstalk to 19.3%. The suggested pixel structure has low crosstalk because the wiring layer around the bottom photodiode plays the same role as the DTI. Therefore, the suggested pixel structure is highly efficient in terms of crosstalk as well as sensitivity.
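The text quotes crosstalk percentages without spelling out the metric; a common definition, assumed here for illustration, is the fraction of the total collected signal that ends up in neighboring pixels when light is aimed at the center pixel.

```python
def crosstalk(center_signal: float, neighbor_signals: list[float]) -> float:
    """Fraction of the total collected signal absorbed by neighboring pixels.

    This is one common definition; the paper does not state its exact formula,
    so treat this as an assumption, not the authors' method.
    """
    leaked = sum(neighbor_signals)
    return leaked / (center_signal + leaked)

# Placeholder values chosen to land near the reported 25% (typical) case:
print(f"{crosstalk(0.75, [0.10, 0.10, 0.05]):.1%}")  # 25.0%
```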
Figure 7 shows the sensitivity simulated in 10 nm steps over the 400–1,000 nm wavelength range for a typical structure and the stacked photodiode architecture with a pixel pitch of 1.2 μm. In the graph, the red line shows the sensitivity of the incident light passing through the RCF of the W-RGB CFA; similarly, the blue line represents the blue color filter (BCF), the white line the WCF, and the green line the green color filter (GCF). Sensitivity was greatly improved in the long-wavelength region above 600 nm. As shown in Fig. 7(a), the sensitivity at the 650 nm wavelength transmitted through the RCF improved by 27%. In addition, the sensitivity improved by 130% at the 850 nm wavelength and by 160% at the 940 nm wavelength transmitted through the RCF. For the smaller pixel size shown in Fig. 7(b), the sensitivity improved by 21% at the 650 nm wavelength and by 109% at the 850 nm wavelength transmitted through the RCF.
This paper introduced a CMOS image sensor pixel structure with a stacked photodiode architecture for high-sensitivity NIR sensing. A bottom photodiode was located in the pixel transistor layer under the top photodiode to create an additional light absorption layer. To analyze the performance of the suggested structure, four pixel structures with BSI pixel sizes of 1.2 μm and 1.0 μm were investigated through 3D optical simulation. The thickness of the top photodiode was fixed at 3.0 μm, and the bottom photodiode was simulated with increasing thickness. The optical path became longer due to the bottom photodiode, resulting in an improvement in sensitivity. At wavelengths of 650 nm, 850 nm, and 940 nm with the 1.2 μm pixel structure, the improvements were 27%, 130%, and 160%, respectively. Comparing the typical pixel with the suggested pixel, the absorptivity of the suggested pixel was greatly improved from the visible to the NIR wavelength bands. Therefore, the proposed pixel structure is useful for both visible and NIR imaging without thickening the silicon.
A pixel structure with high sensitivity is needed for NIR applications. Accordingly, we developed a high-sensitivity NIR pixel based on stacked image sensor technology [18]. 3D stacking technology is currently an important trend in the image sensor industry. Although measurement results for the proposed pixel structure were not available in this paper due to environmental constraints, it is expected that the structure could be fabricated with stacked image sensor technology. For example, the sublocal connection and the deep contact, which optimize the wiring layer connecting the pixel transistor layer and the photodetection layer, are among the latest stacked image sensor technologies [19]. The complexity of the wiring layer in the bottom photodiode is expected to be reduced with sublocal connection technology, and the wiring layer can be designed even for a thicker bottom photodiode with the deep contact. In addition, the application of dynamic random access memory (DRAM) to the pixel transistor layer has been reported [20]. The photodiode proposed in this paper is located in the pixel transistor layer, which becomes increasingly feasible as packaging and stacking technologies develop. In subsequent research, not only the W-RGB CFA but also the Bayer CFA will be applied, and analyses will be performed at various smaller pixel sizes.
The authors declare no conflicts of interest.
Data underlying the results presented in this paper are not publicly available at the time of publication, but may be obtained from the authors upon reasonable request.
The EDA tool was supported by the IC Design Education Center (IDEC), Korea.
National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2020R1F1A1073614).