
Article

Curr. Opt. Photon. 2021; 5(4): 409-420

Published online August 25, 2021 https://doi.org/10.3807/COPP.2021.5.4.409

Full-color Non-hogel-based Computer-generated Hologram from Light Field without Color Aberration

Dabin Min, Kyosik Min, Jae-Hyeung Park

Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea

Corresponding author: *jh.park@inha.ac.kr, ORCID 0000-0002-5881-7369

Received: June 18, 2021; Revised: July 9, 2021; Accepted: July 16, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a method to synthesize a color non-hogel-based computer-generated hologram (CGH) from light field data of a three-dimensional scene with a hologram pixel pitch shared by all color channels. The non-hogel-based CGH technique generates a continuous wavefront with an arbitrary carrier wave from given light field data by interpreting the ray angle in the light field as the spatial frequency of the plane wavefront. The relation between ray angle and spatial frequency is, however, wavelength dependent, which leads to a different spatial frequency sampling grid for each color channel of the light field data, resulting in color aberrations in the hologram reconstruction. The proposed method sets a hologram pixel pitch common to all color channels such that the smallest diffraction angle, that of blue, covers the field of view of the light field. Then a spatial frequency sampling grid common to all color channels is established by interpolating the light field with the spatial frequency range of the blue wavelength and the sampling interval of the red wavelength. The common hologram pixel pitch and light field spatial frequency sampling grid ensure the synthesis of a color hologram without any color aberrations in the hologram reconstructions, or any loss of the information contained in the light field. The proposed method is successfully verified using color light field data of various test and natural 3D scenes.

Keywords: Computer-generated hologram, Full-color, Light field

OCIS codes: (090.1000) Aberration compensation; (090.1705) Color holography; (090.1760) Computer holography; (090.1995) Digital holography

I. INTRODUCTION

We propose a novel method that synthesizes a color computer-generated hologram (CGH) from light field data. CGH plays an important role in three-dimensional (3D) holographic displays [1, 2]. For hologram synthesis, 3D objects are represented in various forms, one of which is the light field. Light field data are expressed as a set of spatial and angular rays of light coming from 3D objects [3–7]. This ray representation is equivalent to an array of views that observe the 3D object from various directions. Creating holograms from light field data has the advantage that the light field of real objects is easily acquired with a light field camera [4]. Another advantage of the light field CGH is that scene details such as occlusion and material reflection properties are already included in the light field data and can be reflected in the hologram with proper processing.

Synthesizing holograms from light field data has been researched for decades [5–8]. Most of the conventional methods divide the holographic plane into small areas called hogels [9, 10]. Each hogel is processed with a corresponding view, and the processed hogels are assembled together, completing the hologram. These hogel-based methods, however, generally have a limitation on the number of hogels because of the tradeoff relationship between the number of hogels and the number of pixels in each hogel at a given hologram resolution. This hogel-number limitation reduces the maximum spatial resolution of the reconstructed 3D images. Another limitation of the hogel-based methods is phase mismatch. Since the reconstruction from each hogel has a phase mismatch with the one from the neighboring hogel, a continuous wavefront over the hologram plane cannot be reproduced.

Recently, a non-hogel-based CGH method has been introduced [11]. The non-hogel-based CGH method processes all the views in the light field data globally, solving the shortcomings of the hogel-based method. Applicability of an arbitrary carrier wave or phase distribution on the 3D object surface is another advantage, enabling optimization of the hologram for each specific application. Although the requirement of a large amount of densely sampled light field data was problematic in the initial proposal of the non-hogel-based CGH [11], a more efficient calculation scheme was later developed, reducing the computation time significantly [12].

Despite these advantages, the non-hogel-based CGH has only been demonstrated for a single color. The non-hogel-based CGH method involves sampling and manipulation of the light field data in a spatial frequency domain. Different wavelengths result in different spatial frequencies for the same ray direction. As the red, green, and blue color channels of each view in the light field data share the same field of view (FoV) and observing direction, each color channel has a different spatial frequency sampling grid, whose efficient processing is not trivial and has not yet been developed.

In this paper, we propose a method for calculating a color non-hogel-based CGH. The proposed method interpolates the two-dimensional (2D) spatial frequency grid of the green and blue color channels using the sampling interval of the red color channel. Meanwhile, the spatial frequency range is set by the blue channel, and the red and green channels are zero-padded to match the range. The proposed spatial frequency grid manipulation using the red sampling interval and blue grid range ensures efficient processing without loss of information. The proposed method is verified by comparing the reconstructions of color CGHs synthesized with and without the proposed method.

II. NON-HOGEL-BASED CGH METHOD

The non-hogel-based CGH technique synthesizes complex fields from light field data of the 3D scenes. In a plane, the light field can be denoted by l(tx, ty, θx, θy), where (tx, ty) is the spatial position of the ray and (θx, θy) is the angular direction measured in radians. The light field l(tx, ty, θx, θy) is fully defined in geometric optics. However, to relate the light field with a hologram defined in wave optics, the non-hogel-based CGH technique interprets the ray direction (θx, θy) of the light field as the spatial frequency (u, v) of the plane wave having the same direction as the ray [11]. The spatial frequency – ray angle relation is given by

$$u=\frac{\sin\theta_x}{\lambda}\approx\frac{\theta_x}{\lambda},\qquad v=\frac{\sin\theta_y}{\lambda}\approx\frac{\theta_y}{\lambda}, \tag{1}$$

where λ is the wavelength. The light field is now represented by L(tx, ty, u, v) = l(tx, ty, θx = λu, θy = λv) with the spatial position and spatial frequency.
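As a concrete illustration of Eq. (1), the sketch below converts ray directions to spatial frequencies; the function and variable names are illustrative assumptions, not from the paper. It shows that the same ray direction maps to a larger spatial frequency for a shorter wavelength, which is the root of the color aberration problem addressed here.

```python
import numpy as np

# Sketch of Eq. (1): u = sin(theta_x)/lambda, approximately theta_x/lambda
# for small angles. Names are illustrative assumptions.
def ray_angle_to_spatial_freq(theta_x, theta_y, wavelength):
    """Map a ray direction (radians) to the spatial frequency (cycles/m)
    of a plane wave propagating in the same direction."""
    u = np.sin(theta_x) / wavelength
    v = np.sin(theta_y) / wavelength
    return u, v

# The same 1-degree ray corresponds to different spatial frequencies for
# red (633 nm) and blue (488 nm) light.
theta = np.deg2rad(1.0)
u_red, _ = ray_angle_to_spatial_freq(theta, 0.0, 633e-9)
u_blue, _ = ray_angle_to_spatial_freq(theta, 0.0, 488e-9)
```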

Hologram synthesis from the light field L(tx, ty, u, v) by the non-hogel-based CGH consists of two steps [11, 12]. In the first step, the light field L(tx, ty, u, v) is 2D Fourier transformed along the (u, v) axes, giving

$$\tilde{L}(t_x,t_y,\tau_x,\tau_y)=\iint L(t_x,t_y,u,v)\exp\left[-j2\pi(\tau_x u+\tau_y v)\right]du\,dv, \tag{2}$$

where (τx, τy) represents axes after the 2D Fourier transform over (u, v) axes. The hologram H(x, y) is then synthesized by

$$H(x,y)=\iint \tilde{L}\left(\frac{x+x_c}{2},\,\frac{y+y_c}{2},\,x-x_c,\,y-y_c\right)W(x_c,y_c)\,dx_c\,dy_c, \tag{3}$$

where xc and yc are spatial positions in the hologram plane and W(xc, yc) represents a carrier wave which can be selected arbitrarily. Using τx = x − xc, τy = y − yc, tx = x − τx / 2, ty = y − τy / 2, Eq. (3) can also be rearranged to

$$H(x,y)=\iint H_{\tau_x,\tau_y}(x,y)\,d\tau_x\,d\tau_y,\qquad H_{\tau_x,\tau_y}\!\left(t_x+\frac{\tau_x}{2},\,t_y+\frac{\tau_y}{2}\right)=\tilde{L}(t_x,t_y,\tau_x,\tau_y)\,W\!\left(t_x-\frac{\tau_x}{2},\,t_y-\frac{\tau_y}{2}\right). \tag{4}$$

Equation (4) indicates that each (τx, τy) slice of $\tilde{L}(t_x, t_y, \tau_x, \tau_y)$ is multiplied with W and accumulated in the hologram plane with a corresponding shift (±τx/2, ±τy/2), completing the hologram.
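The two steps of Eqs. (2)–(4) can be sketched numerically. The 1D toy implementation below is an illustrative reconstruction with assumed names and simplified sign and centering conventions, not the authors' code; it evaluates the hologram on a grid of half the hologram pitch so that the ±τ/2 shifts fall on integer indices.

```python
import numpy as np

def synthesize_hologram_1d(light_field, carrier, n_u):
    """Toy 1D version of the non-hogel-based synthesis.

    light_field: (n_t, n_u) array L(t, u); carrier: W sampled on the fine
    grid of half the hologram pitch, so the tau/2 shifts of Eq. (4) become
    integer offsets."""
    n_t = light_field.shape[0]
    # Eq. (2): Fourier transform along the spatial-frequency axis u -> tau.
    l_tilde = np.fft.fftshift(
        np.fft.fft(np.fft.ifftshift(light_field, axes=(1,)), axis=1), axes=(1,))
    hologram = np.zeros(2 * n_t + 2 * n_u, dtype=complex)
    for k in range(n_u):
        tau_half = k - n_u // 2              # tau_k / 2 in fine-grid units
        for n in range(n_t):                 # t_n = 2*n in fine-grid units
            x_idx = 2 * n + tau_half + n_u   # accumulate at H(t + tau/2)
            w_idx = 2 * n - tau_half + n_u   # sample carrier W(t - tau/2)
            hologram[x_idx] += l_tilde[n, k] * carrier[w_idx]
    return hologram

# Usage with a constant (on-axis plane-wave) carrier:
carrier = np.ones(2 * 8 + 2 * 16, dtype=complex)
H = synthesize_hologram_1d(np.ones((8, 16)), carrier, 16)
```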

III. SAMPLING WAVELENGTH DEPENDENCY ANALYSIS

Implementation of the non-hogel-based CGH explained in the previous section involves discrete signals with adequate sampling. The wavelength dependency of the sampling requirement makes the color hologram synthesis non-trivial.

Suppose that light field data l(tx, ty, θx, θy) are prepared with Ntx × Nty × Nθx × Nθy samples at ∆tx, ∆ty, ∆θx, ∆θy sampling intervals. The angular range of the light field data, or FoV, is given by FoVx = Nθx∆θx, FoVy = Nθy∆θy. From Eq. (1), the light field represented in spatial frequency L(tx, ty, u, v) has Ntx × Nty × Nu(= Nθx) × Nv(= Nθy) samples with ∆tx, ∆ty, ∆u(= ∆θx/λ), ∆v(= ∆θy/λ) sampling intervals. The hologram sampling numbers Nx × Ny and intervals ∆x and ∆y are determined to cover the spatial and angular range of the light field, i.e., Ntx∆tx × Nty∆ty in the spatial domain and FoVx × FoVy = Nθx∆θx × Nθy∆θy in the angular domain, which gives the conditions:

$$\Delta x \le \frac{\lambda}{FoV_x},\qquad \Delta y \le \frac{\lambda}{FoV_y}, \tag{5}$$

$$N_x \ge \frac{N_{tx}\,\Delta t_x}{\Delta x} \ge \frac{N_{tx}\,\Delta t_x\,FoV_x}{\lambda},\qquad N_y \ge \frac{N_{ty}\,\Delta t_y}{\Delta y} \ge \frac{N_{ty}\,\Delta t_y\,FoV_y}{\lambda}, \tag{6}$$

where Eq. (5) is deduced from the maximum diffraction angle of the hologram with a pixel pitch ∆x and ∆y for a wavelength λ. Equations (5) and (6) indicate that for a given FoV the required sampling pitch and the sampling number of the hologram are dependent on the wavelength.
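Equations (5) and (6) translate directly into code. The snippet below, with hypothetical helper names, evaluates the blue-channel numbers of the cross-target experiment reported later (FoV = 7.5°, 300 view pixels of 3.73 µm pitch):

```python
import numpy as np

# Sketch of Eqs. (5)-(6): the coarsest admissible hologram pixel pitch and
# the minimum hologram sample count for a given FoV. Names are illustrative.
def hologram_sampling(fov_rad, wavelength, n_t, dt):
    dx_max = wavelength / fov_rad               # Eq. (5)
    nx_min = int(np.ceil(n_t * dt / dx_max))    # Eq. (6)
    return dx_max, nx_min

# Blue channel of the cross-target example: FoV = 7.5 deg, 300 pixels of
# 3.73 um per orthographic view.
dx, nx = hologram_sampling(np.deg2rad(7.5), 488e-9, 300, 3.73e-6)
# dx comes out near 3.73 um, matching the light-field pixel pitch.
```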

Another sampling consideration is for the τx and τy of the light field. From Eq. (2), the sampling interval ∆τx and ∆τy are given by

$$\Delta\tau_x=\frac{1}{N_u\,\Delta u}=\frac{\lambda}{N_{\theta x}\,\Delta\theta_x}=\frac{\lambda}{FoV_x},\qquad \Delta\tau_y=\frac{1}{N_v\,\Delta v}=\frac{\lambda}{N_{\theta y}\,\Delta\theta_y}=\frac{\lambda}{FoV_y}, \tag{7}$$

where Nu = Nθx, Nv = Nθy, ∆u = ∆θx/λ, and ∆v = ∆θy/λ are used. Note that from Eqs. (5) and (7), the hologram sampling intervals ∆x, ∆y and the light field sampling intervals ∆τx, ∆τy can be made the same, which is advantageous for implementing the shift operation in Eq. (4).

IV. PROPOSED FULL-COLOR NON-HOGEL-BASED CGH

Suppose color light field data are given with the same FoV for all color channels. The red, green, and blue channels of the light field data lR(tx, ty, θx, θy), lG(tx, ty, θx, θy), lB(tx, ty, θx, θy) have the same number of sampling points Ntx × Nty × Nθx(= Nu) × Nθy(= Nv) with the same sampling intervals ∆tx, ∆ty, ∆θx(= FoVx / Nθx = FoVx / Nu), and ∆θy(= FoVy / Nθy = FoVy / Nv), as illustrated in Fig. 1(a). Because of the wavelength dependency of the angle – spatial frequency relation in Eq. (1), the spatial frequency range u, v contained in the light field is different for each color channel at the same FoV, as given by

Figure 1. Proposed method: (a) original color light field data with the same FoV for all color channels, (b) resampling by zero-padding, and (c) interpolation. Vertical axis is represented in ray angle θx.

$$\left|u_R\right|\le\frac{FoV_x}{2\lambda_R},\quad \left|u_G\right|\le\frac{FoV_x}{2\lambda_G},\quad \left|u_B\right|\le\frac{FoV_x}{2\lambda_B},\qquad \left|v_R\right|\le\frac{FoV_y}{2\lambda_R},\quad \left|v_G\right|\le\frac{FoV_y}{2\lambda_G},\quad \left|v_B\right|\le\frac{FoV_y}{2\lambda_B}, \tag{8}$$

where λR, λG, and λB are the wavelengths of the red, green, and blue color channels. The sampling interval of the spatial frequency can also be obtained by dividing the range in Eq. (8) by the number of samples Nθx = Nu, Nθy = Nv:

$$\Delta u_R=\frac{FoV_x}{N_u\,\lambda_R},\quad \Delta u_G=\frac{FoV_x}{N_u\,\lambda_G},\quad \Delta u_B=\frac{FoV_x}{N_u\,\lambda_B},\qquad \Delta v_R=\frac{FoV_y}{N_v\,\lambda_R},\quad \Delta v_G=\frac{FoV_y}{N_v\,\lambda_G},\quad \Delta v_B=\frac{FoV_y}{N_v\,\lambda_B}. \tag{9}$$

Equations (8) and (9) indicate that the spatial frequency range and the sampling interval in LR(tx, ty, u, v), LG(tx, ty, u, v), and LB(tx, ty, u, v) of the same FoV are inversely proportional to the wavelength, giving the largest values for the blue color channel, as illustrated in Fig. 2(a). Note that since the sampling intervals ∆τx, ∆τy are also different for different colors, implementation of the shift operation in Eq. (4) becomes complicated.
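A quick numerical check of Eqs. (8) and (9), using the wavelengths and view counts from the experiment section (dictionary and variable names are illustrative), confirms this ordering of ranges and intervals:

```python
import numpy as np

# Numerical check of Eqs. (8)-(9). Names are illustrative assumptions.
wavelengths = {"R": 633e-9, "G": 532e-9, "B": 488e-9}
fov = np.deg2rad(7.5)
n_u = 64

u_half_range = {c: fov / (2 * lam) for c, lam in wavelengths.items()}  # Eq. (8)
du = {c: fov / (n_u * lam) for c, lam in wavelengths.items()}          # Eq. (9)

# Both the range and the interval are largest for blue and smallest for red,
# so no single channel's grid contains the others.
```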

Figure 2. Proposed method: (a) original color light field data with the same FoV for all color channels, (b) resampling by zero-padding, and (c) interpolation. Vertical axis is represented in spatial frequency u.

To synthesize a color hologram without loss of the light field information, the proposed method sets a single common hologram pixel pitch and performs 2D interpolation along the u, v axes of the light field with a proper zero-padding operation. First, the hologram pixel pitch is determined such that the diffraction angle of each color channel covers the FoV. From Eq. (5), the pixel pitch requirements of the three color channels are given by

$$\Delta x_R\le\frac{\lambda_R}{FoV_x},\quad \Delta x_G\le\frac{\lambda_G}{FoV_x},\quad \Delta x_B\le\frac{\lambda_B}{FoV_x},\qquad \Delta y_R\le\frac{\lambda_R}{FoV_y},\quad \Delta y_G\le\frac{\lambda_G}{FoV_y},\quad \Delta y_B\le\frac{\lambda_B}{FoV_y}. \tag{10}$$

To satisfy the three conditions of Eq. (10) simultaneously, the hologram pixel pitch is set to

$$\Delta x=\frac{\lambda_B}{FoV_x},\qquad \Delta y=\frac{\lambda_B}{FoV_y}, \tag{11}$$

following the condition of the blue channel, which has the shortest wavelength.
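In code form (a sketch with assumed names), the common pitch of Eq. (11) is simply the most restrictive of the three bounds in Eq. (10):

```python
import numpy as np

# Eqs. (10)-(11): the common hologram pixel pitch must satisfy
# dx <= lam/FoV for every channel, so the blue (shortest) wavelength wins.
# Names are illustrative assumptions.
wavelengths = {"R": 633e-9, "G": 532e-9, "B": 488e-9}
fov = np.deg2rad(7.5)

pitch = min(lam / fov for lam in wavelengths.values())   # Eq. (11)
```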

Once the pixel pitch condition is determined, the FoV of each color channel is extended with zero-padding in the proposed method. As mentioned earlier, it is advantageous that the sampling interval ∆τx is equal to the hologram sampling interval ∆x in implementing the shift and accumulation operation of Eq. (4). Considering the hologram pixel pitch condition given in Eq. (11), the FoVs of the red and green color channels are manipulated according to Eq. (7) by

$$FoV_{x,R}=\frac{\lambda_R}{\Delta x}=FoV_x\frac{\lambda_R}{\lambda_B},\quad FoV_{x,G}=\frac{\lambda_G}{\Delta x}=FoV_x\frac{\lambda_G}{\lambda_B},\qquad FoV_{y,R}=\frac{\lambda_R}{\Delta y}=FoV_y\frac{\lambda_R}{\lambda_B},\quad FoV_{y,G}=\frac{\lambda_G}{\Delta y}=FoV_y\frac{\lambda_G}{\lambda_B}, \tag{12}$$

as illustrated in Fig. 1(b). Note that the manipulated FoVs, i.e., FoVx,R and FoVx,G, are always larger than the original FoVx.

Finally, each color channel of the light field data LR(tx, ty, u, v), LG(tx, ty, u, v), LB(tx, ty, u, v) is interpolated along the u, v axes with the original sampling interval of the red channel, ∆uR and ∆vR, given by Eq. (9). Note that the u, v sampling interval common to all color channels is selected to be that of the red channel because it is the smallest according to Eq. (9), which prevents possible information loss due to an increase of the sampling interval. Because the FoV increases in the case of the red and green channels, i.e., from FoVx to FoVx,R or FoVx,G, and the sampling interval decreases in the case of the green and blue channels, i.e., from ∆uG or ∆uB to ∆uR, the number of sampling points along the u, v axes increases in all color channels to

$$\frac{FoV_{x,R}}{\Delta u_R\,\lambda_R}=\frac{FoV_{x,G}}{\Delta u_R\,\lambda_G}=\frac{FoV_x}{\Delta u_R\,\lambda_B}=N_u\frac{\lambda_R}{\lambda_B}=N_u',\qquad \frac{FoV_{y,R}}{\Delta v_R\,\lambda_R}=\frac{FoV_{y,G}}{\Delta v_R\,\lambda_G}=\frac{FoV_y}{\Delta v_R\,\lambda_B}=N_v\frac{\lambda_R}{\lambda_B}=N_v', \tag{13}$$

where Eqs. (8), (9), and (12) are used. Figures 2(b) and 2(c) illustrate the FoV extension with zero-padding and the interpolation of the proposed method, respectively. Note that by the proposed method all three color channels have the same number of sampling points Nu′ × Nv′ with the same sampling intervals ∆uR and ∆vR along the u, v axes, which correspond to the same u, v range but different FoVs.
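The zero-padding and interpolation steps of Eqs. (12) and (13) can be sketched for one u axis as below. This is an illustrative reimplementation with assumed names, using linear interpolation; the paper does not specify the interpolation kernel.

```python
import numpy as np

def resample_channel(spectrum, lam_c, lam_r, lam_b, fov):
    """Resample one channel's u-axis samples onto the common grid:
    red-channel interval du_R (Eq. (9)) and blue-channel range."""
    n_u = spectrum.shape[-1]
    du_c = fov / (n_u * lam_c)            # channel's own interval, Eq. (9)
    du_r = fov / (n_u * lam_r)            # common interval (red channel)
    u_src = (np.arange(n_u) - n_u // 2) * du_c
    n_u_new = int(round(n_u * lam_r / lam_b))           # Nu' of Eq. (13)
    u_dst = (np.arange(n_u_new) - n_u_new // 2) * du_r  # common grid
    # Values requested outside the source range are set to zero, which
    # realises the zero-padding of Fig. 1(b) for the red and green channels.
    return np.interp(u_dst, u_src, spectrum, left=0.0, right=0.0)

lam = {"R": 633e-9, "G": 532e-9, "B": 488e-9}
fov = np.deg2rad(7.5)
source = np.hanning(64)                   # stand-in 1D channel data
resampled = {c: resample_channel(source, lam[c], lam["R"], lam["B"], fov)
             for c in lam}
# All channels now share one grid of Nu' = round(64 * 633/488) = 83 samples.
```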

The proposed method is verified by synthesizing color holograms from light field data of various 3D scenes. In all calculations, the red, green, and blue wavelengths are set to λR = 633 nm, λG = 532 nm, and λB = 488 nm. Figure 3 shows the light field data of the cross-target objects. The scene consists of 4 cross targets which are in the same z = −0.377 mm plane but have different colors, i.e., red, green, blue, and white. The light field data has Nu × Nv = 64 × 64 orthographic views with FoVx = FoVy = 7.5°. Each orthographic view has Ntx × Nty = 300 × 300 pixels with ∆tx = ∆ty = 3.73 µm pixel pitch. Using this light field data, color holograms are synthesized with and without the proposed method. When the proposed method is not applied, all red, green, and blue channels of the hologram are synthesized using the blue wavelength λB = 488 nm to make the hologram pixel pitch the same for all color channels. In the reconstruction, the red, green, and blue wavelengths λR = 633 nm, λG = 532 nm, and λB = 488 nm are used for the corresponding color channels of the hologram. Figure 4 shows the amplitude and phase of the synthesized holograms. Figure 5 gives their reconstruction results. As can be seen in the top row of Fig. 5, when the proposed method is not applied, color aberration occurs as expected. Although the blue cross target is focused at the original distance z = −0.377 mm, the green and red cross targets are focused at aberrated distances z = −0.346 mm and z = −0.291 mm, respectively, which correspond to λB/λG and λB/λR times the original distance z = −0.377 mm. This is because the sampling condition determination and the synthesis of all color channels of the hologram are conducted using the blue wavelength to generate a color hologram with a single pixel pitch shared by all color channels.
On the contrary, when the proposed method is applied, the color aberration is removed and all red, green, blue, and white cross targets are focused at the correct distance z = −0.377 mm, as shown in the bottom row of Fig. 5. Therefore, correct color hologram synthesis with the same pixel pitch for all color channels by the proposed method is confirmed for the case of a single-depth scene.
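The aberrated focus distances quoted above follow directly from the wavelength ratios, as this small check (variable names are assumptions) shows:

```python
# Focal-depth scaling behind Fig. 5: a channel of wavelength lam that was
# synthesized with the blue wavelength refocuses at z * lam_B / lam.
lam_r, lam_g, lam_b = 633e-9, 532e-9, 488e-9
z = -0.377e-3                      # original depth in meters

z_green = z * lam_b / lam_g        # about -0.346 mm
z_red = z * lam_b / lam_r          # about -0.291 mm
```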

Figure 3. Light field data of cross target array. (a) Single orthographic view, and (b) collection of 64 × 64 orthographic views.

Figure 4. Amplitude and phase of hologram without (top) and with (bottom) proposed method.

Figure 5. Numerical reconstructions of holograms synthesized without (top), and with (bottom) proposed method.

Figure 6 shows the light field data of a 3D scene with multiple depths. The scene consists of 4 × 4 resolution targets located at different depths from z = −14.13 mm to z = +14.13 mm with 1.767 mm spacing, except z = 0 mm. The light field data has Nu = Nv = 64 orthographic views, each of which has Ntx × Nty = 1563 × 1563 pixels with ∆tx = ∆ty = 3.67 µm pixel pitch and FoVx = FoVy = 7.62°. Figure 7 shows the amplitude and phase of the holograms synthesized without and with the proposed method. Figure 8 shows the numerical reconstructions at various depths, and focused resolution target images corresponding to three depths z = −3.53 mm, −8.83 mm, and −14.13 mm are highlighted. Figure 8(a) shows that color aberration occurs when the proposed method is not applied. It is also observed that the color aberration becomes more severe as the depth increases, as the aberrated depths for the green and red channels are given by λB/λG and λB/λR times the original depth and are thus proportional to the original depth. On the other hand, when the proposed method is applied, the resolution targets are focused at their original depths without any color aberration, as shown in Fig. 8(b).

Figure 6. Light field data of resolution target array. (a) Single orthographic view, and (b) collection of 64 × 64 orthographic views.

Figure 7. Amplitude and phase of hologram without (top) and with (bottom) proposed method.

Figure 8. Numerical reconstructions of holograms synthesized (a) without, and (b) with proposed method.

Finally, the proposed method is tested with a natural scene containing continuous-depth 3D objects. A 3D scene model is loaded into the Blender software and orthographic images are rendered as shown in Fig. 9. Each orthographic view has 1563 × 1563 pixels with ∆tx = ∆ty = 3.73 µm pixel pitch. Nu × Nv = 64 × 64 orthographic views are rendered within FoVx = FoVy = 7.5°. The central depths of the objects are z1 = −0.438 mm (grapes), z2 = −0.568 mm (tangerine, apple), z3 = −0.767 mm (pumpkin), z4 = −0.996 mm (orange), and z5 = −1.315 mm (basket). Figure 10 shows the amplitude and phase of the holograms synthesized with and without the proposed method, and Fig. 11 shows their reconstruction results. In Fig. 11, red and blue boxes highlight focused object parts. It is clearly confirmed from Fig. 11 that the reconstructions are blurred (marked A), dispersed (marked B), or false-colored (marked C), all because of the color aberration, when the proposed method is not applied. The proposed method solves these problems, generating clearly focused reconstructions as shown in the lower part of Fig. 11.

Figure 9. Light field data of natural scene with continuous depth 3D objects. (a) Single orthographic view, and (b) collection of 64 × 64 orthographic views.

Figure 10. Amplitude and phase of hologram without (top) and with (bottom) proposed method.

Figure 11. Numerical reconstructions of holograms synthesized without (top), and with (bottom) proposed method.

In this paper, we propose a method to calculate a full-color non-hogel-based CGH from light field data of 3D scenes. All color channels of the light field data share the same FoV, but the wavelength dependency of the angle-equivalent spatial frequency makes their sampling conditions different, complicating the synthesis of a color hologram with a shared pixel pitch. The proposed method resamples the light field data along the spatial frequency axes such that all red, green, and blue color channels have the same sampling grid. The sampling range and interval along the spatial frequency axes are set by those of the blue and red color channels, respectively, which ensures no loss of information. The proposed method is verified using various 3D scenes including a single-depth-plane test target, multiple test targets at different depths, and natural continuous 3D objects distributed in space. The reconstructions of the color holograms synthesized by the proposed method show clear focus of the 3D objects at their depths without any color aberrations, successfully confirming the feasibility of the proposed method.

This work was supported by INHA UNIVERSITY Research Grant.

1. J.-H. Park, “Recent progress in computer-generated holography for three-dimensional scenes,” J. Inf. Disp. 18, 1-12 (2017).
2. Z. Gan, X. Peng and H. Hong, “An evaluation model for analyzing the overlay error of computer-generated holograms,” Curr. Opt. Photon. 4, 277-285 (2020).
3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. SIGGRAPH ’96: 23rd International Conference on Computer Graphics and Interactive Techniques (New Orleans, LA, USA, Aug. 1996), pp. 31-42.
4. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005-02, Computer Science Department, Stanford University, 2005.
5. M. Yamaguchi, “Light-field and holographic three-dimensional displays,” J. Opt. Soc. Am. A 33, 2348-2364 (2016).
6. J.-H. Park, M.-S. Kim, G. Baasantseren and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17, 6320-6334 (2009).
7. N. Chen, Z. Ren and E. Y. Lam, “High-resolution Fourier hologram synthesis from photographic images through computing the light field,” Appl. Opt. 55, 1751-1756 (2016).
8. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19, 9086-9101 (2011).
9. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photon. 5, 456-535 (2013).
10. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15, 2722-2729 (1976).
11. J.-H. Park and M. Askari, “Non-hogel-based computer generated hologram from light field using complex field recovery technique from Wigner distribution function,” Opt. Express 27, 2562-2574 (2019).
12. J.-H. Park, “Efficient calculation scheme for high pixel resolution non-hogel-based computer generated hologram from light field,” Opt. Express 28, 6663-6683 (2020).

Article

Article

Curr. Opt. Photon. 2021; 5(4): 409-420

Published online August 25, 2021 https://doi.org/10.3807/COPP.2021.5.4.409

Full-color Non-hogel-based Computer-generated Hologram from Light Field without Color Aberration

Dabin Min, Kyosik Min, Jae-Hyeung Park

Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea

Correspondence to:*jh.park@inha.ac.kr, ORCID 0000-0002-5881-7369

Received: June 18, 2021; Revised: July 9, 2021; Accepted: July 16, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a method to synthesize a color non-hogel-based computer-generated-hologram (CGH) from light field data of a three-dimensional scene with a hologram pixel pitch shared for all color channels. The non-hogel-based CGH technique generates a continuous wavefront with arbitrary carrier wave from given light field data by interpreting the ray angle in the light field to the spatial frequency of the plane wavefront. The relation between ray angle and spatial frequency is, however, dependent on the wavelength, which leads to different spatial frequency sampling grid in the light field data, resulting in color aberrations in the hologram reconstruction. The proposed method sets a hologram pixel pitch common to all color channels such that the smallest blue diffraction angle covers the field of view of the light field. Then a spatial frequency sampling grid common to all color channels is established by interpolating the light field with the spatial frequency range of the blue wavelength and the sampling interval of the red wavelength. The common hologram pixel pitch and light field spatial frequency sampling grid ensure the synthesis of a color hologram without any color aberrations in the hologram reconstructions, or any loss of information contained in the light field. The proposed method is successfully verified using color light field data of various test or natural 3D scenes.

Keywords: Computer-generated hologram, Full-color, Light field

I. INTRODUCTION

We propose a novel method that synthesizes a color computer-generated-hologram (CGH) from light field data. CGH plays an important role in three-dimensional (3D) holographic displays [1, 2]. For the hologram synthesis, 3D objects are represented in various forms, and one of them is the light field. Light field data is expressed as a set of spatial and angular rays of light coming from 3D objects [37]. The representation of these rays is equivalent to an array of views that look at the 3D object from various directions. Creating holograms from light field data has the advantage that the light field acquisition of real objects is easily achieved with a light field camera [4]. Another advantage of the light field CGH is that scene details such as occlusion and material reflection properties are already included in the light field data, and they can be reflected in the hologram with proper processing.

Synthesizing holograms from light field data has been researched for decades [58]. Most of the conventional methods divide the holographic plane into small areas called hogels [9, 10]. Each hogel is processed with a corresponding view and the processed hogels are assembled together, completing the hologram. These hogel based methods, however, generally have a limitation on the number of hogels because of the tradeoff relationship between the number of hogels and the number of pixels in each hogel at a given hologram resolution. This hogel number limitation reduces the maximum spatial resolution of the reconstructed 3D images. Another limitation of the hogel based methods is the phase mismatch. Since reconstruction from each hogel has phase mismatch with the one from the neighboring hogel, continuous wavefront over the hologram plane cannot be reproduced.

Recently, a non-hogel-based CGH method has been introduced [11]. The non-hogel-based CGH method processes all the views in the light field data globally, solving the shortcomings of the hogel-based method. Applicability of arbitrary carrier wave or phase distribution on the 3D object surface is another advantage, enabling optimization of the hologram for each specific application. Although the requirement of a densely sampled large amounts of light field data was problematic in the initial proposal of the non-hogel-based CGH [11], a more efficient calculation scheme has been developed later, reducing the computation time significantly [12].

Despite the advantages, the non-hogel-based CGH has only been demonstrated for a single color. The non-hogel-based CGH method involves sampling and manipulation of the light field data in a spatial frequency domain. Different wavelength results in different spatial frequency for the same ray direction. As red, green, and blue color channels of each view in the light field data share the same field of view (FoV) and observing direction, each color channel has a different spatial frequency sampling grid whose efficient processing is not trivial and has not been developed yet.

In this paper, we propose a method for calculating color non-hogel based CGH. The proposed method interpolates the two-dimensional (2D) spatial frequency grid of green and blue color channels using the sampling interval of a red color channel. Meanwhile, the spatial frequency range is set by the blue channel and the red and green channels are zero-padded to match the range. The proposed spatial frequency grid manipulation using the red sampling interval and blue grid range ensures efficient processing without loss of information. The proposed method is verified by comparing the reconstructions of color CGHs synthesized with and without the proposed method.

II. NON-HOGEL-BASED CGH METHOD

The non-hogel-based CGH technique synthesizes complex fields from light field data of the 3D scenes. In a plane, the light field can be denoted by l(tx, ty, θx, θy), where (tx, ty) is the spatial position of the ray and (θx, θy) is the angular direction measured in radians. The light field l(tx, ty, θx, θy) is fully defined in geometric optics. However, to relate the light field with a hologram defined in wave optics, the non-hogel-based CGH technique interprets the ray direction (θx, θy) of the light field as the spatial frequency (u, v) of the plane wave having the same direction as the ray [11]. The spatial frequency – ray angle relation is given by

where λ is the wavelength. The light field is now represented by L(tx, ty, u, v) = l(tx, ty, θx = λu, θyv) with the spatial position and spatial frequency.

Hologram synthesis from the light field L(tx, ty, u, v) by the non-hogel-based CGH consists of two steps [11, 12]. In the first step, the light field L(tx, ty, u, v) is 2D Fourier transformed along the (u, v) axes, giving

$L˜tx,ty,τx,τy=∬ L t x , t y ,μ,νexp−j2πτxμ+τyνdudν,$

where (τx, τy) represents axes after the 2D Fourier transform over (u, v) axes. The hologram H(x, y) is then synthesized by

$Hx,y=∬ L˜ x+xc2, y+yc2,x−xc,y−ycWxc,yc dxcdyc,$

where xc and yc are spatial positions in the hologram plane and W(xc, yc) represents a carrier wave which can be selected arbitrarily. Using τx = xxc, τy = yyc, tx = xτx / 2, ty = yτy / 2, Eq. (3) can also be rearranged to

$Hx,y=∬ H τx,τy x,ydτxdτy,Hτx,τy tx+ τ x 2,ty+ τ y 2 =L˜tx,ty,τx,τy Wtx− τ x 2,ty− τ y 2 .$

Equation (4) indicates that each (τx, τy) slice of the L~ (tx, ty, τx, τy) is multiplied with W and accumulated in the hologram plane with a corresponding shift (±τx/2, ±τy/2), completing the hologram.

III. SAMPLING WAVELENGTH DEPENDENCY ANALYSIS

Implementation of the non-hogel-based CGH explained in the previous section involves discrete signals with adequate sampling. The wavelength dependency of the sampling requirement makes the color hologram synthesis non-trivial.

Suppose that a light field data l(tx, ty, θx, θy) is prepared with Ntx × Nty × Nθx × Nθy samples with ∆tx, ∆ty, ∆θx, ∆θy sampling intervals. The angular range of the light field data, or FoV is given by FoVx = Nθx∆θx, FoVy = Nθy∆θy. From Eq. (1), the light field represented in spatial frequency L(tx, ty, u, v) has Ntx × Nty × Nu(= Nθx) × Nv(= Nθy) samples with ∆tx, ∆ty, ∆u(=∆θx,/λ), ∆v(=∆θy/λ) sampling intervals. The hologram sampling number Nx × Ny and interval ∆x and ∆y are determined to cover the spatial and angular range of the light field, i.e. Ntxtx × Ntyty in the spatial domain and FoVx × FoVy = Nθx∆θx × Nθy∆θy in the angular domain which gives a condition:

where Eq. (5) is deduced from the maximum diffraction angle of the hologram with a pixel pitch ∆x and ∆y for a wavelength λ. Equations (5) and (6) indicate that for a given FoV the required sampling pitch and the sampling number of the hologram are dependent on the wavelength.

Another sampling consideration is for the τx and τy of the light field. From Eq. (2), the sampling interval ∆τx and ∆τy are given by

where Nu = Nθx, Nv = Nθy, ∆u = ∆θx/λ, and ∆v = ∆θy/λ are used. Note that from Eqs. (5) and (7) the hologram sampling interval ∆x, ∆y and the light field sampling interval ∆τx, ∆τy can be the same and this is advantageous to implement the shift operation in Eq. (4).

IV. PROPOSED FULL-COLOR NON-HOGEL-BASED CGH

Suppose color light field data are given with the same FoV for all color channels. The red, green, and blue channels of the light field data lR(tx, ty, θx, θy), lG(tx, ty, θx, θy), lB(tx, ty, θx, θy) have the same number of sampling points Ntx × Nty × Nθx(= Nu) × Nθy(= Nv) with the same sampling intervals ∆tx, ∆ty, ∆θx(= FoVx / Nθx = FoVx / Nu), and ∆θy(= FoVy / Nθy = FoVy / Nv), as illustrated in Fig. 1(a). Because of the wavelength dependency of the angle-to-spatial-frequency relation in Eq. (1), the spatial frequency range of u, v contained in the light field differs for each color channel at the same FoV, as given by
$\frac{\mathrm{FoV}_x}{\lambda_R} < \frac{\mathrm{FoV}_x}{\lambda_G} < \frac{\mathrm{FoV}_x}{\lambda_B} \;\text{for } u, \qquad \frac{\mathrm{FoV}_y}{\lambda_R} < \frac{\mathrm{FoV}_y}{\lambda_G} < \frac{\mathrm{FoV}_y}{\lambda_B} \;\text{for } v,$ (8)
Figure 1. Proposed method: (a) original color light field data with the same FoV for all color channels, (b) resampling by zero-padding, and (c) interpolation. Vertical axis is represented in ray angle θx.

where λR, λG, and λB are the wavelengths of the red, green, and blue color channels. The sampling intervals of the spatial frequency can also be obtained by dividing the ranges in Eq. (8) by the numbers of samples Nθx = Nu, Nθy = Nv:
$\Delta u_R = \frac{\mathrm{FoV}_x}{\lambda_R N_u}, \quad \Delta u_G = \frac{\mathrm{FoV}_x}{\lambda_G N_u}, \quad \Delta u_B = \frac{\mathrm{FoV}_x}{\lambda_B N_u}, \qquad \Delta v_R = \frac{\mathrm{FoV}_y}{\lambda_R N_v}, \quad \Delta v_G = \frac{\mathrm{FoV}_y}{\lambda_G N_v}, \quad \Delta v_B = \frac{\mathrm{FoV}_y}{\lambda_B N_v},$ (9)
Equations (8) and (9) indicate that the spatial frequency range and the sampling interval in LR(tx, ty, u, v), LG(tx, ty, u, v), and LB(tx, ty, u, v) of the same FoV are inversely proportional to the wavelength, giving the largest values for the blue color channel, as illustrated in Fig. 2(a). Note that because the sampling intervals ∆τx, ∆τy also differ between colors, implementation of the shift operation in Eq. (4) is complicated.
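The 1/λ scaling of Eqs. (8) and (9) can be illustrated with a small calculation. This sketch assumes the paraxial relation u ≈ θ/λ used above; `u_range_and_interval` is a hypothetical helper name.

```python
import math

def u_range_and_interval(fov_rad, n_u, wavelength):
    """Spatial frequency extent and sampling interval of one color
    channel of the light field (paraxial sketch of Eqs. (8)-(9))."""
    u_range = fov_rad / wavelength       # total u extent, Eq. (8)
    du = u_range / n_u                   # sampling interval, Eq. (9)
    return u_range, du

fov, n_u = math.radians(7.5), 64
for name, lam in (("R", 633e-9), ("G", 532e-9), ("B", 488e-9)):
    ur, du = u_range_and_interval(fov, n_u, lam)
    print(f"{name}: u range = {ur:.3e} /m, du = {du:.3e} /m")
```

The blue channel has both the widest range and the coarsest-to-finest ordering reversed relative to wavelength: its range and interval are the largest, the red channel's the smallest.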

Figure 2. Proposed method: (a) original color light field data with the same FoV for all color channels, (b) resampling by zero-padding, and (c) interpolation. Vertical axis is represented in spatial frequency u.

To synthesize a color hologram without information loss of the light field, the proposed method sets a single common hologram pixel pitch and performs 2D interpolation along the u, v axes of the light field with a proper zero-padding operation. First, the hologram pixel pitch is determined so that the diffraction angle of each color channel covers the FoV. From Eq. (5), the pixel pitch requirements of the three color channels are given by
$\Delta x \le \frac{\lambda_R}{\mathrm{FoV}_x}, \quad \Delta x \le \frac{\lambda_G}{\mathrm{FoV}_x}, \quad \Delta x \le \frac{\lambda_B}{\mathrm{FoV}_x}, \qquad \Delta y \le \frac{\lambda_R}{\mathrm{FoV}_y}, \quad \Delta y \le \frac{\lambda_G}{\mathrm{FoV}_y}, \quad \Delta y \le \frac{\lambda_B}{\mathrm{FoV}_y},$ (10)
To satisfy three conditions of Eq. (10) together, the hologram pixel pitch is set to be
$\Delta x = \frac{\lambda_B}{\mathrm{FoV}_x}, \qquad \Delta y = \frac{\lambda_B}{\mathrm{FoV}_y},$ (11)
following the condition of the blue channel, which has the shortest wavelength.

Once the pixel pitch is determined, the FoV of each color channel is extended by zero-padding in the proposed method. As mentioned earlier, it is advantageous that the sampling interval ∆τx be equal to the hologram sampling interval ∆x in implementing the shift-and-accumulation operation of Eq. (4). Considering the hologram pixel pitch condition given in Eq. (11), the FoVs of the red and green color channels are manipulated according to Eq. (7) by
$\mathrm{FoV}_{x,R} = \frac{\lambda_R}{\Delta x} = \frac{\lambda_R}{\lambda_B}\mathrm{FoV}_x, \quad \mathrm{FoV}_{x,G} = \frac{\lambda_G}{\Delta x} = \frac{\lambda_G}{\lambda_B}\mathrm{FoV}_x, \qquad \mathrm{FoV}_{y,R} = \frac{\lambda_R}{\lambda_B}\mathrm{FoV}_y, \quad \mathrm{FoV}_{y,G} = \frac{\lambda_G}{\lambda_B}\mathrm{FoV}_y,$ (12)
as illustrated in Fig. 1(b). Note that the manipulated FoVs, i.e., FoVx,R and FoVx,G, are always larger than the original FoVx.
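A quick numeric sketch of Eqs. (11) and (12), under the paraxial convention used above and the 7.5° FoV of the test scenes, confirms that the manipulated red and green FoVs always exceed the original FoV:

```python
import math

# The common pixel pitch follows the blue channel (Eq. (11)); the red
# and green FoVs are then extended by lambda/lambda_B (Eq. (12)) so that
# the light field interval delta-tau matches the hologram pitch (Eq. (7)).
lam_r, lam_g, lam_b = 633e-9, 532e-9, 488e-9
fov = math.radians(7.5)

fov_r = fov * lam_r / lam_b              # extended red FoV
fov_g = fov * lam_g / lam_b              # extended green FoV
print(math.degrees(fov_r), math.degrees(fov_g))   # both exceed 7.5 deg
```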

Finally, each color channel of the light field data LR(tx, ty, u, v), LG(tx, ty, u, v), LB(tx, ty, u, v) is interpolated along the u, v axes with the original sampling intervals of the red channel, ∆uR and ∆vR, given by Eq. (9). Note that the u, v sampling interval common to all color channels is selected to be that of the red channel, ∆uR and ∆vR, because it is the smallest according to Eq. (9), which prevents possible information loss due to an increase of the sampling interval. Because the FoV increases for the red and green channels, i.e., from FoVx to FoVx,R or FoVx,G, and the sampling interval decreases for the green and blue channels, i.e., from ∆uG or ∆uB to ∆uR, the number of sampling points along the u, v axes increases in all color channels to

$\frac{\mathrm{FoV}_{x,R}}{\Delta u_R\,\lambda_R}=\frac{\mathrm{FoV}_{x,G}}{\Delta u_R\,\lambda_G}=\frac{\mathrm{FoV}_{x}}{\Delta u_R\,\lambda_B}=N_u\frac{\lambda_R}{\lambda_B}=N_u',\qquad \frac{\mathrm{FoV}_{y,R}}{\Delta v_R\,\lambda_R}=\frac{\mathrm{FoV}_{y,G}}{\Delta v_R\,\lambda_G}=\frac{\mathrm{FoV}_{y}}{\Delta v_R\,\lambda_B}=N_v\frac{\lambda_R}{\lambda_B}=N_v',$ (13)

where Eqs. (8), (9), and (12) are used. Figures 2(b) and 2(c) illustrate the FoV extension with zero-padding and the interpolation onto the common grid of the proposed method, respectively. Note that by the proposed method all three color channels have the same number of sampling points Nu′ × Nv′ with the same sampling intervals ∆uR and ∆vR along the u, v axes, which corresponds to the same u, v range but different FoVs.
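The per-channel resampling can be sketched in one dimension: a channel's spectrum, known on its own u grid, is evaluated on the common grid, with zeros outside its own range (the zero-padding of the extended FoV). This is a minimal sketch with hypothetical grids and linear interpolation; the paper does not specify the interpolation kernel, so `resample_channel` is an illustrative stand-in.

```python
import numpy as np

def resample_channel(values, u_own, u_common):
    """Resample one color channel's spectrum onto the common u grid.
    np.interp returns 0 outside u_own via the left/right arguments,
    which implements the zero-padding of the extended FoV."""
    return np.interp(u_common, u_own, values, left=0.0, right=0.0)

# Hypothetical example: a red-channel spectrum on a narrow grid is
# resampled onto a wider common grid with a finer interval.
u_red = np.linspace(-1.0, 1.0, 64)        # channel's own u samples
u_common = np.linspace(-1.3, 1.3, 84)     # common (blue-range, red-interval) grid
resampled = resample_channel(np.cos(u_red), u_red, u_common)
```

All three channels pass through the same call, so they end up on one sampling grid and the shift operation of Eq. (4) can use a single pixel pitch.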

V. VERIFICATION RESULTS

The proposed method is verified by synthesizing color holograms from light field data of various 3D scenes. In all calculations, the red, green, and blue wavelengths are set to λR = 633 nm, λG = 532 nm, and λB = 488 nm. Figure 3 shows light field data of cross target objects. The scene consists of 4 cross targets that lie in the same z = −0.377 mm plane but have different colors, i.e., red, green, blue, and white. The light field data has Nu × Nv = 64 × 64 orthographic views with FoVx = FoVy = 7.5°. Each orthographic view has Ntx × Nty = 300 × 300 pixels with ∆tx = ∆ty = 3.73 µm pixel pitch. Using this light field data, color holograms are synthesized with and without the proposed method. When the proposed method is not applied, all red, green, and blue channels of the hologram are synthesized using the blue wavelength λB = 488 nm to make the hologram pixel pitch the same for all color channels. In the reconstruction, the red, green, and blue wavelengths λR = 633 nm, λG = 532 nm, and λB = 488 nm are used for the corresponding color channels of the hologram. Figure 4 shows the amplitude and phase of the synthesized holograms. Figure 5 gives their reconstruction results. As can be seen in the top row of Fig. 5, when the proposed method is not applied, color aberration occurs as expected. Although the blue cross target is focused at the original distance z = −0.377 mm, the green and red cross targets are focused at aberrated distances z = −0.346 mm and z = −0.291 mm, respectively, which correspond to λB/λG and λB/λR times the original distance z = −0.377 mm. This is because the sampling condition determination and the synthesis of all color channels of the hologram are conducted using the blue wavelength to generate a color hologram with a single pixel pitch shared for all color channels.
In contrast, when the proposed method is applied, the color aberration is removed and all red, green, blue, and white cross targets are focused at the correct distance z = −0.377 mm, as shown in the bottom row of Fig. 5. Therefore, correct color hologram synthesis with the same pixel pitch for all color channels by the proposed method is confirmed for the case of a single-depth scene.
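The aberrated focus distances quoted above follow directly from the wavelength ratios; a quick check using the stated depth and wavelengths:

```python
# Green and red channels focus at lambda_B/lambda_G and lambda_B/lambda_R
# times the original depth when all channels are synthesized with the
# blue wavelength (values from Section V of the text).
z = -0.377e-3
lam_r, lam_g, lam_b = 633e-9, 532e-9, 488e-9
z_g = z * lam_b / lam_g                  # approx. -0.346 mm
z_r = z * lam_b / lam_r                  # approx. -0.291 mm
print(z_g * 1e3, z_r * 1e3)
```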

Figure 3. Light field data of cross target array. (a) Single orthographic view, and (b) collection of 64 × 64 orthographic views.

Figure 4. Amplitude and phase of hologram without (top) and with (bottom) proposed method.

Figure 5. Numerical reconstructions of holograms synthesized without (top), and with (bottom) proposed method.

Figure 6 shows the light field data of a 3D scene with multiple depths. The scene consists of 4 × 4 resolution targets located at different depths from z = −14.13 mm to z = +14.13 mm with 1.767 mm spacing, except z = 0 mm. The light field data has Nu = Nv = 64 orthographic views, each of which has Ntx × Nty = 1563 × 1563 pixels with ∆tx = ∆ty = 3.67 µm pixel pitch and FoVx = FoVy = 7.62°. Figure 7 shows the amplitude and the phase of the synthesized holograms without and with the proposed method. Figure 8 shows the numerical reconstructions at various depths, and focused resolution target images corresponding to three depths z = −3.53 mm, −8.83 mm, and −14.13 mm are highlighted. Figure 8(a) shows that color aberration occurs when the proposed method is not applied. It is also observed that the color aberration becomes more severe as the depth increases, as the aberrated depths for the green and red channels are given by λB/λG and λB/λR times the original depth and are thus proportional to the original depth. On the other hand, when the proposed method is applied, the resolution targets are focused at their original depths without any color aberration, as shown in Fig. 8(b).

Figure 6. Light field data of resolution target array. (a) Single orthographic view, and (b) collection of 64 × 64 orthographic views.

Figure 7. Amplitude and phase of hologram without (top) and with (bottom) proposed method.

Figure 8. Numerical reconstructions of holograms synthesized (a) without, and (b) with proposed method.

Finally, the proposed method is tested with a natural scene with continuous-depth 3D objects. A 3D scene model is loaded into the Blender software and orthographic images are rendered as shown in Fig. 9. Each orthographic view has 1563 × 1563 pixels with ∆tx = ∆ty = 3.73 µm pixel pitch. Nu × Nv = 64 × 64 orthographic views are rendered within FoVx = FoVy = 7.5°. The central depths of the objects are z1 = −0.438 mm (grapes), z2 = −0.568 mm (tangerine, apple), z3 = −0.767 mm (pumpkin), z4 = −0.996 mm (orange), and z5 = −1.315 mm (basket). Figure 10 shows the amplitude and the phase of the synthesized holograms with and without the proposed method, and Fig. 11 shows their reconstruction results. In Fig. 11, red and blue boxes highlight focused object parts. It is clearly confirmed from Fig. 11 that the reconstructions are blurred (marked A), dispersed (marked B), or false-colored (marked C), all because of the color aberration, when the proposed method is not applied. The proposed method solves these problems, generating clearly focused reconstructions as shown in the lower part of Fig. 11.

Figure 9. Light field data of natural scene with continuous depth 3D objects. (a) Single orthographic view, and (b) collection of 64 × 64 orthographic views.

Figure 10. Amplitude and phase of hologram without (top) and with (bottom) proposed method.

Figure 11. Numerical reconstructions of holograms synthesized without (top), and with (bottom) proposed method.

VI. CONCLUSION

In this paper, we propose a method to calculate a full-color non-hogel-based CGH from light field data of 3D scenes. All color channels of the light field data share the same FoV, but the wavelength dependency of the angle-equivalent spatial frequency makes their sampling conditions different, complicating the synthesis of a color hologram with a shared pixel pitch. The proposed method resamples the light field data along the spatial frequency axes such that all red, green, and blue color channels have the same sampling grid. The sampling range and interval along the spatial frequency axes are set by those of the blue and red color channels, respectively, which ensures no loss of information. The proposed method is verified using various 3D scenes, including a single-depth-plane test target, multiple-plane test targets at different depths, and natural continuous 3D objects distributed in space. The reconstructions of the color holograms synthesized by the proposed method show clear focus of the 3D objects at their depths without any color aberrations, successfully confirming the feasibility of the proposed method.

ACKNOWLEDGMENT

This work was supported by INHA UNIVERSITY Research Grant.

Current Optics and Photonics 2021; 5: 409-420. https://doi.org/10.3807/COPP.2021.5.4.409

