
Research Paper

Curr. Opt. Photon. 2021; 5(1): 23-31

Published online February 25, 2021 https://doi.org/10.3807/COPP.2021.5.1.023

Copyright © Optical Society of Korea.

Wheel Screen Type Lamina 3D Display System with Enhanced Resolution

Hogil Baek1, Hyunho Kim1, Sungwoong Park1, Hee-Jin Choi2, Sung-Wook Min1

1Department of Information Display, Kyung Hee University, Seoul 02447, Korea
2Department of Physics and Astronomy, Sejong University, Seoul 05006, Korea

Corresponding author: *mins@khu.ac.kr, ORCID 0000-0003-4794-356X

Received: November 3, 2020; Revised: December 11, 2020; Accepted: December 18, 2020

Abstract

We propose a wheel screen type Lamina 3D display, which realizes a 3D image that can satisfy the accommodation cue by projecting volumetric images, encoded with varying polarization states, onto a multilayered screen. The proposed system is composed of two parts: an encoding part that converts depth information into states of polarization, and a decoding part that projects the depth images onto the corresponding diffusing layers. Though the basic principle of Lamina displays has already been verified by previous studies, those schemes suffered from a bottleneck of inferior 3D image resolution due to blurring on the surfaces of the diffusing layers in the stacked volume. In this paper, we propose a new structure that implements the decoding part in the form of a wheel screen. Experimental verification is also provided to support the proposed principle.

Keywords: Depth-fused display, Lamina 3D display, Wheel screen

OCIS codes: (100.6890) Three-dimensional image processing; (110.2960) Image analysis; (120.4570) Optical design of instruments

I. INTRODUCTION

Since it is expected that an ideal 3D display should provide complete visual information to the observer, various technologies and methods have been adopted over the last few decades [1-29] to satisfy the physiological cues of visual perception [30-32]. Nevertheless, the stereoscopic 3D display, which is the most common type and provides binocular disparity only, can cause visual fatigue due to vergence-accommodation conflicts [33-36]. Though the holographic method is considered an ideal solution that provides all visual cues [35], it has not reached a commercial level, since the recording and decoding of holographic images require substantial system resources. Thus, volumetric 3D displays, which provide more natural 3D images in real space than conventional stereoscopic 3D displays, with a smaller quantity of data than the holographic method, are considered a practical solution to replace conventional stereoscopic 3D displays. Among the various volumetric 3D methods, the multilayered volumetric 3D display can provide a 3D volume by stacking several plane images [12-26] and can express continuous depth through the depth-fused effect [22-29] if the partial images match well on the retina.

Previously, we proposed passive [22] and active [23] types of Lamina 3D displays, classified by the structure of the decoding part, which uses selective-scattering polarizer films or polarizer windows with active layers, respectively. For the encoding part, the polarization distributed depth map (PDDM) technique [24] is applied regardless of the type of decoding part. Thus, the Lamina 3D display system could reconstruct a volumetric 3D image from the polarization-filtered images using a combination of the encoding and decoding parts described above. However, there was a bottleneck in that the resolution of the 3D image decreases as the number of layers increases, because some images are multi-scattered from other layers during the display process. Thus, it was difficult to increase the number of layers in previous Lamina 3D displays. In this paper, to resolve the problem above, we propose an improved method with enhanced 3D resolution using the form of a wheel screen, which can preserve the resolution of 3D images regardless of the number of diffusing layers.

II. METHODS

The conceptual diagram illustrating the composition and principle of the proposed Lamina 3D display is shown in Fig. 1. First, the encoding part consists of an LCoS projector, as the image source device providing RGB data, and an LC-SLM, as a polarization encoder that converts the depth data to the corresponding polarization states. The polarization state of the PDDM in the LC-SLM is modulated from 0° to 90° according to the depth data expressed in grayscale. Then, the RGB image combined with the modulated PDDM through the relay optics becomes a polarization-encoded image. Since only the polarization angle of the image is modulated in the PDDM process, it does not suffer from spatial degradation due to depth multiplexing. Second, in the decoding part, the difference between the polarization state of each pixel of the polarization-encoded image and the polarization decoding axis determines the intensity of the partial image to be projected onto each polarization screen, denoted P1 to P5. Thus, the filtered images can appear at different discrete locations that can satisfy accommodation cues, depending on the polarization decoding axis, as shown in Fig. 1.

Figure 1. Conceptual diagram of the system configuration of the prototype.
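As a concrete sketch of the mapping just described, the following Python snippet models how a grayscale depth value could be encoded as a polarization angle and how each decoding axis filters it by Malus's law. The function names and the 8-bit depth convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_to_polarization(depth_gray):
    """Map an 8-bit depth value (0-255) to a polarization angle between 0 and 90 degrees.
    The linear mapping is an assumption for illustration."""
    return depth_gray / 255.0 * 90.0

def decoded_intensity(i0, pixel_angle_deg, screen_axis_deg):
    """Intensity passed to a screen whose decoding axis is screen_axis_deg (Malus's law)."""
    delta = np.radians(pixel_angle_deg - screen_axis_deg)
    return i0 * np.cos(delta) ** 2

# A pixel encoded at 45 degrees spreads its light over the five screens P1-P5:
axes_deg = [0.0, 22.5, 45.0, 67.5, 90.0]
weights = [decoded_intensity(1.0, 45.0, a) for a in axes_deg]
```

Because the screens are time-multiplexed on the wheel, each layer sees the image through a single polarizer at a time, so Malus's law applies per screen without cascaded attenuation.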

Figure 2 shows an example of polarization decoding, to specifically illustrate the expected intensities of the polarization-filtered images on each screen of Fig. 1. The arrows in Fig. 2 indicate the angle of each polarization state. Figure 2(a) shows the polarization map encoded from the depth map, and Fig. 2(b) shows the intensities of the polarization-filtered image on each screen. As shown in Fig. 2(b), the intensities of the filtered image are distributed throughout the screens. Therefore, when the filtered images are superimposed by the afterimage effect, a 3D image with the depth-fused effect can be reconstructed.

Figure 2. Filtered images according to the polarization decoding axis: (a) PDDM on the polarization-encoded image, (b) intensity ratios of the filtered images.

$$I_s = I_n + I_f, \tag{1}$$

$$D_s = D_n - \left(1 - \frac{I_n}{I_s}\right)\left(D_n - D_f\right) = D_n - \frac{I_f}{I_s}\left(D_n - D_f\right). \tag{2}$$

Eq. (1) is the intensity condition of the linear depth-weighted blending rule [25] with two depth planes. In Eqs. (1) and (2), In and If denote the intensities of the near and far planes, respectively, and Is is the summation of all intensities. Eq. (2) gives the depth position Ds derived by the linear depth-weighted blending rule, which expresses the depth position perceived by humans in terms of the intensity ratio of the near and far planes. In Eq. (2), Dn and Df denote the diopter values of the near and far planes, respectively.
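As a sanity check on Eqs. (1) and (2), a minimal implementation of the two-plane blending rule (the function name is ours):

```python
def perceived_depth_two_plane(i_near, i_far, d_near, d_far):
    """Eqs. (1)-(2): perceived depth (in diopters) for two planes
    under the linear depth-weighted blending rule."""
    i_s = i_near + i_far                              # Eq. (1)
    return d_near - (i_far / i_s) * (d_near - d_far)  # Eq. (2)

# Equal intensities place the fused image midway between the planes (in diopters):
midpoint = perceived_depth_two_plane(1.0, 1.0, 2.0, 1.0)  # 1.5 D
```

With all intensity on the near plane the perceived depth coincides with the near plane, as expected.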

Though the expression for the depth position can be derived from Eq. (2), a new equation is necessary for the proposed scheme, since Eq. (2) covers only two depth planes. For that purpose, we derive a formula expanding the previous linear depth-weighted blending rule to several planes, as shown in Eq. (3). In Eq. (3), Im and Dm denote the intensity and the diopter value of a single pixel at each layer, given by Malus's law as the original image passes through the polarization decoding axis, where m = 1 and m = k correspond to the nearest and the farthest depth planes, respectively.

$$D_s = D_n - \sum_{m=1}^{k} \frac{I_m}{\sum_{m=1}^{k} I_m}\left(D_n - D_m\right) = D_f + \sum_{m=1}^{k} \frac{I_m}{\sum_{m=1}^{k} I_m}\left(D_m - D_f\right). \tag{3}$$
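Eq. (3) is the intensity-weighted mean of the layer diopters, as a short sketch makes explicit (names are ours):

```python
def perceived_depth_multi(intensities, diopters):
    """Eq. (3): perceived depth over k planes; index 0 is the nearest plane (m = 1)."""
    i_s = sum(intensities)
    d_n = diopters[0]
    return d_n - sum(i * (d_n - d) for i, d in zip(intensities, diopters)) / i_s
```

Algebraically this equals sum(I_m * D_m) / sum(I_m), and for k = 2 it reduces to Eq. (2).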

Figure 3 shows examples of the depth position Ds plotted using Eq. (3) for various numbers of depth layers. We assume that the nearest and the farthest layers have polarization axes of 0° and 90° and are located 1.6 m and 1.62 m from the observer, respectively. The layers between them have equally distributed polarization axes; for example, when 5 layers are used, their decoding axes are 0°, 22.5°, 45°, 67.5°, and 90°, as described in Fig. 2. The simulation results show that the depth range of the reconstructed image is almost constant regardless of the number of layers. However, as the number of layers in the same volume increases, the density of the volume increases and the viewing angle, among the observation characteristics, can be improved, as shown in Fig. 4. Conversely, increasing the gap between the layers enlarges the displacements; in other words, the displacements are proportional to both the observation angle and the gap between layers. To reduce the displacements, the gap between the layers should be minimized, but the displacements remain imperceptible within a specific viewing-angle range, since the observer cannot distinguish differences in the retinal image below 1 minute (1/60°) of visual angle [37]. Thus, we implemented a 5-layer system, shown in Fig. 3(c), as an experimental setup.

Figure 3. Results of the position simulation: (a) 3 layers, (b) 4 layers, and (c) 5 layers.
Figure 4. Viewing angle reduction problem due to displacements: (a) low density of volume, (b) higher density of volume, (c) non-overlapped condition, and (d) overlapped condition.
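The 5-layer depth-profile simulation (cf. Fig. 3(c)) can be sketched as follows, under the stated assumptions: decoding axes from 0° to 90° in 22.5° steps, and layers evenly spaced between 1.6 m and 1.62 m. Variable names are ours.

```python
import numpy as np

axes_deg = np.array([0.0, 22.5, 45.0, 67.5, 90.0])  # decoding axes of P1-P5
distances_m = np.linspace(1.60, 1.62, 5)            # layer positions from the observer
diopters = 1.0 / distances_m                        # D_m in Eq. (3)

def depth_profile(theta_deg):
    """Perceived depth (meters) for a pixel whose encoded polarization angle is theta_deg."""
    weights = np.cos(np.radians(theta_deg - axes_deg)) ** 2  # Malus's law intensities
    d_s = np.sum(weights * diopters) / np.sum(weights)       # Eq. (3) as a weighted mean
    return 1.0 / d_s
```

Sweeping theta_deg from 0° to 90° traces a monotonic depth profile; because every encoding angle leaks some intensity onto every layer, the reconstructed range is slightly compressed relative to the physical 1.60-1.62 m volume.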

In addition, we can consider another method of composing the wheel screen, in which the polarization decoding axes rotate in the opposite direction, as shown in Fig. 5; this is the conventional method used in previous types of Lamina displays [22, 23]. In that case, the distribution of pixel-wise intensities differs from the results shown in Fig. 2, so the reconstructed profile of the depth expression is also different. Comparing the two depth profiles of the proposed method and the opposite rotation, shown in Fig. 3 and Fig. 6 respectively, the proposed method is capable of more detailed depth expression. Therefore, it is confirmed that the Lamina system can express detailed depth by adopting the new rotation method of the polarization decoding axis shown in Fig. 2.

Figure 5. Filtered images according to the polarization decoding axis in the opposite direction: (a) PDDM on the polarization-encoded image, (b) intensity ratios of the filtered images.
Figure 6. Results of the position simulation in the opposite direction: (a) 3 layers, (b) 4 layers, and (c) 5 layers.

III. EXPERIMENT

Conventionally, display devices such as the LC-SLM have a gamma curve adjusted to the nonlinear sensitivity of the human visual system. Since the role of the LC-SLM in the proposed system is to apply the PDDM, the LC-SLM must provide a linear modulation of polarization without the gamma curve. For that purpose, we experimentally measured the response and applied nonlinear grayscales to produce linear depth data for the PDDM.
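One way such a linearization can be done is with an inverse lookup table built from the measured response. The gamma-like curve below is purely illustrative, not the authors' measurement, and all names are ours.

```python
import numpy as np

gray = np.arange(256, dtype=float)
# Assumed nonlinear gray-to-polarization response of the LC-SLM (illustrative gamma 2.2):
measured_angle = 90.0 * (gray / 255.0) ** 2.2

# Desired: polarization angle linear in the depth value.
target_angle = np.linspace(0.0, 90.0, 256)
# For each target angle, the grayscale the panel must actually be driven with:
correction_lut = np.interp(target_angle, measured_angle, gray)
```

Driving the panel with correction_lut[d] for depth value d then yields an approximately linear polarization ramp.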

The demo system is composed of an LCoS projector (Sony VPL-HW55ES; 1920 × 1080, 60 Hz), an LC-SLM (Epson L3P14Y-55G00; 1400 × 1050, 60 Hz), a wheel screen consisting of 5 polarization layers driven by an AC servo motor (SANYO DENKI R2AA06020FXH00W), and other optical components such as the projection lens and relay optics, as shown in Fig. 7. No additional synchronization process is required, because at every moment the partial images are output according to the depth information corresponding to the polarization decoding axis of the wheel screen. However, since flicker-free 3D images must be presented faster than the human visual response time, the motor should rotate fast enough that 3D images are output at a rate of at least 24 Hz. Also, to provide a layered volume, the wheel screen has black opaque areas between adjacent polarization screens. These opaque areas, each at least as large as a polarization screen, separate the polarization screens into discrete layers, so that the same partial images do not appear on different polarization screens while the wheel rotates. Therefore, the area ratio of the polarization screens to the opaque areas must also be considered. For example, to output 3D images at 60 Hz, the wheel screen was configured with 10 polarization screens (two full sets of the five decoding axes) and 10 black opaque areas, so that one revolution produces two volume frames, and the motor was rotated at 1800 rpm. With this setup, the volume and frame rate of the 3D images are 27 mm × 20 mm × 20 mm and 60 Hz, respectively.

Figure 7. Experimental setup of the prototype.
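The frame-rate arithmetic for the wheel described above works out as follows (a sketch of the stated numbers, not measured values):

```python
rpm = 1800                     # motor speed
screens_per_rev = 10           # polarization screens on the wheel
layers_per_volume = 5          # decoding axes P1-P5 per 3D frame

revs_per_sec = rpm / 60                                # 30 revolutions per second
volumes_per_rev = screens_per_rev / layers_per_volume  # two full volumes per turn
frame_rate_hz = revs_per_sec * volumes_per_rev         # 60 Hz, matching the text
```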

Figure 8 shows the experimental results obtained with the above setup. First, to verify the principles described above, a white 3D image with a five-step depth map is displayed. Figure 8(a) shows the intensities of the filtered images on each screen (P1–P5) and the reconstructed image, which is observed to have the expected stepwise structure along the decoding axis direction. Figure 8(b) shows the observed white 3D images, captured from various angles and showing proper perspective according to the viewing direction. In addition, Figs. 8(c) and 8(d) present experimental results using the RGB and depth data (three dice) shown in Fig. 7. Figure 8(c) shows the filtered images along the polarization decoding axis; the intensity of each die in each layer differs according to the depth data. The green die should be represented in front according to the depth data, so its intensity gradually decreases toward the back, and it does not appear in P5. Conversely, the red die appears from P3 and is strongest in P5. The blue die is displayed on all layers, with the strongest intensity in P3. Since the multilayered images are superimposed, the blue die of the reconstructed image is perceived, by the depth-fused effect, as located between the green and red dice. Figure 8(d) shows the implemented 3D image photographed at specific angles. When viewed on the central axis (viewing angle: 0°), the image looks very clear, whereas the filtered images do not overlap well above 5°. At that point the displacement exceeds the human eye resolution of 1 minute (1/60°) [37] and the depth-fused effect condition [28, 38, 39] no longer holds, which means that the optimal viewing angle of this system is limited to around 5°, as explained with Fig. 4.

Figure 8. Experimental results: (a) filtered images on each layer, confirming the same results as the simulation conditions, (b) reconstructed 3D images according to the viewing angle, (c) filtered images for the RGB and depth data in Fig. 7, and (d) reconstructed 3D images according to the viewing angle for the RGB and depth data in Fig. 7.

To improve the picture quality of Fig. 8(d), the filtered images must, above all, overlap well so that the depth-fused effect can be applied. For this, the projection lens should deliver clear images to the wheel screen over a wide depth of focus (DOF), and the size of the filtered images should be kept constant so that they match well at the observer's position. In other words, the depth range is limited by the DOF of the projection lens. To expand the depth range, it is recommended to fabricate a projection lens with the characteristics of a telecentric lens, which has a wide DOF.

IV. CONCLUSION

In this paper, we proposed a novel method to enhance the resolution of the depth-fused display by composing a wheel-type screen with multiple polarization decoding surfaces. The experimental demonstration supports the proposed scheme by reconstructing the depth profiles as simulated. A new rotation method of the polarization axes is also proposed to express detailed depth. We expect that the proposed method can contribute to realizing a volumetric display that presents natural 3D images satisfying accommodation cues.

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2018R1A2B6005260).

1. H. Hua, "Enabling focus cues in head-mounted displays," Proc. IEEE 105, 805-824 (2017).
2. M. von Waldkirch, P. Lukowicz and G. Tröster, "Defocusing simulations on a retinal scanning display for quasi accommodation-free viewing," Opt. Express 11, 3220-3233 (2003).
3. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim and J.-H. Park, "3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation," Opt. Express 23, 32025-32034 (2015).
4. E. Moon, M. Kim, J. Roh, H. Kim and J. Hahn, "Holographic head-mounted display with RGB light emitting diode light source," Opt. Express 22, 6526-6534 (2014).
5. T. Ando, K. Yamasaki, M. Okamoto, T. Matsumoto and E. Shimizu, "Retinal projection display using holographic optical element," Proc. SPIE 3956, 211-216 (2000).
6. H. Huang and H. Hua, "Systematic characterization and optimization of 3D light field displays," Opt. Express 25, 18508-18525 (2017).
7. H. Hua and B. Javidi, "A 3D integral imaging optical see-through head-mounted display," Opt. Express 22, 13484-13491 (2014).
8. A. Yuuki, K. Itoga and T. Satake, "A new Maxwellian view display for trouble-free accommodation," J. Soc. Inf. Disp. 20, 581-588 (2012).
9. F.-C. Huang, D. Luebke and G. Wetzstein, "The light field stereoscope," in Proc. ACM SIGGRAPH 2015 Emerging Technologies (Los Angeles, CA, USA, 2015), article no. 24.
10. S. Lee, Y. Jo, D. Yoo, J. Cho, D. Lee and B. Lee, "TomoReal: Tomographic Displays," arXiv:1804.04619 (2018).
11. D. Lanman and D. Luebke, "Near-eye light field displays," ACM Trans. Graph. 32, 220 (2013).
12. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke and H. Fuchs, "Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources," ACM Trans. Graph. 33, 89 (2014).
13. K. J. MacKenzie, D. M. Hoffman and S. J. Watt, "Accommodation to multiple-focal-plane displays: implications for improving stereoscopic displays and for accommodation control," J. Vis. 10, 22 (2010).
14. S. Liu, D. Cheng and H. Hua, "An optical see-through head mounted display with addressable focal planes," in Proc. 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (Cambridge, UK, 2008), pp. 33-42.
15. S. Zhu, P. Jin, W. Qiao and L. Gao, "High-resolution head mounted display using stacked LCDs and birefringent lens," Proc. SPIE 10676, 106761B (2018).
16. P. V. Johnson, J. A. Q. Parnell, J. Kim, C. D. Saunter, G. D. Love and M. S. Banks, "Dynamic lens and monovision 3D displays to improve viewer comfort," Opt. Express 24, 11808-11827 (2016).
17. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby and M. S. Banks, "High-speed switchable lens enables the development of a volumetric stereoscopic display," Opt. Express 17, 15716-15725 (2009).
18. D. Kim, S. Lee, S. Moon, J. Cho, Y. Jo and B. Lee, "Hybrid multi-layer displays providing accommodation cues," Opt. Express 26, 17170-17184 (2018).
19. W. Cui and L. Gao, "Optical mapping near-eye three-dimensional display with correct focus cues," Opt. Lett. 42, 2475-2478 (2017).
20. M. von Waldkirch, P. Lukowicz and G. Tröster, "Oscillating fluid lens in coherent retinal projection displays for extending depth of focus," Opt. Commun. 253, 407-418 (2005).
21. D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Aksit, P. Didyk, K. Myszkowski, D. Luebke and H. Fuchs, "Wide field of view varifocal near-eye display using see-through deformable membrane mirrors," IEEE Trans. Vis. Comput. Graph. 23, 1322-1331 (2017).
22. S.-G. Park, S. Yoon, J. Yeom, H. Baek, S.-W. Min and B. Lee, "Lamina 3D display: projection-type depth-fused display using polarization-encoded depth information," Opt. Express 22, 26162-26172 (2014).
23. S. Yoon, H. Baek, S.-W. Min, S.-G. Park, M.-K. Park, S.-H. Yoo, H.-R. Kim and B. Lee, "Implementation of active-type Lamina 3D display system," Opt. Express 23, 15848-15856 (2015).
24. S.-G. Park, J.-H. Kim and S.-W. Min, "Polarization distributed depth map for depth-fused three-dimensional display," Opt. Express 19, 4316-4323 (2011).
25. S. Ravikumar, K. Akeley and M. S. Banks, "Creating effective focus cues in multi-plane 3D displays," Opt. Express 19, 20940-20952 (2011).
26. S.-G. Park, Y. Yamaguchi, J. Nakamura, B. Lee and Y. Takaki, "Long-range 3D display using a collimated multi-layer display," Opt. Express 24, 23052-23062 (2016).
27. S. Liu and H. Hua, "A systematic method for designing depth-fused multi-focal plane three-dimensional displays," Opt. Express 18, 11562-11573 (2010).
28. A. Tsunakawa, T. Soumiya, H. Yamamoto and S. Suyama, "Perceived depth change of depth-fused 3-D display with changing distance between front and rear planes," IEICE Trans. Electron. E96-C, 1378-1383 (2013).
29. S.-G. Park, J.-H. Jung, Y. Jeong and B. Lee, "Depth-fused display with improved viewing characteristics," Opt. Express 21, 28758-28770 (2013).
30. L. O'Hare, T. Zhang, H. T. Nefs and P. B. Hibbard, "Visual discomfort and depth-of-field," i-Perception 4, 156-169 (2013).
31. B. Sweet and M. Kaiser, "Depth perception, cueing, and control," in Proc. AIAA Modeling and Simulation Technologies Conference (Portland, OR, USA, 2011), paper 6424.
32. K. N. Ogle and J. T. Schwartz, "Depth of focus of the human eye," J. Opt. Soc. Am. 49, 273-280 (1959).
33. M. Lambooij, W. IJsselsteijn, M. Fortuin and I. Heynderickx, "Visual discomfort and visual fatigue of stereoscopic displays: a review," J. Imaging Sci. Technol. 53, 030201-1-030201-4 (2009).
34. D. M. Hoffman, A. R. Girshick, K. Akeley and M. S. Banks, "Vergence-accommodation conflicts hinder visual performance and cause visual fatigue," J. Vis. 8, 33 (2008).
35. G.-A. Koulieris, B. Bui, M. S. Banks and G. Drettakis, "Accommodation and comfort in head-mounted displays," ACM Trans. Graph. 36, 87 (2017).
36. M. Lambooij, M. Fortuin, W. Ijsselsteijn, B. Evans and I. Heynderickx, "Measuring visual fatigue and visual discomfort associated with 3-D displays," J. Soc. Inf. Display 18, 931-943 (2010).
37. in APA Handbook of Human Systems Integration (American Psychological Association, WA, USA, 2015), pp. 229-245.
38. S. Suyama and H. Yamamoto, "Recent developments in DFD (depth-fused 3D) display and arc 3D display," Proc. SPIE 9495, 949507 (2015).
39. M. Date, Y. Andoh, H. Takada, Y. Ohtani and N. Matsuura, "Invited paper: depth reproducibility of multiview depth-fused 3-D display," J. Soc. Inf. Disp. 18, 470-475 (2010).

Article

Research Paper

Curr. Opt. Photon. 2021; 5(1): 23-31

Published online February 25, 2021 https://doi.org/10.3807/COPP.2021.5.1.023

Copyright © Optical Society of Korea.

Wheel Screen Type Lamina 3D Display System with Enhanced Resolution

Hogil Baek1, Hyunho Kim1, Sungwoong Park1, Hee-Jin Choi2, Sung-Wook Min1

1Department of Information Display, Kyung Hee University, Seoul 02447, Korea
2Department of Physics and Astronomy, Sejong University, Seoul 05006, Korea

Correspondence to:*mins@khu.ac.kr, ORCID 0000-0003-4794-356X

Received: November 3, 2020; Revised: December 11, 2020; Accepted: December 18, 2020

Abstract

We propose a wheel screen type Lamina 3D display, which realizes a 3D image that can satisfy the accommodation cue by projecting volumetric images encoded by varying polarization states to a multilayered screen. The proposed system is composed of two parts: an encoding part that converts depth information to states of polarization and a decoding part that projects depth images to the corresponded diffusing layer. Though the basic principle of Lamina displays has already been verified by previous studies, those schemes suffered from a bottleneck of inferior resolution of the 3D image due to the blurring on the surfaces of diffusing layers in the stacked volume. In this paper, we propose a new structure to implement the decoding part by adopting a form of the wheel screen. Experimental verification is also provided to support the proposed principle.

Keywords: Depth-fused display, Lamina 3D display, Wheel screen

I. INTRODUCTION

Since it is expected that an ideal 3D display should provide complete visual information to the observer, for the last few decades various technologies and methods have been adopted [129] to satisfy physiological cues of visual perception [3032]. Nevertheless, the stereoscopic 3D display, which is the most common type and provides the binocular disparity only, can cause visual fatigue due to vergence-accommodation conflicts [3336]. Though the holographic method is considered as an ideal solution to provide whole visual cues [35], it could not reach a commercial level since the recording and decoding of holographic images require lots of system resources. Thus, the volumetric 3D displays, which provide more natural 3D images in real space than conventional stereoscopic 3D displays, with less quantity of data than the holographic method, are considered as a practical resolution to replace conventional stereoscopic 3D displays. Among various volumetric 3D methods, the multilayered volumetric 3D display can provide a 3D volume by stacking several plane images [1226] and can express continuous depth by the depth-fused effect [2229] if the partial images match well on the retina.

Previously, we proposed two types of passive [22] and active [23] Lamina 3D displays to be classified by the structure of the decoding part using selective scattering polarizer films or polarizer windows with the active layers, respectively. For the encoding part, the polarization distributed depth map (PDDM) technique [24] is applied regardless of the types of decoding parts. Thus, the Lamina 3D display system could reconstruct the volumetric 3D image from the polarization filtered images using a combination of the encoding and decoding parts described above. However, there was a bottleneck in that the resolution of the 3D image decreases as the number of layers increases because some images are multi-scattered from other layers during the display process. Thus, it was difficult to increase the number of layers due to the problem in previous Lamina 3D displays. In this paper, to resolve the problem above, we propose an improved method with enhanced 3D resolution using a form of a wheel screen, which can preserve the resolution of 3D images regardless of the number of diffusing layers.

II. METHODS

The conceptual diagram illustrating the composition and principle of the proposed Lamina 3D display is shown in Fig. 1. At first, the encoding part consists of an LCoS projector as an image source device that provides RGB data and an LC-SLM as a polarization encoder that converts the depth data to the corresponding polarization states. The polarization state of the PDDM in the LC-SLM is modulated from 0° to 90° according to the depth data expressed in grayscale. Then, the RGB image combined with modulated PDDM through the relay optics becomes a polarization-encoded image. Since only the polarization angle of the image is modulated through the PDDM process, it does not suffer from spatial degradation due to the depth multiplexing. Secondly, in the decoding part, the difference between the polarization state of each pixel of the polarization-encoded image and the polarization decoding axis determines the intensity of the partial image onto be projected to each polarization screen denoted as P1 to P5. Thus, the filtered images can appear differently at discrete locations that can satisfy accommodation cues depending on the polarization decoding axis as shown in Fig. 1.

Figure 1. Conceptual diagram of the system configuration of the prototype.

Figure 2 shows an example of polarization decoding to illustrate specifically the expected intensities of polarization-filtered images in each screen of Fig. 1. The arrows in Fig. 2 indicate the angles of each polarization state. Figure 2(a) shows the polarization map encoded from the depth map and Fig. 2(b) shows the intensities of the polarization-filtered image at each screen. As shown in Fig. 2(b), the intensities of the filtered image are distributed throughout the screens. Therefore, when filtered images are superimposed by the afterimage effect, the 3D image with a depth fusing effect can be reconstructed.

Figure 2. Filtered images according to polarization decoding axis: (a) PDDM on the polarization-encoded image, (b) Intensities ratio of filtered images.
Is=In+If, Ds=Dn1InIs(DnDf)=DnIfIs(DnDf).

Eq. (1) is the condition of the linear depth-weighted blending rule [25] with two depth planes. In Eqs. (1) and (2), In and If denote the intensities of the near and the far planes, respectively. In addition, Is means the summation of all intensities. Eq. (2) represents the depth position Ds derived by the linear depth-weighted blending rule to express the depth position perceived by humans by the intensity ratio of the near and the far planes. In Eq. (2), Dn and Df mean the diopter value of the near and the far planes, respectively.

Though the expression of depth position can be derived from Eq. (2), a new equation for the proposed scheme is necessary since Eq. (2) covers only two depth planes. For that purpose, we derive a formula expanding the previous linear depth-weighted blending rule to several planes as shown in Eq. (3). In Eq. (3), Im and Dm mean the intensity and the diopter value of a single-pixel at each interlayer as the original image passes through the polarization decoding axis according to Malus’ law where m = 1 and m = k are the nearest and the farthest depth planes, respectively.

Ds=Dn m=1k ImI m1 Im(Dn Dm )=Df+ m=1k I m1 Im(Dm Df ).

Figure 3 shows some examples of plotting the depth position Ds using Eq. (3) for various numbers of depth layers. We assume that the nearest and the farthest layers have polarization axes of 0° and 90° and are located 1.6 m and 1.62 m far from the observer, respectively. Besides, the other layers between them have equivalently distributed polarization axes. For example, when 5 layers are used, their decoding axes are 0°, 22.5°, 45°, 67.5°, and 90° as described in Fig. 2. The simulation results show that the depth range of the reconstructed image is almost constant regardless of the number of layers. However, as the number of layers increases in the same volume, the density of the volume increases, and the viewing angle among observation characteristics can be improved as shown in Fig. 4. Conversely, increasing the gap between the layers enlarges the displacements. In other words, the displacements are proportional to the observation angle and to the gap between layers. In order to reduce the displacements, it is recommended to minimize the gap between the layers, but it can be said that it is optimized within a specific viewing angle range even when the observer cannot distinguish the retinal image if the difference is below 1 minute (1/60°) of visual angle [37]. Thus, we implemented a system with 5 layers shown in Fig. 3(c) for an experimental setup.

Figure 3. Results of the position simulation: (a) 3 layers, (b) 4 layers, and (c) 5 layers.
Figure 4. Viewing angle reduction problem due to displacements: (a) low density of volume, (b) higher density of volume, (c) not overlapped condition, and (d) overlapped condition.

Besides, we can also consider another method to compose the wheel screen by allowing inversely rotating polarization decoding axes as shown in Fig. 5, which is a conventional method used for previous types of Lamina displays [22, 23]. In that case, the distribution of pixel-wise intensities will differ from the results shown in Fig. 2. Thus, the reconstructed profile of depth expression is also different. In the comparison of those two depth profiles of the proposed method and the opposite rotation, which are shown in Fig. 3 and Fig. 6, respectively, it can be noted that the proposed method is capable of more detailed depth expression. Therefore, it can be confirmed that the detailed depth of the Lamina system can be expressed by adopting a new rotation method of the polarization decoding axis as shown in Fig. 2.

Figure 5. Filtered images according to polarization decoding axis in the opposite direction: (a) PDDM on the polarization-encoded image, (b) Intensities ratio of filtered images.
Figure 6. Results of the position simulation in the opposite direction: (a) 3 layers, (b) 4 layers, and (c) 5 layers.

III. EXPERIMENT

Conventionally, display devices such as an LC-SLM have their gamma curve adjusted to the non-linear sensitivity of the human visual system. Since the role of the LC-SLM in the proposed system is to apply the PDDM, the LC-SLM must instead provide a linear modulation of polarization without the gamma curve. For that purpose, we experimentally measured the response and applied non-linear grayscales to produce linear depth data for the PDDM.
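This linearization step can be sketched as a lookup-table inversion. The snippet below is only an illustration: the measured response is modeled as a simple 2.2 power-law gamma, which is an assumption standing in for the actual experimentally measured curve of the LC-SLM. For each desired linear level, it selects the input grayscale whose measured output is closest.

```python
import numpy as np

# Hypothetical measured response of the LC-SLM: output vs. input
# grayscale, modeled here as a 2.2 gamma curve (the real system
# would substitute the experimentally measured values).
grayscales = np.arange(256)
measured = (grayscales / 255.0) ** 2.2

def linearizing_lut(measured_response):
    """For each desired linear output level, find the input grayscale
    whose measured output is closest: an inverse-gamma lookup table."""
    targets = np.linspace(0.0, 1.0, len(measured_response))
    return np.array([int(np.argmin(np.abs(measured_response - t)))
                     for t in targets])

lut = linearizing_lut(measured)
# Applying lut to each depth-map pixel makes the effective PDDM
# response linear in the commanded depth value.
```

Because the modeled response is monotonic, the resulting table is monotonic as well, and mid-range depth levels map to boosted grayscales, as expected for an inverse-gamma correction.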

The demo system is composed of an LCoS projector (Sony VPL-HW55ES; 1920 × 1080, 60 Hz), an LC-SLM (Epson L3P14Y-55G00; 1400 × 1050, 60 Hz), a wheel screen consisting of 5 polarization layers driven by an AC servo motor (SANYO DENKI; R2AA06020FXH00W), and other optical components such as a projection lens and relay optics, as shown in Fig. 7. No separate synchronization process is required, because at every moment the partial images are output according to their depth information along the polarization decoding axis of the wheel screen. However, since flicker-free 3D images must be refreshed faster than the human visual response time, the motor should rotate quickly enough that 3D images are output at a rate of at least 24 Hz. Also, to provide a layered volume, the wheel screen has black opaque areas between adjacent polarization screens. These opaque areas, each at least as large as a polarization screen, separate the polarization screens into discrete layers so that the same partial image does not appear on different polarization screens as the wheel rotates. Therefore, the area ratio of the polarization screens to the opaque areas must also be considered. For example, to output a 3D image at 60 Hz, the wheel screen was configured with 10 black opaque areas and 10 polarization screens, so that one motor revolution provides two volume frames, and the motor was rotated at 1800 rpm. With this setup, the volume and frame rate of the 3D images are 27 mm × 20 mm × 20 mm and 60 Hz, respectively.
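The timing relation above reduces to a short rate calculation. The segment counts and motor speed are taken from the setup described; the helper function itself is just an illustrative arithmetic sketch.

```python
def volume_rate_hz(rpm, polarization_segments, distinct_layers):
    """Volume frames per second: each revolution sweeps
    polarization_segments / distinct_layers complete volumes."""
    revolutions_per_second = rpm / 60.0
    volumes_per_revolution = polarization_segments / distinct_layers
    return revolutions_per_second * volumes_per_revolution

# 10 polarization screens (plus 10 opaque areas) carrying 5 distinct
# layers give two volumes per revolution; at 1800 rpm (30 rev/s) the
# volume rate is 60 Hz, matching the demo system.
rate = volume_rate_hz(1800, 10, 5)  # 60.0
```

The same relation shows why the motor must stay above 720 rpm in this configuration: below that, the volume rate drops under the 24 Hz flicker threshold mentioned above.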

Figure 7. Experimental setup of the prototype.

Figure 8 shows the experimental results obtained with the above setup. Firstly, to verify the principles described in the previous chapter, a white 3D image with a five-step depth map is displayed. Figure 8(a) shows the intensities of the filtered images on each screen (P1–P5) and the reconstructed image, which is observed to have a stepwise structure as expected according to the decoding axis direction. Figure 8(b) shows the observed white 3D images captured from various angles, showing proper perspectives according to the decoding axis direction. Besides, Figs. 8(c) and 8(d) present experimental results using the RGB and depth data (three dice) shown in Fig. 7. Figure 8(c) shows the filtered images along the polarization decoding axis; the intensity of each die in each layer differs according to the depth data. Since the green die should be represented in front according to the depth data, its intensity gradually decreases toward the rear layers and it does not appear in P5. Conversely, the red die stands out from P3 and is strongest in P5. The blue die is displayed on all layers, but its intensity is strongest in P3. Since the multilayered images are superimposed, the blue die in the reconstructed image is recognized, by the depth-fused effect, as being located between the green and red dice. Figure 8(d) shows the results of photographing the implemented 3D image at specific angles. When the 3D image is viewed on the central axis (viewing angle: 0°), it looks very clear, while the partial images do not overlap well above 5°. This means that beyond this angle the displacements exceed the human eye resolution of 1 minute (1/60°) [37] and fall outside the depth-fused effect condition [28, 38, 39]; the optimal viewing angle of this system is therefore limited to around 5°, as explained through Fig. 4.

Figure 8. Experimental results: (a) filtered images on each layer, confirming the same results as the simulation conditions, (b) reconstructed 3D images according to the viewing angle, (c) filtered images for the RGB and depth data in Fig. 7, and (d) reconstructed 3D image according to the viewing angle for the RGB and depth data in Fig. 7.

IV. DISCUSSION

To improve the picture quality in Fig. 8(d), the filtered images must above all be well overlapped so that the depth-fused effect can be applied. For this, the projection lens should deliver clear images to the wheel screen through a wide depth of focus (DOF), and the size of the filtered images should be kept constant so that the images match well at the observer's position. In other words, the depth range is limited by the DOF of the projection lens. To expand the depth range, it is recommended to fabricate a projection lens with the characteristics of a telecentric lens having a wide DOF.

V. CONCLUSION

In this paper, we proposed a novel method to enhance the resolution of the depth-fused display by composing a wheel-type screen with multiple polarization decoding surfaces. The experimental demo supports the proposed scheme by reconstructing depth profiles as simulated. A new rotation method of the polarization axes is also proposed to express detailed depth. We expect that the proposed method can contribute to realizing a volumetric display that presents natural 3D images satisfying accommodation cues.

ACKNOWLEDGMENT

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2018R1A2B6005260).

Figure 1. Conceptual diagram of the system configuration of the prototype.

Figure 2. Filtered images according to polarization decoding axis: (a) PDDM on the polarization-encoded image, (b) intensity ratios of the filtered images.


References

  1. H. Hua, “Enabling Focus Cues in Head-Mounted Displays,” Proc. IEEE 105, 805-824 (2017).
  2. M. von Waldkirch, P. Lukowicz and G. Tröster, “Defocusing simulations on a retinal scanning display for quasi accommodation-free viewing,” Opt. Express 11, 3220-3233 (2003).
  3. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim and J.-H. Park, “3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation,” Opt. Express 23, 32025-32034 (2015).
  4. E. Moon, M. Kim, J. Roh, H. Kim and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22, 6526-6534 (2014).
  5. T. Ando, K. Yamasaki, M. Okamoto, T. Matsumoto and E. Shimizu, “Retinal projection display using holographic optical element,” Proc. SPIE 3956, 211-216 (2000).
  6. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25, 18508-18525 (2017).
  7. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22, 13484-13491 (2014).
  8. A. Yuuki, K. Itoga and T. Satake, “A new Maxwellian view display for trouble-free accommodation,” J. Soc. Inf. Disp. 20, 581-588 (2012).
  9. F.-C. Huang, D. Luebke and G. Wetzstein, “The light field stereoscope,” in Proc. ACM SIGGRAPH 2015 Emerging Technologies (Los Angeles, CA, USA, 2015), article no. 24.
  10. S. Lee, Y. Jo, D. Yoo, J. Cho, D. Lee and B. Lee, “TomoReal: Tomographic Displays,” arXiv:1804.04619 (2018).
  11. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32, 220 (2013).
  12. A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke and H. Fuchs, “Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources,” ACM Trans. Graph. 33, 89 (2014).
  13. K. J. MacKenzie, D. M. Hoffman and S. J. Watt, “Accommodation to multiple-focal-plane displays: Implications for improving stereoscopic displays and for accommodation control,” J. Vis. 10, 22 (2010).
  14. S. Liu, D. Cheng and H. Hua, “An optical see-through head mounted display with addressable focal planes,” in Proc. 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (Cambridge, UK, 2008), pp. 33-42.
  15. S. Zhu, P. Jin, W. Qiao and L. Gao, “High-resolution head mounted display using stacked LCDs and birefringent lens,” Proc. SPIE 10676, 106761B (2018).
  16. P. V. Johnson, J. A. Q. Parnell, J. Kim, C. D. Saunter, G. D. Love and M. S. Banks, “Dynamic lens and monovision 3D displays to improve viewer comfort,” Opt. Express 24, 11808-11827 (2016).
  17. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby and M. S. Banks, “High-speed switchable lens enables the development of a volumetric stereoscopic display,” Opt. Express 17, 15716-15725 (2009).
  18. D. Kim, S. Lee, S. Moon, J. Cho, Y. Jo and B. Lee, “Hybrid multi-layer displays providing accommodation cues,” Opt. Express 26, 17170-17184 (2018).
  19. W. Cui and L. Gao, “Optical mapping near-eye three-dimensional display with correct focus cues,” Opt. Lett. 42, 2475-2478 (2017).
  20. M. von Waldkirch, P. Lukowicz and G. Tröster, “Oscillating fluid lens in coherent retinal projection displays for extending depth of focus,” Opt. Commun. 253, 407-418 (2005).
  21. D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Aksit, P. Didyk, K. Myszkowski, D. Luebke and H. Fuchs, “Wide field of view varifocal near-eye display using see-through deformable membrane mirrors,” IEEE Trans. Vis. Comput. Graph. 23, 1322-1331 (2017).
  22. S.-G. Park, S. Yoon, J. Yeom, H. Baek, S.-W. Min and B. Lee, “Lamina 3D display: projection-type depth-fused display using polarization-encoded depth information,” Opt. Express 22, 26162-26172 (2014).
  23. S. Yoon, H. Baek, S.-W. Min, S.-G. Park, M.-K. Park, S.-H. Yoo, H.-R. Kim and B. Lee, “Implementation of active-type Lamina 3D display system,” Opt. Express 23, 15848-15856 (2015).
  24. S.-G. Park, J.-H. Kim and S.-W. Min, “Polarization distributed depth map for depth-fused three-dimensional display,” Opt. Express 19, 4316-4323 (2011).
  25. S. Ravikumar, K. Akeley and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19, 20940-20952 (2011).
  26. S.-G. Park, Y. Yamaguchi, J. Nakamura, B. Lee and Y. Takaki, “Long-range 3D display using a collimated multi-layer display,” Opt. Express 24, 23052-23062 (2016).
  27. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18, 11562-11573 (2010).
  28. A. Tsunakawa, T. Soumiya, H. Yamamoto and S. Suyama, “Perceived depth change of depth-fused 3-D display with changing distance between front and rear planes,” IEICE Trans. Electron. E96-C, 1378-1383 (2013).
  29. S.-G. Park, J.-H. Jung, Y. Jeong and B. Lee, “Depth-fused display with improved viewing characteristics,” Opt. Express 21, 28758-28770 (2013).
  30. L. O'Hare, T. Zhang, H. T. Nefs and P. B. Hibbard, “Visual Discomfort and Depth-of-Field,” i-Perception 4, 156-169 (2013).
  31. B. Sweet and M. Kaiser, “Depth Perception, Cueing, and Control,” in Proc. AIAA Modeling and Simulation Technologies Conference (Portland, OR, USA, 2011), paper 6424.
  32. K. N. Ogle and J. T. Schwartz, “Depth of Focus of the Human Eye,” J. Opt. Soc. Am. 49, 273-280 (1959).
  33. M. Lambooij, W. IJsselsteijn, M. Fortuin and I. Heynderickx, “Visual discomfort and visual fatigue of stereoscopic displays: a review,” J. Imaging Sci. Technol. 53, 030201-1-030201-4 (2009).
  34. D. M. Hoffman, A. R. Girshick, K. Akeley and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8, 33 (2008).
  35. G.-A. Koulieris, B. Bui, M. S. Banks and G. Drettakis, “Accommodation and comfort in head-mounted displays,” ACM Trans. Graph. 36, 87 (2017).
  36. M. Lambooij, M. Fortuin, W. Ijsselsteijn, B. Evans and I. Heynderickx, “Measuring visual fatigue and visual discomfort associated with 3-D displays,” J. Soc. Inf. Display 18, 931-943 (2010).
  37. in APA Handbook of Human Systems Integration (American Psychological Association, WA, USA, 2015), pp. 229-245.
  38. S. Suyama and H. Yamamoto, “Recent developments in DFD (depth-fused 3D) display and arc 3D display,” Proc. SPIE 9495, 949507 (2015).
  39. M. Date, Y. Andoh, H. Takada, Y. Ohtani and N. Matsuura, “Depth reproducibility of multiview depth-fused 3-D display,” J. Soc. Inf. Disp. 18, 470-475 (2010).