
Invited Review Paper

Curr. Opt. Photon. 2023; 7(6): 597-607

Published online December 25, 2023 https://doi.org/10.3807/COPP.2023.7.6.597

Copyright © Optical Society of Korea.

Volumetric 3D Display: Features and Classification

Joonku Hahn, Woonchan Moon, Hosung Jeon, Minwoo Jung, Seongju Lee, Gunhee Lee, Muhan Choi

School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Korea

Corresponding author: *mhchoi@ee.knu.ac.kr, ORCID 0000-0002-5012-4058

Received: November 6, 2023; Revised: November 28, 2023; Accepted: November 29, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Volumetric 3D displays generate voxels to enable users to watch three-dimensional virtual objects from various angles, and they have a significant advantage over other types of 3D displays in terms of realism and the absence of vergence-accommodation conflict (VAC). By virtue of these advantages, various volumetric 3D display technologies incorporating novel approaches have been introduced competitively. As a result, the conventional classification criteria for volumetric 3D technology often fall short in categorizing these innovative methods. In this study, we present an improved classification framework capable of accommodating these new technologies. We expect that a new classification may offer some intuition to identify areas of technical deficiency and contribute to improving the technology.

Keywords: Three-dimensional display, View volume, Viewpoints, Volumetric display, Voxel

OCIS codes: (100.6890) Three-dimensional image processing; (110.6880) Three-dimensional image acquisition; (120.2040) Displays; (220.3620) Lens system design

I. INTRODUCTION

Volumetric 3D displays have received a lot of interest from both academia and industry because of their ability to provide very realistic three-dimensional images without the vergence-accommodation conflict (VAC). A key concept characterizing volumetric 3D displays is filling the volume with three-dimensional images. From this perspective, the term volumetric 3D display can be extended to encompass almost all autostereoscopic 3D displays that create 3D images by generating voxels in space, such as light field displays, integral imaging, and holographic displays [1]. However, when comparing the strong points and drawbacks of various 3D display technologies to work toward their improvement, such a broad definition is of little practical help. Furthermore, in the recent display industry, the term volumetric 3D display is generally used without including flat light field displays or holographic displays. Almost all volumetric 3D displays generate voxels in space so that users can view virtual objects from any direction. This makes them a suitable solution for developing applications where multiple users surrounding the system can simultaneously watch 3D contents.

Traditionally, volumetric 3D displays have been classified into two main categories: swept volume systems and static volume systems [2, 3]. Swept volume systems involve mechanical motion of screens or mirrors, whereas static volume systems do not involve any mechanical motion. As an example of static volume systems, displays using the two-step up-conversion phenomenon in nonlinear materials can be mentioned [4]. In these systems, visible light points are generated in nonlinear materials by focusing infrared light without mechanical movement. However, such a simplistic criterion may hinder the technical convergence of various techniques, and some systems raise questions about how they should be classified in this conventional framework. For instance, according to the conventional classification, systems based on electrically switchable mirror stacks or polymer-dispersed liquid crystal panel stacks would also be categorized as static volume systems because no mechanical movement is applied. Nevertheless, they share a common feature with swept volume systems in that they sweep the image surface, which is distinctly different from the two-step up-conversion phenomenon. Therefore, classifying all these systems as static volume systems can be problematic. Instead, technologies that use switchable optics are closer to swept volume systems due to their shared characteristic of sweeping the image surface within a defined volume.

At a time when new 3D display technologies are constantly emerging, it is no longer practical to categorize all volumetric displays into swept volume and static volume systems. For this reason, a modified classification method was proposed in response to the demand for a more comprehensive and universal classification framework [5]. In that work, free-space displays, which spatially control point light sources to present virtual objects, were introduced as a third main category of volumetric 3D displays. However, this new classification method still presents several inconsistencies. In [5], the authors mainly distinguished volumetric 3D displays from other 3D displays based on the light-generation method. This categorization broadly divides 3D displays into three categories: ray optics-based light field displays, point source-based volumetric displays, and wave optics-based holographic displays. According to these criteria, volumetric 3D displays are defined as systems that generate point light sources in space. However, this narrower definition makes it difficult to include systems that are popularly regarded as volumetric 3D displays, such as Nagoya University’s Seelinder display [6] or the University of Southern California’s light field display [7]. Furthermore, as described in [5], the fog display proposed by Rakkolainen and Palovuori [8] was classified as a type of volumetric display, commonly known as a free-space display. This causes a complication because the method of generating light in fog displays is considered a ray optics-based projection technology. This leads to a mutual exclusivity problem in the suggested classification framework. Therefore, there is a need for more comprehensive and universally applicable classification criteria that promote a clearer understanding of the direction of technological advancements.

In this paper, we review the features commonly considered characteristic of volumetric 3D displays by many researchers. Volumetric 3D displays are distinguished by the fact that virtual objects occupy a certain space, and most systems naturally provide simultaneous views of three-dimensional images from a 360-degree perspective. We also propose a new and clearer definition of volumetric 3D displays that emphasizes the topological relationship between the space occupied by users and the space occupied by virtual objects. Finally, we suggest a new way to categorize the technical features of volumetric 3D displays.

II. Main Features of Volumetric 3D displays

Volumetric 3D displays can be clearly distinguished from other types of 3D displays by the following criteria: (1) whether the surface on which 3D images are presented is open or closed, and (2) when the surface is closed, whether the user’s space surrounds the virtual space or the display surface surrounds the user’s space. In other words, we can define volumetric 3D displays based on the topological relationship between the user’s space and the virtual space.

Firstly, we can categorize displays into two major groups based on whether the display surface on which 3D images are displayed is open or closed. Both the flat or curved surface 3D displays in Fig. 1(a) and the near-eye displays in Fig. 1(b) present 3D image information through an open boundary. However, the former features a display surface fixed in space, while the latter dynamically moves the display surface according to the user’s movements. On the other hand, the volumetric 3D displays in Fig. 1(c) and the immersive 3D displays in Fig. 1(d) have a closed display surface. When the display surface is closed and the user’s space surrounds the virtual space, the system corresponds to the volumetric 3D display of Fig. 1(c). Conversely, if the virtual space surrounds the user’s space, it is categorized as the immersive 3D display of Fig. 1(d).

Figure 1. Types of 3D displays according to the topological virtual space: (a) Flat or curved surface 3D display, (b) near-eye 3D display, (c) volumetric 3D display, and (d) immersive 3D display.

To refine the previous discussion, the physical separation between the user space and the virtual space is primarily determined by the display surface created by the optical system. The commonly used term real image refers to an image formed in the user’s space, while virtual image refers to an image formed inside the optical system. In certain volumetric 3D displays, the image is focused on a screen and the screen sweeps the volume where the virtual space is formed. Conversely, in other volumetric 3D displays, users observe the virtual objects inside the optical system and the user space is distinctly separated from the virtual space. Only in a few volumetric 3D displays can virtual objects appear in the user space, making it possible for users to touch them. But even in these exceptional cases, the system is better suited to presenting virtual objects within a confined space, and it is more convenient for users to surround the virtual space and watch 3D contents within it. Before analyzing the characteristics of volumetric 3D displays in detail, we will look at terminology.

2.1. Viewpoint

The most significant feature of 3D displays is that users perceive different images depending on their viewpoints. In other words, users perceive 3D information through the binocular disparity caused by variations in images from different viewpoints. Of course, there is another 3D depth cue, accommodation, which provides distance information about objects. However, in most cases, the axial length of 3D objects is relatively small compared to the user’s distance from the display. This makes binocular disparity and motion parallax more influential 3D depth cues than the accommodation effect.

3D display technologies based on multi-view are typically evaluated by the number of viewpoints. If each viewpoint provides images with the same resolution, the number of viewpoints directly affects the realistic visualization of virtual objects. Volumetric displays inherently provide 360-degree views, and the quality of such systems can be evaluated by the viewpoint density, defined as the number of viewpoints divided by the viewing angle.
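To make this metric concrete, the short sketch below (our own illustration, not part of the original paper) computes the viewpoint density for two of the cylinder systems described later in Section 3.3, assuming the viewpoint counts quoted there and a full 360-degree viewing angle.

```python
# A minimal illustrative sketch (not from the original paper): viewpoint density
# as the number of viewpoints divided by the viewing angle. The viewpoint counts
# below are those quoted later for the Seelinder [6] and the rotation-slit
# cylinder display [17], both covering a full 360-degree viewing angle.

def viewpoint_density(num_viewpoints: int, viewing_angle_deg: float) -> float:
    """Return the number of viewpoints per degree of viewing angle."""
    return num_viewpoints / viewing_angle_deg

print(viewpoint_density(360, 360.0))  # Seelinder: 1.0 viewpoint per degree
print(viewpoint_density(288, 360.0))  # rotation-slit cylinder display: 0.8 per degree
```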

2.2. View Volume

People typically observe images in accordance with the principles of perspective, where the transverse width of the visible region increases as the distance from the observer increases. Similarly, in volumetric 3D displays, the area that can be seen from a given viewpoint forms a perspective viewing region. The common intersection where the viewing regions corresponding to each viewpoint overlap becomes the view volume, representing the region where virtual objects can be displayed [9].

As shown in Fig. 2(a), most volumetric 3D displays are designed for users to surround the system, and the full view volume is formed as the intersection of the viewing regions determined from all viewpoints. Occasionally, as depicted in Fig. 2(b), it is possible in specific situations to define a partial view volume that users in a local area can observe. In this case, the partial view volume can be larger than the full view volume determined from all viewpoints, and it can even be larger than the optical system itself. Therefore, as previously mentioned, a volumetric 3D display can display not only virtual images but also real images in some cases.

Figure 2. View volumes determined by the arrangement of viewpoints. (a) Full view volume from all viewpoints and (b) partial view volume from local viewpoints.
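The geometric idea of the full view volume can also be sketched numerically. The following simplified 2D example is our own illustration; the radius, field of view, viewpoint count, and sampling grid are all assumed values rather than parameters of any cited system. It simply marks the sample points that lie inside every viewpoint's viewing region, i.e. inside the full view volume.

```python
# A simplified 2D numerical sketch of the full view volume: the intersection of
# the perspective viewing regions of viewpoints arranged on a circle around the
# display. All parameters are illustrative assumptions.
import numpy as np

R = 1.0                      # distance of the viewpoints from the center
HALF_FOV = np.radians(15.0)  # half-angle of each viewpoint's viewing region
N_VIEWPOINTS = 36            # viewpoints evenly spaced over 360 degrees

angles = np.linspace(0.0, 2.0 * np.pi, N_VIEWPOINTS, endpoint=False)
viewpoints = np.stack([R * np.cos(angles), R * np.sin(angles)], axis=1)
look_dirs = -viewpoints / R  # each viewpoint looks toward the center

# Sample candidate voxel positions on a grid inside the display region.
xs = np.linspace(-0.5, 0.5, 201)
X, Y = np.meshgrid(xs, xs)
points = np.stack([X.ravel(), Y.ravel()], axis=1)

inside_all = np.ones(len(points), dtype=bool)
for v, d in zip(viewpoints, look_dirs):
    rel = points - v
    rel_norm = rel / np.linalg.norm(rel, axis=1, keepdims=True)
    # A point lies in this viewpoint's viewing region if it falls within its cone.
    inside_all &= (rel_norm @ d) >= np.cos(HALF_FOV)

print(f"fraction of the sampled region inside the full view volume: "
      f"{inside_all.mean():.3f}")
```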

2.3. Voxel

A voxel is the smallest unit that makes up the view volume, just as a pixel is the smallest unit of a 2D flat display. The size of voxels is an important parameter in determining the quality of 3D virtual objects. In volumetric 3D displays, virtual objects are often represented by scanning 2D elemental images or point lights. While a single point of light corresponds to a voxel, a 2D elemental image has a specific direction of light emission. Thus, the combination of several 2D elemental images generates a voxel where light rays with different directions intersect each other. In most cases where 2D elemental images are scanned, voxel sizes are not uniform across the view volume, and this variation in voxel size can affect the image quality.

In Fig. 3(a), the volumetric 3D display has a rotating screen that emits light omnidirectionally. Here, each pixel of the projected 2D elemental image represents one voxel. In Fig. 3(b), the volumetric 3D display has a rotating screen that reflects incident beams in specific directions. In this case, voxels are represented at the intersections where the light emitted from the 2D elemental images crosses each other. The properties of the voxel vary depending on the type of volumetric 3D display.

Figure 3. Methods to generate voxels from (a) omnidirectional scattering, and (b) directional emission of 2D elemental images.
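For the directional-emission case of Fig. 3(b), a voxel lies where rays emitted from pixels of different 2D elemental images cross. The following minimal sketch (our own example with hypothetical coordinates) locates such an intersection for two rays in a plane by solving a small linear system.

```python
# A minimal geometric sketch (illustrative, not from the paper): with directional
# emission, a voxel is formed where rays from pixels of different 2D elemental
# images intersect.
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Return the intersection of rays p1 + t*d1 and p2 + s*d2 in the plane."""
    A = np.array([[d1[0], -d2[0]],
                  [d1[1], -d2[1]]])
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]])
    t, s = np.linalg.solve(A, b)
    return np.array(p1) + t * np.array(d1)

# Two pixels on a rotating screen at different rotation angles, each emitting
# toward the same voxel position (hypothetical numbers).
voxel = ray_intersection(p1=(0.0, 0.0), d1=(1.0, 1.0),
                         p2=(2.0, 0.0), d2=(-1.0, 1.0))
print(voxel)  # -> [1. 1.]
```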

The most fundamental method for experimentally measuring the characteristics of voxels is to use imaging optical systems with a high numerical aperture (NA) to measure radiance for all positions within the view volume. This method has the advantage of obtaining not only radiant power, which reflects the brightness of the voxels, but also the angular spectrum distribution, which indicates the directional emission characteristics of voxels. However, it has the disadvantage of taking a long time to scan all points within the view volume.

Figure 4 provides several examples of the apparatuses for measuring voxel characteristics. In Fig. 4(a), a device for measuring voxel size and color consisting of a 4f optical system and a spectral sensor is shown [10]. The iris located in front of the sensor serves to determine the location of the voxels being measured, while the iris in the Fourier domain controls the acceptance angle of the measured voxels. Figure 4(b) shows a device with extremely large NA, and it is composed of a parabolic screen, parabolic mirror, and camera with a large field of view [11]. It is especially designed to measure the characteristics of voxels in tabletop displays [12].

Figure 4. Methods to measure the properties of voxels. (a) Apparatus for measuring size and color of the voxel and (b) apparatus for measuring the intensity profile of the voxel in a tabletop holographic display.

We have classified volumetric 3D displays into three categories based on the technology they use to generate 3D virtual objects: Sequentially swept volume systems, viewpoint-surrounding volume systems, and point light voxel systems. Figure 5 illustrates the classification of volumetric 3D displays and provides representative examples.

Figure 5. Classification of volumetric 3D displays.

The sequentially swept volume systems operate by moving 2D elemental images in time to various positions within the designated volume where the virtual object is located. In these systems, 2D elemental images emit light in all directions. Therefore, a voxel simply corresponds to a single pixel of the 2D elemental image that intersects the voxel. Furthermore, the sequentially swept volume systems are incapable of representing one of the 3D depth cues, occlusion, which occurs when an object in the foreground obstructs another object in the background.

The viewpoint-surrounding volume system, like the sequentially swept volume system, uses 2D elemental images to represent virtual objects. However, it differs in that 2D elemental images emit light in specific directions. In other words, it employs directional emission to represent voxels, which means that different pixels from plural 2D elemental images correspond to a single voxel. The viewpoint-surrounding volume system is implemented based on integral imaging or light field technology using plural projection optics to provide a multi-view image. The multi-view enables the representation of occlusion in virtual objects. However, it requires a greater amount of information compared to the sequentially swept system because plural pixels are needed to represent a single voxel.

The point light voxel system distinguishes itself from the previous two types of systems by having elements with a dimensionality of 0D, rather than 2D. In this system, each voxel corresponds to a single point light source used to generate the virtual object. In the point light voxel system, point light sources can be generated either by self-illuminating devices or by illuminating scattering particles. It has the unique feature of being able to display virtual objects in free space not occupied by optical structures, making it a fascinating technology. However, the voxel point light sources must be scanned to represent virtual objects. The limited scanning speed in 3D space therefore poses a significant challenge, resulting in the disadvantage of a lower refresh rate. The dimensions of the elements representing voxels and their ability to depict occlusion effects in virtual objects for the three types of volumetric 3D displays are summarized in Table 1. The three types of systems discussed above are further categorized based on their operating principles. We will describe the characteristics of representative systems following the details of the suggested classification.

TABLE 1 Characteristics according to the type of volumetric 3D display

Type of Volumetric 3D Display | Dimensions of Element | Occlusion Capability
Sequentially Swept Volume System | 2D | Incapable
Viewpoint-Surrounding Volume System | 2D | Capable
Point Light Voxel System | 0Da) | Incapable

a)Zero dimension means that each voxel is ideally represented by a point of light with no dimensions.
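For readers who prefer to handle the taxonomy programmatically, the summary of Table 1 can be encoded as a small data structure. The sketch below is our own illustration; the class and field names are not defined in the paper.

```python
# A minimal sketch encoding the taxonomy of Table 1 as a data structure.
from dataclasses import dataclass
from enum import Enum

class VolumetricType(Enum):
    SEQUENTIALLY_SWEPT = "Sequentially swept volume system"
    VIEWPOINT_SURROUNDING = "Viewpoint-surrounding volume system"
    POINT_LIGHT_VOXEL = "Point light voxel system"

@dataclass(frozen=True)
class TypeProperties:
    element_dimensions: int   # dimensionality of the voxel-generating element
    occlusion_capable: bool   # whether occlusion of virtual objects can be shown

TABLE_1 = {
    VolumetricType.SEQUENTIALLY_SWEPT:    TypeProperties(2, False),
    VolumetricType.VIEWPOINT_SURROUNDING: TypeProperties(2, True),
    VolumetricType.POINT_LIGHT_VOXEL:     TypeProperties(0, False),
}

for system_type, props in TABLE_1.items():
    print(system_type.value, props)
```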



Figure 6 shows the system structures of some representative volumetric 3D displays. In Figs. 6(a) and 6(b), the screen moves rotationally or axially, and 2D elemental images are projected synchronously. In Figs. 6(c) and 6(d), the direction of emission from 2D elemental images is deflected to the side of the cylinder or upward in the hemisphere. In Figs. 6(e) and 6(f), the point light is created by plasma from an infrared pulse laser or scattering from trapped particles.

Figure 6. Representative volumetric 3D displays; (a) Rotationally swept volume system, (b) axially swept volume system, (c) viewpoint surrounding cylinder system, (d) viewpoint surrounding hemisphere system, (e) point light emission voxel system, and (f) point light scattering voxel system.

3.1. Rotationally Swept Volume System

Sequentially swept volume systems can be categorized into two types based on how 2D elemental images are swept through the volume: Rotationally swept volume system and axially swept volume system. Rotationally swept systems have an advantage over axially swept volume systems in terms of mechanical operation, vibration, and durability because they simply rotate the screen without the need for complex mechanical movements.

Favalora et al. [13] proposed the Perspecta system, where a sequence of 2D elemental images is sequentially projected onto a rotating diffusive screen after being reflected by several relay mirrors. To provide distortion-free 3D contents at all screen rotation angles, a raster engine was implemented to convert 2D elemental images into a cylindrical voxel grid. The display has a resolution of 768 × 768 × 198 and operates at a volume refresh rate of 24 Hz.
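A rotating-screen display of this kind needs a resampling step that assigns each voxel of the 3D content to the screen angle and pixel at which it is drawn. The sketch below only illustrates that idea under a simplified geometry (a flat screen containing the vertical rotation axis); it is not Actuality Systems' actual raster engine, and apart from the 768 × 768 × 198 resolution quoted above, every value is an assumption.

```python
# An illustrative mapping from a Cartesian voxel to (slice angle, pixel) on a
# rotating screen. Not the actual Perspecta raster engine; the geometry and
# slice layout are simplifying assumptions.
import math

N_SLICES = 198                 # screen angles per half revolution (assumed layout)
SCREEN_W, SCREEN_H = 768, 768  # screen resolution

def cartesian_to_slice(x, y, z, radius=1.0, height=1.0):
    """Map a voxel (x, y, z) to (slice index, column, row) on the rotating screen."""
    phi = math.atan2(y, x) % (2.0 * math.pi)
    theta = phi % math.pi                    # the screen plane covers phi and phi + pi
    sign = 1.0 if phi < math.pi else -1.0    # which half of the screen holds the voxel
    r = sign * math.hypot(x, y)              # signed distance from the rotation axis
    slice_index = int(theta / math.pi * N_SLICES) % N_SLICES
    col = int((r / radius + 1.0) / 2.0 * (SCREEN_W - 1))
    row = int((z / height + 0.5) * (SCREEN_H - 1))
    return slice_index, col, row

print(cartesian_to_slice(0.3, 0.4, 0.1))  # -> (58, 575, 460)
```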

A rotating LED array system introduced by Wu et al. [14] functions by spinning an LED array and directly displays cross-sectional images of virtual objects corresponding to the LED array’s rotation angle. This system updates 2D elemental images without the need for rasterization operations, synchronizing with the LED array’s rotation speed. The LED array has a resolution of 320 × 256 in full color, and it displays 512 elemental images in a single revolution. The resultant view volume size is 800 mm × 800 mm × 640 mm.

3.2. Axially Swept Volume System

The axially swept volume system is a method of filling the view volume by reciprocating sweeping of 2D elemental images instead of rotating them. One of the most famous systems applying this technology is Voxon’s Voxiebox [15]. In this system, the screen is driven by actuators to perform resonant vertical motion. While the screen is moving up and down, high-speed projectors update the elemental images projected onto the screen. This creates a view volume in the space where the screen is in motion so that users can watch very realistic 3D contents. However, since the light from elemental images emanates omnidirectionally, this system cannot represent the occlusion effect in virtual objects.
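The essential synchronization in such a system can be sketched as follows. This is our own illustration with assumed parameters (the sweep frequency, amplitude, and frames per sweep are not Voxon's specifications); it only shows how each high-speed projector frame is paired with the depth the screen occupies at that instant.

```python
# A minimal timing sketch of an axially swept display: the screen oscillates
# sinusoidally, and each projected frame carries the cross-section of the 3D
# content at the depth the screen occupies at that moment. Parameters assumed.
import math

SWEEP_FREQ_HZ = 15.0      # screen oscillation frequency (assumed)
AMPLITUDE_MM = 40.0       # half of the vertical sweep range (assumed)
FRAMES_PER_SWEEP = 200    # projector frames per oscillation period (assumed)

def screen_depth_mm(t: float) -> float:
    """Vertical screen position at time t for a sinusoidal resonant sweep."""
    return AMPLITUDE_MM * math.sin(2.0 * math.pi * SWEEP_FREQ_HZ * t)

period = 1.0 / SWEEP_FREQ_HZ
for k in range(0, FRAMES_PER_SWEEP, 50):
    t_k = k * period / FRAMES_PER_SWEEP
    # Frame k should contain the slice of the 3D content at this depth.
    print(f"frame {k:3d}: project slice at z = {screen_depth_mm(t_k):+6.1f} mm")
```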

DepthCube, produced by LightSpace Technologies, is known as the world’s first solid-state volumetric 3D display [16]. Although it has no mechanical movement in operation, it can be considered an axially swept volume system because the 2D elemental images sweep across the view volume. The display employs a high-speed projector that projects a sequence of image slices onto a stack of 20 liquid crystal (LC) shutters, generating 9.6 million voxels at a refresh rate of 40 Hz. A multiplanar anti-aliasing algorithm effectively minimizes the visible gaps between elemental images on adjacent LC slices and creates the illusion of a smooth 3D object in the observer’s perception. However, as the viewing angle increases, image quality can degrade due to aliasing, especially when fine lines are displayed, such as in wireframe images.
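The general principle behind such multiplanar anti-aliasing can be sketched as follows (the actual DepthCube algorithm is proprietary; this only illustrates the commonly described idea): a point whose depth falls between two shutter planes has its brightness divided between the two nearest planes in proportion to its fractional depth, which smooths the visible steps between slices.

```python
# A sketch of depth-weighted intensity splitting between adjacent planes
# (illustrative only; not LightSpace's proprietary implementation).
def split_between_planes(depth: float, num_planes: int = 20):
    """Distribute unit intensity of a point at normalized depth in [0, 1]."""
    pos = depth * (num_planes - 1)      # continuous plane coordinate
    lower = int(pos)
    upper = min(lower + 1, num_planes - 1)
    frac = pos - lower
    weights = [0.0] * num_planes
    weights[lower] += 1.0 - frac        # the closer plane receives more intensity
    weights[upper] += frac
    return weights

w = split_between_planes(0.53)
print([(i, round(v, 2)) for i, v in enumerate(w) if v > 0])  # -> [(10, 0.93), (11, 0.07)]
```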

3.3. Viewpoint-surrounding Cylinder System

Viewpoint-surrounding volume systems are divided into two categories based on the relative height between the view volume and the viewpoints: the viewpoint-surrounding cylinder system and the viewpoint-surrounding hemisphere system. In the cylinder system, viewpoints are positioned at the same height as the view volume and encircle it over 360 degrees. In the hemisphere system, viewpoints are positioned higher than the view volume, creating a configuration where users look downward. The latter is well suited for applications that display virtual objects on a tabletop.

The Seelinder system proposed by Yendo et al. [6] creates viewpoints using a parallax barrier. In this system, a 1D LED array is aligned along the circumference of the inner cylinder, while the outer cylinder houses the parallax barrier. As the 1D LED array rotates around the central axis of the cylinder, the parallax barrier rotates in the opposite direction at a relatively faster speed. The speed difference between the parallax barrier and the 1D LED array provides different viewpoints from each slit, thereby generating a view volume within the cylinder. The Seelinder is designed with a parallax interval of one degree and offers 360 different viewpoints.

Another example using a slit is the rotation-slit cylinder display proposed by Jeon et al. [17], which improves upon zoetrope technology and enables the display of three-dimensional virtual objects with varying perspectives depending on the viewpoint. This is achieved by rapidly rotating a single slit while projecting elemental images onto the inner surface of the cylinder using a high-speed projector. The elemental images are updated rapidly according to the position of the slit and form a view volume within the cylinder. This approach ensures sufficient separation between the slit and the elemental images so that users can comfortably focus on the virtual objects. The rotation-slit cylinder display can create a view volume inside a cylindrical structure with a diameter of 300 mm and a height of 500 mm and offers a total of 288 viewpoints.

Jones et al. [7] introduced the 360-degree light field display, which forms a view volume with a high-speed projector projecting elemental images onto a rotating anisotropic holographic diffuser with a 45-degree tilt. This system employs an anisotropic holographic diffuser to achieve a horizontal angular resolution of 1.25 degrees and provides vertical motion parallax through eye tracking. In particular, the advantage of this system is the creation of voxels at the points where the light emitted from the elemental images located on the rotating screen surface intersects. This enables the display of virtual objects with a natural occlusion effect.

A holographic optical element (HOE) has the benefit of achieving optical functions within a film. It also exhibits wavelength selectivity, appearing transparent for wavelengths other than the specific wavelength it is designed for. Park et al. [18] implemented a transparent cylindrical 3D display using an asymmetric HOE. The asymmetric diffusing HOE has a narrow diffusing angle of approximately 0.7 degrees in the horizontal direction for horizontal parallax. In the vertical direction, it redirects incoming beams from the top into the horizontal direction and has a wide diffusing angle of 17.8 degrees. In this system, a high-speed projector at the bottom of the cylinder projects elemental images upward. At the top of the cylinder, a rotating mirror redirects the light so that the elemental image is projected onto the HOE screen attached to the side of the cylinder. The viewpoint moves around the cylinder in accordance with the mirror’s rotation angle, resulting in the formation of a view volume inside the transparent cylinder. On the other hand, Nakamura et al. [19] implemented a transparent cylindrical display by rotating the HOE screen on the side of the cylinder instead of using a rotating mirror. In this system, the cylinder-shaped HOE screen forms three viewpoints corresponding to the RGB wavelengths around the cylinder and rotates to provide viewpoints over all 360 degrees.

3.4. Viewpoint-surrounding Hemisphere System

The viewpoint-surrounding hemisphere system is a display system suitable for tabletop applications where viewpoints are created above the view volume where virtual objects are formed. This hemispherical system is based on multi-view technology composed of multiple projection optics and can be implemented using space-division or time-division techniques.

A representative system for tabletop displays using a space-division technique is “fVisiOn,” proposed by Yoshida [20]. In this system, an anisotropic diffusing screen in the shape of a cone is positioned directly beneath the tabletop, and 2D elemental images are projected onto the conic screen from 288 projectors evenly spaced at 1.25-degree intervals along the circumference. The conical screen provides a wide diffusing angle vertically while maintaining a narrow diffusing angle of approximately 0.4 degrees horizontally, enabling the creation of 360-degree viewpoints without crosstalk problems. The optical axes of each projector intersect above the tabletop and form a view volume that can be observed from 360 degrees on the tabletop.

Takaki and Uchida [21] proposed a time-division tabletop display using a rotating decentered Fresnel lens. In this system, a high-speed projector is positioned along the table’s central axis, and the decentered lens redirects the light from each elemental image off-axis to converge at a point on the circumference above the tabletop. This decentered lens rotates around the central axis of the tabletop, and the high-speed projector synchronizes with the rotation to generate 800 viewpoints. Kim et al. [22] proposed a view-sequential tabletop system using an inclined and off-axis anisotropic diffusing screen. This system features a distinctive design of high-speed projector optics, which are folded, and the elemental images are projected onto the rotating screen in the normal direction. As a result, the lights emanating from each elemental image on the screen overlap each other above the tabletop and create a view volume in this region.

Unlike imaging with conventional projection optics, holography offers the advantage of focusing virtual objects at multiple depths and has the potential to provide an ideal accommodation effect. The holographic tabletop system proposed by Lim et al. [12] uses a high-speed spatial light modulator (SLM) to reconstruct holographic images and employs rotational scanning optics and parabolic mirrors to create a 360-degree perspective on a tabletop. In that study, the rotational scanning optics used aspheric lenses to focus the light waves, but Heo et al. [23] improved upon this by using freeform mirror-based reflective optics, enabling the implementation of compact tabletop displays. Holographic tabletops not only offer various viewpoints but also have the advantage of creating natural 3D images that exhibit accommodation effects at each viewpoint.

Implementation of the space-division technique typically requires numerous projection optics, which is demanding and leads to alignment challenges compared to the time-division technique. However, the time-division technique comes with its own set of issues, such as vibration caused by mechanical movement, limited optical brightness, and flickering problems.

3.5. Point Light Emission Voxel System

The point light emission voxel refers to a specific 3D volume element in a voxel grid that represents the emission of light from a self-emitting point light source. Representative examples of the point light emission voxel system include laser-induced plasma displays, laser-induced bubble displays, and full-color up-conversion displays.

Laser-induced plasma displays are based on technology in which high-power pulse lasers excite plasma at a specific location. By controlling the position of the focal point along the x-, y-, and z-axes, dot arrays can be displayed in 3D space. The display demonstrated by Kimura et al. [24] used a linear motor system and a galvanometer mirror for high-speed scanning, controlling the focal position along the x-, y-, and z-axes to display an array of dots in 3D space. Ochiai et al. [25] proposed a system for rendering aerial and volumetric graphics using a femtosecond laser that emits light from laser-induced plasma without the need for special materials. Two methods of rendering graphics with a femtosecond laser were introduced: a hologram generation method using spatial light modulation and a laser beam scanning method using a galvanometer mirror. These displays use airborne plasma to enable realistic and innovative 3D representations, but the maximum working space is limited by the aperture of the objective lens and the angular range of the galvanometer mirror, and high-speed changes in varifocal lenses can introduce aberration problems.
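As an illustration of how such a system addresses a voxel, the sketch below converts a target position into two galvanometer angles and a focal depth. The geometry and all parameter values are our own assumptions for illustration; they do not describe the specific optics of [24] or [25].

```python
# An illustrative sketch of addressing a voxel in a laser-scanning point light
# system: the lateral position is mapped to two galvanometer mirror angles and
# the axial position to a commanded focal depth. Geometry assumed.
import math

def voxel_to_scan_command(x_mm, y_mm, z_mm, working_distance_mm=100.0):
    """Return (theta_x_deg, theta_y_deg, focus_mm) for a target voxel."""
    # Simplification: the beam pivots about the scan mirrors located one
    # working distance away from the nominal focal plane.
    theta_x = math.degrees(math.atan2(x_mm, working_distance_mm))
    theta_y = math.degrees(math.atan2(y_mm, working_distance_mm))
    focus = working_distance_mm + z_mm   # varifocal offset along the optical axis
    return theta_x, theta_y, focus

print(voxel_to_scan_command(5.0, -3.0, 2.0))
```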

Laser-induced bubble displays generate 3D images through the formation and control of bubbles. Kumagai et al. [26, 27] proposed a novel volumetric display using femtosecond laser-induced microbubbles as voxels, which can be rendered in a high-viscosity liquid, to overcome the limitations of other volumetric displays in terms of voxel count and multicolor graphics rendering capabilities. The use of high-viscosity liquids enables full-color volumetric graphic rendering consisting of voxels controlled by an illumination light source, while a holographic laser drawing method controls the light intensity and spatial geometry of microbubble voxels.

Full-color up-conversion displays use nonlinear optical crystals to generate multidimensional images. Zhu et al. [28] demonstrated the generation of voxels by frequency up-conversion based on second-harmonic generation (SHG) in nonlinear optical crystals dispersed in solid-state composite materials for the creation of full-color moving objects in a volumetric display. The transparent composite containing randomly orientated nonlinear optical (NLO) crystals showed nearly isotropic frequency up-conversion based on SHG as a proof-of-concept demonstration of a volumetric 3D display that can be observed from any angle without the need for glasses. Also, Mun et al. [29] focused on the development of video-rate color 3D volumetric displays using elemental-migration-assisted full-color-tunable up-conversion nanoparticles (UCNPs). They achieved high efficiency of red, green, and blue orthogonal up-conversion luminescence (UCL) and full-color tunability in the UCNPs with a combination of elemental-migration-assisted color tuning and selective photon blocking.

Each point light emission voxel system has distinct characteristics and offers advantages for specific applications. Laser-induced plasma displays excel at providing detailed 3D representations, while laser-induced bubble displays have strengths in volume display and color representation. On the other hand, full-color up-conversion displays are an excellent choice for applications that require high resolution and full-color images.

3.6. Point Light Scattering Voxel System

The point light scattering voxel system represents a 0D light source in a 3D space by scanning scatterers within that space and applying appropriate colors to them. This system uses techniques such as acoustic tweezers or photophoretic-trap to scan scatterers in space and synchronizes them with RGB illumination beams through a scanner to create 3D images by persistence of vision (POV).

The acoustic tweezer technology used in the point light scattering voxel system is exemplified by the multimodal acoustic trap display (MATD) developed by Hirayama et al. [30]. The MATD comprises two 16 × 16 ultrasound transducer arrays (UTAs) located at the top and bottom of the system. These UTAs control the frequency and phase to create a standing wave that traps 1-mm-radius expanded polystyrene (EPS) particles in space. In this system, EPS particles can be scanned vertically at a speed of 8.75 m/s and horizontally at 3.75 m/s. Synchronized lighting modules provide colors to the particles for the display of 3D POV images. Additionally, the MATD offers an outstanding 3D experience by multiplexing the ultrasound to provide tactile feedback and audio, enhancing the overall sensory experience.

Unlike acoustic tweezers, which manipulate particles based on pressure differences in ultrasound standing waves, photophoretic-trap technology uses thermal forces to levitate optically opaque particles in the air. The warmer side of a particle imparts greater momentum to the surrounding gas molecules, creating a net force that pushes the particle away from the heated surface. Based on this principle, the photophoretic-trap volumetric display proposed by Smalley et al. [31] uses a 405-nm laser passing through a lens tilted at a 1-degree angle to create potential trapping sites (PTS) in the focal region where particles are trapped and levitated. In this system, particles are scanned using an x-y scanner to change the focal point, and external illumination is applied to create colorful virtual objects. The proposed optical trap display allows particles to move at up to 1.8 meters per second and can display POV images with a 10 Hz refresh rate over a 180-mm length along a single axis.
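As a rough consistency check of these figures (our own arithmetic, not taken from [31]), a particle traveling at 1.8 m/s can retrace a 180-mm path about ten times per second, which matches the reported 10 Hz refresh rate along a single axis:

```python
# Persistence-of-vision check using the figures quoted above for the optical
# trap display of Smalley et al. [31].
particle_speed_m_per_s = 1.8
path_length_m = 0.180

refresh_rate_hz = particle_speed_m_per_s / path_length_m
print(f"maximum refresh rate along one axis ~ {refresh_rate_hz:.0f} Hz")  # -> 10 Hz
```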

In this paper, we proposed a new classification method for volumetric 3D displays and made an effort to eliminate ambiguity in categorizing systems. Nevertheless, there are cases where classifying a system is not easy due to technical similarities.

Perspecta, proposed by Actuality Systems, is a representative example of a rotationally swept volume system. However, an improved system structure was proposed in 2007 that integrates light field technology into the existing system to enable the representation of occlusion in virtual objects [32]. The key distinction of this system from the conventional technology lies in the use of a screen with a specific diffusing angle, so that 2D elemental images emit light in specific directions rather than in all directions. Therefore, while the improved system structurally resembles the conventional rotationally swept volume system, from a technical perspective it is classified as a viewpoint-surrounding cylinder system.

Viewpoint-surrounding volume systems are classified into cylinder systems and hemisphere systems depending on the location of the viewpoints. Among them, both USC’s light field display [7] and KNU’s tabletop display [22] share the common feature of providing different views in various directions by rapidly projecting elemental images onto a rotating asymmetric diffusive screen. However, the former is classified as a cylinder system, while the latter is categorized as a hemisphere system. USC’s light field display has a high-speed projector positioned above a reflective screen, whereas KNU’s tabletop display has a high-speed projector positioned below a transmissive screen to avoid interference with the viewpoints. The most significant reason these two systems belong to different classifications is that the former has the screen positioned directly on the rotation axis, while the latter rotates the screen with some offset from the rotation axis. As the offset of the rotating asymmetric diffusive screen increases, the height of the view volume, where virtual objects are displayed, also increases. As a result, the viewpoints are naturally arranged above the view volume. Therefore, the tabletop display is classified as a viewpoint-surrounding hemisphere system.

The fog display proposed by Rakkolainen and Palovuori [8] in 2005 is occasionally mistaken for a volumetric 3D system because it uses fog to create a partially transparent scattering screen in the air and projects 2D images onto it. However, it is not appropriate to classify it as a 3D display because it merely reproduces 2D images on a fog screen. On the other hand, technologies that use multiple projectors with a fog screen generate virtual objects within the fog screen and provide different views of the virtual objects depending on the direction [33-35]. Therefore, these systems are classified as viewpoint-surrounding cylinder systems.

Volumetric 3D displays have received significant attention due to their ability to provide very realistic virtual visualizations. Many systems that incorporate cutting-edge technologies have been proposed recently. However, the existing classification methods for volumetric 3D displays have limitations in encompassing these new systems. Therefore, there is an immediate need for a new classification system for volumetric 3D displays. In this paper, we presented a new definition of volumetric 3D displays and provided detailed classifications from a technological perspective. We expect that these classification criteria will lead to a clearer understanding of volumetric 3D displays and serve as a foundation for discussing the direction of future technological advancements.

This work was supported by an Alchemist Project grant funded by the Korea Evaluation Institute of Industrial Technology (KEIT) and the Korea Government (MOTIE) (Project Nos. 1415179744 and 20019169).

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

1. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38, 37-44 (2005).
2. K. Langhans, D. Bezecny, D. Homann, C. Vogt, C. Blohm, and K.-H. Scharschmidt, “New portable FELIX 3D display,” Proc. SPIE 3296, 204-216 (1998).
3. B. G. Blundell and A. J. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley-IEEE Press, USA, 2000), pp. 12-16.
4. E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane, “A three color, solid-state three-dimensional display,” Science 273, 1185-1189 (1996).
5. D. Smalley, T. C. Poon, H. Gao, J. Kvavle, and K. Qaderi, “Volumetric displays: Turning 3-D inside-out,” Opt. Photonics News 29, 26-33 (2018).
6. T. Yendo, N. Kawakami, and S. Tachi, “Seelinder: The cylindrical light field display,” in Proc. ACM SIGGRAPH 2005 Emerging Technologies (Los Angeles, CA, USA, Jul. 31-Aug. 4, 2005), pp. 16-es.
7. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “An interactive 360° light field display,” in Proc. ACM SIGGRAPH 2007 Emerging Technologies (San Diego, CA, USA, Aug. 5-9, 2007), pp. 13-es.
8. I. Rakkolainen and K. Palovuori, “Laser scanning for the interactive walk-through fogScreen,” in Proc. 12th Symposium on Virtual Reality Software and Technology (VRST) (Monterey, CA, USA, Nov. 7-9, 2005), pp. 224-226.
9. H. Kim, J. Hahn, and B. Lee, “Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays,” Opt. Express 17, 6389-6396 (2009).
10. “Procedure for measuring size and color of the voxel of color hologram,” Telecommunications Technology Association, TTAK.KO-10.1022 (2017).
11. J. Song, D. Heo, and J. Hahn, “Wide-angle voxel measurement method for 3D display using parabolic mirror and fish-eye lens,” in Proc. 32nd Optical Society of Korea (OSK) Winter Annual Meeting (Online Virtual Conference, Feb. 17-19, 2021), paper W2C-III-5.
12. Y. Lim, K. Hong, H. Kim, H. E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24, 24999-25009 (2016).
13. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. G. Giovinco, M. J. Richmond, and W. S. Chun, “100-million-voxel volumetric display,” Proc. SPIE 4712, 300-312 (2002).
14. J. Wu, C. Yan, X. Xia, J. Hou, H. Li, X. Liu, and W. Zheng, “44.2: An analysis of image uniformity of three-dimensional image based on rotating LED array volumetric display system,” SID Symp. Dig. Tech. Pap. 41, 657-660 (2010).
15. S. F. Keane, A. Jackson, G. F. Smith, W. J. Tamblyn, and K. Silverman, “Volumetric 3D display,” U.S. patent 10401636B2 (2019).
16. A. Sullivan, “LP-1: Late-news poster: The DepthCube™ solid-state multi-planar volumetric display,” SID Symp. Dig. Tech. Pap. 33, 354-355 (2002).
17. H. Jeon, H. Kim, and J. Hahn, “360-degree cylindrical directional display,” in Proc. 15th International Meeting on Information Display (IMID) (EXCO, Daegu, Korea, Aug. 18-21, 2015), paper 60-3.
18. M. Park, H. Jeon, D. Heo, S. Lim, and J. Hahn, “360-degree mixed reality volumetric display using an asymmetric diffusive holographic optical element,” Opt. Express 30, 47375-47387 (2022).
19. T. Nakamura, Y. Imai, Y. Yoshimizu, K. Kuramoto, N. Kato, H. Suzuki, Y. Nakahata, and K. Nomoto, “36-1: 360-degree transparent light field display with highly-directional holographic screens for fully volumetric 3D video experience,” SID Symp. Dig. Tech. Pap. 54, 514-517 (2023).
20. S. Yoshida, “fVisiOn: Glasses-free tabletop 3-D display to provide virtual 3D media naturally alongside real media,” Proc. SPIE 8384, 838411 (2012).
21. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20, 8848-8861 (2012).
22. K. Kim, W. Moon, Y. Im, H. Kim, and J. Hahn, “View-sequential 360-degree table-top display with digital micromirror device,” in Proc. 14th International Meeting on Information Display (IMID) (EXCO, Daegu, Korea, Aug. 26-29, 2014), paper 1-91.
23. D. Heo, H. Jeon, S. Lim, and J. Hahn, “A wide-field-of-view table-ornament display using electronic holography,” Curr. Opt. Photonics 7, 183-190 (2023).
24. H. Kimura, T. Uchiyama, and H. Yoshikawa, “Laser produced 3D display in the air,” in Proc. ACM SIGGRAPH 2006 Emerging Technologies (Boston, MA, USA, Jul. 30-Aug. 3, 2006), pp. 20-es.
25. Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35, 17 (2016).
26. K. Kumagai, S. Hasegawa, and Y. Hayasaki, “Volumetric bubble display,” Optica 4, 298-302 (2017).
27. K. Kumagai, T. Chiba, and Y. Hayasaki, “Volumetric bubble display with a gold-nanoparticle-containing glycerin screen,” Opt. Express 28, 33911-33920 (2020).
28. B. Zhu, B. Qian, Y. Liu, C. Xu, C. Liu, Q. Chen, J. Zhou, X. Liu, and J. Qiu, “A volumetric full-color display realized by frequency up-conversion of a transparent composite incorporating dispersed nonlinear optical crystals,” NPG Asia Mater. 9, e394 (2017).
29. K. R. Mun, J. Kyhm, J. Y. Lee, S. Shin, Y. Zhu, G. Kang, D. Kim, R. Deng, and H. S. Jang, “Elemental-migration-assisted full-color-tunable up-conversion nanoparticles for video-rate three-dimensional volumetric displays,” Nano Lett. 23, 3014-3022 (2023).
30. R. Hirayama, D. M. Plasencia, N. Masuda, and S. Subramanian, “A volumetric display for visual, tactile and audio presentation using acoustic trapping,” Nature 575, 320-323 (2019).
31. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, K. Costner, A. Monk, M. Pearson, B. Haymore, and J. Peatross, “A photophoretic-trap volumetric display,” Nature 553, 486-490 (2018).
32. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244-1250 (2007).
33. C. Lee, S. DiVerdi, and T. Höllerer, “Depth-fused 3-D imagery on an immaterial display,” IEEE Trans. Vis. Comput. Graph. 15, 20-33 (2009).
34. A. Yagi, M. Imura, Y. Kuroda, and O. Oshiro, “360-degree fog projection interactive display,” in Proc. ACM SIGGRAPH Asia 2011 Emerging Technologies (Hong Kong, China, Dec. 12-15, 2011), article no. 19.
35. H. Jeon, S. Lim, M. Jung, J. Yoon, C. Park, J. Seok, J. Yu, and J. Hahn, “Crosstalk reduction in tabletop multiview display with fog screen,” ETRI J. 44, 686-694 (2022).

Article

Invited Review Paper

Curr. Opt. Photon. 2023; 7(6): 597-607

Published online December 25, 2023 https://doi.org/10.3807/COPP.2023.7.6.597

Copyright © Optical Society of Korea.

Volumetric 3D Display: Features and Classification

Joonku Hahn, Woonchan Moon, Hosung Jeon, Minwoo Jung, Seongju Lee, Gunhee Lee, Muhan Choi

School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Korea

Correspondence to:*mhchoi@ee.knu.ac.kr, ORCID 0000-0002-5012-4058

Received: November 6, 2023; Revised: November 28, 2023; Accepted: November 29, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Volumetric 3D displays generate voxels to enable users to watch three-dimensional virtual objects from various angles, and they have a significant advantage over other types of 3D displays in terms of realism and the absence of vergence-accommodation conflict (VAC). By virtue of these advantages, various volumetric 3D display technologies incorporating novel approaches have been introduced competitively. As a result, the conventional classification criteria for volumetric 3D technology often fall short in categorizing these innovative methods. In this study, we present an improved classification framework capable of accommodating these new technologies. We expect that a new classification may offer some intuition to identify areas of technical deficiency and contribute to improving the technology.

Keywords: Three-dimensional display, View volume, Viewpoints, Volumetric display, Voxel

I. INTRODUCTION

Volumetric 3D displays have received a lot of interest from both academy and industry because of their ability to provide very realistic three-dimensional images without the vergence-accommodation conflict (VAC). A key concept characterizing volumetric 3D displays is filling the volume with three-dimensional images. From this perspective, the term volumetric 3D display can be extended to encompass almost all autostereoscopic 3D displays that create 3D images by generating voxels in space, such as light field displays, integral imaging, and holographic displays [1]. However, when comparing the strong points and drawbacks of various 3D display technologies to work toward their improvement, such a broad definition isn’t of much practical help. Furthermore, in the recent display industry, the term volumetric 3D display is generally used without including flat light field displays or holographic displays. Almost all volumetric 3D displays generate voxels in space so that users can view virtual objects from any direction. This makes it a suitable solution for developing applications where multiple users surrounding the system can simultaneously watch 3D contents. Traditionally, volumetric 3D displays have been classified into two main categories: swept volume systems and static volume systems [2, 3]. Swept volume systems involve a mechanical motion in screens or mirrors, whereas static volume systems do not involve any mechanical motion. As an example of static volume systems, displays using the two-step up-conversion phenomenon in nonlinear materials can be mentioned [4]. In these systems, visible light points are generated in nonlinear materials by focusing infrared light without mechanical movements. However, such a simplistic criterion may hinder the technical convergence of various techniques, and some systems raise questions about being classified in this conventional framework. For instance, according to the conventional classification, systems based on electrically switchable mirrors stacks or polymer-dispersed liquid crystal panel stacks would also be categorized as static volume systems because mechanical movement is not applied. Nevertheless, they share a common feature of swept volume systems in terms of sweeping the image surface, which is distinctly different from the two-step up-conversion phenomenon. Therefore, classifying all these systems as static volume systems can be problematic. Instead, technologies that use switchable optics are closer to swept volume systems due to their shared characteristic of sweeping the image surface within a defined volume.

At a time when new 3D display technologies are constantly emerging, it is no longer practical to categorize all volumetric displays into swept volume and static volume. For this reason, a modified classification method was proposed in response to the demand for a more comprehensive and universal classification framework [5]. In this work, free-space displays were introduced as the third subset main category under volumetric 3D displays, which spatially control light point sources to present virtual objects. However, this new classification method still presents several inconsistencies. In [5], the authors mainly distinguished volumetric 3D displays from other 3D displays based on the light generating method. This categorization broadly divides 3D displays into three categories: ray optics-based light field displays, point source-based volumetric displays, and wave optics-based holographic displays. According to these criteria, volumetric 3D displays are defined as systems that generate point light sources in space. However, this narrower definition makes it difficult to include systems that are popularly regarded as volumetric 3D displays, such as Nagoya University’s Seelinder display [6] or the University of Southern California’s light field display [7]. Furthermore, as described in [5], the fog display proposed by Rakkolainen and Palovuori [8] was classified as a type of volumetric display, commonly known as a free-space display. This causes a complication because the method of generating light in fog displays is considered a ray optics-based projection optical technology. This leads to a mutual exclusivity problem in the suggested classification framework. Therefore, there is a need for more comprehensive and universally applicable classification criteria that promotes a clearer understanding of the direction of technological advancements.

In this paper, we review the features commonly considered characteristic of volumetric 3D displays by many researchers. Volumetric 3D displays are distinguished by the fact that virtual objects occupy a certain space, and most systems naturally provide simultaneous views of three-dimensional images from a 360-degree perspective. We also propose a new and clearer definition for volumetric 3D displays that emphasizes the topological interrelation that distinguishes between the space occupied by users and that occupied by virtual objects. Finally, we suggest a new way to categorize the technical features of volumetric 3D displays.

II. Main Features of Volumetric 3D displays

Volumetric 3D displays can be clearly distinguished from other types of 3D displays by the following criteria: (1) whether the surface where 3D images are presented is open or closed, and (2) whether the user’s space is positioned inside the display system or if the display surface surrounds the user’s space. In other words, we can define volumetric 3D displays based on the topological relationship between the user’s space and the virtual space.

Firstly, we can categorize displays into two major groups based on whether the display surface where 3D images are displayed is open or closed. In Fig. 1, 1(a) flat or curved surface 3D displays and 1(b) near-eye displays have an open boundary for the display surface. In more detail, both Fig. 1(a) flat or curved surface 3D displays and Fig. 1(b) near-eye displays present 3D image information through an open boundary. However, the former features a fixed display surface in space, while the latter dynamically moves the display surface according to the user’s movements. On the other hand, Fig. 1(c) volumetric 3D displays and Fig. 1(d) immersive 3D displays have a closed display surface. When the display surface is closed, it corresponds to a Fig. 1(c) volumetric 3D display and the user’s space surrounds the virtual space. Conversely, if the virtual space surrounds the user’s space, it is categorized as Fig. 1(d) an immersive 3D display.

Figure 1. Types of 3D displays according to the topological virtual space: (a) Flat or curved surface 3D display, (b) near-eye 3D display, (c) volumetric 3D display, and (d) immersive 3D display.

To refine the previous discussion, the physical separation between user space and virtual space is primarily determined by the display surface created by the optical system. The commonly used term real image refers to the image formed in the user’s space, while virtual image refers to an image formed inside the optical system. In certain volumetric 3D displays, the image is focused on the screen and the screen sweeps the volume where the virtual space is formed. Conversely, in other volumetric 3D displays, users observe the virtual objects inside the optical system and the user space is distinctly separated from the virtual space. Only in a few volumetric 3D displays, virtual objects can appear in the user space and it is possible for users to touch the virtual objects. But even in these exceptional cases, the system is better suited to present virtual objects within a confined space, and it is more convenient for users to surround the virtual space and watch 3D contents within it. Before analyzing the characteristics of volumetric 3D displays in detail, we will look at terminology.

2.1. Viewpoint

The most significant feature of 3D displays is that users perceive different images depending on their viewpoints. In other words, users perceive 3D information through the binocular disparity caused by variations in images from different viewpoints. Of course, there is another 3D depth cue that provides distance information about objects particularly through the accommodation effect. However, in most cases, the axial length of 3D objects is relatively small compared to the user’s distance from the display. This makes binocular disparity and motion parallax more influential as 3D depth cues compared to the accommodation effect.

3D display technologies based on multi-view are typically evaluated by the number of viewpoints. If each viewpoint provides images with the same resolution, the number of viewpoints directly affects the realistic visualization of virtual objects. Volumetric displays inherently provide 360-degree views, so the quality of such systems can be evaluated by the viewpoint density, defined as the number of viewpoints divided by the viewing angle.
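As a simple illustration, writing N_view for the number of viewpoints and Θ_view for the viewing angle (our notation), the viewpoint density can be expressed as

    \rho_{\mathrm{view}} = \frac{N_{\mathrm{view}}}{\Theta_{\mathrm{view}}},

so a system offering 288 viewpoints over a 360-degree viewing angle would provide 0.8 viewpoints per degree, that is, one viewpoint every 1.25 degrees.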

2.2. View Volume

People typically observe images in accordance with the principles of perspective, where the transverse width increases as the distance from the observer increases. Similarly, in volumetric 3D displays, the area that can be seen from a given viewpoint forms a perspective viewing region. The common intersection where the viewing regions corresponding to each viewpoint overlap becomes the view volume, representing the region where virtual objects can be displayed [9].

As shown in Fig. 2(a), most volumetric 3D displays are designed so that users surround the system, and the full view volume is formed as the intersection of the viewing regions determined from all viewpoints. As depicted in Fig. 2(b), it is also possible in specific situations to define a partial view volume that users in a local region can observe. In this case, the partial view volume can be larger than the full view volume determined from all viewpoints, and it can even be larger than the optical system itself. Therefore, as previously mentioned, a volumetric 3D display can in some cases display not only virtual images but also real images.
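To make the intersection idea concrete, the following minimal Python sketch tests whether a candidate point lies in the view volume, under a simplifying geometric assumption of ours (not taken from any cited system): the display aperture is modeled as a disk in the z = 0 plane, and a point is visible from a viewpoint only if the line of sight through it reaches that disk.

    import numpy as np

    R = 0.15  # illustrative aperture radius in meters (assumed, not from any cited system)

    def visible(point, viewpoint):
        # A point is treated as visible if the line from the viewpoint through the
        # point, extended downward, crosses the emitting aperture in the z = 0 plane.
        p, v = np.asarray(point, float), np.asarray(viewpoint, float)
        if p[2] >= v[2]:                       # the point must lie below the viewpoint
            return False
        t = v[2] / (v[2] - p[2])               # v + t*(p - v) reaches z = 0 at this t
        hit = v + t * (p - v)
        return np.hypot(hit[0], hit[1]) <= R

    def in_view_volume(point, viewpoints):
        # Full view volume: the intersection of the viewing regions of all viewpoints.
        return all(visible(point, vp) for vp in viewpoints)

    # 36 viewpoints on a circle of radius 0.5 m at a height of 0.4 m; restricting the
    # list to a local subset of viewpoints would instead give a partial view volume.
    angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
    viewpoints = [(0.5 * np.cos(a), 0.5 * np.sin(a), 0.4) for a in angles]
    print(in_view_volume((0.0, 0.0, 0.05), viewpoints))   # True: near the center, low height
    print(in_view_volume((0.0, 0.0, 0.30), viewpoints))   # False: the volume narrows with height

Restricting the list of viewpoints to a local arc reproduces the partial view volume of Fig. 2(b), which can indeed extend beyond the full view volume.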

Figure 2. View volumes determined by the arrangement of viewpoints. (a) Full view volume from all viewpoints and (b) partial view volume from local viewpoints.

2.3. Voxel

A voxel is the smallest unit that makes up the view volume, just as a pixel is the smallest unit of a 2D flat display. The size of voxels is an important parameter in determining the quality of 3D virtual objects. In volumetric 3D displays, virtual objects are often represented by scanning 2D elemental images or point lights. While a single point of light corresponds directly to a voxel, a 2D elemental image has a specific direction of light emission, so the combination of several 2D elemental images generates a voxel where light rays with different directions intersect. In most cases where 2D elemental images are scanned, voxel sizes are not uniform over the view volume, and this variation in voxel size can affect the image quality.

In Fig. 3(a), the volumetric 3D display has a rotating screen that emits light omnidirectionally. Here, each pixel of the projected 2D elemental image represents one voxel. In Fig. 3(b), the volumetric 3D display has a rotating screen that reflects incident beams in specific directions. In this case, voxels are represented at the intersections where the light emitted from the 2D elemental images crosses each other. The properties of the voxel vary depending on the type of volumetric 3D display.
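As a rough illustration of the directional-emission case in Fig. 3(b), the sketch below (our simplification, not a published algorithm) locates a voxel as the least-squares intersection of the rays emitted from pixels of different 2D elemental images.

    import numpy as np

    def voxel_from_rays(origins, directions):
        # Least-squares point of closest approach of several rays: each ray comes from
        # a pixel of a different 2D elemental image, and the voxel sits where they cross.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, float) / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to d
            A += P
            b += P @ np.asarray(o, float)
        return np.linalg.solve(A, b)

    # Two pixels 0.2 m apart on a screen, each emitting toward the same point (illustrative).
    origins = [(0.1, 0.0, 0.0), (-0.1, 0.0, 0.0)]
    directions = [(-0.1, 0.0, 0.2), (0.1, 0.0, 0.2)]
    print(voxel_from_rays(origins, directions))   # approximately (0.0, 0.0, 0.2)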

Figure 3. Methods to generate voxels from (a) omnidirectional scattering, and (b) directional emission of 2D elemental images.

The most fundamental method for experimentally measuring the characteristics of voxels is to use imaging optical systems with a high numerical aperture (NA) to measure radiance for all positions within the view volume. This method has the advantage of obtaining not only radiant power, which reflects the brightness of the voxels, but also the angular spectrum distribution, which indicates the directional emission characteristics of voxels. However, it has the disadvantage of taking a long time to scan all points within the view volume.

Figure 4 provides several examples of the apparatuses for measuring voxel characteristics. In Fig. 4(a), a device for measuring voxel size and color consisting of a 4f optical system and a spectral sensor is shown [10]. The iris located in front of the sensor serves to determine the location of the voxels being measured, while the iris in the Fourier domain controls the acceptance angle of the measured voxels. Figure 4(b) shows a device with extremely large NA, and it is composed of a parabolic screen, parabolic mirror, and camera with a large field of view [11]. It is especially designed to measure the characteristics of voxels in tabletop displays [12].
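For the 4f arrangement in Fig. 4(a), the role of the Fourier-domain iris can be summarized by the usual small-aperture relation, where D_iris is the iris diameter and f the focal length of the first lens (our notation; actual values depend on the instrument):

    \theta_{\mathrm{acc}} \approx \arctan\!\left(\frac{D_{\mathrm{iris}}}{2f}\right).

For example, a 10-mm iris behind a 100-mm lens would accept rays within roughly ±2.9 degrees of the optical axis.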

Figure 4. Methods to measure the properties of voxels. (a) Apparatus for measuring size and color of the voxel and (b) apparatus for measuring the intensity profile of the voxel in a tabletop holographic display.

III. Classification of Volumetric 3D Displays

We have classified volumetric 3D displays into three categories based on the technology they use to generate 3D virtual objects: Sequentially swept volume systems, viewpoint-surrounding volume systems, and point light voxel systems. Figure 5 illustrates the classification of volumetric 3D displays and provides representative examples.

Figure 5. Classification of volumetric 3D displays.

The sequentially swept volume systems operate by moving 2D elemental images in time to various positions within the designated volume where the virtual object is located. In these systems, the 2D elemental images emit light in all directions, so a voxel simply corresponds to a single pixel of the 2D elemental image that passes through it. Furthermore, sequentially swept volume systems are incapable of representing occlusion, the 3D depth cue that occurs when an object in the foreground obstructs another object in the background.

The viewpoint-surrounding volume system, like the sequentially swept volume system, uses 2D elemental images to represent virtual objects. However, it differs in that 2D elemental images emit light in specific directions. In other words, it employs directional emission to represent voxels, which means that different pixels from plural 2D elemental images correspond to a single voxel. The viewpoint-surrounding volume system is implemented based on integral imaging or light field technology using plural projection optics to provide a multi-view image. The multi-view enables the representation of occlusion in virtual objects. However, it requires a greater amount of information compared to the sequentially swept system because plural pixels are needed to represent a single voxel.

The point light voxel system distinguishes itself from the previous two types by having elements with a dimensionality of 0D rather than 2D. In this system, each voxel corresponds to a single point light source used to generate the virtual object. The point light sources can be generated either by self-illuminating devices or by illuminating scattering particles. This type has the unique feature of being able to display virtual objects in free space not occupied by optical structures, making it a fascinating technology. However, the point light sources must be scanned through 3D space to represent virtual objects, and the limited scanning speed results in the disadvantage of a low refresh rate. The dimensions of the elements representing voxels and their ability to depict occlusion effects for the three types of volumetric 3D displays are summarized in Table 1. The three types of systems discussed above are further categorized based on their operating principles, and we describe the characteristics of representative systems according to this detailed classification below.

TABLE 1. Characteristics according to the type of volumetric 3D display.

Type of Volumetric 3D Display        | Dimensions of Element | Occlusion Capability
Sequentially Swept Volume System     | 2D                    | Incapable
Viewpoint-Surrounding Volume System  | 2D                    | Capable
Point Light Voxel System             | 0D a)                 | Incapable

a) Zero dimension means that each voxel is ideally represented by a point of light with no dimensions.
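For readers who prefer a programmatic summary, the mapping of Table 1 can be captured in a small lookup table (a sketch of ours that mirrors the table rather than adding new information):

    # Element dimensionality and occlusion capability per display type (mirrors Table 1).
    VOLUMETRIC_DISPLAY_TYPES = {
        "sequentially swept volume system":    {"element_dim": "2D", "occlusion": False},
        "viewpoint-surrounding volume system": {"element_dim": "2D", "occlusion": True},
        "point light voxel system":            {"element_dim": "0D", "occlusion": False},
    }

    def supports_occlusion(display_type):
        return VOLUMETRIC_DISPLAY_TYPES[display_type]["occlusion"]

    print(supports_occlusion("viewpoint-surrounding volume system"))   # True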



Figure 6 shows the system structures of some representative volumetric 3D displays. In Figs. 6(a) and 6(b), the screen moves rotationally or axially, and 2D elemental images are projected synchronously. In Figs. 6(c) and 6(d), the direction of emission from 2D elemental images is deflected to the side of the cylinder or upward in the hemisphere. In Figs. 6(e) and 6(f), the point light is created by plasma from an infrared pulse laser or scattering from trapped particles.

Figure 6. Representative volumetric 3D displays; (a) Rotationally swept volume system, (b) axially swept volume system, (c) viewpoint surrounding cylinder system, (d) viewpoint surrounding hemisphere system, (e) point light emission voxel system, and (f) point light scattering voxel system.

3.1. Rotationally Swept Volume System

Sequentially swept volume systems can be categorized into two types based on how 2D elemental images are swept through the volume: Rotationally swept volume systems and axially swept volume systems. Rotationally swept systems have an advantage over axially swept systems in terms of mechanical operation, vibration, and durability, because they simply rotate the screen continuously instead of reciprocating it.

Favalora et al. [13] proposed the Perspecta system, in which 2D elemental images are sequentially projected onto a rotating diffusive screen after being reflected by several relay mirrors. To provide distortion-free 3D contents at all screen rotation angles, a raster engine was implemented to convert 2D elemental images into a cylindrical voxel grid. The display has a resolution of 768 × 768 × 198 and operates at a volume refresh rate of 24 Hz.
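A cylindrical voxel grid of this kind can be thought of as assigning each Cartesian point to the nearest rotation angle of the screen. The sketch below is a hypothetical nearest-slice mapping of our own, with illustrative dimensions; it is not the Perspecta raster engine itself.

    import math

    def rasterize_point(x, y, z, n_slices=198, width=768, height=768,
                        radius=0.125, z_min=-0.125, z_max=0.125):
        # Hypothetical nearest-slice rasterization for a planar screen sweeping about the
        # z-axis (sizes are illustrative). No bounds checking is done on the result.
        theta = math.atan2(y, x) % math.pi                       # a flat screen covers theta and theta + pi
        k = int(round(theta / (math.pi / n_slices))) % n_slices  # slice (rotation) index
        theta_k = k * math.pi / n_slices
        r = x * math.cos(theta_k) + y * math.sin(theta_k)        # signed in-plane radius
        col = int(round((r / radius + 1.0) / 2.0 * (width - 1)))
        row = int(round((z - z_min) / (z_max - z_min) * (height - 1)))
        return k, col, row

    print(rasterize_point(0.05, 0.05, 0.0))   # (slice index, column, row) for one point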

A rotating LED array system introduced by Wu et al. [14] functions by spinning an LED array and directly displays cross-sectional images of virtual objects corresponding to the LED array’s rotation angle. This system updates 2D elemental images without the need for rasterization operations, synchronizing with the LED array’s rotation speed. The LED array has a resolution of 320 × 256 in full color, and it displays 512 elemental images in a single revolution. The resultant view volume size is 800 mm × 800 mm × 640 mm.

3.2. Axially Swept Volume System

The axially swept volume system is a method of filling the view volume by reciprocating sweeping of 2D elemental images instead of rotating them. One of the most famous systems applying this technology is Voxon’s Voxiebox [15]. In this system, the screen is driven by actuators to perform resonant vertical motion. While the screen is moving up and down, high-speed projectors update the elemental images projected onto the screen. This creates a view volume in the space where the screen is in motion so that users can watch very realistic 3D contents. However, since the light from elemental images emanates omnidirectionally, this system cannot represent the occlusion effect in virtual objects.
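The synchronization between screen position and projected slice can be sketched as follows. This is a generic illustration of ours assuming sinusoidal resonant motion, not Voxon's actual control scheme, and the numerical values are placeholders.

    import math

    def slice_index_at(t, n_slices=200, amplitude=0.05, sweep_hz=15.0):
        # For a screen in resonant motion z(t) = A*sin(2*pi*f*t), pick the depth slice of
        # the 3D content that is closest to the screen's current height at time t.
        z = amplitude * math.sin(2.0 * math.pi * sweep_hz * t)
        frac = (z + amplitude) / (2.0 * amplitude)              # map [-A, A] to [0, 1]
        return min(n_slices - 1, int(frac * n_slices))

    # A high-speed projector running at 4,000 frames per second would index slices like this:
    fps = 4000.0
    print([slice_index_at(i / fps) for i in range(5)])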

DepthCube, produced by LightSpace Technologies, is known as the world’s first solid-state volumetric 3D display [16]. Although it involves no mechanical movement in operation, it can be considered an axially swept volume system because the 2D elemental images sweep across the view volume. The display employs a high-speed projector that projects a sequence of images onto a stack of 20 liquid crystal (LC) shutters, generating 9.6 million voxels at a refresh rate of 40 Hz. A multiplanar anti-aliasing algorithm minimizes the visible gaps between elemental images on adjacent LC slices and creates the perception of a smooth 3D object. However, as the viewing angle increases, image quality can degrade due to aliasing, especially when representing fine lines such as wireframe images.
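One common way to realize such multiplanar anti-aliasing, assumed here for illustration rather than taken from the DepthCube specification, is to split each voxel's intensity linearly between the two planes that bracket its depth:

    def split_between_planes(z, plane_z, intensity=1.0):
        # Linear depth weighting (assumed scheme): divide a voxel's intensity between
        # the two planes that bracket its depth, in proportion to its distance to each.
        planes = sorted(plane_z)
        for near, far in zip(planes, planes[1:]):
            if near <= z <= far:
                w = (z - near) / (far - near)
                return {near: intensity * (1 - w), far: intensity * w}
        return {min(planes, key=lambda p: abs(p - z)): intensity}   # outside the stack

    # 20 planes spaced 5 mm apart; a voxel at z = 7.2 mm lands between the 5 mm and 10 mm planes.
    plane_z = [5.0 * k for k in range(20)]
    print(split_between_planes(7.2, plane_z))   # ~ {5.0: 0.56, 10.0: 0.44}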

3.3. Viewpoint-surrounding Cylinder System

Viewpoint-surrounding volume systems are divided into two categories: The viewpoint-surrounding cylinder system and the viewpoint-surrounding hemispherical system, based on the relative height between the view volume and viewpoints. In the cylinder system, viewpoints are positioned at the same height as the view volume and encircle it in 360 degrees. On the other hand, in the hemispherical system, viewpoints are positioned higher than the view volume and create a configuration where users look downward. This system is well-suited for applications that display virtual objects on a tabletop.

The Seelinder system proposed by Yendo et al. [6] creates viewpoints using a parallax barrier. In this system, a 1D LED array is aligned along the circumference of the inner cylinder, while the outer cylinder houses the parallax barrier. As the 1D LED array rotates around the central axis of the cylinder, the parallax barrier rotates in the opposite direction at a relatively faster speed. The speed difference between the parallax barrier and the 1D LED array provides different viewpoints from each slit, thereby generating a view volume within the cylinder. The Seelinder is designed with a parallax interval of one degree and offers 360 different viewpoints.

Another example using a slit is the rotation-slit cylinder display proposed by Jeon et al. [17]. This system improves upon zoetrope technology and displays three-dimensional virtual objects whose appearance varies with the viewpoint. This is achieved by rapidly rotating a single slit while projecting elemental images onto the inner surface of the cylinder using a high-speed projector. The elemental images are updated rapidly according to the position of the slit and form a view volume within the cylinder. This approach ensures sufficient separation between the slit and the elemental images so that users can comfortably focus on virtual objects. The rotation-slit cylinder display creates a view volume inside a cylindrical structure with a diameter of 300 mm and a height of 500 mm and offers a total of 288 viewpoints.

Jones et al. [7] introduced the 360-degree light field display, which forms a view volume with a high-speed projector projecting elemental images onto a rotating anisotropic holographic diffuser with a 45-degree tilt. This system employs an anisotropic holographic diffuser to achieve a horizontal angular resolution of 1.25 degrees and provides vertical motion parallax through eye tracking. In particular, the advantage of this system is the creation of voxels at the points where the light emitted from the elemental images located on the rotating screen surface intersects. This enables the display of virtual objects with a natural occlusion effect.

A holographic optical element (HOE) has the benefit of realizing optical functions within a thin film. It also exhibits wavelength selectivity, appearing transparent for wavelengths other than the specific wavelength it is designed for. Park et al. [18] implemented a transparent cylindrical 3D display using an asymmetric HOE. The asymmetric diffusing HOE has a narrow diffusing angle of approximately 0.7 degrees in the horizontal direction for horizontal parallax. In the vertical direction, it redirects incoming beams from the top toward the horizontal direction and has a wide diffusing angle of 17.8 degrees. In this system, a high-speed projector at the bottom of the cylinder projects elemental images upward. At the top of the cylinder, a rotating mirror redirects the light so that the elemental image is projected onto the HOE screen attached to the side of the cylinder. The viewpoint moves around the cylinder in accordance with the mirror’s rotation angle, resulting in the formation of a view volume inside the transparent cylinder. Nakamura et al. [19], on the other hand, implemented a transparent cylindrical display by rotating the HOE screen on the side of the cylinder instead of using a rotating mirror. In this system, the cylinder-shaped HOE screen forms three viewpoints corresponding to the RGB wavelengths around the cylinder and rotates to provide viewpoints over all 360 degrees.

3.4. Viewpoint-surrounding Hemisphere System

The viewpoint-surrounding hemisphere system is a display system suitable for tabletop applications where viewpoints are created above the view volume where virtual objects are formed. This hemispherical system is based on multi-view technology composed of multiple projection optics and can be implemented using space-division or time-division techniques.

A representative system for tabletop displays using a space-division technique is “fVisiOn,” proposed by Yoshida [20]. In this system, an anisotropic diffusing screen in the shape of a cone is positioned directly beneath the tabletop, and 2D elemental images are projected onto the conic screen from 288 projectors evenly spaced at 1.25-degree intervals along the circumference. The conical screen provides a wide diffusing angle vertically while maintaining a narrow diffusing angle of approximately 0.4 degrees horizontally, enabling the creation of 360-degree viewpoints without crosstalk problems. The optical axes of each projector intersect above the tabletop and form a view volume that can be observed from 360 degrees on the tabletop.

Takaki and Uchida [21] proposed a time-division tabletop display using a rotating decentered Fresnel lens. In this system, a high-speed projector is positioned along the table’s central axis, and the decentered lens redirects the light from each elemental image off-axis to converge at a point on the circumference above the tabletop. This decentered lens rotates around the central axis of the tabletop, and the high-speed projector synchronizes with the rotation to generate 800 viewpoints. Kim et al. [22] proposed a view-sequential tabletop system using an inclined and off-axis anisotropic diffusing screen. This system features a distinctive design of high-speed projector optics, which are folded, and the elemental images are projected onto the rotating screen in the normal direction. As a result, the lights emanating from each elemental image on the screen overlap each other above the tabletop and create a view volume in this region.

Unlike conventional imaging with projection optics, holography offers the advantage of focusing virtual objects at multiple depths and has the potential to provide an ideal accommodation effect. The holographic tabletop system proposed by Lim et al. [12] uses a high-speed spatial light modulator (SLM) to reconstruct holographic images and employs rotational scanning optics and parabolic mirrors to create a 360-degree perspective on a tabletop. In that earlier study, the rotational scanning optics used aspheric lenses to focus the light waves, but Heo et al. [23] improved on this with freeform mirror-based reflective optics, enabling a compact tabletop display. Holographic tabletops not only offer various viewpoints but also create natural 3D images that exhibit accommodation effects at each viewpoint.

Implementation of the space-division technique typically requires numerous projection optics, which is demanding and leads to alignment challenges compared to the time-division technique. The time-division technique, however, comes with its own set of issues, such as vibration caused by mechanical movement, limited optical brightness, and flicker.

3.5. Point Light Emission Voxel System

The point light emission voxel refers to a specific 3D volume element in a voxel grid that represents the emission of light from a self-emitting point light source. Representative examples of the point light emission voxel system include laser-induced plasma displays, laser-induced bubble displays, and full-color up-conversion displays.

Laser-induced plasma displays are based on technology in which high-power pulse lasers excite plasma at a specific location. By controlling the position of the focal point along the x-, y-, and z-axes, dot arrays can be displayed in 3D space. Kimura et al. [24] demonstrated such a display, using a linear motor system and a galvanometer mirror for high-speed scanning of the focal position. Ochiai et al. [25] proposed a system for rendering aerial and volumetric graphics using a femtosecond laser, which emits light from laser-induced plasma without the need for special materials. Two methods of rendering graphics with a femtosecond laser were introduced: a hologram generation method using spatial light modulation and a laser beam scanning method using a galvanometer mirror. These displays use airborne plasma to enable realistic and innovative 3D representations, but the working space is limited by the aperture of the objective lens and the angular range of the galvanometer mirror, and high-speed changes in varifocal lenses can introduce aberration problems.

Laser-induced bubble displays generate 3D images through the formation and control of bubbles. Kumagai et al. [26, 27] proposed a novel volumetric display using femtosecond laser-induced microbubbles as voxels, which can be rendered in a high-viscosity liquid, to overcome the limitations of other volumetric displays in terms of voxel count and multicolor graphics rendering capabilities. The use of high-viscosity liquids enables full-color volumetric graphic rendering consisting of voxels controlled by an illumination light source, while a holographic laser drawing method controls the light intensity and spatial geometry of microbubble voxels.

Full-color up-conversion displays use nonlinear optical crystals to generate multidimensional images. Zhu et al. [28] demonstrated the generation of voxels by frequency up-conversion based on second-harmonic generation (SHG) in nonlinear optical crystals dispersed in solid-state composite materials for the creation of full-color moving objects in a volumetric display. The transparent composite containing randomly orientated nonlinear optical (NLO) crystals showed nearly isotropic frequency up-conversion based on SHG as a proof-of-concept demonstration of a volumetric 3D display that can be observed from any angle without the need for glasses. Also, Mun et al. [29] focused on the development of video-rate color 3D volumetric displays using elemental-migration-assisted full-color-tunable up-conversion nanoparticles (UCNPs). They achieved high efficiency of red, green, and blue orthogonal up-conversion luminescence (UCL) and full-color tunability in the UCNPs with a combination of elemental-migration-assisted color tuning and selective photon blocking.

Each point light emission voxel system has distinct characteristics and offers advantages for specific applications. Laser-induced plasma displays excel at providing detailed 3D representations, while laser-induced bubble displays have strengths in volume display and color representation. On the other hand, full-color up-conversion displays are an excellent choice for applications that require high resolution and full-color images.

3.6. Point Light Scattering Voxel System

The point light scattering voxel system represents a 0D light source in 3D space by scanning scatterers within that space and applying appropriate colors to them. Such systems use techniques such as acoustic tweezers or photophoretic traps to scan scatterers in space, synchronizing them with RGB illumination beams through a scanner to create 3D images by persistence of vision (POV).

The acoustic tweezer technology used in the point light scattering voxel system is exemplified by the multimodal acoustic trap display (MATD) developed by Hirayama et al. [30]. The MATD comprises two 16 × 16 ultrasound transducer arrays (UTAs) located at the top and bottom of the system. These UTAs control the frequency and phase of the emitted ultrasound to create a standing wave that traps 1-mm-radius expanded polystyrene (EPS) particles in space. In this system, EPS particles can be scanned vertically at a speed of 8.75 m/s and horizontally at 3.75 m/s. Synchronized lighting modules provide colors to the particles for the display of 3D POV images. Additionally, the MATD offers an outstanding 3D experience by multiplexing the ultrasound to provide tactile feedback and audio, which enhances the overall sensory experience.

Unlike acoustic tweezers, which manipulate particles based on pressure differences in ultrasound standing waves, photophoretic-trap technology uses thermal forces to levitate optically opaque particles in the air. The warmer side of a particle imparts greater momentum to the surrounding gas molecules, creating a force that pushes the particle away from the heated surface. The photophoretic-trap volumetric display proposed by Smalley et al. [31] uses a 405-nm laser passing through a lens tilted at a 1-degree angle to create potential trapping sites (PTS) in the focal region, where particles are trapped and levitated. In this system, particles are scanned using an x-y scanner to change the focal point, and external illumination is applied to create colorful virtual objects. The proposed optical trap display allows particles to move at up to 1.8 meters per second and can display POV images at a 10-Hz refresh rate over a 180-mm path along a single axis.
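These two figures are consistent if the traced path is treated as a single 180-mm stroke per refresh (our simplification):

    f_{\mathrm{refresh}} \approx \frac{v_{\mathrm{particle}}}{L_{\mathrm{path}}} = \frac{1.8\ \mathrm{m/s}}{0.18\ \mathrm{m}} = 10\ \mathrm{Hz}.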

IV. Discussion

In this paper, we proposed a new classification method for volumetric 3D displays and made an effort to eliminate ambiguity in categorizing existing systems. Nevertheless, some systems remain difficult to classify because of their technical similarities, as discussed below.

Perspecta, developed by Actuality Systems, is a representative example of a rotationally swept volume system. However, an improved structure proposed in 2007 integrates light field technology into the existing system to enable the representation of occlusion in virtual objects [32]. The key distinction of this system from the conventional technology lies in the use of a screen with a specific diffusing angle, so that the 2D elemental images emit light in specific directions rather than in all directions. Therefore, while the improved system structurally resembles the conventional rotationally swept volume system, from a technical perspective it is classified as a viewpoint-surrounding cylinder system.

Viewpoint-surrounding volume systems are classified into cylinder systems and hemispherical systems depending on the location of the viewpoints. Both USC’s light field display [7] and KNU’s tabletop display [22] share the common feature of providing different views in various directions by rapidly projecting elemental images onto a rotating asymmetric diffusive screen. However, the former is classified as a cylinder system, while the latter is categorized as a hemispherical system. USC’s light field display has a high-speed projector positioned above a reflective screen, whereas KNU’s tabletop display has a high-speed projector positioned below a transmissive screen to avoid interference with the viewpoints. The most significant reason these two systems belong to different classifications is that the former has the screen positioned directly on the rotation axis, while the latter rotates the screen with some offset from the rotation axis. As the offset of the rotating asymmetric diffusive screen increases, the height of the view volume, where virtual objects are displayed, also increases. As a result, the viewpoints are naturally arranged above the view volume, and the tabletop display is therefore classified as a viewpoint-surrounding hemispherical system.

The fog display proposed by Rakkolainen and Palovuori [8] in 2005 is occasionally mistaken for a volumetric 3D system because it uses fog to create a partially transparent scattering screen in the air and projects 2D images onto it. However, it is not appropriate to classify it as a 3D display because it merely reproduces 2D images on a fog screen. On the other hand, systems composed of multiple projectors around a fog screen generate virtual objects within the fog and provide different views of the virtual objects depending on the direction [33-35]. Therefore, these systems are classified as viewpoint-surrounding cylinder systems.

V. Conclusion

Volumetric 3D displays have received significant attention due to their ability to provide very realistic virtual visualizations. Many systems that incorporate cutting-edge technologies have been proposed recently. However, the existing classification methods for volumetric 3D displays have limitations in encompassing these new systems. Therefore, there is an immediate need for a new classification system for volumetric 3D displays. In this paper, we presented a new definition of volumetric 3D displays and provided detailed classifications from a technological perspective. We expect that these classification criteria will lead to a clearer understanding of volumetric 3D displays and serve as a foundation for discussing the direction of future technological advancements.

Acknowledgments

This work was supported by the Alchemist Project grant funded by the Korea Evaluation Institute of Industrial Technology (KEIT) and the Korea Government (MOTIE) (Project No. 1415179744, 20019169).

FUNDING

Korea Evaluation Institute of Industrial Technology (KEIT 1415179744); Korea Government (MOTIE 20019169).

DISCLOSURES

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

DATA AVAILABILITY

All data generated or analyzed during this study are included in this published article.


References

1. G. E. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38, 37-44 (2005).
2. K. Langhans, D. Bezecny, D. Homann, C. Vogt, C. Blohm, and K.-H. Scharschmidt, “New portable FELIX 3D display,” Proc. SPIE 3296, 204-216 (1998).
3. B. G. Blundell and A. J. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley-IEEE Press, USA, 2000), pp. 12-16.
4. E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane, “A three color, solid-state three-dimensional display,” Science 273, 1185-1189 (1996).
5. D. Smalley, T. C. Poon, H. Gao, J. Kvavle, and K. Qaderi, “Volumetric displays: Turning 3-D inside-out,” Opt. Photonics News 29, 26-33 (2018).
6. T. Yendo, N. Kawakami, and S. Tachi, “Seelinder: The cylindrical light field display,” in Proc. ACM SIGGRAPH 2005 emerging technologies (Los Angeles, CA, USA, Jul. 31-Aug. 4, 2005), pp. 16-es.
7. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “An interactive 360° light field display,” in Proc. ACM SIGGRAPH 2007 emerging technologies (San Diego, CA, USA, Aug. 5-9, 2007), pp. 13-es.
8. I. Rakkolainen and K. Palovuori, “Laser scanning for the interactive walk-through fogScreen,” in Proc. 12th Virtual Reality Software and Technology (VRST) (Monterey, CA, USA, Nov. 7-9, 2005), pp. 224-226.
9. H. Kim, J. Hahn, and B. Lee, “Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays,” Opt. Express 17, 6389-6396 (2009).
10. “Procedure for measuring size and color of the voxel of color hologram,” Telecommunications Technology Association, TTAK.KO-10.1022 (2017).
11. J. Song, D. Heo, and J. Hahn, “Wide-angle voxel measurement method for 3D display using parabolic mirror and fish-eye lens,” in Proc. 32nd Optical Society of Korea (OSK) Winter Annual Meeting (Online Virtual Conference, Feb. 17-19, 2021), paper W2C-III-5.
12. Y. Lim, K. Hong, H. Kim, H. E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24, 24999-25009 (2016).
13. G. E. Favalora, J. Napoli, D. M. Hall, R. K. Dorval, M. G. Giovinco, M. J. Richmond, and W. S. Chun, “100-million-voxel volumetric display,” Proc. SPIE 4712, 300-312 (2002).
14. J. Wu, C. Yan, X. Xia, J. Hou, H. Li, X. Liu, and W. Zheng, “44.2: An analysis of image uniformity of three-dimensional image based on rotating LED array volumetric display system,” SID Symp. Dig. Tech. Pap. 41, 657-660 (2010).
15. S. F. Keane, A. Jackson, G. F. Smith, W. J. Tamblyn, and K. Silverman, “Volumetric 3D display,” U.S. patent 10401636B2 (2019).
16. A. Sullivan, “LP-1: Late-news poster: The DepthCube™ solid-state multi-planar volumetric display,” SID Symp. Dig. Tech. Pap. 33, 354-355 (2002).
17. H. Jeon, H. Kim, and J. Hahn, “360-degree cylindrical directional display,” in Proc. 15th International Meeting on Information Display (IMID) (EXCO, Daegu, Korea, Aug. 18-21, 2015), paper 60-3.
18. M. Park, H. Jeon, D. Heo, S. Lim, and J. Hahn, “360-degree mixed reality volumetric display using an asymmetric diffusive holographic optical element,” Opt. Express 30, 47375-47387 (2022).
19. T. Nakamura, Y. Imai, Y. Yoshimizu, K. Kuramoto, N. Kato, H. Suzuki, Y. Nakahata, and K. Nomoto, “36-1: 360-degree transparent light field display with highly-directional holographic screens for fully volumetric 3D video experience,” SID Symp. Dig. Tech. Pap. 54, 514-517 (2023).
20. S. Yoshida, “fVisiOn: Glasses-free tabletop 3-D display to provide virtual 3D media naturally alongside real media,” Proc. SPIE 8384, 838411 (2012).
21. Y. Takaki and S. Uchida, “Table screen 360-degree three-dimensional display using a small array of high-speed projectors,” Opt. Express 20, 8848-8861 (2012).
22. K. Kim, W. Moon, Y. Im, H. Kim, and J. Hahn, “View-sequential 360-degree table-top display with digital micromirror device,” in Proc. 14th International Meeting on Information Display (IMID) (EXCO, Daegu, Korea, Aug. 26-29, 2014), paper 1-91.
23. D. Heo, H. Jeon, S. Lim, and J. Hahn, “A wide-field-of-view table-ornament display using electronic holography,” Curr. Opt. Photonics 7, 183-190 (2023).
24. H. Kimura, T. Uchiyama, and H. Yoshikawa, “Laser produced 3D display in the air,” in Proc. ACM SIGGRAPH 2006 emerging technologies (Boston, MA, USA, Jul. 30-Aug. 3, 2006), pp. 20-es.
25. Y. Ochiai, K. Kumagai, T. Hoshi, J. Rekimoto, S. Hasegawa, and Y. Hayasaki, “Fairy lights in femtoseconds: Aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields,” ACM Trans. Graph. 35, 17 (2016).
26. K. Kumagai, S. Hasegawa, and Y. Hayasaki, “Volumetric bubble display,” Optica 4, 298-302 (2017).
27. K. Kumagai, T. Chiba, and Y. Hayasaki, “Volumetric bubble display with a gold-nanoparticle-containing glycerin screen,” Opt. Express 28, 33911-33920 (2020).
28. B. Zhu, B. Qian, Y. Liu, C. Xu, C. Liu, Q. Chen, J. Zhou, X. Liu, and J. Qiu, “A volumetric full-color display realized by frequency up-conversion of a transparent composite incorporating dispersed nonlinear optical crystals,” NPG Asia Mater. 9, e394 (2017).
29. K. R. Mun, J. Kyhm, J. Y. Lee, S. Shin, Y. Zhu, G. Kang, D. Kim, R. Deng, and H. S. Jang, “Elemental-migration-assisted full-color-tunable up-conversion nanoparticles for video-rate three-dimensional volumetric displays,” Nano Lett. 23, 3014-3022 (2023).
30. R. Hirayama, D. M. Plasencia, N. Masuda, and S. Subramanian, “A volumetric display for visual, tactile and audio presentation using acoustic trapping,” Nature 575, 320-323 (2019).
31. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, K. Costner, A. Monk, M. Pearson, B. Haymore, and J. Peatross, “A photophoretic-trap volumetric display,” Nature 553, 486-490 (2018).
32. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244-1250 (2007).
33. C. Lee, S. DiVerdi, and T. Höllerer, “Depth-fused 3-D imagery on an immaterial display,” IEEE Trans. Vis. Comput. Graph. 15, 20-33 (2009).
34. A. Yagi, M. Imura, Y. Kuroda, and O. Oshiro, “360-degree fog projection interactive display,” in Proc. ACM SIGGRAPH Asia 2011 emerging technologies (Hong Kong, China, Dec. 12-15, 2011), article no. 19.
35. H. Jeon, S. Lim, M. Jung, J. Yoon, C. Park, J. Seok, J. Yu, and J. Hahn, “Crosstalk reduction in tabletop multiview display with fog screen,” ETRI J. 44, 686-694 (2022).