Current Optics and Photonics 2020; 4(5): 421-427

Published online October 25, 2020 https://doi.org/10.3807/COPP.2020.4.5.421

Copyright © Optical Society of Korea.

A Privacy-protection Device Using a Directional Backlight and Facial Recognition

Hyeontaek Lee, Hyunsoo Kim, and Hee-Jin Choi*

Department of Physics and Astronomy, Sejong University, Seoul 05006, Korea

Correspondence to: hjchoi@sejong.ac.kr

Received: September 2, 2020; Revised: September 14, 2020; Accepted: September 14, 2020

Abstract

A novel privacy-protection device to prevent visual hacking is realized by using a directional backlight and facial recognition. The proposed method overcomes the limitations of previous privacy-protection methods that simply restrict the viewing angle to a narrow range. Accurate user tracking is accomplished by combining a time-of-flight sensor with facial recognition, with no restriction on detection range. In addition, an experimental demonstration is provided to verify the proposed scheme.

Keywords: Privacy protection, Directional backlight, Face detection range

I. INTRODUCTION

With rapid progress in display technologies providing wider viewing angles and higher contrast, the information displayed on a device’s screen can be seen more easily from anywhere, by anybody. Therefore, visual hacking, stealing essential information from a screen by peeping at it or capturing it with a telephoto lens, has become much easier and more common [1]. To prevent such visual hacking, there have been methods that restrict the viewing angle to a narrow range [2-4]. However, those methods remain vulnerable to visual hacking, since they cannot distinguish the original user from a visual hacker. Moreover, since those conventional protections are not switchable, the user may feel uncomfortable with the narrower viewing angle even in situations where no protection is required. In this paper, we propose a novel privacy-protection technique that adopts a directional backlight and a facial-recognition system with higher accuracy and an extended detection range, by combining a time-of-flight (TOF) sensor with facial recognition. An experimental demonstration is also provided to verify the proposed scheme.

II. PRINCIPLE

2.1. Facial Recognition Closer than the Minimum Detection Range of the TOF Sensor

To converge the light rays from the display device to the position of the face of the user, it is necessary to acquire the coordinates of that face. For that purpose, a TOF sensor such as the Kinect V2 is commonly used to measure the depth map. However, the TOF sensor has the limitation of a minimum detection range, which means that it cannot detect an object closer than that range. Since that range is about 0.7 m in the case of the Kinect V2 [5], we propose a novel method to extend the range of depth detection by combining a color image and a depth map.

First, facial recognition requires a color image of the face, which is provided by a color-image sensor alongside the TOF sensor. Thus a calibration between the color image and the depth map is necessary. Though a method using a checkerboard is commonly used [6], we used the Kinect for Windows SDK 2.0 functions in this study, since we think it is not practical for the user to always carry a checkerboard for calibration. Nevertheless, any kind of calibration method can be used, as long as its output is accurate.

After the calibration is finished, in order to calculate the real coordinates (x, y) of the face’s center, we must first determine the horizontal/vertical size sx/y of a single pixel located at a particular depth z, given the horizontal/vertical field of view θx/y and resolution Nx/y of the color sensor and the depth z measured by the TOF sensor, as shown in Fig. 1.
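
A reconstruction of Eq. (1), assuming a symmetric pinhole model for the color sensor (the lateral extent of the field of view at depth z is divided evenly across the pixels), is

$$ s_{x/y} = \frac{2z\,\tan\!\left(\theta_{x/y}/2\right)}{N_{x/y}} \tag{1} $$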

Figure 1. Calculation of the true position of the face’s center.

Using the above equation and the calibrated pixel coordinates of the face’s center (xN, yN), we can acquire the three-dimensional coordinates of the face’s center (x, y, z) as follows:
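
A reconstruction of Eq. (2), assuming the pixel origin is shifted to the image center, is

$$ x = \left(x_N - \frac{N_x}{2}\right)s_x, \qquad y = \left(y_N - \frac{N_y}{2}\right)s_y \tag{2} $$

with z taken directly from the TOF measurement.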

However, as described above, this technique can be used only when the face’s center is beyond the minimum detection range of the TOF sensor. Thus we propose a novel method to retrieve the depth from the distance between the eyes of the observer after an initial recognition of the face’s position.

When the initial recognition of the face’s position is finished, the number of pixels Neyei between the eyes of the observer is saved along with the initial depth zi, as shown in Fig. 2. Then we can derive the equation below from Fig. 2, because the physical distance between the eyes is fixed.
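
Since the physical interocular distance is the pixel count multiplied by the pixel size at the face’s depth, a reconstruction of Eq. (3) is

$$ N_{eye_i}\, s_{x_i} = N_{eye}\, s_x \tag{3} $$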

Figure 2. Retrieving the depth z of the face’s center without the output from the TOF sensor.

Since the horizontal size of a single pixel is proportional to the depth of the face’s center, as described in Eq. (1), we can replace sxi and sx in Eq. (3) with zi and z, respectively, as below.
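
This substitution gives the reconstructed Eq. (4):

$$ z = z_i\,\frac{N_{eye_i}}{N_{eye}} \tag{4} $$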

Thus, without the output from the TOF sensor, the depth z of the face’s position can be retrieved by obtaining only the number of pixels Neye between the eyes from the color-image sensor. To verify the proposed scheme, an experimental demonstration recognizing the face’s position closer than the minimum detection range of the TOF sensor is also provided.
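
As a minimal sketch, this depth-retrieval step reduces to a one-line computation; the function name is illustrative, and the test values are those reported in Section IV:

```python
def retrieve_depth(z_i_mm: float, n_eye_i: int, n_eye: int) -> float:
    """Depth of the face's center from the apparent interocular pixel count, Eq. (4).

    z_i_mm  -- depth measured by the TOF sensor at the initial recognition (mm)
    n_eye_i -- pixel count between the eyes at the initial recognition
    n_eye   -- current pixel count between the eyes in the color image
    """
    return z_i_mm * n_eye_i / n_eye

# With zi = 894 mm and Neyei = 67 pixels, as measured in Section IV:
print(retrieve_depth(894, 67, 121))  # ~495 mm (position 1)
print(retrieve_depth(894, 67, 111))  # ~540 mm (position 2)
```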

2.2. Principles of Controlling the Ray Directions to the Face’s Position

In addition to recognition of the face’s position described above, accurate control of the directions of rays converging to the recognized facial position is also essential. For that purpose, we use a directional backlight system composed of a line light source and a convex-lens array with a focal length of fLA, as shown in Fig. 3.

Figure 3. Structure and principle of the directional backlight system.

Since light rays emitted from the line light source proceed in parallel after passing through the convex-lens array, we can control the ray directions by positioning the line light source [7]. However, in an actual directional backlight system the line light source has a physical width wl, as denoted in Fig. 4. Thus the light rays from the convex-lens array spread with a deviation angle θd, as derived below.
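
A reconstruction consistent with the value of about 0.2 degrees quoted below, assuming the deviation is set by a ray from the edge of the source (at wl/2 off-axis in the focal plane), is

$$ \theta_d = \tan^{-1}\!\left(\frac{w_l}{2 f_{LA}}\right) $$

which yields θd ≈ 0.21° for the experimental values wl = 0.162 mm and fLA = 22 mm reported in Section IV.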

Nevertheless, since θd is expected to be only about 0.2 degrees in our experimental demonstration, we expect the effect of the imperfect collimation shown in Fig. 4 to be negligible. Thus, using that principle, we can make all of the light converge at the eye positions of a registered observer with a pupil size of 2-8 mm, as shown in Fig. 5 [8].

Figure 4. Effect of the width of the line light source on the imperfect collimation.
Figure 5. Principles of the proposed privacy-protection system.

For this purpose, we calculate the position of each line light source using the coordinate system shown in Fig. 6. When the converged position at depth z is decentered by x0 from the center of the 0th elemental lens and by xn from the center of the nth elemental lens, the nth line light source must be decentered by Lpn from the center of the nth elemental lens. Then we can derive Lpn as follows, using a proportional relation.
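
Because the line light source lies in the focal plane of the lens array, similar triangles across the nth elemental lens give the reconstruction (the sign convention of the decenter is assumed)

$$ L_{pn} = \frac{f_{LA}}{z}\, x_n $$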

Figure 6. Calculation of line light source positions.

Since Lpn denotes the decentered position within the nth elemental lens, it is necessary to convert it to a global coordinate using the pitch pel of a single elemental lens, as follows:
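
Assuming the center of the nth elemental lens lies at n pel in the global coordinate, a consistent conversion is

$$ L_n = n\, p_{el} + L_{pn} $$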

Then we can use the following relation between xn and x0.
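
Under the same lens-center convention, the decenters of the converged position are related by

$$ x_n = x_0 - n\, p_{el} $$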

Combining the above equations, we can calculate the position of each line light source from the true position of the face’s center.
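
As a minimal sketch of this combination (the function name, sign conventions, and example values are illustrative, matching the reconstructed relations above):

```python
def line_source_position(n: int, x0_mm: float, z_mm: float,
                         f_la_mm: float = 22.0, p_el_mm: float = 10.0) -> float:
    """Global position of the nth line light source for a target at (x0, z)."""
    x_n = x0_mm - n * p_el_mm      # decenter of the target seen from the nth lens
    l_pn = f_la_mm * x_n / z_mm    # local source decenter, by similar triangles
    return n * p_el_mm + l_pn      # convert back to the global coordinate

# Example: source positions for 13 lenses, face center at x0 = 50 mm, z = 800 mm.
positions = [line_source_position(n, 50.0, 800.0) for n in range(-6, 7)]
```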

III. ANALYSIS OF THE VIEWING PARAMETERS

Since control of the ray direction is essential to the proposed system, the viewing angle is one of the most important viewing parameters. Based on the above principles, the viewing angle θmax of the proposed system can be derived from the focal length fLA of the lens array, the pitch pel of a single elemental lens, the size wLA of the convex-lens array, and the depth z of the recognized face from the convex-lens array, as shown in Fig. 7.

Figure 7. Analysis of the viewing zone and viewing angle of the proposed method.

To derive the viewing angle, we should consider the leftmost and rightmost positions where the light rays can converge. For that purpose, the first step is to calculate the distance dcross between the convex-lens array and the crossing point where the leftmost/rightmost light rays meet.
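
Assuming each elemental lens can steer rays by at most tan⁻¹(pel / 2fLA) before the source leaves the aperture of its own lens, the leftmost and rightmost rays, launched from opposite ends of the array of width wLA, meet at the reconstructed distance

$$ d_{cross} = \frac{f_{LA}\, w_{LA}}{p_{el}} $$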

Then we can obtain the maximum size wvz of the viewing zone at depth z from the following proportionality relation.
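
From similar triangles about the crossing point, a consistent form is

$$ w_{vz} = w_{LA}\,\frac{z - d_{cross}}{d_{cross}} $$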

Using the above derivations, we can calculate the viewing angle θmax as follows:
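
A reconstruction consistent with the value of about 17° reported in Section IV is

$$ \theta_{max} = 2\tan^{-1}\!\left(\frac{w_{vz}}{2z}\right) $$

With fLA = 22 mm, pel = 10 mm, wLA = 130 mm, and z = 800 mm, these relations give dcross = 286 mm, wvz ≈ 234 mm, and θmax ≈ 16.6°, i.e. about 17°.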

IV. EXPERIMENTAL RESULTS

Pictures of the experimental setup and face cameras are shown in Fig. 8. We use two different faces, one registered and one unregistered, and locate cameras with a resolution of 4032 by 3024 pixels at the position of the right eye of each face. Thus we can check whether the proposed system can block the view of the unregistered observer while providing proper images only to the registered user. Figure 9 shows the positions of the face cameras for the registered face. Since the minimum detection range of the TOF sensor is 0.7 m, we chose two positions (labeled 1 and 2) closer than that, to verify that our method works properly under any circumstance. Behind an LCD panel with a resolution of 1920 by 1080 and a pixel pitch of 0.25 mm, we attach a convex-lens array composed of 13 by 13 elemental lenses, each with a pitch pel of 10 mm and a focal length fLA of 22 mm, as shown in Fig. 9. The line light source behind the lens array has a width of 0.162 mm. From those parameters, we expect that the proposed system can provide privacy protection within a viewing angle θmax of about 17° when the observer is located 800 mm from the device. Thus, we set the leftmost and rightmost positions of the face camera located 800 mm from the LCD panel (positions 3 and 5) to be the border of the viewing zone, to verify the analysis in Section III.
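
As a quick numerical check of the analysis in Section III (variable names are illustrative, and the relations are the reconstructions given there), the parameters above reproduce the expected viewing angle:

```python
import math

f_la, p_el = 22.0, 10.0    # focal length and elemental-lens pitch (mm)
w_la = 13 * p_el           # 13 elemental lenses across -> 130 mm array width
z = 800.0                  # observer depth from the lens array (mm)

d_cross = f_la * w_la / p_el                  # crossing distance: 286 mm
w_vz = w_la * (z - d_cross) / d_cross         # viewing-zone width: ~234 mm
theta_max = 2 * math.degrees(math.atan(w_vz / (2 * z)))
print(f"theta_max = {theta_max:.1f} deg")     # ~16.6 deg, i.e. about 17 degrees
```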

Figure 8. Pictures of the experimental setup and face cameras, with registered and unregistered faces.
Figure 9. Positions of the face cameras to capture the observed views.

As a first step of verification, we locate two face cameras with registered and unregistered faces 800 mm from the LCD panel, and check whether the system provides the screen information only to the location of the registered face. The experimental results are shown in Fig. 10: The image displayed on the LCD panel (skull and crossbones) can be captured only at the position of the registered face (denoted with blue solid lines), whereas the face camera at the other position, which represents a visual hacker (denoted with green dashed lines), captures no information from the screen. Therefore, from the experimental results in Fig. 10, it can be verified that the proposed scheme works as expected, when the observer is within the detection range of the TOF sensor.

Figure 10. Experimental results when the face camera is within the detection range of the TOF sensor. The blue solid and green dashed lines represent the locations of the registered and unregistered observers respectively.

The next verification is for the case when the observer is outside the detection range of the TOF sensor. For that purpose, we locate the face camera of the registered observer at positions 1 and 2. In this case, the depth of the registered face is retrieved using the parameters Neye, Neyei, and zi, as denoted in Eq. (4). In the experiment, the measured depth zi from the TOF sensor is 894 mm and Neyei is 67 pixels when the registered face is at position 4. Then, from the measured values of Neye at position 1 (121 pixels) and position 2 (111 pixels), we could retrieve the depths z1 and z2 as 495 mm and 540 mm from the TOF sensor, respectively. Thus we can confirm that those values match the experimental conditions shown in Fig. 9 well. The experimental results for the second case are shown in Fig. 11: The experimental demonstration provides the screen image only to the eye location of the registered face’s camera (denoted with blue solid lines), while the unregistered face at the other position (denoted with green dashed lines) captures no image information. Therefore, it can be concluded that the proposed scheme protects the privacy of the registered user securely, regardless of the distance between the observer and the device.

Figure 11. Experimental results when the face camera is closer than the detection range of the TOF sensor. The blue solid and green dashed lines represent the locations of the registered and unregistered observers respectively.

V. CONCLUSION

With the rapid progress of mobile devices, the protection of the information in them becomes ever more important. Though various kinds of security technology have been developed for transmitting information, the display screen is still vulnerable to simple peeping, i.e. visual hacking. In this paper, we have proposed an effective privacy-protection method that overcomes the limitation of the detection range of a TOF sensor, and verified it with experimental demonstrations. We expect that the proposed system can be applied to various kinds of display devices to eliminate concerns over visual hacking.

References

  1. P. Barker, "Visual hacking - why it matters and how to prevent it," Netw. Secur. 42, 14-17 (2019).
  2. H. Yoon, S.-G. Oh, D. S. Kang, J. M. Parck, S. J. Choi, K. Y. Suh, K. Char, and H. H. Lee, "Arrays of Lucius microprisms for directional allocation of light and auto-stereoscopic three-dimensional displays," Nat. Commun. 2, 455 (2011).
  3. G. E. Gaides, I. A. Kadoma, D. B. Olson, R. A. Larson, and A. R. Sykora, "Light collimating film," U.S. Patent 8012567B2 (2011).
  4. J.-H. Kim, C. H. Lee, S. S. Lee, and K.-C. Lee, "Highly transparent privacy filter film with image distortion," Opt. Express 22, 29799-29804 (2014).
  5. T. Guzvinecz, V. Szucs, and C. Sik-Lanyi, "Suitability of the Kinect Sensor and Leap Motion Controller―A Literature Review," Sensors 19, 1072 (2019).
  6. C. Kim, S. Yun, S.-W. Jung, and C. S. Won, "Color and depth image correspondence for Kinect v2," in Advanced Multimedia and Ubiquitous Engineering (Springer, Berlin, Heidelberg, Germany, 2015), pp. 111-116.
  7. H. Kwon and H.-J. Choi, "A time-sequential multi-view autostereoscopic display without resolution loss using a multidirectional backlight unit and an LCD panel," Proc. SPIE 8288, 82881Y (2012).
  8. S. G. de Groot and J. W. Gebhard, "Pupil size as determined by adapting luminance," J. Opt. Soc. Am. 42, 492-495 (1952).