Current Optics and Photonics 2020; 4(5): 421-427
Published online October 25, 2020 https://doi.org/10.3807/COPP.2020.4.5.421
Copyright © Optical Society of Korea.
Hyeontaek Lee, Hyunsoo Kim, and Hee-Jin Choi*
Corresponding author: hjchoi@sejong.ac.kr
A novel privacy-protection device to prevent visual hacking is realized by using a directional backlight and facial recognition. The proposed method is able to overcome the limitations of previous privacy-protection methods that simply restrict the viewing angle to a narrow range. The accuracy of user tracking is accomplished by the combination of a time-of-flight sensor and facial recognition with no restriction of detection range. In addition, an experimental demonstration is provided to verify the proposed scheme.
Keywords: Privacy protection, Directional backlight, Face detection range
With rapid progress in display technologies providing wider viewing angles and higher contrast, the information displayed on a device’s screen can be seen more easily, from anywhere and by anybody. Therefore,
To converge the light rays from the display device to the position of the face of the user, it is necessary to acquire the coordinates of that face. For that purpose, a TOF sensor such as the Kinect V2 is commonly used to measure the depth map. However, the TOF sensor has the limitation of a minimum detection range, which means that it cannot detect an object closer than that range. Since that range is about 0.7 m in the case of the Kinect V2 [5], we propose a novel method to extend the range of depth detection by combining a color image and a depth map.
At first, the facial recognition requires a color image of the face, which is provided by a color-image sensor in addition to the TOF sensor. Thus a calibration between the color image and the depth map is necessary. Though a method using a checkerboard is commonly used [6], we used Kinect for Windows SDK 2.0 functions for this study, since we think that it is not practical for the user to always carry a checkerboard for calibration. Nevertheless, any kind of calibration method can be used if the output is accurate.
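The registration between the color image and the depth map can be sketched as follows. This is a minimal illustration under a standard pinhole model, not the SDK's internal procedure: the intrinsic matrices `K_depth` and `K_color` and the rigid transform `(R, t)` between the two sensors are our own illustrative names for whatever the chosen calibration method outputs.

```python
import numpy as np

def depth_pixel_to_color_pixel(u_d, v_d, z, K_depth, K_color, R, t):
    """Map a depth-map pixel (u_d, v_d) with measured depth z to the
    corresponding color-image pixel, given pinhole intrinsics for both
    sensors and the rigid transform (R, t) from depth to color frame."""
    # Back-project the depth pixel to a 3D point in the depth-camera frame.
    p_d = z * np.linalg.inv(K_depth) @ np.array([u_d, v_d, 1.0])
    # Transform into the color-camera frame and project with its intrinsics.
    uvw = K_color @ (R @ p_d + t)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With identical intrinsics and an identity transform, a pixel maps to itself, which is a convenient sanity check for any calibration output.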
After the calibration is finished, in order to calculate the real coordinates (
Using the above equation and the calibrated pixel coordinates of the face’s center (
However, as described above, this technique can be used only when the face’s center is beyond the minimum detection range of the TOF sensor. Thus we propose a novel method to retrieve the depth from the distance between the eyes of the observer after an initial recognition of the face’s position.
When the initial recognition of the face’s position is finished, the number of pixels
Since the horizontal size of a single pixel is proportional to the depth of the face’s center, as described in Eq. (1), we can replace
Thus, without the output from the TOF sensor, the depth
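The depth-retrieval procedure described above can be sketched as follows. Since the paper's own equations were not preserved here, the code is a minimal reconstruction under a pinhole model, with illustrative variable names of our own: while the face is still inside the TOF range, the product of the interocular pixel distance and the measured depth is recorded as a per-observer constant; afterwards, depth follows from the pixel distance alone, and the face's real coordinates follow by back-projection.

```python
import numpy as np

def calibrate_eye_baseline(n_pixels_init, z_init):
    """At the initial recognition, while the face is inside the TOF
    detection range, record the product of the pixel distance between
    the eyes and the measured depth. Under the pinhole model this
    product is constant for a given observer."""
    return n_pixels_init * z_init

def depth_from_eye_distance(n_pixels, k):
    """Retrieve the depth of the face's center from the current pixel
    distance between the eyes, without any TOF output."""
    return k / n_pixels

def face_center_from_pixel(u, v, z, K):
    """Back-project the calibrated pixel coordinates of the face's
    center to real coordinates, using the retrieved depth z and the
    color camera's intrinsic matrix K."""
    return z * np.linalg.inv(K) @ np.array([u, v, 1.0])
```

For example, if the eyes span 60 pixels at a TOF-measured depth of 0.8 m, the same observer measured at 120 pixels is at a retrieved depth of 0.4 m, i.e. closer than the Kinect V2 minimum range.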
In addition to recognition of the face’s position described above, accurate control of the directions of rays converging to the recognized facial position is also essential. For that purpose, we use a directional backlight system composed of a line light source and a convex-lens array with a focal length of
Since light rays emitted from the line light source proceed in parallel after passing through the convex-lens array, we can control the ray directions by positioning the line light source [7]. However, in an actual directional backlight system the line light source has a physical width
Nevertheless, since we expect that
For this purpose, we calculate the position of each line light source using a coordinate system shown in Fig. 6. When a converged position decentered by
Since
Then we can use the following relation between
Combining the above equations, we can calculate the position of each line light source from the true position of the face’s center.
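The combined computation can be sketched as below. This is our own similar-triangles reconstruction, not the paper's exact formula: a line source in the focal plane, laterally offset from the center of an elemental lens, yields a collimated beam whose direction is set by that offset, and we solve for the offset that sends each lens's beam through the face's center. All names (`x_lens`, `x_face`, `z_face`, `f`) are illustrative.

```python
def line_source_position(x_lens, x_face, z_face, f):
    """Lateral position of the line light source behind an elemental lens
    centered at x_lens (focal length f) so that the collimated output
    beam passes through the face's center at (x_face, z_face).
    Similar triangles: (x_lens - x_s) / f = (x_face - x_lens) / z_face."""
    return x_lens - f * (x_face - x_lens) / z_face

def source_positions(lens_pitch, n_lenses, x_face, z_face, f):
    """Source position behind every elemental lens of a 1-D array
    centered on the optical axis."""
    centers = [(i - (n_lenses - 1) / 2) * lens_pitch for i in range(n_lenses)]
    return [line_source_position(c, x_face, z_face, f) for c in centers]
```

Note that for an on-axis face the source behind the central lens stays on axis, while sources behind off-axis lenses are decentered slightly outward, which matches the intuition that all beams must tilt inward toward the face.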
Since control of the ray direction is essential for the proposed system, the viewing angle is one of the most important viewing parameters. Regarding the above principles, the viewing angle
To derive the viewing angle, we should consider the leftmost and rightmost positions where the light rays will converge. For that purpose, the first step is to calculate the distance
Then we can expect the maximum size
Using the above derivations, we can calculate the viewing angle
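The viewing-angle calculation can be sketched as follows, again as a reconstruction rather than the paper's exact derivation: if the line source can be decentered behind each elemental lens by at most some maximum offset, and a decenter `d` steers the collimated beam by `arctan(d / f)`, the full viewing angle is twice the maximum steering angle. The symbol `d_max` is our own illustrative name for that maximum decenter.

```python
import math

def viewing_angle_deg(d_max, f):
    """Full viewing angle of the directional backlight: a source decenter
    of d behind a lens of focal length f steers the collimated beam by
    arctan(d / f), so the full angle is twice the value at d_max."""
    return 2 * math.degrees(math.atan(d_max / f))
```

For instance, a maximum decenter equal to the focal length gives a 90-degree full viewing angle; shorter focal lengths widen the angle for the same decenter, at the cost of beam-direction precision.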
Pictures of the experimental setup and face cameras are shown in Fig. 8. We use two different faces, one registered and one unregistered, and locate cameras with a resolution of 4032 by 3024 pixels at the position of the right eye of each face. Thus we can check whether the proposed system blocks the view of the unregistered observer while providing proper images only to the registered user. Figure 9 shows the positions of the face cameras for the registered face. Given that the minimum detection range of the TOF sensor is 0.7 m, we chose two positions (labeled 1 and 2) closer than that, to verify that our method works properly under any circumstance. Behind an LCD panel with a resolution of 1920 by 1080 and a pixel pitch of 0.25 mm, we attach a convex-lens array composed of 13 by 13 elemental lenses, each with pitch
As a first step of verification, we locate the two face cameras, with registered and unregistered faces, 800 mm from the LCD panel, and check whether the system provides the screen information only to the location of the registered face. The experimental results are shown in Fig. 10: The image displayed on the LCD panel (skull and crossbones) can be captured only at the position of the registered face (denoted with blue solid lines), whereas the face camera at the other position, which represents a visual hacker (denoted with green dashed lines), captures no information from the screen. Therefore, from the experimental results in Fig. 10, it can be verified that the proposed scheme works as expected when the observer is within the detection range of the TOF sensor.
The next verification is for the case when the observer is outside the detection range of the TOF sensor. For that purpose, we locate the face camera of the registered observer at positions 1 and 2. In this case, it is expected that the depth of the registered face is retrieved using the parameters
With the rapid progress of mobile devices, the protection of the information in them becomes more important. Though various kinds of security technology have been developed for transmitting information, the display screen remains vulnerable to simple peeping: visual hacking. In this paper, we have proposed an effective privacy-protection method to overcome the limitation of the detection range of a TOF sensor, and verified it with experimental demonstrations. We expect that the proposed system can be applied to various kinds of display devices to eliminate concerns over visual hacking.