
Research Paper

Curr. Opt. Photon. 2024; 8(5): 508-514

Published online October 25, 2024 https://doi.org/10.3807/COPP.2024.8.5.508

Copyright © Optical Society of Korea.

A Study on Intrusion Detection Using Deep Learning-based Weight Measurement with Multimode Fiber Speckle Patterns

Hyuek Jae Lee

Department of Information & Communication AI Engineering, Kyungnam University, Changwon 51767, Korea

Corresponding author: hyuek@kyungnam.ac.kr, ORCID 0000-0002-8697-6914

Received: May 24, 2024; Accepted: July 20, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a deep learning-based weight sensor for real-time intrusion detection that uses the optical speckle patterns of a multimode fiber. The sensor was trained to identify 11 distinct speckle patterns corresponding to weights from 0.0 kg to 2.0 kg in 200 g steps. Weights not seen during training are estimated through the generalization capability of deep learning, yielding an average weight error of 243.8 g. Although this margin of error precludes accurate weight measurement, the system's ability to detect abrupt weight changes makes it suitable for intrusion detection applications. The weight sensor is integrated with the Google Teachable Machine, and real-time intrusion notifications are provided through the ThingSpeak™ cloud platform, an Internet of Things (IoT) analytics platform developed by MathWorks.

Keywords: Deep learning, Intrusion detection, Multimode fiber, Optical speckle, Weight measurement

OCIS codes: (280.0280) Remote sensing and sensors; (280.4788) Optical sensing and sensors; (280.5475) Pressure measurement

I. INTRODUCTION

Deep learning has been applied extensively in a variety of fields, including non-line-of-sight (NLOS) imaging [1], medical imaging [2], optical communications [3–5], and more. Recently, this interest has extended to fiber optic sensors [6–9], whose applications are rapidly expanding.

Intrusion detection typically employs sensing methods such as radar, infrared surveillance systems, and CCD cameras [10–12]. However, these methods are often costly, lose sensitivity in bad weather, and are limited to clear line-of-sight conditions, making it difficult to detect a wide range of intrusions. In contrast, fiber optic sensors offer distinct advantages, including high sensitivity, remote sensing capability, and immunity to electromagnetic interference, and as a result they have been successfully applied in intrusion detection systems. Such sensors are typically built on the optical time domain reflectometer (OTDR) [13] or the optical fiber interferometer [14]. The former can detect an intrusion only when the fiber is cut or the reflected light is strong enough to be measured, while the latter often suffers from reliability issues.

Methods using speckle patterns in multimode fibers, which are somewhat immune to these problems, have been introduced for a variety of applications, including intrusion detection [15–19]. These methods have the additional benefits of being cost-effective and compact. This paper presents a deep learning-based weight sensor that uses the speckle pattern of a multimode fiber and demonstrates its applicability to intrusion detection. The detection system integrates the weight sensor with an image classifier built with the Google Teachable Machine [20] and provides real-time intrusion notification via the ThingSpeak™ cloud platform [21], an Internet of Things (IoT) analytics platform developed by MathWorks.

II. PRINCIPLE

When coherent light emitted from a laser enters a multimode fiber, the propagating modes interfere as they travel along the fiber. This produces bright and dark regions on a screen placed at some distance from the fiber's end face, known as an optical speckle pattern. If $M$ is the total number of modes transmitted in the optical fiber, the initial intensity of light $I_0(x, y)$ at a point $(x, y)$ on the screen can be expressed as follows [15, 18]:

$$I_0(x, y) = \left| \sum_{m=1}^{M} a_{0m}(x, y) \exp\left[ j \varphi_{0m}(x, y) \right] \right|^{2},$$

where $a_{0m}(x, y)$ and $\varphi_{0m}(x, y)$ are the amplitude and phase distributions of the $m$-th mode over the $x$-$y$ projection plane, respectively. When external perturbations are applied to the optical fiber, the modified intensity $I(x, y)$ can be expressed as

$$I(x, y) = \left| \sum_{m=1}^{M} \left[ a_{0m}(x, y) + \Delta a_{m}(x, y) \right] \exp\left\{ j \left[ \varphi_{0m}(x, y) + \Delta \varphi_{m}(x, y) \right] \right\} \right|^{2},$$

where $\Delta a_{m}(x, y)$ and $\Delta \varphi_{m}(x, y)$ are the amplitude and phase variations of the $m$-th mode, respectively. Assuming the fiber behaves as a nearly lossless waveguide, the amplitude variation can be taken as $\Delta a_{m}(x, y) \approx 0$. External pressure changes the refractive index or the cross-sectional dimensions of the fiber, altering the optical path lengths of the modes and producing a phase change $\Delta \varphi_{m}(x, y)$; as a result, the speckle pattern changes. However, if external pressure is applied inappropriately, an afterimage may persist even after the pressure is removed. This phenomenon, a form of hysteresis, results from the interaction between the applied pressure and the underlying material.
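To make the role of the phase term concrete, the following toy sketch evaluates the intensity at a single screen point as a coherent sum of M modes and then applies a small random phase perturbation. The mode count, amplitudes, and perturbation size are arbitrary stand-ins, not a physical model of the fiber used in this work.

```typescript
// Toy illustration of the two intensity expressions above: the speckle intensity
// at one screen point is |Σ a_m exp(jφ_m)|², and a small Δφ_m per mode (with
// Δa_m ≈ 0) is enough to change it. All values below are arbitrary stand-ins.
const M = 200;
const a = Array.from({ length: M }, () => Math.random());                  // a_0m
const phi = Array.from({ length: M }, () => 2 * Math.PI * Math.random());  // φ_0m

function intensity(amp: number[], phase: number[]): number {
  let re = 0;
  let im = 0;
  for (let m = 0; m < amp.length; m++) {
    re += amp[m] * Math.cos(phase[m]);
    im += amp[m] * Math.sin(phase[m]);
  }
  return re * re + im * im; // |Σ a exp(jφ)|²
}

const i0 = intensity(a, phi);
// "External pressure": each mode acquires a small extra phase while Δa_m ≈ 0.
const phiPerturbed = phi.map((p) => p + 0.2 * (Math.random() - 0.5));
const i1 = intensity(a, phiPerturbed);
console.log(i0.toFixed(2), i1.toFixed(2)); // the intensity at this point shifts
```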

Two types of optical weight sensors were constructed to investigate the hysteresis characteristics, as depicted in Fig. 1. In Fig. 1(a), the weight is applied directly to the optical fiber through a metal plate. In contrast, in Fig. 1(b), the design lets an elastic plate flex slightly, affecting the optical fiber indirectly without applying direct pressure; the elastic plate is supported at its four corners by pedestals, which maintain a small gap between the plate and the underlying surface. In the direct method of Fig. 1(a), the metal plate had a top surface of 300 × 200 mm, a thickness of 1.8 mm, and a weight of 0.8 kg. In the indirect method of Fig. 1(b), the plate had a top surface of 400 × 400 mm and a thickness of 7.0 mm, and was supported by four pedestals, each 10.0 mm high. The plate is made of polypropylene with an elastic modulus of 16 × 10³ kgf/cm². To investigate hysteresis, a 3.0 kg weight was applied for 10 minutes and then removed to observe whether the optical response returned to its initial state. Figure 2 shows the speckle patterns observed in this experiment, with zero-mean normalized cross-correlation (ZNCC) coefficients of 0.94 and 0.99, respectively. The comparison in Fig. 2 clearly demonstrates that the indirect method of Fig. 1(b) exhibits superior hysteresis performance compared to the direct method of Fig. 1(a).

Figure 1. Two types of optical weight sensors for studying hysteresis characteristics. (a) Direct method, where pressure is applied directly to the multimode fiber through a metal plate, and (b) indirect method, where pressure is applied to an elastic plate and transmitted indirectly to the multimode fiber.

Figure 2. Results of a hysteresis test to measure the return to the initial state after applying a 3 kg weight for 10 minutes: (a) Speckle pattern for the direct method, which has changed greatly (red dotted circles), and (b) speckle pattern for the indirect method, which has changed very little.
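For reference, the hysteresis figure of merit quoted above can be computed as follows. This is a minimal sketch of the ZNCC between a reference speckle image and the image captured after the weight is removed, assuming the two frames are already available as equal-length grayscale arrays.

```typescript
// Zero-mean normalized cross-correlation (ZNCC) between two grayscale speckle
// images of equal size, returned as a value in [-1, 1]. A value close to 1.0
// after the weight is removed indicates low hysteresis.
function zncc(a: Float64Array, b: Float64Array): number {
  if (a.length !== b.length) throw new Error("images must have the same size");
  const n = a.length;
  const meanA = a.reduce((s, v) => s + v, 0) / n;
  const meanB = b.reduce((s, v) => s + v, 0) / n;
  let num = 0;
  let varA = 0;
  let varB = 0;
  for (let i = 0; i < n; i++) {
    const da = a[i] - meanA;
    const db = b[i] - meanB;
    num += da * db;
    varA += da * da;
    varB += db * db;
  }
  return num / Math.sqrt(varA * varB);
}
```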

III. DEEP LEARNING-BASED FIBER OPTIC WEIGHT SENSOR

This paper employs the classification capability of a deep neural network to quantify weight from the optical speckle pattern of a weight sensor constructed with a multimode fiber, as illustrated in Fig. 1(b). While convolutional neural networks (CNNs) such as AlexNet [22], VGG [23], and ResNet [24] are commonly used for pattern classification tasks, the Teachable Machine [20], a web-based real-time pattern-classification tool developed by Google, was used here for its simplicity. The Teachable Machine is a no-code AI training tool based on the MobileNet CNN architecture [25]. It operates in a transfer-learning mode on top of a pre-trained network, which enables rapid training and reasonable results even with limited data. While an ideal weight sensor would require extensive data collection and training, this study illustrates the promise of a simplified deep learning-based weight sensor and its potential utility, including applications in intrusion detection systems.

Figure 3 depicts the experimental setup. A laser beam with a wavelength of 532 nm is launched into the multimode fiber. A thermoelectric cooler (TEC) for temperature compensation and an optical power control circuit stabilize the wavelength and optical power. The laser beam, with a measured power of 32.5 mW, is coupled directly into an optical connector. The beam then traverses a 15-m-long multimode fiber with a 62.5 μm core diameter and passes through the weight-sensor configuration of Fig. 1(b). The optical speckle projected from the output connector end is captured directly by a Raspberry Pi camera with an image size of 2,450 × 2,450 pixels. When supplied as input to the Teachable Machine, each image is automatically resized to 224 × 224 pixels.

Figure 3. Experimental setup of the proposed deep learning-based weight sensor using multimode fiber speckle patterns. TEC: thermoelectric cooler.

For training, 30 speckle images were acquired at each 200 g increment of applied weight from 0.0 to 2.0 kg, giving a total of 330 images across 11 classes. The learning rate was set to 0.01 with 50 epochs, resulting in a final training loss of 0.021. Figure 4(a) presents the confusion matrix, which summarizes the model's prediction accuracy for each class. Figure 4(b) shows the loss per epoch, indicating that the model learns well without underfitting or overfitting. The output of the Teachable Machine is expressed as a percentage of similarity to each of the 11 trained class patterns, indicating the degree to which the input pattern resembles each class.

Figure 4. Confusion matrix and loss-per-epoch plots on the training and test sets. (a) Confusion matrix, which provides a summary of the model's prediction accuracy for each class. (b) Loss per epoch, demonstrating that the model learns effectively without underfitting or overfitting.

To function as a weight sensor, the classified percentage outputs must be converted into a weight value. The predicted weight $W_{\text{pred}}$ is calculated from the class with the highest Teachable Machine output, $O_{i}^{\max}$, and its neighboring classes, $O_{i-1}$ and $O_{i+1}$, as follows:

$$W_{\text{pred}} = \frac{W_{i-1} \times O_{i-1} + W_{i} \times O_{i}^{\max} + W_{i+1} \times O_{i+1}}{O_{i-1} + O_{i}^{\max} + O_{i+1}},$$

where $W_{i}$ is the measured weight corresponding to the $i$-th class. As an illustrative example from Fig. 5, the weight is estimated as $W_{\text{pred}}$ = (0.0 × 9 + 0.2 × 52 + 0.4 × 6) / (9 + 52 + 6) ≈ 0.191 kg.

Figure 5. An example of measuring an arbitrary weight on the fiber optic weight sensor using the Teachable Machine.
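As a minimal sketch of this conversion (the class layout and example percentages are taken from Fig. 5; the function name and other details are assumptions), the weighted average over the top class and its two neighbors can be computed as:

```typescript
// Convert Teachable Machine class percentages into a weight estimate using the
// weighted-average formula above: only the top class and its two neighbors count.
function predictWeight(classWeightsKg: number[], outputsPct: number[]): number {
  const iMax = outputsPct.indexOf(Math.max(...outputsPct));
  const lo = Math.max(0, iMax - 1);
  const hi = Math.min(outputsPct.length - 1, iMax + 1);
  let num = 0;
  let den = 0;
  for (let i = lo; i <= hi; i++) {
    num += classWeightsKg[i] * outputsPct[i];
    den += outputsPct[i];
  }
  return num / den;
}

// Example from Fig. 5: 9%, 52%, and 6% for the 0.0, 0.2, and 0.4 kg classes.
const classWeightsKg = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0];
const outputsPct = [9, 52, 6, 0, 0, 0, 0, 0, 0, 0, 0];
console.log(predictWeight(classWeightsKg, outputsPct).toFixed(3)); // "0.191"
```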

Figure 6 shows the output of the fiber optic weight sensor, obtained by applying weights from 0.0 to 2.0 kg in 100 g increments over five iterations. The trained data points exhibited an average error of 11.4 g, a highly favorable result. In contrast, the untrained points exhibited a larger average error of 243.8 g. Although the output shows some overall consistency, these results suggest that the sensor is not suitable for precise weight measurement. However, because sudden weight changes can still be detected, an intrusion detection system could be implemented, as explained in the next section.

Figure 6. Predicted weight using the proposed weight sensor, obtained by applying weights from 0.0 to 2.0 kg in 100 g increments over five iterations. The solid red circles represent the trained weights, while the blue dotted circles represent the predictions for the untrained weights.

For simple intrusion detection, the system can be trained with just two states: a normal state and an intrusion state. The normal state comprises a limited number of speckle patterns, while the intrusion state can produce countless types of speckle patterns. Although the internal decision process of the Teachable Machine's deep learning is difficult to interpret, all patterns other than the normal speckle patterns are classified as intrusions. Two classes were trained for verification: 30 speckle patterns acquired at 0.0 kg represented the normal class, and 300 speckle patterns randomly selected from weights of 0.1 to 2.0 kg represented the intrusion class. Figure 7 shows the test results obtained with 20 g increments of applied weight from 0.0 to 0.1 kg and 100 g increments from 0.1 to 2.0 kg, over five iterations. Even a slight deviation from 0.0 kg is classified as an intrusion, indicating high sensitivity to minor noise and a potential for false alarms. This issue could be mitigated by slightly expanding the range of weights treated as normal. Additionally, since training the proposed system typically takes less than two minutes, an adaptive system could be implemented using regularly collected speckle patterns of the normal state; further research in this area will be published in the near future.

Figure 7. Output of the intrusion class (%) vs. measured weight (g; log scale), showing the results obtained with 20 g increments of applied weight from 0.0 to 0.1 kg and 100 g increments from 0.1 to 2.0 kg, over five iterations.

IV. REAL-TIME INTRUSION DETECTION

Figure 8(a) depicts the architecture of the real-time intrusion detection system proposed in this paper, which employs the deep learning-based weight sensor. The trained Teachable Machine is exported as a TensorFlow.js model, enabling its integration with Node.js to build a customized weight-sensing system [26]. The measured weight is transmitted to the ThingSpeak™ data collection platform over the Internet. ThingSpeak™ is a cloud-based platform service that collects, visualizes, and analyzes live data streams. The platform can be programmed to send a real-time notification to an individual Pushover account when a predefined threshold is exceeded. Additionally, licensed ThingSpeak™ users can send data once per second.

Figure 8. One potential application of the proposed real-time intrusion detection system. (a) Architecture of the real-time intrusion detection system, (b) data sent to the ThingSpeak™ platform when a 1.83 kg cat crossed the weight sensor, and (c) example of a message sent to a Pushover app.
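For illustration, a model exported this way can be loaded and queried from Node.js roughly as follows. This is a sketch, not the scripts used in this work: the model URL is the export published with this article (see Data Availability), while the image source, the [-1, 1] input normalization, and the class ordering are assumptions.

```typescript
import * as tf from "@tensorflow/tfjs-node";
import { readFileSync } from "node:fs";

// Published Teachable Machine export for this article; the exported folder
// contains model.json, the weight shards, and metadata.json.
const MODEL_URL =
  "https://teachablemachine.withgoogle.com/models/ADgTnzy4i/model.json";

async function classifySpeckle(imagePath: string): Promise<Float32Array> {
  const model = await tf.loadLayersModel(MODEL_URL);
  const input = tf.tidy(() => {
    const img = tf.node.decodeImage(readFileSync(imagePath), 3); // H x W x 3
    const resized = tf.image.resizeBilinear(img as tf.Tensor3D, [224, 224]);
    // Assumed [-1, 1] normalization, as used by the Teachable Machine image library.
    return resized.div(127.5).sub(1).expandDims(0); // 1 x 224 x 224 x 3
  });
  const probs = model.predict(input) as tf.Tensor;
  const scores = (await probs.data()) as Float32Array; // one score per trained class
  tf.dispose([input, probs]);
  return scores;
}
```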

To verify the implemented system, it was checked whether real-time notifications reached the Pushover account when a 1.83 kg cat crossed the weight sensor. The data collected and transmitted to the ThingSpeak™ platform are shown in Fig. 8(b): A sudden change following consecutive small fluctuations around 0.0 kg indicates an intrusion. To guard against malfunctions caused by noise, the system sends an alarm to the Pushover app only when weights exceeding the 500 g threshold are detected cumulatively three times or more. Although a relatively simple rule is employed here, it can be adapted to the purpose of the application. The alarm message sent to the Pushover app is shown in Fig. 8(c).
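A minimal client-side sketch of this reporting loop is shown below. The ThingSpeak REST update endpoint is real, but the write API key is a placeholder, the use of field1 is an assumed channel layout, and the simple counting rule only approximates the alarm logic described above, which in this work is configured on the ThingSpeak/Pushover side.

```typescript
// Push each predicted weight to ThingSpeak and flag an intrusion after three
// or more readings above 500 g. Requires Node 18+ for the global fetch API.
const THINGSPEAK_WRITE_KEY = "YOUR_WRITE_API_KEY"; // placeholder
const THRESHOLD_KG = 0.5;
const REQUIRED_HITS = 3;

let hits = 0;

async function reportWeight(weightKg: number): Promise<void> {
  // ThingSpeak REST update; licensed users may post as often as once per second.
  await fetch(
    `https://api.thingspeak.com/update?api_key=${THINGSPEAK_WRITE_KEY}&field1=${weightKg}`
  );

  if (weightKg > THRESHOLD_KG) hits += 1;
  if (hits >= REQUIRED_HITS) {
    // In the deployed system the notification itself is raised by ThingSpeak
    // and forwarded to the Pushover app once the rule is satisfied.
    console.log(`Intrusion suspected: ${weightKg.toFixed(2)} kg`);
    hits = 0;
  }
}
```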

V. CONCLUSION

The Google Teachable Machine applied to the weight sensor in this paper is based on transfer learning with MobileNet. Transfer learning reuses a pre-trained model for a new problem, allowing new patterns to be learned rapidly; however, it may not generalize well to patterns that are dissimilar to those used for pre-training. A notable discrepancy was indeed observed for the untrained weights, as shown in Fig. 6, although the predictions did not deviate completely from the applied weights. Based on these results, while the system is difficult to use as an accurate weight sensor, it demonstrated potential as a real-time intrusion detection system. ThingSpeak™, a cloud-based IoT analytics platform, was used for real-time intrusion detection, and real-time intrusion alerts to the user's Pushover app were successfully implemented.

Implementing an accurate weight sensor would require a substantial quantity of sample data and training a deep learning model from scratch. For this purpose, the vision transformer (ViT) [27], which has recently received considerable attention, appears to be a promising candidate for attaining satisfactory performance.

Acknowledgments

The author would like to thank Prof. Chang-Soo Park of GIST for his helpful advice.

FUNDING

This work was supported by Kyungnam University Foundation Grant 2021.

DISCLOSURES

The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

DATA AVAILABILITY

All data generated or analyzed during this study are included in this published article. The Teachable Machine model that supports the findings of this study is available at https://teachablemachine.withgoogle.com/models/ADgTnzy4i/.

References

1. S. Zheng, M. Liao, F. Wang, W. He, X. Peng, and G. Situ, "Non-line-of-sight imaging under white-light illumination: A two-step deep learning approach," Opt. Express 29, 40091-40105 (2021).
2. F. Willomitzer, P. V. Rangarajan, F. Li, M. M. Balaji, M. P. Christensen, and O. Cossairt, "Fast non-line-of-sight imaging with high-resolution and wide field of view using synthetic wavelength holography," Nat. Commun. 12, 6647 (2021).
3. A. Esteva, A. Robicquet, B. Ramsundar, V. Kuleshov, M. DePristo, K. Chou, C. Cui, G. Corrado, S. Thrun, and J. Dean, "A guide to deep learning in healthcare," Nat. Med. 25, 24-29 (2019).
4. B. Rahmani, D. Loterie, G. Konstantinou, D. Psaltis, and C. Moser, "Multimode optical fiber transmission with a deep learning network," Light: Sci. Appl. 7, 69 (2018).
5. C. Zhu, E. A. Chan, Y. Wang, W. Peng, R. Guo, B. Zhang, C. Soci, and Y. Chong, "Image reconstruction through a multimode fiber with a simple neural network architecture," Sci. Rep. 11, 896 (2021).
6. T. Pan, J. Ye, H. Liu, F. Zhang, P. Xu, O. Xu, Y. Xu, and Y. Qin, "Non-orthogonal optical multiplexing empowered by deep learning," Nat. Commun. 15, 1580 (2024).
7. N. H. Al-Ashwal, K. Soufy, M. E. Hamza, and M. A. Swillam, "Deep learning for optical sensor applications: A review," Sensors 23, 6486 (2023).
8. K. Wang, Y. Mizuno, X. Dong, W. Kurz, M. Köhler, P. Kienle, H. Lee, M. Jakobi, and A. W. Koch, "Multimode optical fiber sensors: From conventional to machine learning-assisted," Meas. Sci. Technol. 35, 022002 (2023).
9. A. Venketeswaran, N. Lalam, J. Wuenschell, P. R. Ohodnicki Jr., M. Badar, K. P. Chen, P. Lu, Y. Duan, B. Chorpening, and M. Buric, "Recent advances in machine learning for fiber optic sensor applications," Adv. Intell. Syst. 4, 2100067 (2022).
10. R. Dulski, M. Kastek, P. Trzaskawka, T. Piątkowski, M. Szustakowski, and M. Życzkowski, "Concept of data processing in multi-sensor system for perimeter protection," Proc. SPIE 8019, 80190X (2011).
11. X. Yang, F. Zhang, Y. He, P. Liang, and J. Yang, "Human intrusion detection system using mm wave radar," in Proc. 3rd International Symposium on Computer Technology and Information Science-ISCTIS (Chengdu, China, Jul. 7-9, 2023), pp. 904-911.
12. M. N. Uddin and H. Nyeem, "Engineering a multi-sensor surveillance system with secure alerting for next-generation threat detection and response," Results Eng. 22, 101984 (2024).
13. Y. Zhu, J. Li, Q. Wang, C. Yu, L. Tang, and Y. Bai, "Intrusion detection by optical fiber in windy conditions," IEICE Electron. Express 19, 20220098 (2022).
14. H. Hsieh, K.-S. Hsu, T.-L. Jong, and L. Wang, "Multi-zone fiber-optic intrusion detection system with active unbalanced Michelson interferometer used for security of each defended zone," IEEE Sensors J. 20, 1607-1618 (2020).
15. A. Dhall, J. K. Chhabra, and N. S. Aulakh, "Intrusion detection system based on speckle pattern analysis," Exp. Tech. 29, 25-31 (2006).
16. M. J. Murray, A. Davis, C. Kirkendall, and B. Redding, "Speckle-based strain sensing in multimode fiber," Opt. Express 27, 28494-28506 (2019).
17. A. R. Cuevas, M. Fontana, L. Rodriguez-Cobo, M. Lomer, and J. M. López-Higuera, "Machine learning for turning optical fiber specklegram sensor into a spatially-resolved sensing system. Proof of concept," J. Light. Technol. 36, 3733-3788 (2018).
18. E. Fujiwara, L. E. da Silva, T. D. Cabral, H. E. de Freitas, Y. T. Wu, and C. M. de B. Cordeiro, "Optical fiber specklegram chemical sensor based on a concatenated multimode fiber structure," J. Light. Technol. 19, 5041-5047 (2019).
19. D. Bender, U. Çakır, and E. Yüce, "Deep learning-based fiber bending recognition for sensor applications," IEEE Sensors J. 23, 6956-6962 (2023).
20. Google Creative Lab, "Teachable Machine," (Experiments with Google, Published date: Nov. 2019), https://experiments.withgoogle.com/teachable-machine (Accessed date: Apr. 29, 2024).
21. MathWorks, "ThingSpeak," (MathWorks), https://kr.mathworks.com/products/thingspeak.html (Accessed date: Apr. 29, 2024).
22. A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Advances in Neural Information Processing Systems 25-NIPS 2012 (Lake Tahoe, CA, USA, Dec. 3-8, 2012), pp. 1097-1105.
23. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. International Conference on Learning Representations-ICLR 2015 (San Diego, CA, USA, 2015).
24. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conference on Computer Vision and Pattern Recognition-CVPR 2016 (Las Vegas, NV, USA, Jun. 27-30, 2016), pp. 770-778.
25. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition-CVPR 2018 (Salt Lake City, UT, USA, Jun. 18-23, 2018), pp. 4510-4520.
26. I. Russeva, "How to load a Teachable Machine image model in a Node.JS project," (SashiDo Co., Published date: Nov. 4, 2020), https://blog.sashido.io/how-to-load-a-teachable-machine-image-model-in-a-node-js-project/ (Accessed date: Apr. 29, 2024).
27. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, "An image is worth 16 × 16 words: Transformers for image recognition at scale," in Proc. International Conference on Learning Representations-ICLR 2021 (Virtual Event, Austria, 2021).
