OS-ELM-FPGA: An FPGA-Based Online Sequential Unsupervised Anomaly Detector

  • Mineto Tsukada
  • Masaaki Kondo
  • Hiroki Matsutani

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11339)

Abstract

Autoencoder, a neural-network-based dimensionality reduction algorithm, has demonstrated its effectiveness in anomaly detection. It can detect whether an input sample is normal or abnormal after being trained only on normal data. In general, Autoencoder is built on backpropagation-based neural networks (BP-NNs). When BP-NNs are implemented on edge devices, they are typically specialized for prediction only, with weight matrices precomputed offline, because of the high computational cost of training. Such devices therefore cannot immediately adapt to time-series trend changes in the input data. In this paper, we propose an FPGA-based unsupervised anomaly detector, called OS-ELM-FPGA, that combines Autoencoder with OS-ELM, an online sequential learning algorithm. Based on our theoretical analysis of the algorithm, the proposed OS-ELM-FPGA completely eliminates matrix pseudoinversion while improving the learning throughput. Simulation results using open-source datasets show that OS-ELM-FPGA achieves favorable anomaly detection accuracy compared with CPU and GPU implementations of BP-NNs. The learning throughput of OS-ELM-FPGA is 3.47x to 27.99x higher than that of a CPU implementation of OS-ELM and 5.22x to 78.06x higher than that of a GPU implementation. It is also 3.62x to 36.15x and 1.53x to 43.44x higher than those of CPU and GPU implementations of BP-NNs, respectively.
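The combination described in the abstract — an Autoencoder trained with OS-ELM's sequential update, scoring samples by reconstruction error — can be sketched in NumPy as below. This is a minimal software illustration under stated assumptions: the class name, hidden size, and the small ridge term in the initial phase are inventions of this sketch, not details from the paper, which targets a hardware (FPGA) implementation. The key property the paper exploits is visible in `seq_train`: when the batch size is fixed to one, the term OS-ELM must invert, (1 + h P hᵀ), is a scalar, so the online phase needs no matrix (pseudo)inversion.

```python
import numpy as np

class OSELMAutoencoder:
    """Sketch of an OS-ELM-trained autoencoder anomaly detector.

    Hyperparameters (hidden size, ridge term, seed) are illustrative
    choices for this sketch, not values from the paper.
    """

    def __init__(self, n_input, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Input weights and biases are random and stay fixed (ELM property);
        # only the output weights beta are learned.
        self.W = rng.uniform(-1.0, 1.0, (n_input, n_hidden))
        self.b = rng.uniform(-1.0, 1.0, n_hidden)

    def _hidden(self, X):
        # Sigmoid hidden-layer activations G(X W + b).
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def init_train(self, X0):
        # Initial batch: least-squares solve for beta, with targets equal
        # to the inputs (autoencoder). The ridge term 1e-3 is added for
        # numerical stability; it is an assumption of this sketch.
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ X0

    def seq_train(self, x):
        # One-sample sequential update. With batch size 1, the inverted
        # term (1 + h P h^T) is a scalar, so no matrix inversion is
        # needed online.
        x = x.reshape(1, -1)
        h = self._hidden(x)
        Ph = self.P @ h.T  # P is symmetric, so (h P)^T = P h^T
        self.P -= (Ph @ Ph.T) / (1.0 + float(h @ Ph))
        self.beta += self.P @ h.T @ (x - h @ self.beta)

    def score(self, x):
        # Anomaly score: mean squared reconstruction error.
        x = x.reshape(1, -1)
        return float(np.mean((x - self._hidden(x) @ self.beta) ** 2))
```

After `init_train` on a small batch of normal data, the detector keeps calling `seq_train` on incoming normal samples, so it can track gradual trend changes; a sample whose `score` exceeds a chosen threshold is flagged as anomalous.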

Notes

Acknowledgements

This work was supported by JST CREST Grant Number JPMJCR1785, Japan.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Mineto Tsukada (1)
  • Masaaki Kondo (2)
  • Hiroki Matsutani (1)

  1. Keio University, Kohoku-ku, Yokohama, Japan
  2. The University of Tokyo, Tokyo, Japan
