Accelerating Video Processing Inside Embedded Devices to Count Mobility Actors

  • Andrés Heredia
  • Gabriel Barros-Gavilanes
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1096)

Abstract

The growing number of surveillance cameras and the variety of methods for counting vehicles raise the question: what is the best place to process video flows? This work analyzes techniques to accelerate a counting system for mobility actors such as cars, pedestrians, motorcycles, bicycles, buses, and trucks in the context of an Edge computing application using deep learning. To address this problem, the study presents the analysis and implementation of several techniques based on an additional hardware element, a Vision Processing Unit (VPU), in combination with methods that adjust the resolution, bit rate, and processing time of the video. For this purpose we consider the MobileNet-SSD model with two approaches: a model pre-trained on well-known data sets and a model trained on images from our specific scenarios. Additionally, we compare a model optimized with the OpenVINO toolkit and hardware overclocking. The MobileNet-SSD model yields different results in terms of accuracy and video processing time in the system. Results show that an embedded device combined with a VPU and video processing techniques reaches 18.62 Frames per Second (FPS). Thus, the video processing time (5.63 min) is only slightly longer than the duration of a 5-min video. The optimized model and overclocking also show improvements. Recall and precision values of 91% and 97%, respectively, are reported in the best case (class car) for the vehicle counting system.
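The abstract describes offloading MobileNet-SSD inference from the embedded device to a VPU through the OpenVINO toolkit. The snippet below is a minimal sketch (not the authors' code) of how such a detection stage is commonly wired with OpenCV's DNN module: the model paths, video file name, confidence threshold, and the per-frame car count are illustrative assumptions, and class index 7 assumes the PASCAL VOC label map used by common MobileNet-SSD releases.

import cv2

# Illustrative paths: a MobileNet-SSD model converted to OpenVINO IR format.
MODEL_XML = "mobilenet-ssd.xml"
MODEL_BIN = "mobilenet-ssd.bin"
CONF_THRESHOLD = 0.5   # assumed detection confidence threshold
CAR_CLASS_ID = 7       # "car" in the PASCAL VOC label map (assumed)

# Load the network and route inference to the Myriad VPU through the
# OpenVINO (Inference Engine) backend of OpenCV's DNN module.
net = cv2.dnn.readNet(MODEL_XML, MODEL_BIN)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

cap = cv2.VideoCapture("traffic.mp4")   # hypothetical test video
frames, cars_per_frame = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    # MobileNet-SSD expects 300x300 inputs scaled to roughly [-1, 1].
    blob = cv2.dnn.blobFromImage(frame, scalefactor=0.007843,
                                 size=(300, 300), mean=127.5)
    net.setInput(blob)
    out = net.forward()                  # shape: (1, 1, N, 7)
    cars = [d for d in out[0, 0]
            if d[2] > CONF_THRESHOLD and int(d[1]) == CAR_CLASS_ID]
    cars_per_frame.append(len(cars))

cap.release()
print(f"Processed {frames} frames; "
      f"mean cars detected per frame: {sum(cars_per_frame) / max(frames, 1):.2f}")

A complete counting system would additionally track detections across frames so that each mobility actor is counted only once as it crosses a virtual counting line; the sketch above covers only the detection and VPU-offload stage discussed in the abstract.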

Keywords

Computer vision · Raspberry Pi · MobileNet · Single shot detection · Convolutional Neural Network · Vision Processing Unit · OpenVINO · Overclock


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. LIDI, Universidad del Azuay, Cuenca, Ecuador
  2. tivo.ec Research, Cuenca, Ecuador