
MHAD: Multi-Human Action Dataset

  • Omar Elharrouss (corresponding author)
  • Noor Almaadeed
  • Somaya Al-Maadeed
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1041)

Abstract

This paper presents a framework for multi-human action recognition. Within this framework, we introduce a new approach for detecting and recognizing the actions of several persons within a single scene. Given the scarcity of related data, we also provide a new dataset in which multiple persons perform different actions in the same video. Our multi-action recognition method is based on a three-dimensional convolutional neural network (3DCNN) and includes a preprocessing phase that prepares the data for recognition by the 3DCNN model. This representation consists of extracting each person's frame sequence for the duration of their presence in the scene; each sequence is then analyzed to detect the actions it contains. The experimental results show that the method is accurate, efficient, and robust for real-time multi-human action recognition.
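The pipeline described above (extract a per-person frame sequence, then classify it with a 3D CNN) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes PyTorch, and the Simple3DCNN class, layer sizes, clip length (16 frames), and crop size (112 x 112) are arbitrary choices made for illustration.

# Minimal sketch of per-person action classification with a small 3D CNN.
# Assumes PyTorch; architecture and shapes are illustrative, not the paper's model.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Illustrative 3D CNN; input shape (batch, 3, T=16, H=112, W=112)."""
    def __init__(self, num_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool spatially only at first
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),           # pool over time and space
            nn.AdaptiveAvgPool3d(1),               # global pooling -> (B, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_actions)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        x = self.features(clips).flatten(1)        # (B, 64)
        return self.classifier(x)                  # (B, num_actions)

# One clip per person present in the scene, e.g. 16 cropped frames per person.
model = Simple3DCNN(num_actions=6)
person_clips = torch.randn(2, 3, 16, 112, 112)     # 2 persons in the same video
action_scores = model(person_clips)                # one row of class scores per person

In practice, the per-person clips would be produced by the preprocessing phase described in the abstract, which isolates each person's sequence for the time they appear in the scene before classification.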

Keywords

Human action recognition · Multi-human action recognition · Convolutional neural network (CNN) · Video surveillance

Acknowledgements

This publication was made possible by NPRP Grant # NPRP8-140-2-065 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Omar Elharrouss (1) (corresponding author)
  • Noor Almaadeed (1)
  • Somaya Al-Maadeed (1)

  1. Department of Computer Science and Engineering, Qatar University, Doha, Qatar