Encapsulated Features with Multi-objective Deep Belief Networks for Action Classification

  • Paul T. Sheeba
  • S. Murugan
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1040)


Human action classification is a challenging problem in robotics and other human–computer interaction systems, with applications in crime analysis, security tasks, and human support systems. The main purpose of this work is to design and implement a system that classifies human actions in videos using encapsulated features and a multi-objective deep belief network (MODBN). Encapsulated features combine space–time interest points, shape, and a coverage factor. First, frames containing actions are separated from the input videos using the structural similarity (SSIM) measure. Then, spatiotemporal interest points, shape, and the coverage factor are extracted and combined to form the encapsulated features. To improve classification accuracy, the MODBN classifier is designed by combining the multi-objective dragonfly algorithm with a deep belief network. The Weizmann and KTH datasets are used to evaluate the MODBN classifier, with accuracy, sensitivity, and specificity as performance measures. The proposed classifier with encapsulated features achieves 99% accuracy, 97% sensitivity, and 95% specificity.
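The SSIM-based frame-separation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a simplified global (non-windowed) SSIM and an assumed threshold, and the function names are illustrative.

```python
import numpy as np


def ssim_global(a, b, data_range=255.0, k1=0.01, k2=0.03):
    """Simplified global (non-windowed) SSIM between two grayscale frames.

    Standard SSIM is computed over local windows; a single global value
    is enough to illustrate the frame-selection idea.
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (k1 * data_range) ** 2  # stabilizing constants from the SSIM formula
    c2 = (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )


def select_action_frames(frames, threshold=0.9):
    """Keep indices of frames whose SSIM with the previous frame falls
    below the threshold, i.e. frames where the scene changed enough to
    suggest motion (the threshold here is an assumption)."""
    keep = []
    for i in range(1, len(frames)):
        if ssim_global(frames[i - 1], frames[i]) < threshold:
            keep.append(i)
    return keep
```

Frames that are nearly identical to their predecessor score close to 1 and are discarded; a large appearance change drives the score toward 0, flagging the frame as an action candidate.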


Keywords: Action recognition · SSIM · STI · DBN · DA · MODBN
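The evaluation metrics reported in the abstract follow the usual confusion-matrix definitions. A minimal sketch (the function name is illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), and specificity from
    true/false positive and negative counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of actual positives found
    specificity = tn / (tn + fp)   # fraction of actual negatives found
    return accuracy, sensitivity, specificity
```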



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Faculty of Computer Science & Engineering, Sathyabama Institute of Science and Technology, Chennai, India
  2. Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India
