
Action recognition using interrelationships of 3D joints and frames based on angle sine relation and distance features using interrelationships


Abstract

Human action recognition remains a challenging computer vision problem that calls for a robust action descriptor. As a solution, we propose an action recognition descriptor that uses only the 3D skeleton joint positions. Interrelationships between joints and between frames form the backbone of the approach: many joints are related to one another, and frames depend on other frames, while an action sequence is performed. Spatial information about the joint positions is computed from angle, sine-relation, and distance features, while temporal information is estimated from frame-frame relations; because the Angle, Sine-relation, and Distance features are extracted from the interrelationships of joints and frames, we call the descriptor ASD-R. Experiments on four publicly available databases, i.e., the MSR Daily Activity 3D Dataset, the UTD Multimodal Human Action Dataset, the KARD (Kinect Activity Recognition) Dataset, and the SBU Kinect Interaction Dataset, show that the proposed descriptor outperforms state-of-the-art approaches on all four datasets. This is achieved by accurately capturing the spatial and temporal information of the joint positions. A Support Vector Machine classifier then uses the proposed descriptor to identify the correct class precisely.
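To make the descriptor concrete, the sketch below illustrates how angle, sine-relation, and distance features over joint triplets and pairs, together with frame-frame differences, could be computed and pooled into a fixed-length vector for an SVM. This is a minimal sketch, not the paper's exact formulation: the joint index sets (`TRIPLETS`, `PAIRS`), the numerical guard, the temporal pooling, and the function name `asd_r_features` are all illustrative assumptions.

```python
# Minimal sketch of ASD-R-style features; the joint triplets/pairs, the
# 1e-8 guard, and the temporal pooling are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC

TRIPLETS = [(0, 1, 2), (1, 2, 3)]   # (a, b, c): angle/sine measured at joint b
PAIRS = [(0, 3), (1, 2)]            # joint pairs used for distance features

def asd_r_features(seq):
    """seq: (T, J, 3) array of 3D joint positions over T frames."""
    seq = np.asarray(seq, dtype=float)
    per_frame = []
    for frame in seq:
        f = []
        for a, b, c in TRIPLETS:
            u = frame[a] - frame[b]
            v = frame[c] - frame[b]
            u /= np.linalg.norm(u) + 1e-8   # normalize limb vectors
            v /= np.linalg.norm(v) + 1e-8
            f.append(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))  # angle
            f.append(np.linalg.norm(np.cross(u, v)))               # sine relation
        for i, j in PAIRS:
            f.append(np.linalg.norm(frame[i] - frame[j]))          # distance
        per_frame.append(f)
    per_frame = np.asarray(per_frame)
    temporal = np.diff(per_frame, axis=0)   # frame-frame (temporal) relation
    # Pool over time so sequences of any length give a fixed-size descriptor.
    return np.concatenate([per_frame.mean(0), per_frame.std(0),
                           temporal.mean(0), temporal.std(0)])

# Usage: X = np.stack([asd_r_features(s) for s in train_seqs]); SVC().fit(X, y)
```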




Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. WK2350000002).

Author information


Corresponding author

Correspondence to Zhongfu Ye.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Islam, M.S., Bakhat, K., Khan, R. et al. Action recognition using interrelationships of 3D joints and frames based on angle sine relation and distance features using interrelationships. Appl Intell 51, 6001–6013 (2021). https://doi.org/10.1007/s10489-020-02176-3

