Signal, Image and Video Processing, Volume 13, Issue 7, pp 1431–1439

Comprehensive evaluation of skeleton features-based fall detection from Microsoft Kinect v2

  • Mona Saleh Alzahrani
  • Salma Kammoun Jarraya
  • Hanêne Ben-Abdallah
  • Manar Salamah Ali
Original Paper

Abstract

Most computer vision applications for human activity recognition exploit the fact that body features computed from a 3D skeleton are more robust across persons and can lead to higher performance. However, their success in activity recognition, including fall detection, depends on the correspondence between the human activities and the joint/part features used. To establish this correspondence, we experimentally evaluate in this paper skeleton features-based fall detection by comparing fall detection performance for different combinations of skeleton features used in previous related works. We determine the skeleton features that best distinguish fall from non-fall frames, as well as the best performing classifier. To this end, we followed the classical five steps of supervised machine learning: (1) we collected learning data composed of 42 fall and 37 non-fall videos from FallFree; (2) we extracted and (3) preprocessed the skeleton data of the training set; (4) we extracted each possible skeleton feature; and finally (5) we evaluated all extracted and selected features in two main experiments, one of which was based on neighborhood component analysis (NCA). This evaluation shows that fall detection based on skeleton features achieves very encouraging accuracy that varies with the features used. More specifically, we recommend the following features: the 12 features selected by the NCA experiment, the original and normalized distance from the Kinect, and the seven features of the upper body part. These feature sets ranked 1st, 2nd, 4th, and 8th among the 22 evaluated feature sets, with accuracies of 99.5%, 99.4%, 97.8%, and 94.5%, respectively. In addition, random forest is the best performing classifier.
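To make the recommended distance features concrete, the sketch below computes a joint's distance from the Kinect sensor and a normalized variant from 3D skeleton coordinates. This is a minimal stdlib-only illustration, not the authors' implementation: the sample joint positions are made up, and normalizing by torso height is one plausible reading of "normalized distance"; the paper's exact normalization may differ.

```python
import math

# Hypothetical skeleton frame: joint name -> (x, y, z) in Kinect v2 camera
# space (metres, sensor at the origin). Joint names follow the Kinect v2
# JointType enumeration; the coordinate values are invented sample data.
frame = {
    "SpineMid": (0.1, 0.2, 2.0),
    "Head":     (0.1, 0.7, 2.0),
}

def distance_from_kinect(joint):
    """Euclidean distance of a joint from the sensor origin."""
    x, y, z = joint
    return math.sqrt(x * x + y * y + z * z)

def normalized_distance(joint, torso_height):
    """Distance scaled by a body measurement (here, torso height) --
    a hypothetical normalization to reduce inter-person variation."""
    return distance_from_kinect(joint) / torso_height

# Torso height approximated as the vertical Head-to-SpineMid span
# (0.5 m for this sample frame).
torso = abs(frame["Head"][1] - frame["SpineMid"][1])
d = distance_from_kinect(frame["SpineMid"])
d_norm = normalized_distance(frame["SpineMid"], torso)
```

Per-frame features like `d` and `d_norm` would then be fed, frame by frame, to a classifier such as the random forest the paper reports as best performing.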


Keywords: Fall detection · Skeleton features · Feature selection · Kinect v2 · Neighborhood component feature selection


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. College of Computer and Information Sciences, Jouf University, Sakaka, Saudi Arabia
  2. Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
  3. Higher Colleges of Technology, Dubai, UAE
  4. MIRACL, Sfax, Tunisia
