Activity Recognition for Elderly Care by Evaluating Proximity to Objects and Human Skeleton Data

  • Julia Richter
  • Christian Wiede
  • Enes Dayangac
  • Ahsan Shahenshah
  • Gangolf Hirtz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10163)


Recently, researchers have shown an increased interest in the detection of activities of daily living (ADLs) for ambient assisted living (AAL) applications. In this study, we present an algorithm that detects activities related to personal hygiene. The approach is based on the evaluation of pose information and a person's proximity to objects belonging to the typical equipment of bathrooms, such as the sink, toilet and shower. In addition to this high-level reasoning, we developed a skeleton-based algorithm that recognises actions using a supervised learning model. To this end, we analysed several feature vectors, especially with regard to the representation of joint trajectories in the frequency domain. The results give evidence that the high-level reasoning algorithm reliably recognises hygiene-related activities. An evaluation of the skeleton-based algorithm shows that the defined actions were classified with a rate of 96.66%.
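The paper itself provides no code, but the abstract's idea of representing joint trajectories in the frequency domain can be illustrated with a short sketch. The function name, the number of retained coefficients, and the input layout below are our own assumptions, not the authors' implementation; the sketch only shows the general technique of turning variable-length skeleton sequences into fixed-length spectral feature vectors for a supervised classifier.

```python
import numpy as np

def trajectory_features(joints, n_coeffs=8):
    """Hypothetical sketch: describe each joint trajectory by the
    magnitudes of its lowest DFT coefficients.

    `joints` has shape (frames, num_joints, 3): one 3-D position per
    joint per frame, as delivered by a skeleton tracker.
    """
    # Flatten to one time series per joint coordinate: (frames, num_joints * 3)
    t = joints.reshape(joints.shape[0], -1)
    # Subtract the temporal mean so the features describe motion,
    # not the person's absolute position in the room
    t = t - t.mean(axis=0)
    # Real DFT along the time axis; keep the first n_coeffs magnitudes
    # per channel, which yields a fixed-length vector per sequence
    spectrum = np.abs(np.fft.rfft(t, axis=0))[:n_coeffs]
    return spectrum.flatten()

# Example: a synthetic 30-frame recording of a 15-joint skeleton
rng = np.random.default_rng(0)
features = trajectory_features(rng.standard_normal((30, 15, 3)))
print(features.shape)
```

Because the output length depends only on `n_coeffs` and the number of joints, sequences of different durations map to comparable feature vectors, which is what makes such spectral descriptors convenient inputs for a supervised learning model.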


Video analysis · 3-D image processing · Activity recognition · Machine learning · Pose estimation · High-level reasoning · Ambient assisted living



This project is funded by the European Social Fund (ESF). We would furthermore like to thank everyone who contributed to this project during the recordings.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Julia Richter (1, corresponding author)
  • Christian Wiede (1)
  • Enes Dayangac (1)
  • Ahsan Shahenshah (1)
  • Gangolf Hirtz (1)

  1. Department of Electrical Engineering and Information Technology, Technische Universität Chemnitz, Chemnitz, Germany
