Actions in Context: System for People with Dementia

  • Conference paper
Citizen in Sensor Networks (CitiSens 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8313)

Abstract

In the next forty years, the number of people living with dementia is expected to triple. In the later stages of the disease, patients become dependent, which limits their autonomy and carries a huge social cost in time, money, and effort. Given this scenario, we propose a ubiquitous system capable of recognizing specific daily actions. The system fuses and synchronizes data obtained from two complementary modalities: ambient and egocentric. The ambient approach consists of a fixed RGB-Depth camera used for user recognition, object recognition, and user-object interaction detection, whereas the egocentric point of view is provided by a personal area network (PAN) formed by a few wearable sensors and a smartphone, used for gesture recognition. The system processes multi-modal data in real time, performing task recognition and modality synchronization in parallel, and shows high performance in recognizing subjects, objects, and interactions, demonstrating its reliability for real-world scenarios.
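
The egocentric gesture-recognition step rests on dynamic time warping, as the cited DTW literature suggests ([11, 13–15]). Since the paper does not publish its implementation, the following is only a minimal illustrative sketch of nearest-template DTW classification over 1-D signals; the function names and toy templates are hypothetical:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_gesture(sample, templates):
    """Return the label of the template closest to `sample` under DTW."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```

In the actual system each template would be a multi-dimensional stream of wearable-sensor readings, and a rejection threshold would separate meaningful gestures from background motion; both are omitted here for brevity.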

Notes

  1. Note that, owing to the enrichment provided by range data, a single Gaussian model suffices for background modeling; only very small improvements have been observed with Gaussian Mixture Models in the studied environments.

  2. The extension to a small set of gestures of interest can easily be achieved without a significant loss in performance [11].

  3. Notice that the drinking action is not detected because the system is sensitive to hand orientation.
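
Note 1 above summarizes the ambient modality's background model. A per-pixel single-Gaussian depth background model along those lines might be sketched as follows; this is an assumed minimal implementation, not the authors' code, and `fit_background`, `foreground_mask`, and the 3-sigma threshold are hypothetical choices:

```python
import numpy as np

def fit_background(depth_frames):
    """Fit a per-pixel Gaussian (mean, std) over a stack of depth frames."""
    stack = np.stack(depth_frames).astype(float)
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0) + 1e-6  # avoid zero variance on static pixels
    return mu, sigma

def foreground_mask(depth, mu, sigma, k=3.0):
    """Mark a pixel as foreground when its depth deviates more than k sigma."""
    return np.abs(depth - mu) > k * sigma
```

A Gaussian Mixture Model would replace the single (mu, sigma) pair per pixel with several weighted components, which, per Note 1, brings only marginal gains once depth information is available.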

References

  1. Stone, E.E., Skubic, M.: Evaluation of an inexpensive depth camera for passive in-home fall risk assessment. In: Pervasive Computing Technologies for Healthcare (PervasiveHealth), pp. 71–11 (2011)

  2. Zhang, C., Tian, Y., Capezuti, E.: Privacy preserving automatic fall detection for elderly using RGBD cameras. In: Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W. (eds.) ICCHP 2012, Part I. LNCS, vol. 7382, pp. 625–633. Springer, Heidelberg (2012)

  3. Banerjee, T., Keller, J., Skubic, M., Stone, E.E.: Day or night activity recognition from video using fuzzy clustering techniques. IEEE Trans. Fuzzy Syst. (2013)

  4. Shotton, J., Fitzgibbon, A., Cook, M., et al.: Real-time human pose recognition in parts from single depth images. In: CVPR, pp. 1297–1304 (2011)

  5. Escalera, S.: Human behavior analysis from depth maps. In: Articulated Motion and Deformable Objects (AMDO 2012), pp. 282–292 (2012)

  6. Clapés, A., Reyes, M., Escalera, S.: Multi-modal user identification and object recognition surveillance system. Pattern Recogn. Lett. 34(7), 799–808 (2013)

  7. Rusu, R.B., Blodow, N., Beetz, M.: Fast point feature histograms (FPFH) for 3D registration. In: The IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan (2009)

  8. Felzenszwalb, P.F., McAllester, D.A., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: CVPR, pp. 1–8 (2008)

  9. Ermes, M., Pärkkä, J., Mäntyjärvi, J., Korhonen, I.: Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions. TITB 12(1), 20–26 (2008)

  10. Ouchi, K., Suzuki, T., Doi, M.: A wearable healthcare support system using user’s context. In: Distributed Computing Systems, pp. 791–792 (2002)

  11. Lichtenauer, J., Hendriks, E., Reinders, M.: Sign language recognition by combining statistical DTW and independent classification. IEEE Trans. Pattern Anal. Mach. Intell. 30(11), 2040–2046 (2008)

  12. Jiang, S., Cao, Y., Iyengar, S., et al.: CareNet: an integrated wireless sensor networking environment for remote healthcare. In: Body Area Networks, pp. 9:1–9:3 (2010)

  13. Vintsyuk, T.K.: Speech discrimination by dynamic programming. Kibernetika 4, 81–88 (1968)

  14. Ko, M.H., West, G., Venkatesh, S., Kumar, M.: Online context recognition in multisensor systems using dynamic time warping. In: ISSNIP, pp. 283–288 (2005)

  15. Sakoe, H., Chiba, S.: Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 26(1), 43–49 (1978)

  16. Pansiot, J., Stoyanov, D., et al.: Ambient and wearable sensor fusion for activity recognition in healthcare monitoring systems. In: 4th International Workshop on Wearable and Implantable Body Sensor Networks, pp. 208–212 (2007)

  17. Stiefmeier, T., Ogris, G., Junker, H., Lukowicz, P., Tröster, G.: Combining motion sensors and ultrasonic hand tracking for continuous activity recognition in a maintenance scenario. In: 10th IEEE International Symposium on Wearable Computers, pp. 97–104 (2006)

  18. You, S., Neumann, U.: Fusion of vision and gyro tracking for robust augmented reality registration. In: Virtual Reality, pp. 71–78 (2001)

  19. Zhu, C., Sheng, W.: Motion- and location-based online human daily activity recognition. Perv. Mob. Comput. 7, 256–269 (2011)

Acknowledgments

This work has been partly supported by RECERCAIXA 2011 Ref. REMEDI and TIN2009-14404-C02.

Author information

Correspondence to Àlex Pardo.

Copyright information

© 2014 Springer International Publishing Switzerland

Cite this paper

Pardo, À., Clapés, A., Escalera, S., Pujol, O. (2014). Actions in Context: System for People with Dementia. In: Nin, J., Villatoro, D. (eds) Citizen in Sensor Networks. CitiSens 2013. Lecture Notes in Computer Science, vol 8313. Springer, Cham. https://doi.org/10.1007/978-3-319-04178-0_1

  • DOI: https://doi.org/10.1007/978-3-319-04178-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-04177-3

  • Online ISBN: 978-3-319-04178-0
