The MOBOT Platform – Showcasing Multimodality in Human-Assistive Robot Interaction

  • Eleni Efthimiou (email author)
  • Stavroula-Evita Fotinea
  • Theodore Goulas
  • Athanasia-Lida Dimou
  • Maria Koutsombogera
  • Vassilis Pitsikalis
  • Petros Maragos
  • Costas Tzafestas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9738)

Abstract

Acquisition and annotation of a multimodal-multisensory data set of interactions between users, carers and a passive rollator have enabled the analysis of related human behavioural patterns and the definition of the MOBOT human-robot communication model. The MOBOT project envisions the development of cognitive robotic assistant prototypes that act proactively, adaptively and interactively with respect to elderly users with mild walking and cognitive difficulties. To meet the project’s goals, a multimodal action recognition system is being developed to monitor, analyse and predict user actions with a high level of accuracy and detail. In the same framework, the analysis of the human behaviour data made available through the project’s multimodal-multisensory corpus has led to the modelling of human-robot communication, aiming at effective, natural interaction between users and the assistive robotic platform. Here, we discuss how the project’s communication model has been integrated into the robotic platform to support natural multimodal human-robot interaction.

Keywords

Multisensory data · Multimodal semantics · Multimodal annotation scheme · Multimodal HRI model · Multimodal human-robot communication · Natural HRI

Notes

Acknowledgements

The work leading to these results has received funding from the European Union under grant agreement n° 600796 (FP7-ICT MOBOT project).

References

  1. Hirvensalo, M., Rantanen, T., Heikkinen, E.: Mobility difficulties and physical activity as predictors of mortality and loss of independence in the community-living older population. J. Am. Geriatr. Soc. 48, 493–498 (2005)
  2. Chuy, O.J., Hirata, Y., Wang, Z., Kosuge, K.: Approach in assisting a sit-to-stand movement using robotic walking support system. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, China (2006)
  3. Chugo, D., Asawa, T., Kitamura, T., Jia, S., Takase, K.: A rehabilitation walker with standing and walking assistance. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France (2008)
  4. Wakita, K., Huang, J., Di, P., Sekiyama, K., Fukuda, T.: Human walking-intention-based motion control of an omnidirectional-type cane robot. IEEE/ASME Trans. Mechatron. 18(1), 285–296 (2013)
  5. Hirata, Y., Komatsuda, S., Kosuge, K.: Fall prevention control of passive intelligent walker based on human model. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France (2008)
  6. Fotinea, S.-E., Efthimiou, E., Koutsombogera, M., Dimou, A.-L., Goulas, T., Maragos, P., Tzafestas, C.: The MOBOT human-robot communication model. In: Proceedings of 6th IEEE Conference on Cognitive Infocommunications (CogInfoCom 2015), Győr, Hungary, 19–21 October (2015)
  7. Papageorgiou, X.S., Tzafestas, C.S., Maragos, P., Pavlakos, G., Chalvatzaki, G., Moustris, G., Kokkinos, I., Peer, A., Stanczyk, B., Fotinea, E.-S., Efthimiou, E.: Advances in intelligent mobility assistance robot integrating multimodal sensory processing. In: Stephanidis, C., Antona, M. (eds.) UAHCI 2014/HCII 2014, Part III. LNCS, vol. 8515, pp. 694–703. Springer, Heidelberg (2014)
  8. Fotinea, E.-S., Efthimiou, E., Dimou, A.-L., Goulas, T., Karioris, P., Peer, A., Maragos, P., Tzafestas, C., Kokkinos, I., Hauer, K., Mombaur, K., Koumpouros, I., Stanzyk, B.: Data acquisition towards defining a multimodal interaction model for human-assistive robot communication. In: Stephanidis, C., Antona, M. (eds.) UAHCI 2014, Part III. LNCS, vol. 8515, pp. 613–624. Springer, Heidelberg (2014)
  9. Trulls, E., Kokkinos, I., Sanfeliu, A., Moreno, F.: Dense segmentation-aware descriptors. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2013)
  10. Trulls, E., Kokkinos, I., Sanfeliu, A., Moreno, F.: Superpixel-grounded deformable part models. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014)
  11. Boussaid, H., Kokkinos, I., Paragios, N.: Discriminative learning of deformable contour models. In: International Symposium on Biomedical Imaging (ISBI) (2014)
  12. Pitsikalis, V., Katsamanis, A., Theodorakis, S., Maragos, P.: Multimodal gesture recognition via multiple hypotheses rescoring. J. Mach. Learn. Res. 16(1), 255–284 (2015). http://www.jmlr.org/papers/volume16/pitsikalis15a/pitsikalis15a.pdf
  13. Kardaris, N., Pitsikalis, V., Mavroudi, E., Katsamanis, A., Tsiami, A., Maragos, P.: Multimodal human action recognition in assistive human-robot interaction. In: Proceedings of 41st International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), Shanghai, China (2016)
  14. Skordilis, Z.I., Tsiami, A., Maragos, P., Potamianos, G., Spelgatti, L., Sannino, R.: Multichannel speech enhancement using MEMS microphones. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015), Brisbane, Australia, April 2015
  15. Corredor, J., Sofrony, J., Peer, A.: Deciding on optimal assistance policies in haptic shared control tasks. In: IEEE International Conference on Robotics and Automation, pp. 2679–2684 (2014). http://dx.doi.org/10.1109/ICRA.2014.6907243
  16. Ho Hoang, K.-L., Corradi, D., Mombaur, K.: Identification and classification of geriatric gait patterns - multi-contact capturability. In: French-German-Japanese Conference in Humanoid and Legged Robots, Heidelberg, Germany, 12–14 May 2014 (2014)
  17. Papageorgiou, X.S., Chalvatzaki, G., Tzafestas, C.S., Maragos, P.: Hidden Markov modeling of human normal gait using laser range finder for a mobility assistance robot. In: Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, pp. 482–487, June 2014. http://dx.doi.org/10.1109/ICRA.2014.6906899
  18. MOBOT Deliverable D5.3: Report on performance metrics and first evaluation study. http://www.mobot-project.eu/userfiles/downloads/Deliverables/MOBOT_WP5_D5.3.pdf
  19. MOBOT Deliverable D5.1: Preliminary report on use cases and user needs. http://www.mobot-project.eu/userfiles/downloads/Deliverables/MOBOT_WP5_D5.1_v1.5.pdf
  20. MOBOT Deliverable D5.2: Report on use cases, performance metrics and user study preparations. http://www.mobot-project.eu/userfiles/downloads/Deliverables/MOBOT_WP5_D5.2_v1.5.pdf

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Eleni Efthimiou (1) (email author)
  • Stavroula-Evita Fotinea (1)
  • Theodore Goulas (1)
  • Athanasia-Lida Dimou (1)
  • Maria Koutsombogera (1)
  • Vassilis Pitsikalis (2)
  • Petros Maragos (2)
  • Costas Tzafestas (2)
  1. Institute for Language and Speech Processing / ATHENA RC, Athens, Greece
  2. Institute of Communication and Computer Systems – NTUA, Athens, Greece