A Scalable Architecture to Design Multi-modal Interactions for Qualitative Robot Navigation

  • Luca Buoncompagni
  • Suman Ghosh
  • Mateus Moura
  • Fulvio Mastrogiovanni
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11298)

Abstract

The paper discusses an approach for teleoperating a mobile robot using qualitative spatial relations conveyed through speech-based and deictic commands. Given a workspace containing a robot, a user, and a set of objects, we exploit fuzzy reasoning to build a pertinence map relating locations in the workspace to the qualitative commands acquired incrementally. We discuss the modularity of the adopted reasoning technique through use cases involving conjunctions of spatial kernels. In particular, we address the problem of finding a suitable target location from a set of qualitative spatial relations by combining symbolic reasoning with Monte Carlo simulations. The architecture is analyzed in a scenario with simple kernels and an almost-perfect perception of the environment. Nevertheless, the presented approach is modular and scalable, and it could also be exploited to design applications in which multi-modal qualitative interactions are considered.
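To make the pipeline sketched above concrete, the following Python snippet illustrates the general idea: each qualitative command contributes a fuzzy spatial kernel over the workspace, the kernels are conjoined into a pertinence map, and Monte Carlo sampling selects a target location. This is a minimal sketch, not the authors' implementation: the Gaussian "near" kernel, the direction-based "left of" kernel, the min t-norm for conjunction, the uniform sampler, and all names and parameters are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of a fuzzy pertinence
# map built from qualitative spatial kernels, queried via Monte Carlo sampling.
import numpy as np

def near(anchor, scale=1.0):
    """Fuzzy 'near <anchor>' kernel: pertinence decays with distance (assumed Gaussian)."""
    def kernel(p):
        return float(np.exp(-np.linalg.norm(p - anchor) ** 2 / (2 * scale ** 2)))
    return kernel

def left_of(anchor, heading):
    """Fuzzy 'left of <anchor>' kernel w.r.t. a viewing direction (assumed cosine-shaped)."""
    h = np.asarray(heading, dtype=float)
    h = h / np.linalg.norm(h)
    left_dir = np.array([-h[1], h[0]])  # rotate the heading 90 degrees counter-clockwise
    def kernel(p):
        v = p - anchor
        n = np.linalg.norm(v)
        if n < 1e-9:
            return 0.0
        # cosine similarity with the 'left' direction, clipped to [0, 1]
        return max(0.0, float(np.dot(v / n, left_dir)))
    return kernel

def conjunction(kernels):
    """Fuzzy AND of several kernels (min t-norm)."""
    return lambda p: min(k(p) for k in kernels)

def monte_carlo_target(pertinence, bounds, n_samples=10000, rng=None):
    """Sample candidate locations uniformly and return the most pertinent one."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    samples = rng.uniform(lo, hi, size=(n_samples, 2))
    scores = np.array([pertinence(p) for p in samples])
    best = int(np.argmax(scores))
    return samples[best], float(scores[best])

# Example command: "go near the table, to its left as seen from the user"
table = np.array([2.0, 3.0])
user_to_table = np.array([1.0, 0.0])  # user looks along +x toward the table
pmap = conjunction([near(table, scale=0.8), left_of(table, user_to_table)])
target, score = monte_carlo_target(pmap, bounds=([0, 0], [5, 5]))
print(f"target = {target}, pertinence = {score:.2f}")
```

Under these assumptions, the modularity claim is easy to see: a new qualitative command is added as a further kernel factory and conjoined with the existing ones, while the Monte Carlo query step stays unchanged.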

Keywords

Robot teleoperation · Fuzzy spatial relations · Multi-modal robot interaction


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Luca Buoncompagni (1)
  • Suman Ghosh (1)
  • Mateus Moura (1)
  • Fulvio Mastrogiovanni (1)

  1. Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
