From HMI to HRI: Human-Vehicle Interaction Design for Smart Cockpit

  • Xiaohua Sun
  • Honggao Chen
  • Jintian Shi
  • Weiwei Guo
  • Jingcheng Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10902)

Abstract

HMI refers to human-vehicle interaction design from the perspective of treating the car as a machine. However, with the rapidly growing demand for smart cockpits, continuing to design from the perspective of a control-oriented interface with a machine places strong constraints on the design of intelligent interactions and connected services. Shifting the concept from Human Machine Interaction (HMI) to Human Robot Interaction (HRI) can instead greatly open up the space of innovation for developing natural interactions with the car as an intelligent system. It also makes it possible to focus further on topics such as adaptive learning of the system through smart interaction. Designing from the HRI perspective is even more important for autonomous vehicles, where it can give users a more consistent intelligent experience across driving control, in-vehicle functions, and connected services. In this paper we introduce our approach to designing human-vehicle interaction from the HRI perspective, which is composed of three parts: an intelligent sensing, predicting, and decision-making module; an adaptive user interface module; and an intelligent voice module.
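The body of the paper develops each of these three modules; the abstract names them only at the architecture level. As a purely illustrative sketch, the Python below shows one way such a three-module cockpit loop could be composed around a shared context object, with the sensing, predicting, and decision-making module running first so that the adaptive UI and voice modules react to the freshest state. Every class, field, and signal name here is an assumption made for illustration, not the authors' implementation.

    from dataclasses import dataclass

    @dataclass
    class CockpitContext:
        """Shared state the modules read and write (all fields hypothetical)."""
        driver_attention: float = 1.0     # 0 = distracted, 1 = fully attentive
        predicted_intent: str = "cruise"  # e.g. "cruise", "navigate", "park"
        ui_layout: str = "full"

    class SensingModule:
        """Stands in for the sensing, predicting, and decision-making module."""
        def update(self, ctx: CockpitContext, frame: dict) -> None:
            # A real system would fuse camera, CAN-bus, and biometric signals.
            ctx.driver_attention = frame.get("attention", ctx.driver_attention)
            if frame.get("destination_set"):
                ctx.predicted_intent = "navigate"

    class AdaptiveUIModule:
        """Stands in for the adaptive user interface module."""
        def update(self, ctx: CockpitContext) -> None:
            # Simplify the visual layout when the driver seems distracted.
            ctx.ui_layout = "minimal" if ctx.driver_attention < 0.5 else "full"

    class VoiceModule:
        """Stands in for the intelligent voice module."""
        def respond(self, ctx: CockpitContext, utterance: str) -> str:
            # Replies are conditioned on the predicted intent, not just words.
            return f"[{ctx.predicted_intent}] acknowledged: {utterance}"

    # One tick of the loop: sense, adapt the UI, then handle a voice request.
    ctx = CockpitContext()
    SensingModule().update(ctx, {"attention": 0.3, "destination_set": True})
    AdaptiveUIModule().update(ctx)
    print(ctx.ui_layout)                               # -> minimal
    print(VoiceModule().respond(ctx, "find parking"))  # -> [navigate] acknowledged: find parking

The ordering in this sketch reflects one sentence of the abstract, namely that a consistent intelligent experience spans driving context, in-vehicle functions, and services; how the actual modules communicate is specified in the paper itself.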

Keywords

HRI · Human-vehicle interaction · Smart interaction · Smart cockpit

Acknowledgments

This paper was supported by the Funds Project of Shanghai High Peak IV Program (Grant DA17003).

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Xiaohua Sun (1) (corresponding author)
  • Honggao Chen (1)
  • Jintian Shi (1)
  • Weiwei Guo (1)
  • Jingcheng Li (1)

  1. College of Design and Innovation, Tongji University, Shanghai, China
