
How Efficiency and Naturalness Change in Multimodal Interaction in Mobile Navigation Apps

  • Jianren Ling
  • Zhuochao Peng (corresponding author)
  • Lu Yin
  • Xiaojing Yuan
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1217)

Abstract

Multimodal interaction has been shown to improve user experience, but the resource competition and information integration involved in this human-centered interaction mode may negatively affect efficiency and naturalness. This paper examines how efficiency and naturalness change in multimodal interaction in mobile navigation apps, which are widely used as interactive systems for in-car navigation tasks. The study was conducted with the Amap app: in a driving environment, participants completed the same route-setting task using single touch-screen interaction and gesture-combined-voice interaction, respectively. Based on the recorded operation times and participants' subjective scores for the two interaction modes, we analyzed the differences and changes in efficiency and naturalness. The results showed that multimodal interaction was more efficient than single-modality interaction; however, its naturalness did not improve significantly.
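
The comparison described above is a within-subjects design: each participant completes the same route-setting task once per interaction mode, and the two modes are compared on operation time (efficiency) and on subjective ratings (naturalness). The snippet below is a minimal sketch of that kind of paired analysis, not the paper's actual analysis; the data values and variable names are hypothetical, and the choice of a paired t-test (with a non-parametric alternative noted for ordinal ratings) is an assumption.

```python
# Minimal sketch of a within-subjects comparison of two interaction modes.
# All data below are hypothetical; the paper's real analysis may differ.
import numpy as np
from scipy import stats

# Hypothetical operation times (seconds) for the same participants completing
# the route-setting task with touch-only vs. gesture-combined-voice interaction.
touch_only    = np.array([24.1, 19.8, 27.5, 22.3, 25.0, 21.7, 23.9, 26.4])
gesture_voice = np.array([18.6, 17.2, 21.9, 19.4, 20.1, 18.8, 19.7, 22.0])

# Paired t-test on operation time: shorter times in the multimodal condition
# would indicate higher efficiency.
t_stat, p_value = stats.ttest_rel(touch_only, gesture_voice)
print(f"operation time, paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Subjective naturalness scores (e.g., Likert-type ratings) are ordinal, so a
# non-parametric paired test such as scipy.stats.wilcoxon is often preferred
# for them instead of the t-test used above.
```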

Keywords

Multimodal interaction · User experience · Efficiency · Naturalness


Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Jianren Ling 1
  • Zhuochao Peng 1 (corresponding author)
  • Lu Yin 1
  • Xiaojing Yuan 1
  1. School of Design, Hunan University, Changsha, China
