A Mobile Command Input Through Vowel Lip Shape Recognition

  • Yuto Koguchi
  • Kazuya Oharada
  • Yuki Takagi
  • Yoshiki Sawada
  • Buntarou Shizuki
  • Shin Takahashi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10903)

Abstract

Most recent smartphones are controlled through touch screens, which are difficult to use when the hands are occupied; this creates a need for hands-free input techniques. Voice is a simple means of input, but speaking aloud can be stressful in public spaces, and recognition rates drop against noisy backgrounds. We propose a touch-free input technique based on lip shape: the vowel being mouthed is detected from the shape of the lips and used as a command, enabling touch-free operation (like voice input) without actually requiring voice. We explored the recognition accuracy of each vowel of the Japanese moras; the vowels were identified with high accuracy owing to their characteristic lip shapes.
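
The keywords point to a convolutional neural network as the vowel recognizer. The paper's own implementation is not reproduced here; what follows is a minimal illustrative sketch in PyTorch, assuming a small CNN that classifies a 64 × 64 grayscale mouth crop into the five Japanese vowels (/a/, /i/, /u/, /e/, /o/). The LipVowelCNN name, the layer sizes, and the input size are all assumptions, not the authors' architecture.

    # Hypothetical sketch (not the authors' code): a small CNN mapping a
    # cropped grayscale mouth image to one of the five Japanese vowels.
    import torch
    import torch.nn as nn

    VOWELS = ["a", "i", "u", "e", "o"]

    class LipVowelCNN(nn.Module):
        def __init__(self, num_classes: int = len(VOWELS)):
            super().__init__()
            # Two conv/pool stages over a 64x64 single-channel mouth crop
            # (the crop size is an assumption).
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 32x32 -> 16x16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, num_classes),  # logits over the five vowels
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Usage: classify one dummy mouth crop and read off the vowel.
    if __name__ == "__main__":
        model = LipVowelCNN().eval()
        crop = torch.rand(1, 1, 64, 64)  # placeholder for a real mouth crop
        with torch.no_grad():
            vowel = VOWELS[model(crop).argmax(dim=1).item()]
        print(f"recognized vowel: {vowel}")

In a deployed system, each recognized vowel would then be bound to a smartphone command (for example, mapping /a/ to "answer call"); any such command bindings are illustrative, not taken from the paper.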

Keywords

Lip shape · Vowel recognition · Hands-free input · Touch-free input · Convolutional neural network

Acknowledgements

We would like to thank Pedro Passos Couteiro for improving the English of the paper.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Yuto Koguchi (1)
  • Kazuya Oharada (1)
  • Yuki Takagi (1)
  • Yoshiki Sawada (1)
  • Buntarou Shizuki (1)
  • Shin Takahashi (1)
  1. University of Tsukuba, Tsukuba, Japan
