Two-Letters-Key Keyboard for Predictive Touchless Typing with Head Movements

  • Conference paper

Computer Analysis of Images and Patterns (CAIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10424)

Abstract

Head-operated interfaces offer touchless interaction with electronic devices for physically challenged people who are unable to operate standard input equipment. The great challenge for such interfaces is text entry. Most existing approaches are based on a camera mouse, where an on-screen keyboard is operated by a pointer controlled with head movements. Head movements are also employed to cycle through keys, or groups of keys, to access the intended letter. While direct selection requires substantial precision, the traversal procedure is time-consuming. The main contribution of this paper is the Two-Letters-Key Keyboard for touchless typing with head movements. The solution offers substantial acceleration in accessing the desired keys. Typing proceeds with directional head movements, and only two consecutive moves are required to reach the expected key. No additional mechanisms (such as an eye blink or mouth opening) are required for head typing.
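The abstract does not specify the key layout, direction set, or predictor, so the following Python sketch is only a hypothetical illustration of the two-move principle it describes: the first directional move selects a group of keys, the second selects a key within that group, each key carries two letters (hence "Two-Letters-Key"), and a simple dictionary lookup stands in for the predictive disambiguation. The layout, direction names, and helper functions below are all assumptions, not the paper's actual design.

```python
# Hypothetical illustration of two-move key selection with two-letter keys.
# The real layout and predictor in the paper may differ.

DIRECTIONS = ["up", "right", "down", "left"]

# Assumed layout: first move picks one of 4 groups, second move picks one
# of 4 keys in that group; each key holds two letters, giving 4*4*2 = 32
# symbols reachable in exactly two directional head movements.
KEYS = [
    ["ab", "cd", "ef", "gh"],
    ["ij", "kl", "mn", "op"],
    ["qr", "st", "uv", "wx"],
    ["yz", ". ", ", ", "? "],
]


def select_key(first_move: str, second_move: str) -> str:
    """Map two consecutive directional head moves to a two-letter key."""
    group = KEYS[DIRECTIONS.index(first_move)]
    return group[DIRECTIONS.index(second_move)]


def disambiguate(key_sequence, vocabulary):
    """Stand-in for the predictive step: return the first dictionary word
    consistent with the typed keys (each key is ambiguous between its
    two letters)."""
    for word in vocabulary:
        if len(word) == len(key_sequence) and all(
            ch in key for ch, key in zip(word, key_sequence)
        ):
            return word
    return None


# Typing "hat": three keys, each reached with exactly two moves.
moves = [("up", "left"), ("up", "up"), ("down", "right")]
keys = [select_key(*m) for m in moves]          # ['gh', 'ab', 'st']
word = disambiguate(keys, ["cat", "hat"])       # 'hat'
```

Because each key is shared by two letters, several words can match the same key sequence; the point of the predictive component is to rank such candidates, which the linear lookup above only gestures at.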



Author information

Correspondence to Adam Nowosielski.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Nowosielski, A. (2017). Two-Letters-Key Keyboard for Predictive Touchless Typing with Head Movements. In: Felsberg, M., Heyden, A., Krüger, N. (eds) Computer Analysis of Images and Patterns. CAIP 2017. Lecture Notes in Computer Science, vol 10424. Springer, Cham. https://doi.org/10.1007/978-3-319-64689-3_6

  • DOI: https://doi.org/10.1007/978-3-319-64689-3_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64688-6

  • Online ISBN: 978-3-319-64689-3
