
A Virtual Mouse Interface for Supporting Multi-user Interactions

  • Conference paper
  • Published in: Human-Computer Interaction. Multimodal and Natural Interaction (HCII 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12182)

Abstract

Traditionally, two approaches have been used to build intelligent room applications. Mouse-based control schemes allow developers to leverage a wealth of existing user-interaction libraries that respond to clicks and other events; however, systems built in this manner cannot distinguish among multiple users. To realize the potential of intelligent rooms to support multi-user interactions, a second approach is often used, whereby applications are custom-built for this purpose, making them costly to create and maintain. We introduce a new framework that supports building multi-user intelligent room applications in a much more general and portable way. It combines existing web technologies, which we have extended to better enable simultaneous interactions among multiple users, with speech recognition and voice synthesis technologies that support multi-modal interactions.
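To make the abstract's approach concrete, the following is a minimal sketch, not the paper's actual implementation, of how a per-user virtual mouse might drive an unmodified web page by dispatching ordinary DOM mouse events while still letting the application tell users apart. All names here (VirtualMouse, userId) are assumptions for illustration, not the paper's API.

    // Minimal sketch (assumed names; not the paper's implementation): one
    // virtual mouse per tracked user. Each cursor dispatches ordinary DOM
    // mouse events, so existing click-driven web UIs work unchanged, while a
    // per-event user id lets multi-user-aware handlers distinguish who acted.
    class VirtualMouse {
      constructor(userId) {
        this.userId = userId;
        this.cursor = document.createElement('div');     // visible cursor for this user
        this.cursor.style.position = 'fixed';
        this.cursor.style.pointerEvents = 'none';        // never intercept hit-testing
        document.body.appendChild(this.cursor);
      }

      moveTo(x, y) {
        this.cursor.style.transform = `translate(${x}px, ${y}px)`;
        this.dispatch('mousemove', x, y);
      }

      click(x, y) {
        this.dispatch('mousedown', x, y);
        this.dispatch('mouseup', x, y);
        this.dispatch('click', x, y);
      }

      dispatch(type, x, y) {
        const target = document.elementFromPoint(x, y);  // element under this cursor
        if (!target) return;
        const event = new MouseEvent(type, {
          bubbles: true, cancelable: true, clientX: x, clientY: y,
        });
        event.userId = this.userId;                      // expando: identifies the acting user
        target.dispatchEvent(event);
      }
    }

    // Two users driving the same page simultaneously:
    const alice = new VirtualMouse('user-1');
    const bob = new VirtualMouse('user-2');
    alice.click(200, 150);
    bob.moveTo(640, 360);

Note that events dispatched this way are untrusted in a stock browser, which is precisely the limitation addressed by the "isTrusted" modification described in note 2 below.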


Notes

  1. Through this interface, additional types of input can be supported beyond the two presented here.

  2. The modification is that all generated JavaScript events have the “isTrusted” flag set to true, which is normally true only for user-generated actions. This allows us to interact with inputs, selects, and other elements on a page that do not have an explicitly created “EventListener”. A brief sketch illustrating this appears after these notes.

  3. https://codepen.io/masterodin/pen/jOOPddy gives an example of this sort of content.
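To illustrate why note 2's “isTrusted” modification matters, here is a brief sketch of standard browser behavior. The selector and key value are illustrative, and the framework's mechanism for marking its generated events as trusted is not shown.

    // In a stock browser, events created from script are untrusted, and most
    // native default actions run only for trusted events: this synthetic
    // keydown reaches any listeners, but no character is inserted into the input.
    const input = document.querySelector('input[type="text"]');  // illustrative target
    const key = new KeyboardEvent('keydown', {
      key: 'a', bubbles: true, cancelable: true,
    });
    console.log(key.isTrusted);  // false: created by script, not by a real user action
    input.dispatchEvent(key);    // listeners fire; the 'a' is NOT typed into the field
    // The framework's modification makes its generated events report
    // isTrusted === true, so native widgets (text inputs, <select> menus, ...)
    // respond as if a real user acted, even on pages with no EventListener code.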



Author information

Corresponding author: Matthew Peveler.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Peveler, M., Kephart, J.O., Mou, X., Clement, G., Su, H. (2020). A Virtual Mouse Interface for Supporting Multi-user Interactions. In: Kurosu, M. (ed.) Human-Computer Interaction. Multimodal and Natural Interaction. HCII 2020. Lecture Notes in Computer Science, vol. 12182. Springer, Cham. https://doi.org/10.1007/978-3-030-49062-1_33


  • DOI: https://doi.org/10.1007/978-3-030-49062-1_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49061-4

  • Online ISBN: 978-3-030-49062-1

  • eBook Packages: Computer Science, Computer Science (R0)
