
MIODMIT: A Generic Architecture for Dynamic Multimodal Interactive Systems

  • Martin Cronel
  • Bruno Dumas
  • Philippe Palanque
  • Alexandre Canny
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11262)

Abstract

This paper proposes a generic interactive system architecture that describes, in a structured way, both the hardware and the software components of an interactive system. It makes explicit all the components involved in processing information from the input devices to the interactive application and back to the output devices. Along with this generic architecture, the paper proposes a process for selecting and connecting these components in order to tune the generic architecture to a specific interactive application. This select, connect and tune-on-demand approach helps manage the complexity of interactive applications featuring innovative interaction techniques by splitting the interactive software into dedicated functional components. It also supports design flexibility by making explicit which components are affected when the interaction design evolves. The architecture and its associated process have been applied to the development of several real-life interactive systems; we illustrate their application on an interactive application offering multi-mice, multi-touch and Leap Motion interactions in the context of interactive cockpits of large civil aircraft.
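The component chain sketched below is only a rough illustration of the idea summarized above (dedicated functional components between input devices and the application); the class names (RawEvent, Transducer, FusionEngine, Application) are hypothetical and do not correspond to the components actually defined in the paper.

    # Minimal sketch (illustrative only): functional components chained from an
    # input device driver to the application. Names are hypothetical.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class RawEvent:          # low-level event as produced by a device driver
        device: str          # e.g. "mouse-1", "touchscreen", "leap-motion"
        kind: str            # e.g. "move", "down", "up"
        payload: dict

    class Transducer:
        """Turns raw device events into higher-level interaction events."""
        def __init__(self, sink: Callable[[dict], None]) -> None:
            self.sink = sink

        def handle(self, ev: RawEvent) -> None:
            if ev.kind == "down":
                self.sink({"type": "select", "device": ev.device, **ev.payload})

    class Application:
        """The interactive application at the end of the input chain."""
        def notify(self, event: dict) -> None:
            print("application received:", event)

    class FusionEngine:
        """Combines interaction events coming from several modalities."""
        def __init__(self, app: Application) -> None:
            self.app = app

        def handle(self, interaction_event: dict) -> None:
            # A real fusion engine would correlate events across devices and
            # time; here every event is simply forwarded to the application.
            self.app.notify(interaction_event)

    # Wiring the chain: driver -> transducer -> fusion engine -> application.
    app = Application()
    fusion = FusionEngine(app)
    transducer = Transducer(fusion.handle)
    transducer.handle(RawEvent("mouse-1", "down", {"x": 10, "y": 20}))

Swapping or adding an input modality in this sketch only means instantiating another transducer and connecting it to the same fusion engine, which is the kind of design flexibility the select, connect and tune-on-demand approach aims to make explicit.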

Keywords

Interactive systems engineering · Input/output devices integration · Interaction techniques · Software architectures


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Martin Cronel (1)
  • Bruno Dumas (2)
  • Philippe Palanque (1, 3)
  • Alexandre Canny (1)

  1. ICS-IRIT, Université Paul Sabatier – Toulouse III, Toulouse, France
  2. University of Namur, Namur, Belgium
  3. Department of Industrial Design, Technical University Eindhoven, Eindhoven, Netherlands
