Building an Adaptive Multimodal Framework for Resource Constrained Systems

Chapter in A Multimodal End-2-End Approach to Accessible Computing

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

Multimodal adaptive systems have typically been resource-consuming, due to the requirements of processing recognition-based modalities and of computing and applying adaptations based on user and context properties. In this chapter, we describe how we designed and implemented an adaptive multimodal system capable of performing in a resource-constrained environment such as a set-top box. The presented approach endows standard non-adaptive, non-multimodal applications with adaptive multimodal capabilities, while demanding limited extra effort from their developers. Our approach has been deployed for Web applications, although it is applicable to other application environments. This chapter details the application interface interpretation, multimodal fusion, and multimodal fission components of our framework.
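The components named above are detailed in the body of the chapter. As a purely illustrative sketch, not the framework's actual code or API, the following TypeScript fragment shows one lightweight fusion strategy that fits a resource-constrained device: merging complementary input events from different modalities when they arrive within a short time window. Every name in it (the InputEvent shape, the modality labels, the 500 ms window) is an assumption made for this example.

```typescript
// Illustrative sketch only; NOT the chapter's actual fusion engine.
// Rule-based, time-window fusion: pair a deictic command from one
// modality (e.g. spoken "select") with an event from another modality
// that resolved a concrete UI target (e.g. a pointer on a button).

interface InputEvent {
  modality: "speech" | "pointer" | "remote"; // hypothetical modality labels
  action: string;      // e.g. "select", "focus"
  target?: string;     // UI element id, if this modality resolved one
  timestamp: number;   // milliseconds
}

const FUSION_WINDOW_MS = 500; // assumed window size, tuned per device

function fuse(events: InputEvent[]): InputEvent | null {
  const sorted = [...events].sort((a, b) => a.timestamp - b.timestamp);
  for (let i = 0; i < sorted.length; i++) {
    for (let j = i + 1; j < sorted.length; j++) {
      const a = sorted[i];
      const b = sorted[j];
      if (b.timestamp - a.timestamp > FUSION_WINDOW_MS) break; // outside window
      // Complementary pair: different modalities, exactly one has a target.
      if (a.modality !== b.modality && Boolean(a.target) !== Boolean(b.target)) {
        const command = a.target ? b : a;  // the deictic command
        const referent = a.target ? a : b; // the event with the resolved target
        return {
          modality: command.modality,
          action: command.action,
          target: referent.target,
          timestamp: b.timestamp,
        };
      }
    }
  }
  return null; // nothing to fuse inside the window
}

// Example: "select" spoken while the pointer rests on button "btn-play".
console.log(fuse([
  { modality: "speech", action: "select", timestamp: 1000 },
  { modality: "pointer", action: "focus", target: "btn-play", timestamp: 1200 },
]));
// -> { modality: "speech", action: "select", target: "btn-play", timestamp: 1200 }
```

Rule-based fusion of this kind avoids the probabilistic models used by heavier fusion engines, which is one plausible way to stay within a set-top box's processing budget.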

Notes

  1. http://www.guide-project.eu/

Author information

Correspondence to Carlos Duarte.

Copyright information

© 2015 Springer-Verlag London

About this chapter

Cite this chapter

Duarte, C., Costa, D., Feiteira, P., Costa, D. (2015). Building an Adaptive Multimodal Framework for Resource Constrained Systems. In: Biswas, P., Duarte, C., Langdon, P., Almeida, L. (eds) A Multimodal End-2-End Approach to Accessible Computing. Human–Computer Interaction Series. Springer, London. https://doi.org/10.1007/978-1-4471-6708-2_9

  • DOI: https://doi.org/10.1007/978-1-4471-6708-2_9

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-6707-5

  • Online ISBN: 978-1-4471-6708-2

  • eBook Packages: Computer Science, Computer Science (R0)
