Abstract
Multimodal adaptive systems have typically been resource-consuming, owing to the demands of processing recognition-based modalities and of computing and applying adaptations based on user and context properties. In this chapter, we describe how we designed and implemented an adaptive multimodal system capable of performing in a resource-constrained environment such as a set-top box. The presented approach endows standard non-adaptive, non-multimodal applications with adaptive multimodal capabilities, demanding limited extra effort from their developers. Our approach has been deployed for Web applications, although it is applicable to other application environments. This chapter details the application interface interpretation, multimodal fusion, and multimodal fission components of our framework.
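To make the fusion and fission components named above concrete, the following is a minimal sketch of how such a pipeline could look in a Web application. It is our own illustration, not the chapter's implementation: the names `InputEvent`, `FusionEngine`, and `fission`, the time-window fusion rule, and the two-channel fission step are all assumptions made for exposition.

```typescript
// Hypothetical sketch of a multimodal fusion/fission pipeline.
// All names and rules here are illustrative, not the chapter's actual API.

// A recognised input event from any modality (e.g. speech or pointer).
interface InputEvent {
  modality: "speech" | "pointer" | "keys";
  payload: string;   // recognised token, e.g. "open", or a target id
  timestamp: number; // ms since epoch, used for the fusion time window
}

// Fusion: combine events from several modalities that arrive within a
// short time window into a single interpreted command.
class FusionEngine {
  private buffer: InputEvent[] = [];
  constructor(private windowMs = 500) {}

  push(event: InputEvent): string | null {
    const now = event.timestamp;
    // Discard events that fell outside the fusion window.
    this.buffer = this.buffer.filter(e => now - e.timestamp <= this.windowMs);
    this.buffer.push(event);
    // Toy rule: a spoken verb plus a pointed-at target form one command.
    const verb = this.buffer.find(e => e.modality === "speech");
    const target = this.buffer.find(e => e.modality === "pointer");
    if (verb && target) {
      this.buffer = [];
      return `${verb.payload}(${target.payload})`;
    }
    return null; // not enough information yet to fuse a command
  }
}

// Fission: render one abstract output message on whichever output
// modalities the current user and context properties allow.
function fission(message: string, ctx: { tts: boolean; screen: boolean }): void {
  if (ctx.screen) console.log(`[screen] ${message}`);
  if (ctx.tts) console.log(`[speech] synthesise: ${message}`);
}

// Usage: a spoken "open" fused with a pointer event on "menu".
const fusion = new FusionEngine();
fusion.push({ modality: "speech", payload: "open", timestamp: Date.now() });
const cmd = fusion.push({ modality: "pointer", payload: "menu", timestamp: Date.now() });
if (cmd) fission(`executing ${cmd}`, { tts: true, screen: true });
```

A time-windowed buffer is only one of several fusion strategies surveyed in the multimodal-interaction literature; frame-based and unification-based approaches make the same point with more machinery.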
Cite this chapter
Duarte, C., Costa, D., Feiteira, P., Costa, D. (2015). Building an Adaptive Multimodal Framework for Resource Constrained Systems. In: Biswas, P., Duarte, C., Langdon, P., Almeida, L. (eds) A Multimodal End-2-End Approach to Accessible Computing. Human–Computer Interaction Series. Springer, London. https://doi.org/10.1007/978-1-4471-6708-2_9
DOI: https://doi.org/10.1007/978-1-4471-6708-2_9
Publisher Name: Springer, London
Print ISBN: 978-1-4471-6707-5
Online ISBN: 978-1-4471-6708-2