Interaction Design for Convergent Media and Devices: A Multisensory Challenge
Today, digital convergence is everywhere: it affects everyone and is present in every device we use. As a result, the user experience is richer, more sophisticated, and also more complex. Designers must be more flexible and handle a wide variety of interaction possibilities. Interaction design should be viewed as a fluid process that shapes different media and devices to the characteristics of their users. This chapter discusses the effect of this convergence/divergence on interaction design. Interaction design for convergent media and devices is also a multisensory challenge: a richer user experience draws on the user's senses and modalities. The notion of modality used in Human-Computer Interaction derives from Psychology, where it refers to the human sensory modalities, such as vision, hearing, and touch. Thus, many user interfaces can be defined by combining two or more input modalities (such as speech, touch, gestures, head movements, and the mouse) in coordination with the various outputs available in a multimedia system. Furthermore, interacting through multiple devices adds further dimensions, making the experience multisensory. One of the most important convergence gaps lies in interaction design. Dealing effectively with multiple devices, media, and platforms depends on correct design and on thinking about these user interfaces in the right way. In this context, this chapter focuses on the design of multisensory interaction through an understanding of its concepts, media, devices, and user experience.
Keywords: User Experience · Video Stream · Interaction Design · Natural Interaction · Multimodal Interface
This work has been made possible thanks to the financial support provided by CAPES and CNPq. Thanks to RNP (National Network for Education and Research) for funding the workgroups cited in this work, especially GTMDA (Workgroup of Digital Media and Arts) and GTAVCS (Workgroup of Video Collaboration in Health). I would also like to thank the Brazilian "Science without Borders" program and the State University of New York at Oswego for welcoming me. Finally, thanks to the whole LAVID (Digital Video Apps Lab) "family" for their indispensable partnership in this work.