Interaction Design for Convergence Medias and Devices: A Multisensory Challenge

  • Tatiana Aires Tavares
  • Damian Schofield
Chapter
Part of the Media Business and Innovation book series (MEDIA)

Abstract

Today, digital convergence is everywhere, for everyone, and associated with every device we use. As a result, the user experience is richer, more sophisticated, and also more complex. Designers must be more flexible and handle a wide variety of interaction possibilities. Interaction design should be viewed as a fluid process that shapes different media and devices to address user needs. This chapter discusses this convergence/divergence effect on interaction design. Interaction design for convergent media and devices is also a multisensory challenge: a richer user experience draws on the user’s senses and modalities. The definition of modality used in Human-Computer Interaction comes from Psychology, where it refers to human sensory channels such as vision, hearing and touch. Thus, many user interfaces can be defined by combining two or more input modalities (such as speech, touch, gestures, head movements and mouse) in coordination with the various outputs available in a multimedia system. Furthermore, the use of multiple devices to interact adds further dimensions, making the experience multisensory. One of the most important convergence gaps lies in interaction design: dealing effectively with multiple devices, media and platforms depends on correct design and on thinking in the right way about these user interfaces. In this context, this chapter focuses on the design of multisensory interaction, through an understanding of its concepts, media, devices and user experience.

Keywords

User Experience · Video Stream · Interaction Design · Natural Interaction · Multimodal Interface

Notes

Acknowledgments

This work was made possible by the financial support provided by CAPES and CNPq. We thank RNP (National Network for Education and Research) for funding the workgroups cited in this work, especially GTMDA (Workgroup of Digital Media and Arts) and GTAVCS (Workgroup of Video Collaboration in Health). I would also like to thank the Brazilian “Science without Borders” program and the State University of New York at Oswego for welcoming me. Finally, thanks to the entire LAVID (Digital Video Apps Lab) “family” for their indispensable partnership in this work.


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Federal University of Pelotas, Pelotas, Brazil
  2. State University of New York, Oswego, USA
