User Evaluations of Virtually Experiencing Mount Everest

  • Marta Larusdottir (corresponding author)
  • David Thue
  • Hannes Högni Vilhjálmsson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11262)

Abstract

In software development, it is hard to know whether the team has developed a product that both fits the users' needs and is easy to use. One way of gathering user feedback on both of these issues is to conduct formal user testing, which IT professionals have rated as one of the best methods for involving users in software development. In this paper, we present a formal evaluation of a running prototype of a virtual reality experience that was scheduled to launch three months later. We conducted formal user testing with five users and recorded the problems they experienced while using the VR prototype. We also collected data on each user's impressions of the experience immediately after it ended. The results show that many serious problems were identified, and that the developers found several of them very useful. In some cases, the user testing was regarded as having been essential to discovering these problems.

Keywords

User testing · Virtual reality · Agile software development

Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Marta Larusdottir, Reykjavik University, Reykjavik, Iceland
  • David Thue, Reykjavik University, Reykjavik, Iceland
  • Hannes Högni Vilhjálmsson, Reykjavik University, Reykjavik, Iceland