Are We Testing Utility? Analysis of Usability Problem Types

  • Kimmo Tarkkanen
  • Ville Harkke
  • Pekka Reijonen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9186)


Usability problems and related redesign recommendations are the main outcomes of usability tests, although the impact of both in the design process has been questioned. Problem classifications aim to provide better feedback for designers by improving usability problem identification, analysis, and reporting. However, within these classifications, relatively little is said about the types and contents of usability problems or about the kinds of design effort they require. We address this gap by scrutinizing the findings of three empirical usability tests conducted in software development projects. As a result, 173 problems were classified into 11 categories. Specific focus was placed on the distinction between utility-type and usability-type problems, in order to determine the correct development phase and method for fixing each problem. The proportion of utility problems varied from 51% to 74%, which shows that early usability testing with a think-aloud protocol and an open task structure measures utility and usability equally well.


Keywords: Usability problem · Utility problem · Problem classification · Usability testing



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Information Systems Science, University of Turku, Turku, Finland
