Seeing the System through the End Users’ Eyes: Shadow Expert Technique for Evaluating the Consistency of a Learning Management System

  • Andreas Holzinger
  • Christian Stickel
  • Markus Fassold
  • Martin Ebner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5889)


Interface consistency is an important basic concept in web design and affects the performance and satisfaction of end users. Consistency also has significant effects on the learning performance of both expert and novice end users. Consequently, evaluating consistency within an e-learning system and eradicating irritating discrepancies during user interface redesign is an important issue. In this paper, we report on our experiences with the Shadow Expert Technique (SET) during the evaluation of the consistency of the user interface of a large university learning management system. The main objective of this new usability evaluation method is to understand the interaction processes of end users with a specific system interface. Two teams of usability experts worked independently of each other in order to maximize the objectivity of the results. The outcome of the SET method is a list of recommended changes to improve the user interaction processes and thus to achieve a high degree of consistency.


Keywords: Consistency · Shadow Expert Technique · Usability Test Methods · Performance Measurement





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Andreas Holzinger¹˒²
  • Christian Stickel³
  • Markus Fassold³
  • Martin Ebner³

  1. Institute for Medical Informatics (IMI), Research Unit HCI4MED, Medical University Graz, Graz, Austria
  2. Institute for Information Systems and Computer Media (IICM), Graz University of Technology, Graz, Austria
  3. CIS/Department of Social Media, Graz University of Technology, Graz, Austria
