Towards Indicators for HCI Quality Evaluation Support

  • Ahlem Assila
  • Káthia Marçal de Oliveira
  • Houcine Ezzedine
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8823)


Current approaches to HCI quality evaluation are marked by a lack of integration between subjective methods (such as questionnaires) and objective methods (such as electronic informers) for supporting a final evaluation decision. Over the past decades, much research has been devoted to defining quality criteria and their measures. However, the lack of guidance on how to integrate qualitative with quantitative data leads us to specify new indicators for HCI quality evaluation. This paper aims at defining and constructing quality indicators, with their measures, for existing quality criteria, based on the ISO/IEC 15939 standard. These indicators integrate qualitative and quantitative data and provide a basis for decision making about the quality of an HCI with respect to the evaluation quality criteria. The paper presents a proposal for defining and constructing quality indicators and illustrates it with an example. A feasibility study of one quality indicator is presented through the evaluation of the traffic supervision system of Valenciennes (France), as part of the CISIT-ISART project.
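The measurement constructs the abstract builds on (base measures, derived measures, and an indicator with decision criteria, in ISO/IEC 15939 terms) can be sketched as follows. This is a minimal illustration only: the measures, thresholds, and verdict labels are assumptions chosen for the example, not the authors' actual indicator definitions.

```python
# Illustrative ISO/IEC 15939-style indicator (assumed names and thresholds):
# a subjective base measure (questionnaire scores) and an objective base
# measure (task times logged by an electronic informer) are turned into
# derived measures, then combined under decision criteria.

def derived_satisfaction(questionnaire_scores):
    """Derived measure: mean satisfaction on a 1-5 Likert scale."""
    return sum(questionnaire_scores) / len(questionnaire_scores)

def derived_efficiency(task_times_s, target_time_s):
    """Derived measure: share of tasks completed within the target time."""
    on_time = sum(1 for t in task_times_s if t <= target_time_s)
    return on_time / len(task_times_s)

def guidance_indicator(questionnaire_scores, task_times_s, target_time_s=30.0):
    """Indicator: combines both derived measures and applies decision
    criteria to produce an interpretable verdict for the evaluator."""
    satisfaction = derived_satisfaction(questionnaire_scores)
    efficiency = derived_efficiency(task_times_s, target_time_s)
    if satisfaction >= 4.0 and efficiency >= 0.8:
        verdict = "acceptable"
    elif satisfaction >= 3.0 and efficiency >= 0.5:
        verdict = "needs improvement"
    else:
        verdict = "unacceptable"
    return {"satisfaction": satisfaction,
            "efficiency": efficiency,
            "verdict": verdict}
```

For example, four questionnaire scores of 4, 5, 4, 4 and task times of 20, 25, 28, and 40 seconds against a 30-second target yield a satisfaction of 4.25 but an efficiency of 0.75, so the decision criteria flag the interface as needing improvement despite the good subjective ratings.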


Keywords: Human-Computer Interface (HCI) · HCI evaluation · integration · measures · indicator · qualitative · quantitative · criteria




  1. Al-Wabil, A., Al-Khalifa, H.: A framework for integrating usability evaluation methods: The Mawhiba web portal case study. In: International Conference on the Current Trends in Information Technology (CTIT 2009), Dubai, UAE, pp. 1–6 (2009)
  2. Assila, A., de Oliveira, K.M., Ezzedine, H.: Towards a qualitative and quantitative data integration approach for enhancing HCI quality evaluation. In: Kurosu, M. (ed.) HCI 2014, Part I. LNCS, vol. 8510, pp. 469–480. Springer, Heidelberg (2014)
  3. Bastien, J.M.C., Scapin, D.: Ergonomic criteria for the evaluation of human-computer interfaces. Technical Report no. 156, Institut National de Recherche en Informatique et en Automatique, France (1993)
  4. Charfi, S., Ezzedine, H., Kolski, C.: RITA: a framework based on multi-evaluation techniques for user interface evaluation, application to a transport network supervision system. In: ICALT, May 29–31, pp. 263–268. IEEE, Tunisia (2013). ISBN 978-1-4799-0312-2
  5. Hardin, M., Hom, D., Perez, R., Williams, L.: Quel diagramme ou graphique vous convient le mieux? [Which chart or graph is right for you?]. Tableau Software, Inc. (2012)
  6. Hartson, H.R., Andre, T.S., Williges, R.C.: Criteria for evaluating usability evaluation methods. International Journal of Human-Computer Interaction 15(1), 145–181 (2003)
  7. Hwang, W., Salvendy, G.: Number of people required for usability evaluation: the 10±2 rule. Communications of the ACM 53(5), 130–133 (2010)
  8. ISO/IEC 15939: Systems and software engineering — Measurement process (2007)
  9. ISO/IEC 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) — Part 11: Guidance on usability. ISO/IEC 9241-11:1998(E)
  10. Kerzazi, N., Lavallée, M.: Inquiry on usability of two software process modeling systems using ISO/IEC 9241. In: CCECE, pp. 773–776 (2011)
  11. Lewis, J.R.: IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction 7(1), 57–78 (1995)
  12. Monteiro, L., Oliveira, K.: Defining a catalog of indicators to support process performance analysis. Journal of Software Maintenance and Evolution: Research and Practice 23(6), 395–422 (2010)
  13. Nielsen, J.: Usability Engineering. Morgan Kaufmann Publishers Inc., San Francisco (1993)
  14. Trabelsi, A., Ezzedine, H.: Evaluation of an information assistance system based on an agent-based architecture in the transportation domain: first results. International Journal of Computers, Communications and Control 8(2), 320–333 (2013)
  15. Tran, C., Ezzedine, H., Kolski, C.: EISEval, a generic reconfigurable environment for evaluating agent-based interactive systems. International Journal of Human-Computer Studies 71(6), 725–761 (2013)
  16. Whiting, M.A., Haack, J., Varley, C.: Creating realistic, scenario-based synthetic data for test and evaluation of information analytics software. In: Proc. Conference on Beyond Time and Errors: Novel Evaluation Methods for Information Visualization, a workshop of the ACM CHI 2008 Conference, Florence, Italy, pp. 1–9 (2008)
  17. Yang, T., Linder, J., Bolchini, D.: DEEP: design-oriented evaluation of perceived usability. International Journal of Human-Computer Interaction, 308–346 (2012)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Ahlem Assila (1, 2)
  • Káthia Marçal de Oliveira (1)
  • Houcine Ezzedine (1)

  1. L.A.M.I.H. - UMR CNRS 8201, UVHC, Le Mont Houy, Valenciennes Cedex 9, France
  2. S.E.T.I.T., Université de Sfax, Tunisia
