Evaluation is a key element in the design of IT-based artifacts. A designer first identifies a suitable and interesting problem to solve, then devises candidate design solutions, and proceeds to the actual build phase. Once the artifact has been built, the next phase is to evaluate it for efficiency, utility, or performance.
Evaluation is a crucial component of the design science research process. The designed IT artifact is a socio-technical entity that exists within an environment (business or social), and that environment lays out the requirements for its evaluation. Such evaluation of IT artifacts requires the definition of appropriate metrics and possibly the gathering and analysis of appropriate data. IT artifacts can be evaluated in terms of functionality, completeness, consistency, accuracy, performance, reliability, usability, fit with the organization, and other relevant quality attributes (Hevner, March et al. 2004).
In this chapter, our goal is to help the reader understand the different issues, questions, methods, and techniques that arise when one performs evaluation. A full, detailed analysis of the various techniques is beyond the scope of this chapter or of the book, but we hope that the reader will learn to ask the right questions, know when to apply which technique, and feel confident about where to look for further answers.
- Friedman, C. P. and J. C. Wyatt (1997) Evaluation Methods in Medical Informatics, Springer-Verlag, New York.
- Hevner, A., S. March, J. Park, and S. Ram (2004) "Design science in information systems research," MIS Quarterly 28 (1), pp. 75–105.
- Jain, R. (1991) The Art of Computer Systems Performance Analysis, J. Wiley & Sons, New York.
- Schroeder, L. D., D. L. Sjoquist, and P. E. Stephan (2007) Understanding Regression Analysis: An Introductory Guide, Sage Series: Quantitative Applications in the Social Sciences, No. 07-057.