A practical strategy for the evaluation of software tools
This paper describes a working strategy for software tool evaluations that has resulted from work within Rolls-Royce plc in response to the difficulty, and mixed successes, we have experienced in the selection of software tools. The lack of an acceptable methodology has meant that industrial evaluations are commonly time-consuming, fail to capture both tool and problem knowledge in a form suitable to aid future evaluations, and frequently give inconclusive results. Even where rigorous selection methods are used, we raise the concern that tool evaluators are failing to address perhaps the most important factors in determining final success, namely the non-technical or ‘soft’ factors.
In an attempt to overcome some of these problems the proposed strategy provides a qualitative list of important issues distilled from many years’ experience of making tool selection decisions. This generic issue checklist is used to form domain-specific criteria against which tools can be compared in a more quantitative manner. This process ensures traceability between issues, tool requirements criteria and supporting evidence, in order to document decisions and provide assurance that all issues have been addressed. It also helps us to capture valuable corporate knowledge for future evaluations, so that we become more efficient at evaluating tools, provide more consistent criteria and limit the risk of expensive mistakes.
This industrial perspective on tool selection will be of interest to managers and evaluators in organisations that purchase software tools. To a lesser degree the issue guidelines cover method evaluation and tool emplacement, although the need for further refinement and practical application in these areas is recognised. The strategy may also form the basis of a process for tool evaluations as required by higher levels of the SEI Capability Maturity Model (Humphrey, 1988; Humphrey, 1990). Finally, we hope that tool vendors will use it to provide better support for eliciting and meeting customer requirements during the evaluation process.
Keywords: software tools, industrial practice, evaluation, experience
- Cheng, D. Y., and Pane, D. M. (1991) “An Evaluation of Automatic and Interactive Parallel Programming Tools.” Supercomputing ’91, Albuquerque, USA, 412–423.
- Dick, R., and Hunter, R. “Subjective Software Evaluation.” Software Quality Management II: Building Quality into Software, Edinburgh, UK, 321–334.
- Humphrey, W. S. (1988) Characterising the Software Process: A Maturity Framework. IEEE Software, 5(2), 73–79.
- Humphrey, W. S. (1990) Managing the Software Process, Addison-Wesley.
- Jeanrenaud, J., and Romanazzi, P. “Software Product Evaluation: A Methodological Approach.” Software Quality Management II: Building Quality into Software, Edinburgh, UK, 59–69.
- Klopping, I. M., and Bolgiano, C. F. (1994) Effective Evaluation of Off-the-shelf Microcomputer Software. Office Systems Research Journal, 9(1), 46–40.
- Palvia, P. (1992) “A Comprehensive Model and Evaluation of the Software Engineering Environment.” Information Resources Management Association International Conference, Harrisburg, USA, 302–307.
- Rowley, J. E. (1993) Selection and Evaluation of Software. ASLIB Proceedings, 45(3), 77–81.
- Schamp, A. (1995) CM-Tool Evaluation and Selection. IEEE Software, 12(4), 114–118.
- Scheffler, F. L., and Marshall, R. R. “The Software Technology Support Centre: Help for Acquiring Software Tools.” National Aerospace and Electronics Conference, Dayton, OH, USA, 647–653.