Studying Clinical Information Resources
In Chapter 1 we introduced the challenge of conducting evaluations in medical informatics and discussed the specific sources of complexity that give rise to it. In Chapter 2 we surveyed the range of approaches used to conduct evaluations, in medical informatics and across many areas of human endeavor, and stressed that the evaluator can address many of these challenges by viewing each evaluation as anchored by specific purposes. Each study is conducted for some identifiable client group, often to inform specific decisions that members of that group must make. The evaluator's work becomes tractable by focusing on the specific purposes a particular study is designed to address, often framing them as a set of questions and choosing the approach or approaches best suited to those purposes. A study is successful if it provides credible information that helps members of an identified audience make decisions.
Keywords: Information Resource · Medical Informatics · Resource Function · Clinical Prediction Rule