Basics of Measurement

  • Charles P. Friedman
  • Jeremy C. Wyatt
Part of the Computers and Medicine book series (C+M)

Abstract

In Chapter 4 we established a strong distinction between measurement studies to determine how well (with how much error) we can measure an attribute of interest and demonstration studies, which use these measures to make descriptive or comparative assertions. Whereas we might conclude from a measurement study that a certain process makes it possible to measure the “speed” of a resource in executing a certain family of tasks to a precision of ±10%, we would conclude from a demonstration study that a hospital where resource A is deployed completes this task with greater speed than a hospital using resource B. Demonstration studies are the focus of Chapter 7; measurement and measurement studies are the foci of this chapter and the next.
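The abstract's example of measuring "speed" to a precision of ±10% can be illustrated with a small sketch. The function and the timing values below are hypothetical, not from the chapter: repeated measurements of the same task are summarized by their relative standard deviation (coefficient of variation), a common way to express measurement precision as a percentage.

```python
import statistics

def precision_percent(measurements):
    """Estimate measurement precision as the relative standard deviation
    (coefficient of variation) of repeated measurements, in percent."""
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)  # sample standard deviation
    return 100.0 * sd / mean

# Hypothetical repeated timings (seconds) of one resource on one task.
timings = [118, 124, 121, 130, 115, 127, 122, 119]
print(f"precision: ±{precision_percent(timings):.1f}%")
```

A measurement study would report this figure; a demonstration study would then use such measurements to compare, say, two hospitals' task-completion speeds.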

Keywords

Measurement Error, Content Validity, Measurement Process, Reliability Coefficient, User Satisfaction
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Charles P. Friedman (1, 2)
  • Jeremy C. Wyatt (3)
  1. University of North Carolina, Pittsburgh, USA
  2. Center for Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
  3. Imperial Cancer Research Fund, London, UK
