Measurement and Statistical Problems in Neuropsychological Assessment of Children

  • Cecil R. Reynolds
Part of the Critical Issues in Neuropsychology book series (CINP)

Abstract

The field of neuropsychology as practiced clinically has been driven in large part by the development and application of standardized diagnostic procedures that are more sensitive than medical examinations to changes in behavior, particularly in higher cognitive processes, as related to brain function. The techniques and methods so derived have led to major conceptual and theoretical advances in the understanding of normal and abnormal patterns of brain-behavior relationships. Despite the apparent utility of many of the neuropsychological tests discussed in this volume, their psychometric properties leave much to be desired. Much of their utility comes from the clinical acumen and experience of their users and developers, a situation that has, historically, made clinical neuropsychology more difficult to teach than should be the case. In fact, much of today's practice and yesterday's theoretical advances in clinical neuropsychology stem from intense and insightful observation of brain-damaged individuals by such astute observers as Ward Halstead, A. R. Luria, Hans Teuber, Karl Pribram, Roger Sperry, and others. These superstars of clinical neuropsychology were state-of-the-art researchers (though the state of the art was often crude), to be sure, but their greatest inspirations came from constant monitoring of, and informal interaction with, persons suffering from a variety of neurological traumas and diseases. Halstead roamed the halls of Otho S. S. Sprague making notes as he observed the behavior of brain-damaged individuals; Luria gained great insights into brain function from his rather informal, sometimes impromptu, bedside examinations of and discussions with soldiers with head injuries; Sperry and his students followed and observed a series of "split-brain" patients going about their daily activities, even to the point of observing some as they dressed themselves and others at leisure.

Keywords

Reading Comprehension · Neuropsychological Test · Standard Score · Neuropsychological Assessment · Scatter Index



Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Cecil R. Reynolds, Department of Educational Psychology, Texas A&M University, College Station, USA
