
Measurement and Statistical Problems in Neuropsychological Assessment of Children

  • Cecil R. Reynolds
  • Benjamin A. Mason

As practiced clinically, the field of neuropsychology has been driven in large part by the development and application of standardized diagnostic procedures that are more sensitive than medical examinations to changes in behavior, particularly in higher cognitive processes, as they relate to brain function.

Keywords

Reading Comprehension, Neuropsychological Test, Standard Score, Neuropsychological Measure, Subtest Score


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Cecil R. Reynolds (1)
  • Benjamin A. Mason (1)

  1. Department of Educational Psychology, Texas A&M University, USA
