
The Status of Research on the Scholastic Aptitude Test (SAT) and Hispanic Students in Postsecondary Education

  • Maria Pennock-Román
Part of the Evaluation in Education and Human Services book series (EEHS, volume 32)

Abstract

This chapter reviews studies evaluating the validity of the Scholastic Aptitude Test (SAT) for use in the college admission of Hispanic students. Tests for admission to graduate or professional schools are not considered here because small sample sizes and other practical problems have limited the number of studies comparing validity for Hispanic and white non-Hispanic groups. Before beginning, it is worth repeating the cautions in Linn’s (1982) preface to his review of group differences in test validity:

The controversies over testing are neither created by, nor will they be resolved by, the results of investigations of test validity (Cronbach, 1975).... Justification of test use obviously depends upon much more than [how well an ability test predicts academic or professional performance]. Potential benefits and losses for the individual, the institution, and the society at large need to be considered, and the relative importance of the benefits and losses can be expected to vary greatly in the eyes of these various interests. Nonetheless, information about the degree of relationship of test scores to particular criterion measures and about the degree to which the observed relationship is generalizable from one situation to another is an important component in the evaluation of the use of tests. (pp. 335–336)
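Concretely, the "degree of relationship" Linn refers to is estimated with correlation and regression: a prediction equation relating SAT scores to a criterion such as freshman grade-point average is fit, and validity is compared across groups by asking whether the equation predicts equally well, and without systematic over- or under-prediction, for each group. The sketch below is entirely illustrative; the data are synthetic and all variable names and coefficients are invented for this example, not taken from the studies reviewed here. It shows the basic setup of such a differential-prediction analysis.

```python
# A minimal sketch of a differential-prediction check (synthetic data;
# not drawn from any study reviewed in this chapter): fit a pooled
# regression of college GPA on SAT scores, then examine per-group
# validity coefficients and residuals.
import numpy as np

rng = np.random.default_rng(0)

def ols(x, y):
    """Ordinary least squares; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic predictor (SAT composite) and criterion (freshman GPA)
# for two hypothetical groups with slightly different intercepts.
n = 500
sat_a = rng.normal(1000, 180, n)
sat_b = rng.normal(930, 180, n)
gpa_a = 0.002 * sat_a + 0.6 + rng.normal(0, 0.5, n)
gpa_b = 0.002 * sat_b + 0.5 + rng.normal(0, 0.5, n)

sat = np.concatenate([sat_a, sat_b])
gpa = np.concatenate([gpa_a, gpa_b])
group = np.array(["A"] * n + ["B"] * n)

# Pooled prediction equation, as an admissions office might use it.
b0, b1 = ols(sat, gpa)
pred = b0 + b1 * sat

# Within-group validity coefficient (SAT-GPA correlation) and mean
# residual (actual minus predicted); the residual's sign flags
# systematic over- or under-prediction for that group.
for g in ("A", "B"):
    m = group == g
    r = np.corrcoef(sat[m], gpa[m])[0, 1]
    bias = np.mean(gpa[m] - pred[m])
    print(f"group {g}: validity r = {r:.2f}, mean residual = {bias:+.3f}")
```

In this setup a negative mean residual for a group indicates that the pooled equation over-predicts that group's grades; this is the pattern that the differential-prediction studies reviewed below examine.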

Keywords

Hispanic Student · College Entrance Examination · Language Background · High School Grade · Scholastic Aptitude Test

References

  1. Alderman, D. L. (1982) Language proficiency as a moderator variable in testing academic aptitude, Journal of Educational Psychology, 74, 580–87.
  2. Alderman, D. L. and P. W. Holland. (1981) Item performance across native language groups on the Test of English as a Foreign Language. TOEFL Research Report no. 9. Princeton, NJ: Educational Testing Service (ETS).
  3. Anastasi, A. (1982) Psychological testing. New York: Macmillan.
  4. Arbeiter, S. (1985) Profiles, college bound seniors. New York: College Entrance Examination Board.
  5. Astin, A. W. (1982) Minorities in American higher education. San Francisco: Jossey-Bass.
  6. Beaton, A. E. and J. L. Barone. (1981) The usefulness of selection tests in college admissions. ETS Research Report no. RR-81-12. Princeton, NJ: Educational Testing Service (ETS).
  7. Breland, H. M. (1979) Population validity and college entrance measures. Research Monograph no. 8. New York: College Entrance Examination Board (CEEB).
  8. Breland, H. M., Stocking, M., Pinchak, B. M. and N. Abrams. (1974) The cross-cultural stability of mental test items: An investigation of response patterns for ten sociocultural groups. ETS Project Report 74-2. Princeton, NJ: Educational Testing Service (ETS).
  9. Chen, Z. and G. Henning. (1985) Linguistic and cultural bias in language proficiency tests, Language Testing, 2, 155–63.
  10. Cohen, J. and P. Cohen. (1983) Applied multiple regression/correlation analysis for the behavioral sciences. 2d ed. Hillsdale, NJ: Lawrence Erlbaum.
  11. College Entrance Examination Board. (1982) Profiles, college bound seniors. New York: Author.
  12. College Entrance Examination Board. (1987) National scores on SAT show little change in 1987; new data on student academic backgrounds available. Press release, September 22.
  13. College Entrance Examination Board. (1988) 1987 college bound seniors ethnic/sex data. Princeton, NJ: College Entrance Examination Board.
  14. Cronbach, L. J. (1975) Five decades of public controversy over mental testing, American Psychologist, 30, 1–14.
  15. Dorans, N. J. and E. Kulick. (1986) Demonstrating the utility of the standardization approach to assessing unexpected differential item performance on the Scholastic Aptitude Test, Journal of Educational Measurement, 23(4), 355–68.
  16. Durán, R. P. (1983) Hispanics’ education and background: Predictors of college achievement. New York: College Entrance Examination Board (CEEB).
  17. Durán, R. P. (1988) Testing of Hispanic students: Implications for secondary education. (In this volume)
  18. Durán, R. P., Enright, M. K. and D. A. Rock. (1985) Language factors and Hispanic freshmen’s student profile. College Board Report no. 85-3. New York: College Entrance Examination Board (CEEB).
  19. Freedle, R. and I. Kostin. (1990) Item difficulty of four verbal item types and an index of differential item functioning for Black and White examinees, Journal of Educational Measurement, 27, 329–43.
  20. Freedle, R., Kostin, I. and L. Schwartz. (1987) A comparison of strategies used by black and white students in solving SAT verbal analogies using a thinking-aloud method and a matched percentage-correct design. ETS Research Report no. RR-87-48. Princeton, NJ: Educational Testing Service (ETS).
  21. Goldman, R. D. and R. Richards. (1974) The SAT prediction of grades for Mexican-American versus Anglo-American students at the University of California, Riverside, Journal of Educational Measurement, 11(2), 129–35.
  22. Goldman, R. D. and B. N. Hewitt. (1975) An investigation of test bias for Mexican American college students, Journal of Educational Measurement, 12, 187–96.
  23. Goldman, R. D. and B. N. Hewitt. (1976) Predicting the success of black, Chicano, oriental and white college students, Journal of Educational Measurement, 13, 107–17.
  24. Goldman, R. D. and M. Widawski. (1976) An analysis of types of errors in the selection of minority college students, Journal of Educational Measurement, 13, 185–200.
  25. Hale, G. A., Stansfield, C. W. and R. P. Durán. (1984) Summaries of studies involving the Test of English as a Foreign Language, 1963–1982. TOEFL Research Report no. 16. Princeton, NJ: Educational Testing Service (ETS).
  26. Hunter, R. V. and C. D. Slaughter. (1980) ETS test sensitivity review process. Princeton, NJ: Educational Testing Service (ETS).
  27. Lee, V. E. and R. B. Ekstrom. (1987) Student access to guidance counseling in high school, American Educational Research Journal, 24, 287–310.
  28. Linn, R. L. (1982) Ability testing: Individual differences, prediction, and differential prediction. In Ability testing: Uses, consequences, and controversies, ed. A. K. Wigdor and W. R. Garner, 335–88. Washington, DC: National Academy Press.
  29. Loyd, B. H. (1982) Analysis of content-related bias for Anglo and Hispanic students. Paper presented at the annual meeting of the American Educational Research Association, March, New York.
  30. Messick, S. (1987) Validity. ETS Research Report no. RR-87-40. Princeton, NJ: Educational Testing Service (ETS).
  31. Messick, S. and A. Jungeblut. (1981) Time and method in coaching for the SAT, Psychological Bulletin, 89, 191–216.
  32. Olmedo, E. L. (1977) Psychological testing and the Chicano. In Chicano psychology, ed. J. L. Martinez, Jr. New York: Academic Press.
  33. Pennock-Román, M. (1986a) New directions for research on Spanish-language tests and test-item bias. In Latino college students, ed. M. A. Olivas, 193–220, chapter 7. New York: Teachers College Press.
  34. Pennock-Román, M. (1986b) Fairness in the use of tests for selective admissions of Hispanics. In Latino college students, ed. M. A. Olivas, 246–80, chapter 9. New York: Teachers College Press.
  35. Pennock-Román, M. (1990) Test validity and language background: A study of Hispanic American students at six universities. New York: College Entrance Examination Board.
  36. Pennock-Román, M., Powers, D. and M. Perez. (1991) A preliminary evaluation of Testskills: A kit to prepare Hispanic students for the PSAT/NMSQT. In Assessment and access, ed. G. D. Keller, J. R. Deneen, and R. J. Magallan, 243–64, chapter 10. Albany, NY: State University of New York Press.
  37. Ramist, L. and S. Arbeiter. (1983) Profiles, college bound seniors. New York: College Entrance Examination Board.
  38. Ramist, L. and S. Arbeiter. (1984) Profiles, college bound seniors. New York: College Entrance Examination Board.
  39. Ramist, L. and S. Arbeiter. (1986) Profiles, college bound seniors. New York: College Entrance Examination Board.
  40. Rogers, H. J. and E. Kulick. (1987) An investigation of unexpected differences in item performance between blacks and whites taking the SAT. 1986 NCME paper. In Differential item functioning on the Scholastic Aptitude Test, ed. A. P. Schmitt and N. J. Dorans. ETS Research Report no. RM-87-1. Princeton, NJ: Educational Testing Service (ETS).
  41. Sánchez, G. I. (1932a) Group differences and Spanish-speaking children: A critical review, Journal of Applied Psychology, 16, 549–58.
  42. Sánchez, G. I. (1932b) Scores of Spanish-speaking children on repeated tests, Journal of Genetic Psychology, 40, 223–31.
  43. Sánchez, G. I. (1934a) Bilingualism and mental measures: A word of caution, Journal of Applied Psychology, 18, 765–72.
  44. Sánchez, G. I. (1934b) The implications of a basal vocabulary for the measurement of the abilities of bilingual children, Journal of Social Psychology, 5, 395–402.
  45. Scheuneman, J. D. (1982) A posteriori analyses of biased items. In Handbook of methods for detecting test bias, ed. R. A. Berk. Baltimore: Johns Hopkins University Press.
  46. Schmitt, A. P. (1988) Language and cultural characteristics that explain differential item functioning for Hispanic examinees on the Scholastic Aptitude Test, Journal of Educational Measurement, 25, 1–13.
  47. Schmitt, A. P. and C. A. Bleistein. (1987) Factors affecting differential item functioning for black examinees on Scholastic Aptitude Test analogy items. ETS Research Report no. RR-87-23. Princeton, NJ: Educational Testing Service (ETS).
  48. Schmitt, A. P. and N. J. Dorans. (1990) Differential item functioning for minority examinees on the SAT, Journal of Educational Measurement, 27, 67–81.
  49. Shepard, L. A. (1982) Definitions of bias. In Handbook of methods for detecting test bias, ed. R. A. Berk. Baltimore: Johns Hopkins University Press.
  50. Shepard, L. A. (1987) Discussant comments on the National Council on Measurement in Education (NCME) symposium. In Differential item functioning on the Scholastic Aptitude Test, ed. A. P. Schmitt and N. J. Dorans. ETS Research Report no. RM-87-1. Princeton, NJ: Educational Testing Service (ETS).
  51. Shepard, L. A., Camilli, G. and D. M. Williams. (1984) Accounting for statistical artifacts in item bias research, Journal of Educational Statistics, 9, 93–128.
  52. Strenta, A. C. and R. Elliot. (1987) Differential grading standards revisited, Journal of Educational Measurement, 24, 281–91.
  53. Warren, J. (1976) Prediction of college achievement among Mexican American students in California. College Board Research and Development Report. Princeton, NJ: Educational Testing Service (ETS).
  54. Wright, D. (1986) An empirical comparison of the Mantel-Haenszel and standardization methods of detecting differential item performance. Paper presented at the annual meeting of the National Council on Measurement in Education, April, San Francisco.
  55. Zieky, M. (1987) Procedures for use of differential item difficulty statistics in test development. Memorandum for ETS test developers, September. Princeton, NJ: Educational Testing Service (ETS).

Copyright information

© Springer Science+Business Media New York 1993
