Research in Higher Education, Volume 54, Issue 2, pp 149–170

NSSE Benchmarks and Institutional Outcomes: A Note on the Importance of Considering the Intended Uses of a Measure in Validity Studies

Gary R. Pike

Abstract

Surveys play a prominent role in assessment and institutional research, and the NSSE College Student Report is one of the most popular surveys of enrolled undergraduates. Recent studies have raised questions about the validity of the NSSE survey. Although these studies have themselves been criticized, documenting the validity of an instrument requires an affirmative finding regarding the adequacy and appropriateness of score interpretation and use. Using national data from NSSE 2008, the present study found that the NSSE benchmarks provided dependable institution-level means for groups of 50 or more students and were significantly related to important institutional outcomes, such as retention and graduation rates.
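
The dependability claim in the abstract comes from generalizability theory. As a minimal sketch (assuming the standard design in which students are nested within institutions, not necessarily the exact design specified in the study's methods), the dependability coefficient for an institution-level benchmark mean based on $n$ respondents is

$$E\rho^2(n) = \frac{\sigma^2_{\mathrm{inst}}}{\sigma^2_{\mathrm{inst}} + \sigma^2_{\mathrm{stud}}/n},$$

where $\sigma^2_{\mathrm{inst}}$ is the between-institution variance component and $\sigma^2_{\mathrm{stud}}$ is the pooled within-institution (student) variance. On this reading, the finding that benchmark means are dependable for groups of 50 or more students corresponds to the coefficient reaching a conventional threshold (commonly .70 or .80) at $n = 50$.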

Keywords

Surveys · Validity · Engagement · Retention · Graduation

Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

Indiana University-Purdue University Indianapolis, Indianapolis, USA
