
NSSE Benchmarks and Institutional Outcomes: A Note on the Importance of Considering the Intended Uses of a Measure in Validity Studies

Research in Higher Education

Abstract

Surveys play a prominent role in assessment and institutional research, and the NSSE College Student Report is one of the most popular surveys of enrolled undergraduates. Recent studies have raised questions about the validity of the NSSE survey. Although these studies have themselves been criticized, documenting the validity of an instrument requires an affirmative finding regarding the adequacy and appropriateness of score interpretation and use. Using national data from NSSE 2008, the present study found that the NSSE benchmarks provided dependable institution-level means for samples of 50 or more students and were significantly related to important institutional outcomes such as retention and graduation rates.
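
The dependability figure in the abstract reflects a generalizability-theory analysis: an institution-level benchmark mean is judged dependable when between-institution variance dominates the sampling error of a mean based on n students. As a minimal sketch, assuming a simple one-facet design (students nested within institutions) and wholly simulated data, the Python fragment below estimates the two variance components from a one-way ANOVA and computes the dependability coefficient at several sample sizes; all names and numbers are illustrative assumptions, not a reproduction of the paper's NSSE 2008 analysis.

```python
# Minimal sketch, not the paper's analysis: dependability of
# institution-level benchmark means under a one-facet G-study design
# (students nested within institutions), using simulated data.
import numpy as np

rng = np.random.default_rng(0)

# Simulate benchmark scores on a 0-100 metric: 200 institutions x 80 students.
n_inst, n_stu = 200, 80
inst_effects = rng.normal(50, 5, size=n_inst)  # between-institution sd = 5 (assumed)
scores = inst_effects[:, None] + rng.normal(0, 18, size=(n_inst, n_stu))  # within sd = 18

# One-way random-effects ANOVA expected mean squares:
#   E(MS_between) = sigma2_within + n_stu * sigma2_between
#   E(MS_within)  = sigma2_within
grand = scores.mean()
ms_between = n_stu * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_inst - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n_inst * (n_stu - 1))

sigma2_within = ms_within
sigma2_between = max((ms_between - ms_within) / n_stu, 0.0)

def dependability(n: int) -> float:
    """Generalizability coefficient for an institution mean based on n students."""
    return sigma2_between / (sigma2_between + sigma2_within / n)

for n in (25, 50, 100):
    print(f"n = {n:3d}: dependability = {dependability(n):.3f}")
```

With these made-up variance components the coefficient approaches a conventional 0.80 threshold near n = 50, which is the kind of evidence the abstract's "50 or more students" claim summarizes.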

Author information

Correspondence to Gary R. Pike.

Appendix

Items Comprising the NSSE Benchmarks

Level of Academic Challenge

  • Preparing for class (studying, reading, writing, rehearsing, etc., related to academic program)

  • Number of assigned textbooks, books, or book-length packs of course readings

  • Number of written papers or reports of 20 pages or more

  • Number of written papers or reports of between 5 and 19 pages

  • Number of written papers or reports of fewer than five pages

  • Coursework emphasizing analysis of the basic elements of an idea, experience, or theory

  • Coursework emphasizing synthesis and organizing of ideas, information, or experiences into new, more complex interpretations and relationships

  • Coursework emphasizing the making of judgments about the value of information, arguments, or methods

  • Coursework emphasizing application of theories or concepts to practical problems or in new situations

  • Working harder than you thought you could to meet an instructor’s standards or expectations

  • Campus environment emphasizing time spent studying and on academic work

Active and Collaborative Learning

  • Asked questions in class or contributed to class discussions

  • Made a class presentation

  • Worked with other students on projects during class

  • Worked with classmates outside of class to prepare class assignments

  • Tutored or taught other students

  • Participated in a community-based project as part of a regular course

  • Discussed ideas from your readings or classes with others outside of class (students, family members, co-workers, etc.)

Student–Faculty Interaction

  • Discussed grades or assignments with an instructor

  • Talked about career plans with a faculty member or advisor

  • Discussed ideas from your readings or classes with faculty members outside of class

  • Worked with faculty members on activities other than coursework (committees, orientation, student-life activities, etc.)

  • Received prompt feedback from faculty on your academic performance (written or oral)

  • Worked with a faculty member on a research project outside of course or program requirements

Enriching Educational Experiences

  • Participating in co-curricular activities (organizations, publications, student government, sports, etc.)

  • Practicum, internship, field experience, co-op experience, or clinical assignment

  • Community service or volunteer work

  • Foreign language coursework

  • Study abroad

  • Independent study or self-designed major

  • Culminating senior experience (comprehensive exam, capstone course, thesis, project, etc.)

  • Serious conversations with students of different religious beliefs, political opinions, or personal values

  • Serious conversations with students of a different race or ethnicity

  • Using electronic technology to discuss or complete an assignment

  • Campus environment encouraging contact among students from different economic, social, and racial or ethnic backgrounds

  • Participating in a learning community or some other formal program where groups of students take two or more classes together

Supportive Campus Environment

  • Campus environment provides the support you need to help you succeed academically

  • Campus environment helps you cope with your non-academic responsibilities (work, family, etc.)

  • Campus environment provides the support you need to thrive socially

  • Quality of relationships with other students

  • Quality of relationships with faculty members

  • Quality of relationships with administrative personnel and offices
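
Benchmark scores built from items like those above are generally described as item responses rescaled to a common 0-100 metric and then averaged, first within each student and then across students to the institution level. The sketch below illustrates that general recipe with hypothetical item names and a hypothetical four-point frequency scale; the official NSSE scoring rules (class-level weighting, missing-data handling) are more involved and are not reproduced here.

```python
# Hedged illustration of the general benchmark-scoring recipe:
# rescale ordinal item responses to 0-100, average within a student,
# then average student scores to the institution level. Item names and
# the response scale are hypothetical; official NSSE scoring differs in detail.
from statistics import mean

FREQUENCY = {"never": 0, "sometimes": 1, "often": 2, "very often": 3}

def rescale(code: int, n_categories: int) -> float:
    """Map an ordinal code 0..(n_categories - 1) onto a 0-100 metric."""
    return 100.0 * code / (n_categories - 1)

def benchmark_score(responses: dict) -> float:
    """Student-level benchmark score: mean of the rescaled item responses."""
    return mean(rescale(FREQUENCY[r], len(FREQUENCY)) for r in responses.values())

# Two hypothetical students answering 'Active and Collaborative Learning' items.
students = [
    {"asked_questions": "very often", "class_presentation": "often",
     "group_work_in_class": "sometimes", "tutored_peers": "never"},
    {"asked_questions": "often", "class_presentation": "often",
     "group_work_in_class": "often", "tutored_peers": "sometimes"},
]

student_scores = [benchmark_score(s) for s in students]
print("student scores:", [round(s, 1) for s in student_scores])
print("institution mean:", round(mean(student_scores), 1))
```

Averaging student-level scores up to the institution is what makes the dependability question sketched after the abstract bite: the institution mean is only as trustworthy as the number of students behind it.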

About this article

Cite this article

Pike, G.R. NSSE Benchmarks and Institutional Outcomes: A Note on the Importance of Considering the Intended Uses of a Measure in Validity Studies. Res High Educ 54, 149–170 (2013). https://doi.org/10.1007/s11162-012-9279-y