
Research in Higher Education, Volume 58, Issue 8, pp 880–903

Student Engagement and Student Learning: Examining the Convergent and Discriminant Validity of the Revised National Survey of Student Engagement

  • John Zilvinskis
  • Anthony A. Masseria
  • Gary R. Pike

Abstract

The present study examined the relationships between student engagement, represented by two versions of the National Survey of Student Engagement (NSSE), and self-reported gains in learning. The study drew on institution-level data from participating institutions in 2011 and 2013. The objective of the research was to compare evidence of convergence and discrimination for the two versions of NSSE using canonical correlation analysis. Results indicated that both versions of NSSE provided clear evidence of convergence, in that student engagement measures were significantly and positively related to perceived gains in learning. However, only the most recent version of NSSE provided strong evidence of discrimination (i.e., differential relationships between engagement measures and self-reported learning outcomes). Thus, the revised NSSE appears to offer substantial advantages for institutions interested in a more nuanced understanding of the relationships between student engagement and perceived learning outcomes. The article closes with implications for educators who aim to enhance student learning and for researchers who compare complex sets of data.
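Canonical correlation analysis, the method named above, finds weighted combinations of one variable set (here, engagement measures) and another (self-reported gains) that are maximally correlated; the resulting canonical correlations quantify convergence between the two sets. The following is a minimal illustrative sketch in NumPy using synthetic data, not the study's institutional NSSE data; the function name and the eigenvalue-based solution are our own illustration, not the authors' code.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Return the canonical correlations between variable sets X (n x p) and Y (n x q).

    Uses the classical result that the squared canonical correlations are the
    eigenvalues of Sxx^-1 Sxy Syy^-1 Syx, where S are sample covariance blocks.
    """
    # Center each variable set
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Sample covariance blocks
    Sxx = X.T @ X / (n - 1)
    Syy = Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)

    # Eigenvalues of Sxx^-1 Sxy Syy^-1 Syx are the squared canonical correlations
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    eigvals = np.sort(np.linalg.eigvals(M).real)[::-1]

    # Clip numerical noise into [0, 1]; only min(p, q) correlations are meaningful
    r = np.sqrt(np.clip(eigvals, 0.0, 1.0))
    return r[:min(X.shape[1], Y.shape[1])]
```

In practice, "convergence" corresponds to large, significant canonical correlations between engagement and gains, while "discrimination" requires that different engagement measures load on different canonical variates rather than all loading on one.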

Keywords

Student engagement · Student learning · National Survey of Student Engagement · Canonical correlation analysis


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • John Zilvinskis (1)
  • Anthony A. Masseria (2)
  • Gary R. Pike (3)

  1. Center for Postsecondary Research, Indiana University School of Education, Bloomington, USA
  2. School of Medicine, Indiana University, Indianapolis, USA
  3. Higher Education and Student Affairs, Indiana University School of Education-Indianapolis, Indianapolis, USA
