Research in Higher Education, Volume 46, Issue 5, pp 571–591

Comparability of Data Gathered from Evaluation Questionnaires on Paper and Through the Internet

  • Doris Y. P. Leung
  • David Kember


Abstract

Collecting feedback from students through course, program and other evaluation questionnaires has become a costly and time-consuming process for most colleges. Converting to data collection through the internet, rather than completion on paper, can result in a cheaper and more efficient process. This article examines several research questions that need to be answered to establish that results collected by the two modes of administration are equivalent. Data were gathered for a program evaluation questionnaire from undergraduate students at a university in Hong Kong. Students were able to choose between completion on paper or through the internet. In six of the seven Faculties the number of responses through each mode was roughly the same; students in the Engineering Faculty favored the internet. Scores on 14 of the 18 scales in the instrument showed small differences by mode of response, which became smaller still with controls for pertinent demographic variables. The main research question addressed in the study was whether there was any difference in the way respondents to the two modes interpreted the questions. The study demonstrated the equivalence of the two data sets by showing that both could be fitted to a common model with structural equation modeling (SEM). Five levels of tests of invariance further confirmed the comparability of data by mode of administration. This study, therefore, suggests that changing to internet collection for course and program evaluations will not affect the comparability of ratings.
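The mode comparison described above rests on standardized mean differences between paper and web respondents on each scale. As a minimal sketch of that first step, the snippet below computes Cohen's d for one scale from two hypothetical groups of Likert-scale scores (the data and function name are illustrative, not taken from the study):

```python
from statistics import mean, stdev

def cohens_d(paper_scores, web_scores):
    """Standardized mean difference (Cohen's d) between two response
    modes, using the pooled standard deviation of the two groups."""
    n1, n2 = len(paper_scores), len(web_scores)
    m1, m2 = mean(paper_scores), mean(web_scores)
    s1, s2 = stdev(paper_scores), stdev(web_scores)
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
              / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled

# Hypothetical 5-point Likert responses on one questionnaire scale.
paper = [4, 3, 5, 4, 4, 3, 5, 4]
web = [4, 4, 5, 3, 4, 3, 4, 4]

# By Cohen's (1988) conventions, |d| < 0.2 counts as a small effect,
# the kind of mode difference the abstract describes.
print(round(cohens_d(paper, web), 3))
```

An absolute d well under 0.2 for a scale would be consistent with the "small differences by mode" the study reports; the fuller equivalence claim still requires the multi-group SEM invariance tests described above.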


Keywords: response bias, teaching evaluation, internet surveys, college student surveys, web surveys





Copyright information

© Springer Science+Business Media, Inc. 2005

Authors and Affiliations

  1. Centre for Learning Enhancement And Research, The Chinese University of Hong Kong, Shatin, Hong Kong
