
Comparability of Data Gathered from Evaluation Questionnaires on Paper and Through the Internet

Published in: Research in Higher Education

Abstract

Collecting feedback from students through course, program, and other evaluation questionnaires has become a costly and time-consuming process for most colleges. Collecting the data through the Internet, rather than on paper, can make the process cheaper and more efficient. This article examines several research questions that need to be answered to establish that results collected by the two modes of administration are equivalent. Data were gathered with a program evaluation questionnaire from undergraduate students at a university in Hong Kong, who could choose between completing it on paper or through the Internet. In six of the seven Faculties the number of responses through each mode was roughly the same; students in the Engineering Faculty favored the Internet. Scores on 14 of the 18 scales in the instrument showed small differences by mode of response, which became smaller still when pertinent demographic variables were controlled for. The main research question addressed in the study was whether respondents to the two modes interpreted the questions differently. The study demonstrated the equivalence of the two data sets by showing that both could be fitted to a common model with structural equation modeling (SEM). Five levels of tests of invariance further confirmed the comparability of data by mode of administration. The study therefore suggests that changing to Internet collection for course and program evaluations will not affect the comparability of ratings.
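The "small differences" the abstract reports are typically judged with a standardized mean difference such as Cohen's d, where |d| below 0.2 is conventionally labeled a small effect (Cohen, 1988). The paper's actual scales and scores are not reproduced here; the sketch below uses hypothetical Likert-scale scores for the two modes purely to illustrate the calculation.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical mean scale scores (5-point Likert) for the two modes
paper = [3.2, 3.5, 3.1, 3.8, 3.4, 3.6]
web   = [3.3, 3.4, 3.2, 3.7, 3.5, 3.6]

d = cohens_d(paper, web)
print(f"Cohen's d = {d:.2f}")  # |d| < 0.2 counts as a "small" effect
```

An effect size like this addresses only mean differences; the stronger claim of the study — that respondents interpret the items the same way in both modes — requires the multi-group SEM invariance tests described in the abstract.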


References

  • Carini, R. M., Hayek, J. C., Kuh, G. D., Kennedy, J. M., and Ouimet, J. A. (2003). College student responses to web and paper surveys: Does mode matter? Research in Higher Education 44(1): 1–19.

  • Chou, C. P., Bentler, P. M., and Satorra, A. (1991). Scaled test statistics and robust standard errors for nonnormal data in covariance structure analysis: A Monte Carlo study. British Journal of Mathematical and Statistical Psychology 44: 347–357.

  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Lawrence Erlbaum, Hillsdale, NJ.

  • Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly 64: 464–494.

  • Cronk, B. C., and West, J. L. (2002). Personality research on the Internet: A comparison of Web-based and traditional instruments in take-home and in-class settings. Behavior Research Methods, Instruments, & Computers 34(2): 177–180.

  • Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method (2nd ed.). John Wiley and Sons, New York.

  • Feldt, L. S., Woodruff, D. J., and Salih, F. A. (1987). Statistical inference for coefficient α. Applied Psychological Measurement 11(1): 93–103.

  • Furse, D. H., Stewart, D. W., and Rados, D. L. (1981). Effect of foot-in-the-door, cash incentives, and follow-ups on survey responses. Journal of Marketing Research 18: 473–478.

  • Hancock, D. R., and Flowers, C. P. (2000). Social desirability responding on World Wide Web and paper-administered surveys. Paper presented at the National Convention of the Association for Educational Communications and Technology, Denver, CO.

  • Henly, S. J. (1993). Robustness of some estimators for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology 46: 313–338.

  • Hu, L.-T., Bentler, P. M., and Kano, Y. (1992). Can test statistics in covariance structure analysis be trusted? Psychological Bulletin 112: 351–362.

  • Kember, D., and Leung, D. Y. P. (2005). The influence of active learning experiences on the development of graduate capabilities. Studies in Higher Education 30(2): 157–172.

  • Kember, D., and Leung, D. Y. P. The impact of the teaching and learning environment on the development of generic capabilities. (Submitted for publication)

  • Kuh, G. D. (2001). Assessing what really matters to student learning: Inside the National Survey of Student Engagement. Change 33(3): 10–17, 66.

  • Layne, B. H., DeCristoforo, J. R., and McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education 40: 221–232.

  • Little, T. D. (1997). Mean and covariance structures (MACS) analyses of cross-cultural data: Practical and theoretical issues. Multivariate Behavioral Research 32(1): 53–76.

  • Moss, J., and Hendry, G. (2002). Use of electronic surveys in course evaluation. British Journal of Educational Technology 33(5): 583–592.

  • Olsen, D. R., Wygant, S. A., and Brown, B. L. (1999). Entering the next millennium with web-based assessment: Considerations of efficiency and reliability. Paper presented at the Conference of the Rocky Mountain Association for Institutional Research, Las Vegas, NV.

  • Rindskopf, D. (1984). Using phantom and imaginary latent variables to parameterize constraints in linear structural models. Psychometrika 49: 37–47.

  • Rosenthal, R., Rosnow, R., and Rubin, D. B. (2000). Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach. Cambridge University Press, New York.

  • Sax, L. J., Gilmartin, S. K., and Bryant, A. N. (2003). Assessing response rates and non-response bias in web and paper surveys. Research in Higher Education 44(4): 409–431.

  • Solomon, D. J. (2001). Conducting web-based surveys. Practical Assessment Research and Evaluation 7: 19–23.

  • Steenkamp, J.-B. E. M., and Baumgartner, H. (1998). Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research 25: 78–90.

  • Tomsic, M. L., Hendel, D. D., and Matross, R. P. (2000). A World Wide Web response to student satisfaction surveys: Comparisons using paper and Internet formats. Paper Presented at the Annual Meeting of the Association for Institutional Research, Cincinnati

  • Van de Vijver, F. J. R., and Harsveld, M. (1994). The incomplete equivalence of the paper-and-pencil and computerized versions of the General Aptitude Test Battery. Journal of Applied Psychology 79(6): 852–859.

  • Vandenberg, R. J., and Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods 3(1): 4–70.

  • Vispoel, W. P. (2000). Computerized versus paper-and-pencil assessment of self-concept: Score comparability and respondent preferences. Measurement and Evaluation in Counseling and Development 33: 130–143.

  • Vispoel, W. P., Boo, J., and Bleiler, T. (2001). Computerized and paper-and-pencil versions of the Rosenberg self-esteem scale: A comparison of psychometric features and respondent preferences. Educational & Psychological Measurement 61(3): 461–474.

  • Woodruff, D. J., and Feldt, L. S. (1986). Tests for equality of several alpha coefficients when their sample estimates are dependent. Psychometrika 51(3): 393–413.

  • Yun, G. W., and Trumbo, C. W. (2000). Comparative response to a survey executed by post, e-mail, and web form. Journal of Computer-Mediated Communication 6(1).

  • Zusman, B. J., and Duby, P. B. (1984). An evaluation of the use of token monetary incentives in enhancing the utility of postsecondary survey research techniques. Paper Presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA


Author information

Correspondence to Doris Y. P. Leung.

About this article

Cite this article

Leung, D.Y.P., Kember, D. Comparability of Data Gathered from Evaluation Questionnaires on Paper and Through the Internet. Res High Educ 46, 571–591 (2005). https://doi.org/10.1007/s11162-005-3365-3
