Evaluating College and University Teaching: Reflections of A Practitioner

  • Chapter

Part of the book series: Higher Education: Handbook of Theory and Research (HATR, volume 18)

Abstract

Whenever there is a discussion about evaluating college and university teaching, almost always someone will ask: Why are we focusing on teaching when learning is what is really important? Of course, that is true. Learning is the real goal of higher education; our teaching is simply one means to help students learn.

I would like to thank two reviewers, Kenneth A. Feldman and Raymond P. Perry; their extensive comments substantially improved the clarity of this chapter.

Copyright information

© 2003 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Cashin, W.E. (2003). Evaluating College and University Teaching: Reflections of A Practitioner. In: Smart, J.C. (eds) Higher Education: Handbook of Theory and Research. Higher Education: Handbook of Theory and Research, vol 18. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0137-3_10

  • DOI: https://doi.org/10.1007/978-94-010-0137-3_10

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-1232-7

  • Online ISBN: 978-94-010-0137-3
