Journal of Computing in Higher Education, Volume 2, Issue 2, pp 84–113

Ten Commandments for the evaluation of interactive multimedia in higher education

  • Thomas C. Reeves
Selected Conference Paper


A set of guidelines for redirecting evaluation and research involving interactive multimedia (IMM) in higher education is presented in the form of “Ten Commandments.” Each commandment is “illuminated” with anecdotes and stories that illustrate its importance and application. In light of the complexity of human learning via IMM and the politics of higher education, the commandments stress descriptive approaches to research and evaluation, including “modeling” methods that integrate quantitative and qualitative data.


Keywords: Educational Research; Instructional Technology; Interactive Video; American Educational Research Association; Interactive Multimedia





Copyright information

© Springer 1991

Authors and Affiliations

  • Thomas C. Reeves
    Department of Instructional Technology, The University of Georgia, USA
