
A Case Study for the 10-Step Approach to Program Evaluation

Abstract

Well-designed instructional programs seamlessly promote human performance. Students are often unaware of the countless iterations of formative evaluation completed to improve the effectiveness, efficiency, motivation, and flow of that instructional design. This paper examines one research-based, comprehensive, systematic evaluation approach as applied by students in two case studies in which they evaluated the instructional design of real-world training programs at a macro level. The 10-Step Evaluation for Training and Performance Improvement (Chyung 2019) was piloted in a graduate-level instructional design and technology class. Lessons learned include evaluators recognizing the importance of homing in on the questions to ask in order to determine which dimensions to evaluate, while keeping their assumptions unbiased. For example, the first case initially framed a summative evaluation hypothesizing that the program should be discontinued; instead, it uncovered unanticipated findings that led to recommendations to revamp the program in an effort to continue improving human performance.


References

  1. Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. Thousand Oaks: SAGE Publications.

  2. Kellogg, W. K. (1998). Logic model development guide. Battle Creek: W.K. Kellogg Foundation. Retrieved from https://www.wkkf.org.

  3. Kennedy, P. E., Chyung, S. Y., Winiecki, D. J., & Brinkerhoff, R. O. (2014). Training professionals' usage and understanding of Kirkpatrick's level 3 and level 4 evaluations. International Journal of Training and Development, 18(1), 1–21.

  4. Kirkpatrick, D. (2007). The four levels of evaluation (no. 701). Pewaukee: American Society for Training and Development Press.

  5. Kirkpatrick, D., & Kirkpatrick, J. (2006). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler Publishers.

  6. Morrison, G. R., Ross, S. M., Kalman, H. K., & Kemp, J. E. (2013). Designing effective instruction (7th ed.). Hoboken: John Wiley & Sons, Inc.

  7. Scriven, M. (1991). Evaluation thesaurus. Point Reyes: Sage Publications.

  8. Stufflebeam, D. L. (2007). CIPP evaluation model checklist. Retrieved from https://kwschochconsulting.com/wp-content/uploads/2017/04/cippchecklist_mar07.pdf.


Author information

Correspondence to Suzanne Ensmann.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Research Involving Human Participants and/or Animals

All procedures involving human participants that were performed in the program evaluations provided to the company and to the higher education institution were in accordance with ethical standards, with the 1964 Helsinki Declaration and its later amendments, or with comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.

Informed Consent

In compliance with Ethical Standards, informed consent was obtained from all individual participants included in the evaluations.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ensmann, S., Ward, A., Fonseca, A. et al. A Case Study for the 10-Step Approach to Program Evaluation. TechTrends 64, 329–342 (2020). https://doi.org/10.1007/s11528-019-00473-4


Keywords

  • Human performance
  • Program evaluation
  • 10-step approach
  • Instructional design