Educational Psychology Review, Volume 25, Issue 3, pp 345–351

Using Principles of Evidence-Based Practice to Improve Prescriptive Recommendations



We draw on the evidence-based practice (EBP) literature to consider the relationship between empirical results reported in primary research journals and prescriptive recommendations for practice based on those results. We argue that the relationship between individual empirical findings and practice should be mediated by two additional steps: empirical findings are first aggregated and evaluated, and policy decisions are then made by multiple stakeholders with possibly competing value systems to determine how the synthesized evidence may best guide practice. We discuss three best practices in the EBP literature that promote generalizability: aggregating individual empirical findings, translating aggregated findings into a plan for EBPs, and developing field-based implementation guidelines. We outline a logical sequence in which empirical findings are peer reviewed, aggregated and compared across moderating variables, used to develop EBPs, and deployed by a team of instructional experts after a careful policy review and cost analysis using generalization criteria spelled out in the literature. We argue that this process is the best strategy for making prescriptive recommendations, regardless of the professional outlet in which they appear, because it is most likely to be based on replicable data and matched to the field-based context by instructional experts.


Keywords: Evidence-based practice, Four-step process, Valid generalization, Prescriptive recommendations



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. University of Nevada, Las Vegas, USA
  2. The University of Texas at Austin, Austin, USA
