
Moving Beyond the Evidence-Based Proverb

  • Steven Putansu
Chapter

Abstract

This chapter concludes the book with lessons learned for producers of policy knowledge, for decision makers, and for reformers who attempt to link the two. It reflects on the contributions, assumptions, and limitations of this study and discusses potential directions for future scholarship. This discussion is followed by practical advice for producers of policy knowledge. It urges them to consider the purpose and quality of the data, information, and evidence they produce; to relate it to the broader context of policy knowledge; and to be mindful of how it may be advocated, interpreted, and used in governance decisions. The chapter then identifies principles that evidence-based reformers and relevant decision makers should consider as they design, promote, and assess the success or failure of efforts to advance evidence-based governance. It concludes with a discussion of the implications for current debates surrounding student loans, Title I, and the recently passed Foundations for Evidence-Based Policymaking Act.


Copyright information

© The Author(s) 2020

Authors and Affiliations

  • Steven Putansu
  1. US Government Accountability Office, Washington, DC, USA
