Challenging the Proverb: A Balanced Model for Governance Decisions

  • Steven Putansu


This chapter examines dominant models of decision making to explore how different types of policy knowledge can shape decisions, and how different actors receive, interpret, and use that knowledge through interactive dialogue. First, it compares the rational-comprehensive and political-incremental models, highlighting common critiques of each and reviewing models developed to address them. It argues that reliance on the evidence-based proverb stymies consideration of the complex relationships between politics (ideology, interests, and institutions) and policy knowledge. The chapter then expands a model of interaction among these factors developed by Carol Weiss (Weiss, C. [1995]. The four "I's" of school reform: How interests, ideology, information, and institution affect teachers and principals. Harvard Educational Review, 65(4), 571–593), which is used throughout the text to explore another core argument: changes in these factors contribute to decisions that diverge from expectations set by program design, and changes to program design can drive longer-term rebalancing of these factors.



Copyright information

© The Author(s) 2020

Authors and Affiliations

  • Steven Putansu
  1. US Government Accountability Office, Washington, DC, USA
