Societal Experimentation

Chapter in the Evaluation in Education and Human Services book series (EEHS, volume 6)

Abstract

It is a distinctive feature of modern societies that we identify, plan, and carry out programs designed to improve our social systems. It is also our experience that these programs do not always produce their intended improvements. Often it is difficult to determine whether a program has any impact at all, so complex is the ongoing social system into which it is introduced and so inadequate are our methodologies for measuring impact. To overcome these problems, and to help obtain valid empirical evidence about the effectiveness of new social programs, Campbell (1969) proposed a branch of methodology termed societal experimentation. Other names for societal experimentation include evaluation research (Suchman, 1967), social experiments, experimental policy research, and field experiments.
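
To make the methodological argument concrete, here is a minimal simulation sketch, illustrative only and not from the chapter: all names and numbers in it are hypothetical. It shows why random assignment to a program supports a valid impact estimate, while comparing self-selected participants to non-participants can badly understate, or even reverse, the apparent effect, the kind of bias discussed by Campbell and Boruch (1975).

```python
# Illustrative sketch only -- hypothetical numbers, not from the chapter.
# It contrasts a randomized societal experiment with a self-selected
# comparison group for the same hypothetical program.
import random
from statistics import fmean

random.seed(42)

N = 10_000          # hypothetical participants
TRUE_EFFECT = 5.0   # the program's true effect on an outcome score

# Each person has an unobserved "need" that lowers the outcome score and,
# under self-selection, also drives enrollment in the program.
need = [random.gauss(0, 10) for _ in range(N)]

def outcome(n, treated):
    """Outcome score: needier people score lower; treatment adds TRUE_EFFECT."""
    return 50 - n + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 5)

half = N // 2

# 1) Randomized assignment: treatment status is independent of need,
#    so the simple group difference estimates the true effect.
est_rct = (fmean(outcome(n, True) for n in need[:half])
           - fmean(outcome(n, False) for n in need[half:]))

# 2) Self-selection: the neediest half enrolls, confounding the program's
#    effect with pre-existing differences and biasing the estimate downward.
ranked = sorted(need, reverse=True)  # neediest first
est_selfselect = (fmean(outcome(n, True) for n in ranked[:half])
                  - fmean(outcome(n, False) for n in ranked[half:]))

print(f"true effect:            {TRUE_EFFECT:+.1f}")
print(f"randomized estimate:    {est_rct:+.1f}")         # close to +5
print(f"self-selected estimate: {est_selfselect:+.1f}")  # strongly negative
```

Run as written, the randomized comparison recovers an estimate near the true +5, while the self-selected comparison produces a large negative estimate: the program looks harmful even though, by construction, it genuinely helps.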

Keywords

Program Evaluation · Head Start · Social Program · Reform Effort · Reform Program

References

  1. Abt, Clark C. (ed.). The Evaluation of Social Programs. Beverly Hills, California: Sage Publications, 1976.
  2. Airasian, P.W. “Designing Summative Evaluation Studies at the Local Level.” In: W.J. Popham (ed.), Evaluation in Education. Berkeley, California: McCutchan Publishing Co., 1974, 145–200.
  3. Airasian, P.W., Kellaghan, T., and Madaus, G.F. Societal Experimentation. Position paper prepared for a conference on “The Consequences of Educational Testing: A Societal Experiment,” funded by the Russell Sage Foundation, Dublin, Ireland, October 1971.
  4. American Psychological Association. Ethical Principles in the Conduct of Research with Human Participants, 1973.
  5. Averch, H.A. et al. How Effective is Schooling? Santa Monica, California: Rand Corporation, 1972. Prepared for the President’s Commission on School Finance.
  6. Bennett, C.A. and Lumsdaine, A.A. Evaluation and Experiment. New York: Academic Press, Inc., 1975.
  7. Boruch, R.F. “Bibliography: Illustrated Randomized Field Experiments for Program Planning and Evaluation.” Evaluation, 2, no. 1 (1974), 83–87.
  8. Campbell, D.T. “Reforms as Experiments.” American Psychologist, 24, no. 4 (1969), 404–29.
  9. Campbell, D.T. “Considering the Case Against Experimental Evaluations of Social Innovations.” Administrative Science Quarterly, 15, no. 1 (1970), 110–13.
  10. Campbell, D.T. Methods for an Experimenting Society. Paper presented to the Eastern Psychological Association, April 17, 1971, and to the American Psychological Association, Washington, D.C., September 5, 1971.
  11. Campbell, D.T. “Assessing the Impact of Planned Social Change.” In: Lyons, G.M. (ed.), Social Research and Public Policies. Hanover, New Hampshire: The Public Affairs Center, Dartmouth College, 1975, 3–45.
  12. Campbell, D.T. and Boruch, R.F. “Making the Case for Randomized Assignment to Treatments by Considering the Alternatives: Six Ways in Which Quasi-Experimental Evaluations in Compensatory Education Tend to Underestimate Effects.” In: Bennett, C.A. and Lumsdaine, A.A. (eds.), Evaluation and Experiment. New York: Academic Press, Inc., 1975, 195–296.
  13. Campbell, D.T. and Erlebacher, A.E. “How Regression Artifacts in Quasi-Experimental Evaluations Can Mistakenly Make Compensatory Education Look Harmful.” In: Hellmuth, J. (ed.), Disadvantaged Child, vol. 3, Compensatory Education: A National Debate. New York: Brunner/Mazel, 1970.
  14. Campbell, D.T. and Stanley, J.C. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, 1966.
  15. Caro, F.G. (ed.). Readings in Evaluation Research. New York: Russell Sage Foundation, 1971.
  16. Cicirelli, V.G. et al. The Impact of Head Start: An Evaluation of the Effects of Head Start on Children’s Cognitive and Affective Development. A report presented to the Office of Economic Opportunity pursuant to Contract B89-4536, June 1969. Westinghouse Learning Corporation, Ohio University. (Distributed by Clearinghouse for Federal Scientific and Technical Information, U.S. Department of Commerce, National Bureau of Standards, Institute for Applied Technology, PV 184 328.)
  17. Cohen, D.K. “Politics and Research: Evaluation of Social Action Programs in Education.” Review of Educational Research, 40, no. 2 (1970), 213–38.
  18. Coleman, J.S. et al. Equality of Educational Opportunity. U.S. Department of Health, Education and Welfare, U.S. Office of Education, Washington, D.C., 1966.
  19. Cook, T.D. and Campbell, D.T. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally, 1979.
  20. Gilbert, J.P. and Mosteller, F. “The Urgent Need for Experimentation.” In: Mosteller, F. and Moynihan, D.P. (eds.), On Equality of Educational Opportunity. New York: Random House, 1972, 371–83.
  21. Gilbert, J.P., Light, R.T., and Mosteller, F. “Assessing Social Innovations: An Empirical Base for Policy.” In: Bennett, C.A. and Lumsdaine, A.A. (eds.), Evaluation and Experiment. New York: Academic Press, Inc., 1975, 39–194.
  22. Glass, G.V., Wilson, V.L., and Gottman, J.M. Design and Analysis of Time Series Experiments. Boulder, Colorado: University of Colorado Press, 1974.
  23. Grant, G. “Shaping Social Policy: The Politics of the Coleman Report.” Teachers College Record, 75, no. 1 (1973), 17–54.
  24. Hodgson, G. “Do Schools Make a Difference?” Atlantic Monthly (March 1973), 35–46.
  25. House, E.R. (ed.). School Evaluation: The Politics and Process. Berkeley, California: McCutchan Publishing Co., 1973.
  26. Jencks, C. et al. Inequality: A Reassessment of the Effect of Family and Schooling in America. New York: Basic Books, 1972.
  27. Kellaghan, T., Madaus, G.F., and Airasian, P.W. The Effects of Standardized Testing. Boston, Massachusetts: Kluwer-Nijhoff, 1982.
  28. Kelman, H.C. On Human Values and Social Research. San Francisco, California: Jossey-Bass, Inc., 1968.
  29. Kershaw, D.N. “Issues in Income Maintenance Experiments.” In: Rossi, P. and Williams, W. (eds.), Evaluating Social Action Programs: Theory, Practice and Politics. New York: Seminar Press, 1972, 221–45.
  30. Madaus, G.F., Airasian, P.W., and Kellaghan, T. School Effectiveness: A Reassessment of the Evidence. New York: McGraw-Hill, 1980.
  31. McDill, E.L., McDill, M.S., and Sprehe, J.T. Strategies for Success in Compensatory Education. Baltimore, Maryland: Johns Hopkins Press, 1969.
  32. Mosteller, F. and Moynihan, D.P. (eds.). On Equality of Educational Opportunity. New York: Random House, 1972.
  33. Moynihan, D.P. “Sources of Resistance to the Coleman Report.” Harvard Educational Review, 38, no. 1 (1968), 23–25.
  34. Moynihan, D.P. The Politics of a Guaranteed Income: The Nixon Administration and the Family Assistance Plan. New York: Random House, 1973.
  35. Mullen, E.J. et al. Evaluation of Social Intervention. San Francisco, California: Jossey-Bass, Inc., 1972.
  36. Riecken, H.W. and Boruch, R.F. Social Experimentation. New York: Academic Press, 1974.
  37. Rivlin, A.M. Systematic Thinking for Social Action. Washington, D.C.: The Brookings Institution, 1971.
  38. Rivlin, A.M. and Timpane, P.M. (eds.). Planned Variation in Education. Washington, D.C.: The Brookings Institution, 1975.
  39. Rossi, P. “Testing for Success and Failure in Social Action.” In: Rossi, P. and Williams, W. (eds.), Evaluating Social Action Programs: Theory, Practice and Politics. New York: Seminar Press, 1972, 11–49.
  40. Rossi, P. and Williams, W. (eds.). Evaluating Social Action Programs: Theory, Practice and Politics. New York: Seminar Press, 1972.
  41. Rotberg, I.C. and Wolf, A. Compensatory Education: Some Research Issues. Policy Studies Program, Division of Research, National Institute of Education, Washington, D.C., 1974.
  42. Russell Sage Foundation. Guidelines for Collection, Maintenance and Dissemination of Pupil Records, 1970.
  43. Skipper, J.S. and Leonard, R.C. “Children, Stress, and Hospitalization: A Field Experiment.” Journal of Health and Social Behavior, 9 (1968), 275–87.
  44. Smith, M.S. “Equality of Educational Opportunity: The Basic Findings Reconsidered.” In: Mosteller, F. and Moynihan, D.P. (eds.), On Equality of Educational Opportunity. New York: Random House, 1972, 230–342.
  45. Stanley, J.C. “Controlled Field Experiments as a Model for Evaluation.” In: Rossi, P. and Williams, W. (eds.), Evaluating Social Action Programs: Theory, Practice and Politics. New York: Seminar Press, 1972, 67–71.
  46. Suchman, E.A. Evaluation Research. New York: Russell Sage Foundation, 1967.
  47. Tobin, J. “The Case for an Income Guarantee.” Public Interest (Summer 1966), 31–41.
  48. Williams, W. “Implementation Problems in Federally Funded Programs.” In: Williams, W. and Elmore, R.F. (eds.), Social Program Implementation. New York: Academic Press, 1976, 15–40.
  49. Zaltman, G. and Duncan, R. Strategies for Planned Change. New York: Wiley-Interscience, 1977.

Copyright information

© Kluwer-Nijhoff Publishing 1983
