In criminal justice, as in other areas, practitioners and policy makers often wish to know whether something “works” or is “effective.” Does a certain form of family therapy reduce troubled adolescents’ involvement in crime more than standard probation would? Does a jail diversion policy substantially improve indicators of community adjustment for mentally ill individuals who are arrested and processed under it? If so, by how much?
Trying to gauge the impact of programs or policies is eminently logical for several reasons. Most obviously, this type of information is important from a traditional cost-benefit perspective. Knowing the overall impact of a program in terms of tangible and measurable benefits to some target group is necessary to assess whether an investment in the program buys much. For instance, a drug rehabilitation program, which requires a large fixed start-up cost plus considerable operating expenses, should be able to show that this investment pays off in reduced drug use or criminal activity among its clients. Quantifiable estimates of the impact of policies or programs are also important in assessing the overall social benefit of particular approaches; it is often useful to know how much a recent change in policy has affected some subgroup in an unintended way. For instance, more stringent penalties for dealing crack, rather than powdered cocaine, appear to have produced only a marginal decrease in drug trafficking at the expense of considerable racial disparity in sentencing. Informed practice and policy rest on empirical quantifications of how much outcomes shift when certain approaches or policies are put into place.