
Estimating Treatment Effects: Matching Quantification to the Question

Chapter in Handbook of Quantitative Criminology

Abstract

In criminal justice as well as other areas, practitioners and/or policy makers often wish to know whether something “works” or is “effective.” Does a certain form of family therapy reduce troubled adolescents’ involvement in crime more than what would be seen if they were on probation? Does a jail diversion policy substantially increase indicators of community adjustment for mentally ill individuals who are arrested and processed under this policy? If so, by how much?

Trying to gauge the impact of programs or policies is eminently logical for several reasons. Obviously, this type of information is important from a traditional cost-benefit perspective. Knowing the overall impact of a program in terms of tangible and measurable benefits to some target group of interest is necessary to assess whether an investment in the program buys much. For instance, a drug rehabilitation program, which requires a large fixed cost of opening plus considerable additional operating expenses, should be able to show that this investment is worth it in terms of reduced drug use or criminal activity among its clients. Quantifiable estimates of the impact of policies or programs are also important in assessing the overall social benefit of particular approaches; it is often useful to know how much a recent change in policy has affected some subgroup in an unintended way. For instance, more stringent penalties for dealing crack, rather than powdered cocaine, appear to have produced only a marginal decrease in drug trafficking at the expense of considerable racial disparity in sentencing. Informed practice and policy rest on empirical quantifications of how much outcomes shift when certain approaches or policies are put into place.


Notes

  1.

    For a thorough overview of the framework of the Rubin Causal Model, see Holland (1986).

  2.

    For a discussion of these more complicated situations involving time-dependency in treatment and covariate confounding, see Robins et al. (2000) and Robins et al. (1999).

  3.

    The concept of noncompliance should be understood in a purely statistical sense in this case, where it literally means not adhering to the randomly assigned treatment. Often, particularly in some clinical applications, the term noncompliant carries a negative connotation, as in unwillingness to accept a helpful therapy. Noncompliance can occur for a variety of reasons, not simply lack of insight or stubbornness, and should therefore not be taken to indicate anything negative about an individual when used in this context. For instance, if a chronic headache sufferer is randomized into the treatment group testing the effectiveness of a new drug and chooses not to take the drug simply because there is no pain at the time of treatment, then this individual is a non-complier as defined here.

  4.

    Angrist, Imbens, and Rubin also define a group known as never-takers: those who, regardless of the instrument, never select into treatment and are therefore not included as part of the treatment group. Furthermore, the assumption of monotonicity effectively rules out the existence of defiers, or those who would have selected into treatment had the instrument made them less likely to do so, but would not have selected into treatment had their value of the instrument made them more likely to do so.
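To make these compliance classes concrete, the following simulation sketch (not part of the chapter; sample size, effect size, and complier share are illustrative assumptions) generates a randomized trial with compliers and never-takers and shows that the Wald instrumental-variables estimator, the ratio of the intent-to-treat effects on the outcome and on treatment receipt, recovers the treatment effect among compliers, i.e., the local average treatment effect of Imbens and Angrist (1994).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Random assignment (the instrument): 1 = assigned to treatment
z = rng.integers(0, 2, size=n)

# Compliance types under monotonicity (no defiers):
# an assumed 60% compliers, 40% never-takers
is_complier = rng.random(n) < 0.6

# Treatment actually received: compliers follow their assignment,
# never-takers never take the treatment
d = np.where(is_complier, z, 0)

# Outcome: an assumed effect of 2.0 for those actually treated
y = 1.0 + 2.0 * d + rng.normal(0.0, 1.0, size=n)

# Intent-to-treat effects on outcome and on treatment receipt
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()

# Wald / IV estimator: recovers the complier-average effect (~2.0),
# while the ITT effect on y is diluted toward 1.2 by never-takers
late = itt_y / itt_d
print(f"ITT: {itt_y:.2f}, first stage: {itt_d:.2f}, LATE: {late:.2f}")
```

A naive comparison of assigned groups (the ITT effect) understates the effect among those who actually take the treatment; dividing by the first-stage effect on treatment receipt rescales it to the complier subpopulation.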

References

  • Angrist JD (1990) Lifetime earnings and the Vietnam era draft lottery: evidence from social security administrative records. Am Econ Rev 80:313–335

  • Angrist JD (2004) Treatment effect heterogeneity in theory and practice, The Royal Economic Society Sargan Lecture. Econ J 114:C52–C83

  • Angrist JD (2006) Instrumental variables methods in experimental criminological research: what, why, and how. J Exp Criminol 2:23–44

  • Angrist J, Imbens G, Rubin DB (1996) Identification of causal effects using instrumental variables. J Am Stat Assoc 91:444–455

  • Berk RA, Sherman LW (1988) Police response to family violence incidents: an analysis of an experimental design with incomplete randomization. J Am Stat Assoc 83(401):70–76

  • Heckman JJ (1997) Instrumental variables: a study of implicit behavioral assumptions used in making program evaluations. J Hum Resour 32(2):441–462

  • Heckman JJ, Smith JA (1995) Assessing the case for social experiments. J Econ Perspect 9(2):85–110

  • Holland PW (1986) Statistics and causal inference. J Am Stat Assoc 81:945–960

  • Imbens GW, Angrist JD (1994) Identification and estimation of local average treatment effects. Econometrica 62:467–475

  • LaLonde RJ (1986) Evaluating the econometric evaluations of training programs with experimental data. Am Econ Rev 76:604–620

  • Manski CF (1995) Identification problems in the social sciences. Harvard University Press, Cambridge

  • McCaffrey DF, Ridgeway G, Morral AR (2004) Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychol Methods 9(4):403–425

  • Needleman HL, Riess JA, Tobin MJ, Biesecker GE, Greenhouse JB (1996) Bone lead levels and delinquent behavior. J Am Med Assoc 275(5):363–369

  • Nevin R (2000) How lead exposure relates to temporal changes in IQ, violent crime, and unwed pregnancy. Environ Res 83(1):1–22

  • Neyman JS (1923) On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Stat Sci 4:465–480

  • Ridgeway G (2006) Assessing the effect of race bias in post-traffic stop outcomes using propensity scores. J Quant Criminol 22(1):1–29

  • Robins JM, Greenland S, Hu F-C (1999) Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome. J Am Stat Assoc 94:687–700

  • Robins JM, Hernan MA, Brumback B (2000) Marginal structural models and causal inference in epidemiology. Epidemiology 11(5):550–560

  • Rosenbaum PR (2002) Observational studies, 2nd edn. Springer-Verlag, New York

  • Rosenbaum P, Rubin DB (1983) The central role of the propensity score in observational studies for causal effects. Biometrika 70:41–55

  • Rosenbaum PR, Rubin DB (1985) The bias due to incomplete matching. Biometrics 41:103–116

  • Rubin DB (1974) Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol 66:688–701

  • Rubin DB (1977) Assignment to treatment groups on the basis of a covariate. J Educ Stat 2:1–26

  • Rubin DB (1978) Bayesian inference for causal effects: the role of randomization. Ann Stat 6:34–58

  • Shadish WR, Cook TD, Campbell DT (2001) Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, Boston

  • Sherman LW, Berk RA (1984) The specific deterrent effects of arrest for domestic assault. Am Sociol Rev 49(2):261–272

  • Weisburd D, Lum C, Petrosino A (2001) Does research design affect study outcomes in criminal justice? Ann Am Acad Pol Soc Sci 578:50–70

  • Wright JP, Dietrich KN, Ris MD, Hornung RW, Wessel SD, Lanphear BP, Ho M, Rae MN (2008) Association of prenatal and childhood blood lead concentrations with criminal arrests in early adulthood. PLoS Med 5:e101


Copyright information

© 2010 Springer Science+Business Media, LLC

Cite this chapter

Loughran, T.A., Mulvey, E.P. (2010). Estimating Treatment Effects: Matching Quantification to the Question. In: Piquero, A., Weisburd, D. (eds) Handbook of Quantitative Criminology. Springer, New York, NY. https://doi.org/10.1007/978-0-387-77650-7_9
