
An Introduction to Experimental Criminology

Chapter in: Handbook of Quantitative Criminology

Abstract

Experimental criminology is scientific knowledge about crime and justice discovered from random assignment of different conditions in large field tests. This method is the preferred way to estimate the average effects of one variable on another, holding all other variables constant (Campbell and Stanley 1963; Cook and Campbell 1979). While the experimental method is not intended to answer all the research questions in criminology, it can be used far more often than most criminologists assume (Federal Judicial Center 1981). Opportunities are particularly promising in partnership with criminal justice agencies.

The highest and best use of experimental criminology is to develop and test theoretically coherent ideas about reducing harm (Sherman 2006, 2007), rather than just “evaluating” government programs. Those tests, in turn, can help to accumulate an integrated body of grounded theory (Glaser and Strauss 1967) in which experimental evidence plays a crucial role. When properly executed, randomized field experiments provide the ideal tests of theories about both the prevention and causation of crime.


Notes

  1. See http://www.online-literature.com/doyle/adventures_sherlock/11/, downloaded on July 24, 2009.

  2. See http://www.rothamsted.bbsrc.ac.uk/corporate/Origins.html, downloaded on July 26, 2009.

References

  • Angrist JD (2006) Instrumental variables methods in experimental criminological research: what, why and how. J Exp Criminol 2(1):23–44

  • Angrist JD, Imbens GW, Rubin DB (1996) Identification of causal effects using instrumental variables. J Am Stat Assoc 91:444–455

  • Ares CE, Rankin A, Sturz H (1963) The Manhattan Bail Project: an interim report on the use of pre-trial parole. N Y Univ Law Rev 38:67–95

  • Ariel B (2008) Seminar presented to the Jerry Lee Centre for Experimental Criminology, Institute of Criminology, University of Cambridge, October

  • Ariel B (2009) Taxation and compliance: an experimental study. Doctoral dissertation, Hebrew University of Jerusalem, Israel

  • Barnes G, Ahlman L, Gill C, Kurtz E, Sherman L, Malvestuto R (2010) Low-intensity community supervision for low-risk offenders: a randomized, controlled trial. J Exp Criminol, forthcoming

  • Berk RA (2005) Randomized experiments as the bronze standard. J Exp Criminol 1(4):417–433

  • Bliss M (1999) William Osler: a life in medicine. University of Toronto Press, Toronto

  • Boruch RF (1997) Randomized experiments for policy and planning. Sage, Newbury Park, CA

  • Bradley RS, Jones PD (1992) Climate since A.D. 1500. Routledge, London

  • Braithwaite J (1989) Crime, shame and reintegration. Cambridge University Press, Cambridge

  • Braithwaite J (2002) Restorative justice and responsive regulation. Oxford University Press, New York

  • Buerger M, Cohn E, Petrosino A (1995) Defining the “hotspots of crime”: operationalizing theoretical concepts for field research. In: Eck JE, Weisburd D (eds) Crime and place. Crime prevention studies, vol 4. Police Executive Research Forum. Criminal Justice Press, Monsey, NY

  • Campbell DT, Stanley JC (1963) Experimental and quasi-experimental designs for research. Rand-McNally, Chicago, IL

  • Cook TD, Campbell DT (1979) Quasi-experimentation: design and analysis issues for field settings. Rand-McNally, Chicago, IL

  • Einstein A, Infeld L ([1938] 1971) The evolution of physics, 2nd edn. Downloaded at Google Books on 12 July 2009

  • Eisner MP (2009) No effects in independent prevention trials: can we reject the cynical view? J Exp Criminol 5(2):163–183

  • Erwin BS (1986) Turning up the heat on probationers in Georgia. Fed Probat 50:17–24

  • Farrington DP (1983) Randomized experiments on crime and justice. In: Tonry M, Morris N (eds) Crime and justice: an annual review of research, vol 4. University of Chicago Press, Chicago, IL

  • Farrington DP (2003) British randomized experiments on crime and justice. Ann Am Acad Polit Soc Sci 589:150–169

  • Farrington DP, Knight BJ (1980) Stealing from a “lost” letter: effects of victim characteristics. Crim Justice Behav 7:423–436

  • Farrington DP, Welsh BC (2005) Randomized experiments in criminology: what have we learned in the last two decades? J Exp Criminol 1(1):9–38

  • Federal Judicial Center (1981) Experimentation in the law. Federal Judicial Center, Administrative Office of the US Courts, Washington, DC

  • Fisher RA (1935) The design of experiments. Oliver and Boyd, Edinburgh

  • Gartin PR (1992) A replication and extension of the Minneapolis Domestic Violence Experiment. PhD dissertation, University of Maryland

  • Gibbs JP (1975) Crime, punishment and deterrence. Elsevier, New York

  • Gladwell M (2005) Blink: the power of thinking without thinking. Little, Brown, Boston, MA

  • Gladwell M (2008) Outliers: the story of success. Little, Brown, Boston, MA

  • Glaser BG, Strauss AL (1967) The discovery of grounded theory: strategies for qualitative research. Aldine, Chicago, IL

  • Gorman DM, Huber JC (2009) The social construction of “evidence-based” drug prevention programs: a reanalysis of data from the Drug Abuse Resistance Education (DARE) program. Eval Rev 33:396–414

  • Hanley D (2006) Appropriate services: examining the case classification principle. J Offender Rehabil 42:1–22

  • Home Office (2005) The economic and social costs of crime against individuals and households 2003/04. Home Office online report 30/05, downloaded on 26 July 2009 from http://www.homeoffice.gov.uk/rds/pdfs05/rdsolr3005.pdf

  • Kirk DS (2009) A natural experiment on residential change and recidivism: lessons from Hurricane Katrina. Am Sociol Rev 74(3):484–504

  • Laub J, Sampson R (2003) Shared beginnings, divergent lives. Harvard University Press, Cambridge, MA

  • Loudon I (2002) Ignaz Phillip Semmelweis’ studies of death in childbirth. The James Lind Library (http://www.jameslindlibrary.org). Accessed 4 August 2009

  • Palmer T, Petrosino A (2003) The “experimenting agency”: the California Youth Authority Research Division. Eval Rev 27:228–266

  • Pate T, McCullough JW, Bowers R, Ferra A (1976) Kansas City peer review panel: an evaluation report. Police Foundation, Washington, DC

  • Paternoster R, Brame R, Bachman R, Sherman L (1997) Do fair procedures matter? The effect of procedural justice on spouse assault. Law Soc Rev 31(1):163–204

  • Piantadosi S (1997) Clinical trials: a methodologic perspective. Wiley, New York

  • Pocock SJ, Hughes MD, Lee RJ (1987) Statistical problems in the reporting of clinical trials: a survey of three medical journals. N Engl J Med 317:426–432

  • Salsburg D (2001) The lady tasting tea: how statistics revolutionized science in the twentieth century. Henry Holt, New York

  • Shapland J, Atkinson A, Atkinson H, Dignan J, Edwards L, Hibbert J, Howes M, Johnstone J, Robinson G, Sorsby A (2008) Does restorative justice affect reconviction? The fourth report from the evaluation of three schemes. Ministry of Justice research series 10/08, June. Ministry of Justice, London

  • Sherman LW (1979) The case for the research police department. Police Mag 2(6):58–59

  • Sherman LW (1992) Policing domestic violence: experiments and dilemmas. Free Press, New York

  • Sherman LW (1993) Defiance, deterrence and irrelevance: a theory of the criminal sanction. J Res Crime Delinq 30:445–473

  • Sherman LW (2006) To develop and test: the inventive difference between evaluation and experimentation. J Exp Criminol 2:393–406

  • Sherman LW (2007) The power few: experimental criminology and the reduction of harm. The 2006 Joan McCord Prize Lecture. J Exp Criminol 3(4):299–321

  • Sherman LW (2009) Evidence and liberty: the promise of experimental criminology. Criminol Crim Justice 9:5–28

  • Sherman LW, Berk RA (1984) The specific deterrent effects of arrest for domestic assault. Am Sociol Rev 49:261–271

  • Sherman LW, Rogan DP (1995a) Effects of gun seizures on gun violence: “hot spots” patrol in Kansas City. Justice Q 12(4):673–693

  • Sherman LW, Rogan DP (1995b) Deterrent effects of police raids on crack houses: a randomized controlled experiment. Justice Q 12(4):755–781

  • Sherman LW, Strang H (2004a) Verdicts or inventions? Interpreting randomized controlled trials in criminology. Am Behav Sci 47(5):575–607

  • Sherman LW, Strang H (2004b) Experimental ethnography: the marriage of qualitative and quantitative research. In: Anderson E, Brooks SN, Gunn R, Jones N (eds) Annals of the American Academy of Political and Social Science, vol 595, pp 204–222

  • Sherman LW, Strang H (2010) Doing experimental criminology. In: Gadd D, Karstedt S, Messner S (eds) Handbook of criminological research methods. Sage, Thousand Oaks, CA

  • Sherman LW, Weisburd D (1995) General deterrent effects of police patrol in crime hot spots: a randomized, controlled trial. Justice Q 12(4):635–648

  • Sherman LW, Smith DA, Schmidt J, Rogan DP (1992) Crime, punishment and stake in conformity: legal and informal control of domestic violence. Am Sociol Rev 57:680–690

  • Sherman LW, Strang H, Woods D (2000) Recidivism patterns in the Canberra reintegrative shaming experiments (RISE). Downloaded on 5 August 2009 at http://www.aic.gov.au/criminal_justice_system/rjustice/rise/aspx

  • Sherman LW, Strang H, Angel C, Woods D, Barnes G, Bennett S, Rossner M, Inkpen N (2005) Effects of face-to-face restorative justice on victims of crime in four randomized controlled trials. J Exp Criminol 1(3):367–395

  • The Multisite Violence Prevention Project (2008) Impact of a universal school-based violence prevention program on social-cognitive outcomes. Prev Sci 9(4):231–244

  • Tilley N (2009) Sherman vs Sherman: realism vs rhetoric. Criminol Crim Justice 9(2):135–144

  • Tröhler U (2003) James Lind and scurvy: 1747 to 1795. The James Lind Library (http://www.jameslindlibrary.org). Downloaded 4 August 2009

  • Weisburd D (1993) Design sensitivity in criminal justice experiments. Crime Justice 17:337–379

  • Weiss C (2002) What to do until the random assigner comes. In: Mosteller F, Boruch R (eds) Evidence matters. Brookings Institution, Washington, DC

  • Welsh BC, Farrington DP, Sherman LW (2001) Costs and benefits of preventing crime. Westview Press, Boulder, CO


Appendix 1

Crim-PORT 1.0:

Criminological Protocol for Operating Randomized Trials

© 2009 by Lawrence W. Sherman and Heather Strang

INSTRUCTIONS: Please use this form to enter information directly into the WORD document as the protocol for your registration on Cambridge University’s Jerry Lee Centre of Experimental Criminology’s Registry of EXperiments in Policing Strategy and Tactics (REX-POST) or the separate Registry of Experiments in Corrections Strategy and Tactics (REX-COST) at http://www.crim.cam.ac.uk/experiments.

CONTENTS:

  1. Name and Hypotheses

  2. Organizational Framework

  3. Unit of Analysis

  4. Eligibility Criteria

  5. Pipeline: Recruitment or Extraction of Cases

  6. Timing

  7. Random Assignment

  8. Treatment and Comparison Elements

  9. Measuring and Managing Treatments

  10. Measuring Outcomes

  11. Analysis Plan

  12. Due Date and Dissemination Plan

1. Name and Hypotheses

  A. Name of Experiment _____________

  B. Principal Investigator

    (Name) _____________

    (Employer) _____________

  C. 1st Co-Principal Investigator

    (Name) _____________

    (Employer) _____________

  D. 2nd Co-Principal Investigator

    (Name) _____________

    (Employer) _____________

  E. General Hypothesis _____________: (Experimental or Primary Treatment) _____________ causes (less or more) _____________ (crime or justice outcome) _____________ than (comparison or control treatment) _____________.

  F. Specific Hypotheses _____________:

    1. List all variations of treatment delivery to be tested.

    2. List all variations of outcome measures to be tested.

    3. List all subgroups to be tested for all varieties of outcome measures.

2. Organizational Framework: Check only one from a, b, c, or d

  A. In-House delivery of treatments, data collection and analysis _____________

  B. Dual Partnership: Operating agency delivers treatments with independent research organization providing random assignment, data collection, analysis _____________

    • Name of Operating Agency _____________

    • Name of Research Organization _____________

  C. Multi-Agency Partnership: Operating agencies deliver treatments with independent research organization providing random assignment, data collection, analysis

    • Name of Operating Agency 1 _____________

    • Name of Operating Agency 2 _____________

    • Name of Operating Agency 3 _____________

    • Name of Research Organization _____________

  D. Other Framework (describe in detail).

3. Unit of Analysis

Check only one

  _____________ A. People (describe role: offenders, victims, etc.) _____________

  _____________ B. Places (describe category: school, corner, face-block, etc.) _____________

  _____________ C. Situations (describe: police-citizen encounters, fights, etc.) _____________

  _____________ D. Other (describe) _____________

4. Eligibility Criteria

  A. Criteria Required (list all)

  B. Criteria for Exclusion (list all)

5. Pipeline: Recruitment or Extraction of Cases (answer all questions)

  A. Where will cases come from?

  B. Who will obtain them?

  C. How will they be identified?

  D. How will each case be screened for eligibility?

  E. Who will register the case identifiers prior to random assignment?

  F. What social relationships must be maintained to keep cases coming?

  G. Has a Phase I (no-control, “dry-run”) test of the pipeline and treatment process been conducted? If so,

    • How many cases were attempted to be treated?

    • How many treatments were successfully delivered?

    • How many cases were lost during treatment delivery?

6. Timing: Cases come into the experiment in (check only one)

  A. A trickle-flow process, one case at a time _____________

  B. A single batch assignment _____________

  C. Repeated batch assignments _____________

  D. Other (describe below) _____________

7. Random Assignment

  A. How is the random assignment sequence to be generated?

    (Coin-toss, every Nth case, and other nonrandom tools are banned from CCR-RCT.)

    Check one from 1, 2 or 3 below

    1. Random numbers table → case number sequence → sealed envelopes with case numbers outside and treatment assignment inside, with 2-sheet paper surrounding treatment _____________

    2. Random numbers case–treatment generator program in secure computer _____________

    3. Other (please describe below) _____________

  B. Who is entitled to issue random assignments of treatments?

    Role:

    Organization:

  C. How will random assignments be recorded in relation to case registration?

    Name of database:

    Location of data entry:

    Persons performing data entry:

8. Treatment and Comparison Elements

  A. Experimental or Primary Treatment

    1. What elements must happen, with dosage level (if measured) indicated.

      • Element A:

      • Element B:

      • Element C:

      • Other Elements:

    2. What elements must not happen, with dosage level (if measured) indicated.

      • Element A:

      • Element B:

      • Element C:

      • Other Elements:

  B. Control or Secondary Comparison Treatment

    1. What elements must happen, with dosage level (if measured) indicated.

      • Element A:

      • Element B:

      • Element C:

      • Other Elements:

    2. What elements must not happen, with dosage level (if measured) indicated.

      • Element A:

      • Element B:

      • Element C:

      • Other Elements:

9. Measuring and Managing Treatments

  A. Measuring

    1. How will treatments be measured?

    2. Who will measure them?

    3. How will data be collected?

    4. How will data be stored?

    5. Will data be audited?

    6. If audited, who will do it?

    7. How will data collection reliability be estimated?

    8. Will data collection vary by treatment type? If so, how?

  B. Managing

    1. Who will see the treatment measurement data?

    2. How often will treatment measures be circulated to key leaders?

    3. If treatment integrity is challenged, whose responsibility is correction?

10. Measuring and Monitoring Outcomes

  A. Measuring

    1. How will outcomes be measured?

    2. Who will measure them?

    3. How will data be collected?

    4. How will data be stored?

    5. Will data be audited?

    6. If audited, who will do it?

    7. How will data collection reliability be estimated?

    8. Will data collection vary by treatment type? If so, how?

  B. Monitoring

    1. How often will outcome data be monitored?

    2. Who will see the outcome monitoring data?

    3. When will outcome measures be circulated to key leaders?

    4. If the experiment finds early significant differences, what procedure is to be followed?

11. Analysis Plan

  A. Which outcome measure is considered to be the primary indicator of a difference between the experimental treatment and comparison group?

  B. What is the minimum sample size to be used to analyze outcomes?

  C. Will all analyses employ an intention-to-treat framework?

  D. What is the threshold below which the percent Treatment-as-Delivered would be so low as to bar any analysis of outcomes?

  E. Who will do the data analysis?

  F. What statistic will be used to estimate effect size?

  G. What statistic will be used to calculate P values?

  H. What is the magnitude of effect needed for a P = 0.05 difference to have an 80% chance of detection with the projected sample size (optional but recommended calculation of power curve) for the primary outcome measure?
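The calculation requested in item H can be sketched in a few lines. This is an illustrative sketch (not part of Crim-PORT) using the standard normal approximation for a two-arm comparison of means; the function name is ours:

```python
from statistics import NormalDist

def detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized mean difference (Cohen's d) detectable in a
    two-arm trial with a two-tailed test, by the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, ~1.96
    z_beta = NormalDist().inv_cdf(power)           # power quantile, ~0.84
    return (z_alpha + z_beta) * (2 / n_per_group) ** 0.5

# With 100 cases per arm, only moderate-to-large effects are detectable:
d = detectable_effect(100)  # ~0.40 standard deviations
```

Note the square-root law this exposes: quadrupling the sample size only halves the detectable effect.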

12. Due Date and Dissemination Plan

  A. What is the date by which the project agrees to file its first report on CCR-RCT? (Report of delay, preliminary findings, or final result.)

  B. Does the project agree to file an update every 6 months from the date of first report until the date of final report?

  C. Will preliminary and final results be published, in a 250-word abstract, on CCR-RCT as soon as available?

  D. Will CONSORT requirements be met in the final report for the project? (See http://www.consort-statement.org/)

  E. What organizations will need to approve the final report? (Include any funders or sponsors.)

  F. Do all organizations involved agree that a final report shall be published after a maximum review period of 6 months from the principal investigator’s certification of the report as final?

  G. Does the principal investigator agree to post any changes in agreements affecting items 12A to 12F above?

  H. Does the principal investigator agree to file a final report within 2 years of cessation of experimental operations, no matter what happened to the experiment? (e.g., “random assignment broke down after 3 weeks and the experiment was cancelled” or “only 15 cases were referred in the first 12 months and the experiment was suspended”)

An Introduction to Experimental Criminology: Lawrence W. Sherman

  1. Background

    Experimental criminology (EC) is scientific knowledge about crime and justice discovered from random assignment of different conditions in large field tests.

    1. This method is the preferred way to estimate the average effects of one variable on another, holding all other variables constant.

    2. While the experimental method is not intended to answer all research questions in criminology, it can be used far more often than most criminologists assume.

      • Opportunities are particularly promising in partnership with criminal justice agencies.

    Note: The goal of this chapter is to help its readers improve the design and conduct of criminological experiments. Its method is to describe the necessary steps and preferred decisions in planning, conducting, completing, analyzing, reporting, and synthesizing high-quality randomized controlled trials (RCTs) in criminology.

  • EC use The highest and best use of experimental criminology is to develop and test theoretically coherent ideas about reducing harm (Sherman 2006, 2007), rather than just “evaluating” government programs.

    • Those tests, in turn, can help to accumulate an integrated body of grounded theory in which experimental evidence plays a crucial role.

    • The advantages depend entirely on the capability of the experimenters to ensure success in achieving the many necessary elements of an unbiased comparison:

      1. Many randomized field experiments in criminology suffer flaws that could have been avoided with better planning.

  • Metaphors for experiments The success of experimental criminology may depend on choosing the right metaphor.

    • The most useful metaphor is constructing a building.

      • The recurrent metaphor of constructing a building helps to illustrate the order of steps to take for best results.

      • The steps presented in this chapter begin with the intellectual property of every experiment: formulating the research question.

    • Once a protocol is agreed and approved, the experimenters (like builders) must find and “contract” with a wide range of agents and others to best construct and sustain the experiment.

    • When and if all these steps are completed, the experiment will be ready for analysis.

    • This chapter briefly maps out those principles and the arguments for and against fundamentally different analytic approaches in EC.

    Part 1: Intellectual Property: Formulating the Research Question

    • Great experiments Great experiments in criminology are arguably based on three criteria:

      1. They test theoretically central hypotheses – experimentalists can do the most good for science when they are the most focused on the theoretical implications of their experiments.

      2. They eliminate as many competing explanations as possible – it is the capacity to limit ambiguity by eliminating competing explanations that makes EC so important to criminological theory.

      3. They show one intervention to be far more cost-effective than others – rising interest in this principle alone has done more to encourage evidence-based government programs than any other.

        • Experiments must be planned to measure the costs of delivering programs, both in a start-up phase and in a “rollout” model with perhaps more efficiencies from mass production.

      Note: Putting these criteria together in the formulation of an experimental research question may seem to be more a matter of “art” than of science. Such a judgment would demean the importance of intuition, inspiration, and insight in science, as in many fields involving complex decisions.

    Part 2: Social Foundation: Developing a “Field Station”

    • Field stations The history of experimental field science shows many examples of research centered in what looks much like an indoor laboratory – but with a crucial difference: the studies consider questions that cannot be answered in a laboratory.

      • Field research stations have collected various kinds of observational data systematically in the same places for at least 300 years.

    • By the 1950s, hospitals associated with medical schools took on the same character as field stations, linking teaching and research with a large number of clinical randomized controlled trials (RCTs).

    • From at least the 1960s, similar concentrations of field experiments have been found in the criminal justice system.

    • The concept of a field station where data are recorded and experiments can last for many decades is an explicit vision for how to conduct experiments in criminology.

    • Social elements The key to holding an experiment together is understanding a cognitive map of its social elements, which include the funders, the executive leadership of an operating agency, the mid-level operating liaison person, the agents delivering treatments, and, where necessary, the agents providing cases.

    Part 3: Deciding on the Experimental Protocol

    • Experiment blueprints The future is clear: experimental criminology will need to design blueprints and stick to them, absent approval from oversight bodies.

      • It can be argued that EC will be substantially improved by wider use of experimental protocols.

      • One reason is that so many RCTs in criminology have either violated good design standards or failed to report fundamental information.

      • CONSORT (Consolidated Standards of Reporting Trials) can lead to better planning of experiments with protocols that anticipate reporting requirements.

      • The CONSORT checklist includes 22 reporting elements.

    Note: CONSORT alone is not enough. A reporting system does not tell you how to design a protocol for an experiment before it starts. It only tells you what readers need to know after it is finished. Included in this chapter is the first version of a standard protocol format for experiments in criminology. The appendix lays out the elements of the protocol.

    Table 1

    Part 4: “Contracts”: Recruiting, Consulting, and Training of Key People

    • Using the construction metaphor, experiments must be built by contractors (who may also have been the architects) who know how to “contract” with the right people (for money or pro bono love of the research).

      • Principal investigators in experimental criminology will generally be PhD-level academics.

      • The experimental criminologist – or “random assigner” – serving as principal investigator should be seen, supported, and held accountable as the primary leader of the experiment.

      • Other key roles are the field coordinator, the agency liaison, the data manager, and the agent supplying the cases.

    Part 5: Starting and Sustaining the Experiment

    • Design How an experiment begins depends on how it must be designed.

      • There is nothing easier than launching a “batch” random assignment project, and nothing harder than launching a “trickle-flow” random assignment project.

        1. While months or years of preparation may be required for batch random assignment, it sometimes offers the capacity to literally push a button that launches the experiment.

        2. Trickle-flow experiments, in contrast, may require the cooperation of hundreds of people to supply eligible cases for random assignment – after launch, the even greater challenge is to sustain the case flow.

    Part 6: Supplying: Obtaining Cases

    • Supplying cases The biggest challenge in trickle-flow experiments is to find and extract the cases that are “leaking” out of the experimental sample.

      • Increasingly, information technology applications can be used to identify the missed cases and the agents who could have referred them to the experimenters.

      • If the agency strongly supports the experiment, there may be ways to encourage agents who do not refer the cases to start doing so.

      • But if the agency will not, or cannot, attempt to persuade those who can contribute cases to an experiment, the only tool left is the ingenuity of the site coordinator or principal investigator.

    Part 7: Screening for Eligibility

    • Avoiding ineligible cases The best solution to the problem of including ineligible cases is to prevent it prior to random assignment.

      • The best time to exclude ineligible cases is when writing the budget: making sure that you spend enough money to have an independent check on the eligibility of cases.

    Part 8: Assigning Treatments

    • Assignment system Saving money on random assignment costs is penny wise and pound foolish.

      • The chance for biased selection of eligible cases can be virtually eliminated by having research staff answer the phone, record the identifying details of the officers and the suspects in a secure computer, and then open a numbered envelope sealed with red sealing wax.

      • The credibility of an independent random assignment system is well worth the increased budget.

      • It is important to design a protocol that separates random assignment from operating staff.
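The registration-first logic behind such a system can be sketched in a few lines. This is a hypothetical illustration of the principle (register the case identifier before the assignment is revealed, so operating staff cannot shop for a preferred treatment), not the system used in any particular experiment:

```python
import random

def make_assignment_sequence(n_cases, arms=("treatment", "control"), seed=None):
    """Pre-generate a balanced random assignment sequence: the electronic
    analogue of a stack of numbered, sealed envelopes."""
    rng = random.Random(seed)
    # Each arm appears equally often across the whole sequence.
    sequence = [arms[i % len(arms)] for i in range(n_cases)]
    rng.shuffle(sequence)
    return sequence

def assign(case_id, registry, sequence):
    """Register the case identifier first, then reveal the next assignment.
    Registration before revelation prevents selective entry of cases."""
    if case_id in registry:
        raise ValueError(f"case {case_id} already registered")
    registry[case_id] = sequence[len(registry)]
    return registry[case_id]

registry = {}
seq = make_assignment_sequence(4, seed=42)
for case in ["C001", "C002", "C003", "C004"]:
    assign(case, registry, seq)
```

Re-registering an already-assigned case raises an error, which is the software equivalent of refusing to hand out a second envelope for the same case.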

    Part 9: Delivering Treatments Consistently

    • Protocol matters Anything that creates differences within the units of analysis, or in the way treatments are delivered, can cause a misleading result: no significant difference despite a “true,” underlying difference.

      • While eligibility criteria can limit the differences in cases (for example, in age or prior record), it is much harder for a protocol to insure consistency in the treatment

      • The best plan is to invest heavily in measurement, such as observations and interviews – even then, however, it is much harder to deliver consistency than to measure it.

      • Replicating experiments first conducted in Canberra, Australia, and then again in England showed that protocol matters: consistency was far higher in England.

    Part 10: Measuring Treatments Delivered

    • Failure to measure The failure to measure treatment delivery is one of the most common errors in experimental criminology.

      • Numerous experiments assume that once a treatment is assigned it will be delivered; yet whenever budgets are invested in measuring delivery, the data reveal at least some portion of cases in which delivery did not occur.

      • Great sums can be saved by using two kinds of electronic data:

        1. One is the Automatic Radio Locator System (ARLS), which records where each and every police officer is at all times.

        2. The other technology is CCTV cameras, which are trained on many hot spots and can record what happens 100% of the time.
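As a toy illustration of why delivery must be measured rather than assumed (the hot-spot names and logs below are invented), delivery records from sources such as location systems or cameras can be reconciled against the assignment list:

```python
# Hypothetical assignments and delivery logs for a hot-spots patrol trial.
assigned_visits = {"hotspot-1", "hotspot-2", "hotspot-3", "hotspot-4"}
logged_visits = {"hotspot-1", "hotspot-3"}   # visits actually recorded

# Share of assigned treatments that were demonstrably delivered.
delivery_rate = len(assigned_visits & logged_visits) / len(assigned_visits)

# Cases where assignment did not result in a recorded delivery.
undelivered = assigned_visits - logged_visits
```

Even this crude audit surfaces the gap between assignment and delivery that an unmeasured trial would silently ignore.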

    Part 11: Measuring Outcomes

    • Principles Several principles can help in measuring outcomes.

      • Choose universal measures over low-response-rate measures

      • Choose crime frequency over prevalence

      • Choose a seriousness index over categorical counts

      • Choose one measure as primary at the outset
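The measurement principles above can be made concrete with a small sketch. The offense counts and severity weights below are invented for illustration; they show how frequency retains information that prevalence discards, and how a seriousness index weights offenses rather than counting all categories equally:

```python
# Hypothetical follow-up data: new offenses per subject in one arm.
offense_counts = [0, 0, 3, 1, 0, 5, 0, 2, 0, 0]

# Prevalence: the share of subjects with any new offense.
prevalence = sum(1 for c in offense_counts if c > 0) / len(offense_counts)

# Frequency: the mean number of offenses per subject.
frequency = sum(offense_counts) / len(offense_counts)

# Seriousness index: weight each offense by an illustrative severity score
# instead of treating all offense categories as equal.
severity_weights = {"assault": 5.0, "theft": 2.0, "disorder": 1.0}
observed_offenses = ["assault", "theft", "theft", "disorder"]
seriousness = sum(severity_weights[o] for o in observed_offenses)
```

Here two treatment arms with identical prevalence could still differ sharply in frequency or seriousness, which is why the chapter recommends the richer measures.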

    Part 12: Analyzing Results

    • Analysis issues There are two issues to supplement what textbooks say on analysis of results.

      1. The first is to seek simplicity in analysis.

      2. The other is to test policies, not treatments – in general, Intention-To-Treat analysis makes the most sense in keeping the analysis simple.

        • As long as the experiment is limited to the random assignment of policy and cannot control treatment, the honest thing to do is to analyze the effects of policy.
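A minimal intention-to-treat sketch (the case records below are invented): every case is analyzed under the policy it was randomly assigned, even where the treatment was never actually delivered:

```python
# Each record: (assigned_policy, treatment_delivered, reoffended)
cases = [
    ("arrest", True,  False), ("arrest", False, True),
    ("arrest", True,  False), ("warn",   True,  True),
    ("warn",   True,  False), ("warn",   True,  True),
]

def itt_reoffense_rate(cases, policy):
    """Reoffending rate by *assigned* policy: undelivered treatments and
    crossovers stay in their assigned arm, so the estimate reflects the
    effect of adopting the policy, not of the treatment itself."""
    arm = [reoffended for assigned, _, reoffended in cases if assigned == policy]
    return sum(arm) / len(arm)

arrest_rate = itt_reoffense_rate(cases, "arrest")   # undelivered case included
warn_rate = itt_reoffense_rate(cases, "warn")
```

Dropping or reassigning the undelivered case would reintroduce exactly the selection bias that random assignment was designed to eliminate.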

    Part 13: Communicating Results

    • Avoid complexity Simplicity is also a great virtue in communicating results.

      • Academics inclined to making fine distinctions are often impatient with simplicity, but they lack evidence to support the claim that complexity will lead to better policymaking or even better science.

    Part 14: Synthesizing Results

    • The goal of synthesis The goal of research synthesis is to draw conclusions from a universe of all tests of a single hypothesis.

      • Randomized experiments are especially valuable for systematic reviews and meta-analysis of accumulated tests of a program or policy.

      • Even those who are generally critical of the statistical basis of meta-analysis are ready to endorse its use when only randomized experiments are included in the calculations.

    Part 15: Becoming a Random Assigner

    • Recruitment Who makes the best random assigner?

      • Experimental criminology requires a more extroverted personality than is needed for scholarship in general.

      • Experimental criminology may also require a greater readiness to accept the big problems that cannot be changed quickly, in order to attack smaller problems that can be.

      • The best experimental criminology will feature the best traits of scholarship in general: erudition, broad theoretical vision, a nuanced grasp of causal inference, and abiding curiosity.

    • Appendix The appendix contains a Criminological Protocol for Operating Randomized Trials.

This information map was prepared with the support of the Jerry Lee Foundation and the assistance of Herbert Fayer.

Copyright information

© 2010 Springer Science+Business Media, LLC

Cite this chapter

Sherman, L.W. (2010). An Introduction to Experimental Criminology. In: Piquero, A., Weisburd, D. (eds) Handbook of Quantitative Criminology. Springer, New York, NY. https://doi.org/10.1007/978-0-387-77650-7_20
