
Prevention Science, Volume 7, Issue 1, pp 43–56

The Role of Behavior Observation in Measurement Systems for Randomized Prevention Trials

  • James Snyder
  • John Reid
  • Mike Stoolmiller
  • George Howe
  • Hendricks Brown
  • Getachew Dagne
  • Wendi Cross
Original Article

Abstract

The role of behavior observation in theory-driven prevention intervention trials is examined. A model is presented to guide the choice of measurement strategies for five core elements of theoretically informed, randomized prevention trials: (1) training of intervention agents, (2) delivery of key intervention conditions by intervention agents, (3) responses of clients to intervention conditions, (4) short-term risk reduction in targeted client behaviors, and (5) long-term change in client adjustment. It is argued that the social processes typically thought to mediate interventionist training (Element 1) and the efficacy of psychosocial interventions (Elements 2 and 3) may be powerfully captured by behavior observation. It is further argued that behavior observation has advantages for measuring the short-term change (Element 4) engendered by an intervention, including sensitivity to behavior change and blinding to intervention status.
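To make the mediation logic concrete: in this framework, observed delivery of intervention conditions (Element 2) can be tested as a mediator of short-term client change (Element 4). The Python sketch below is purely illustrative and is not drawn from the article; it estimates an indirect effect with the standard product-of-coefficients method on simulated data, and every variable name and numeric value in it is a hypothetical assumption.

```python
# Minimal sketch (not from the paper): product-of-coefficients mediation,
# testing observed intervention delivery (Element 2) as a mediator of
# short-term client change (Element 4). All names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical trial data: random assignment, an observed fidelity score
# (e.g., coded interventionist behavior), and a short-term outcome.
treatment = rng.integers(0, 2, size=n)                 # 0 = control, 1 = intervention
fidelity = 0.8 * treatment + rng.normal(0, 1, size=n)  # Element 2: observed delivery
outcome = 0.5 * fidelity + rng.normal(0, 1, size=n)    # Element 4: short-term change

def ols_coefs(y, predictors):
    """Return OLS coefficients for y ~ intercept + predictors."""
    design = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(design, y, rcond=None)[0]

a = ols_coefs(fidelity, [treatment])[1]            # treatment -> mediator path
b = ols_coefs(outcome, [treatment, fidelity])[2]   # mediator -> outcome path, adjusting for treatment
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```

In an actual trial, the fidelity score would come from coded observational data rather than simulation, and a confidence interval for the indirect effect a*b would typically be obtained by bootstrap or a comparable resampling method.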

KEY WORDS:

prevention trials; behavior observation; mediators; short-term outcomes

ACKNOWLEDGMENTS

This report is a result of a collaborative effort by members of the Workgroup for the Analysis of Observational Data (WODA), supported in part by grants 3P30 MH46690-13S1, R01 MH57342, R01 MH40859, MH59855, R01 DA015409, and T32 MH18911.


Copyright information

© Society for Prevention Research 2005

Authors and Affiliations

  • James Snyder, Department of Psychology, Wichita State University, Wichita, US
  • John Reid, Oregon Social Learning Center, Eugene, US
  • Mike Stoolmiller, Research and Statistical Consulting, Marquette, US
  • George Howe, Psychiatry and Human Behavior, George Washington University, Washington, US
  • Hendricks Brown, Department of Epidemiology and Biostatistics, College of Public Health MDC-56, University of South Florida, Tampa, US
  • Getachew Dagne, Department of Epidemiology and Biostatistics, College of Public Health MDC-56, University of South Florida, Tampa, US
  • Wendi Cross, Department of Psychiatry and Pediatrics, University of Rochester Medical Center, Rochester, US
