
Behavior Analysis in Practice, Volume 12, Issue 2, pp 491–502

Systematic Protocols for the Visual Analysis of Single-Case Research Data

  • Katie Wolfe
  • Erin E. Barton
  • Hedda Meadan
Technical and Tutorials

Abstract

Researchers in applied behavior analysis and related fields such as special education and school psychology use single-case designs to evaluate causal relations between variables and to evaluate the effectiveness of interventions. Visual analysis is the primary method by which single-case research data are analyzed; however, research suggests that visual analysis may be unreliable. In the absence of specific guidelines to operationalize the process of visual analysis, it is likely to be influenced by idiosyncratic factors and individual variability. To address this gap, we developed systematic, responsive protocols for the visual analysis of A-B-A-B and multiple-baseline designs. The protocols guide the analyst through the process of visual analysis and synthesize responses into a numeric score. In this paper, we describe the content of the protocols, illustrate their application to 2 graphs, and describe a small-scale evaluation study. We also describe considerations and future directions for the development and evaluation of the protocols.
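The abstract notes that the protocols synthesize an analyst's responses into a numeric score. As a rough, hypothetical illustration of that general idea only (the feature names, the 0–3 rating scale, and the simple averaging rule below are assumptions for demonstration and are not the authors' protocol), one might aggregate feature-level ratings across the adjacent phase comparisons of an A-B-A-B design as follows:

```python
# Hypothetical sketch: aggregate an analyst's ratings of visual-analysis
# features into one numeric score. The feature list, 0-3 scale, and
# averaging rule are illustrative assumptions, not the published protocol.

from statistics import mean

# Features commonly examined in visual analysis of single-case data
# (level, trend, variability, immediacy of effect, overlap, consistency).
FEATURES = ["level", "trend", "variability", "immediacy", "overlap", "consistency"]


def phase_comparison_score(ratings: dict[str, int]) -> float:
    """Average an analyst's 0-3 ratings across features for one adjacent phase comparison."""
    missing = [f for f in FEATURES if f not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return mean(ratings[f] for f in FEATURES)


def design_score(comparisons: list[dict[str, int]]) -> float:
    """Synthesize per-comparison scores into a single score for the design
    (e.g., the three phase changes in an A-B-A-B design)."""
    return mean(phase_comparison_score(c) for c in comparisons)


if __name__ == "__main__":
    # Example: three adjacent phase comparisons (A-B, B-A, A-B).
    example = [
        {"level": 3, "trend": 2, "variability": 3, "immediacy": 3, "overlap": 2, "consistency": 3},
        {"level": 2, "trend": 2, "variability": 2, "immediacy": 3, "overlap": 2, "consistency": 2},
        {"level": 3, "trend": 3, "variability": 2, "immediacy": 3, "overlap": 3, "consistency": 3},
    ]
    print(f"Overall score: {design_score(example):.2f}")
```

A structured aggregation of this kind is what allows the analyst's judgments to be summarized numerically and compared across raters; the actual protocols described in the paper guide the analyst through a responsive sequence of questions rather than the flat rating scheme assumed here.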

Keywords

Visual analysis · Single-case research · Visual inspection · Data analysis

Notes

Compliance with Ethical Standards

Conflict of Interest

Katie Wolfe declares that she has no conflict of interest. Erin E. Barton declares that she has no conflict of interest. Hedda Meadan declares that she has no conflict of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The University of South Carolina Institutional Review Board approved the procedures in this study.

Informed Consent

Informed consent was obtained from all individual participants included in the study.


Copyright information

© Association for Behavior Analysis International 2019

Authors and Affiliations

  1. Department of Educational Studies, University of South Carolina, Columbia, USA
  2. Department of Special Education, Vanderbilt University, Nashville, USA
  3. Department of Special Education, University of Illinois at Urbana–Champaign, Champaign, USA
