
Abstract

Situational bias is a systematic error caused by the research situation and by participants’ reactions to that situation. Situational factors that affect E- and C-participants equally do not cause spurious differences between conditions, but factors that affect E- and C-participants differentially cause bias. Standardization of the research situation equalizes the situation for all participants, and calibration equalizes the research situation across time. Participants may react differentially to conditions, and experimenters and data analysts may affect conditions differentially; blinding these persons prevents such differential influence. Random assignment of research persons (e.g., experimenters and interviewers) to conditions turns their systematic influence on participants into random error. Necessary conditions for a causal effect of an IV on a DV are that the conditions are correctly implemented and that the conditions are not contaminated. A manipulation check is a procedure to check the implementation of conditions, and contamination of conditions is prevented by separating conditions in location or time. Random assignment of participants counteracts selection bias, but it may induce randomization bias, for example, if participants dislike their assigned condition. Randomization bias usually cannot be prevented in randomized experiments, but it can be assessed by applying a doubly randomized preference design. Pretest-posttest studies are threatened by pretest effects, which are effects of the pretest on participants’ behavior. Pretest effects can be prevented by, for example, replacing the pretest with an unobtrusive proxy pretest, and they can be assessed by using Solomon’s four-group design. In addition to pretest effects, studies that use a self-report pretest may show a response shift, which is a change in the meaning of a participant’s self-evaluation from pretest to posttest. A response shift can be assessed by administering a retrospective pretest.
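To make the design logic concrete, the sketch below shows one way to randomly assign participants to the four cells of Solomon’s four-group design (E/C crossed with pretested vs. posttest-only). This is an illustrative sketch, not code from the chapter; the group labels, participant IDs, and fixed seed are assumptions made here.

```python
"""Illustrative sketch (not from the chapter): random assignment of
participants to the four cells of Solomon's four-group design.
Group labels, participant IDs, and the seed are assumptions."""
import random


def solomon_assignment(participants, seed=2019):
    """Shuffle participants and deal them round-robin into four groups:
    E/C crossed with pretested vs. posttest-only. Random assignment
    counteracts selection bias; comparing pretested with posttest-only
    groups allows the pretest effect itself to be assessed."""
    groups = {
        "E_pretest": [],      # experimental condition, with pretest
        "E_no_pretest": [],   # experimental condition, posttest only
        "C_pretest": [],      # control condition, with pretest
        "C_no_pretest": [],   # control condition, posttest only
    }
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    labels = list(groups)
    for i, participant in enumerate(shuffled):
        groups[labels[i % len(labels)]].append(participant)
    return groups


if __name__ == "__main__":
    assignment = solomon_assignment([f"P{i:02d}" for i in range(1, 21)])
    for label, members in assignment.items():
        print(label, members)
```

In this layout, the E-versus-C contrast estimates the treatment effect, while the contrast between pretested and posttest-only groups within each condition indicates whether the pretest itself changed participants’ behavior.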


References

  • Adèr, H. J. (2008a). The main analysis phase. In H. J. Adèr & G. J. Mellenbergh (with contributions by D. J. Hand), Advising on research methods: A consultant’s companion (pp. 357–386). Huizen, The Netherlands: van Kessel.
  • Goeleven, E., de Raedt, R., & Koster, H. W. (2007). The influence of induced mood on the inhibition of emotional information. Motivation and Emotion, 31, 208–218.
  • Holland, P. W., & Dorans, N. (2006). Linking and equating. In R. L. Brennan (Ed.), Educational measurement (4th ed.). Westport, CT: American Council on Education/Praeger.
  • Hoogstraten, J. (1985). Influence of objective measures on self-reports in a retrospective pretest-posttest design. Journal of Experimental Education, 53, 207–210.
  • Hoogstraten, J. (2004). De machteloze onderzoeker: Voetangels en klemmen van sociaal-wetenschappelijk onderzoek [The helpless researcher: Pitfalls of social science research]. Amsterdam, The Netherlands: Boom.
  • Howard, G. S., Ralph, K. M., Gulanick, N. A., Maxwell, S. E., Nance, D. W., & Gerber, S. K. (1979). Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests. Applied Psychological Measurement, 3, 1–23.
  • Marcus, S. M., Stuart, E. A., Wang, P., Shadish, W. R., & Steiner, P. M. (2012). Estimating the causal effect of randomization versus treatment preference in a doubly randomized preference trial. Psychological Methods, 17, 244–254.
  • Moerbeek, M. (2005). Randomization of clusters versus randomization of persons within clusters: Which is preferable? American Statistician, 59, 173–179.
  • Orne, M. T. (1962). On the social psychology of the experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.
  • Rhoads, C. H. (2011). The implications of ‘contamination’ for experimental design in education. Journal of Educational and Behavioral Statistics, 36, 76–104.
  • Rosenberg, M. J. (1965). When dissonance fails: On eliminating evaluation apprehension from attitude measurement. Journal of Personality and Social Psychology, 1, 18–42.
  • Rücker, G. (1989). A two-stage trial design for testing treatment, self-selection and treatment preference effects. Statistics in Medicine, 4, 477–485.
  • Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103, 1334–1344.
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. New York, NY: Houghton Mifflin.
  • Solomon, R. L. (1949). An extension of control group design. Psychological Bulletin, 46, 137–150.
  • Sprangers, M. A. G. (1989). Response shift and the retrospective pretest: On the usefulness of retrospective pretest-posttest designs in detecting training related response shifts. Unpublished doctoral dissertation, University of Amsterdam, Amsterdam, The Netherlands.
  • Sprangers, M. A. G., & Schwartz, C. E. (2000). Integrating response shift into health-related quality-of-life research: A theoretical model. In C. E. Schwartz & M. A. G. Sprangers (Eds.), Adaptation to changing health: Response shift in quality-of-life research (pp. 11–23). Washington, DC: American Psychological Association.
  • van Belle, G. (2002). Statistical rules of thumb. New York, NY: Wiley.


Author information


Corresponding author

Correspondence to Gideon J. Mellenbergh.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Mellenbergh, G.J. (2019). Situational Bias. In: Counteracting Methodological Errors in Behavioral Research. Springer, Cham. https://doi.org/10.1007/978-3-030-12272-0_6
