Abstract
Situational bias is a systematic error caused by the research situation and participants' reactions to it. Situational factors that affect experimental (E-) and control (C-) participants equally do not cause spurious differences between conditions, but factors that affect them differentially do. Standardization of the research situation equalizes the situation for all participants, and calibration equalizes it across time. Participants may react differentially to conditions, and experimenters and data analysts may differentially affect conditions; blinding these persons prevents such differential influence. Random assignment of research persons (e.g., experimenters and interviewers) to conditions turns their systematic influence on participants into random error. Necessary conditions for a causal effect of an independent variable (IV) on a dependent variable (DV) are that the conditions are correctly implemented and not contaminated. A manipulation check is a procedure to verify the implementation of conditions, and contamination of conditions is prevented by separating conditions in location or time. Random assignment of participants counteracts selection bias, but it may induce randomization bias, for example, if participants dislike their assigned condition. Randomization bias usually cannot be prevented in randomized experiments, but it can be assessed by applying a doubly randomized preference design. Pretest-posttest studies are threatened by pretest effects, that is, the effects of a pretest on participants' behavior. Pretest effects can be prevented by, for example, replacing the pretest with an unobtrusive proxy pretest, and they can be assessed by using Solomon's four-group design. In addition to pretest effects, studies that use a self-report pretest may show a response shift, which is a change in the meaning of a participant's self-evaluation from pretest to posttest. A response shift can be assessed by administering a retrospective pretest.
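The random-assignment procedure summarized in the abstract can be sketched in code. This is a minimal illustration, not a procedure from the chapter itself: the function name and the round-robin balancing scheme are assumptions chosen for the example, and a seeded generator is used only to make the sketch reproducible.

```python
import random

def randomize_assignment(participants, conditions=("E", "C"), seed=None):
    """Randomly assign participants to conditions in near-equal groups.

    Shuffling before assignment turns systematic participant
    differences into random error across conditions.
    """
    rng = random.Random(seed)  # seeded only for reproducibility of the sketch
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal shuffled participants round-robin across conditions
    # so group sizes differ by at most one.
    assignment = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append(p)
    return assignment

groups = randomize_assignment(range(1, 21), seed=42)
print({c: len(ps) for c, ps in groups.items()})  # → {'E': 10, 'C': 10}
```

The same shuffling idea applies to assigning research persons (experimenters, interviewers) to conditions, as the abstract describes.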
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Mellenbergh, G.J. (2019). Situational Bias. In: Counteracting Methodological Errors in Behavioral Research. Springer, Cham. https://doi.org/10.1007/978-3-030-12272-0_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-74352-3
Online ISBN: 978-3-030-12272-0
eBook Packages: Behavioral Science and Psychology (R0)