Search engines, cognitive biases and the man–computer interaction: a theoretical framework for empirical researches about cognitive biases in online search on health-related topics

Abstract

The widespread use of online search engines to answer the general public's information needs has raised concerns about possible biases and the emergence of a 'filter bubble' in which users are isolated from attitude-discordant messages. Research is split between approaches that focus largely on the intrinsic limitations of search engines and approaches that investigate user search behavior. This work evaluates the findings and limitations of both approaches and advances a theoretical framework for empirical investigations of cognitive biases in online search activities on health-related topics. We aim to investigate the interaction between the user and the search engine as a whole. Online search activity on health-related topics is treated as a hypothesis-testing process. Two questions emerge: whether the information retrieved by the search engine is fit to fulfill its role as evidence, and whether users' use of this information is cognitively and epistemologically valid and unbiased.

Notes

  1.

    In the present paper the term bias is used according to the definition conceptualized within the 'heuristics and biases' program (Tversky and Kahneman 1983). In this perspective, a bias is a systematic deviation from norm or rationality; biases are adaptive results of automatic and intuitive reasoning methods (the heuristics) that simultaneously afford quick decision-making and reduce the risk of major consequences or missed opportunities.

  2.

    Being aimed at discovering how users actually cope with online information, the proposed framework takes search engines as they presently are. This does not imply that the decisions of those who actively shape online information-retrieval systems (site owners, engineers, etc.) are not to be questioned. In particular, the lack of transparency of Google's rankings is a consistent, far-reaching violation of any sound research criterion. In our opinion, however, research into the cognitive styles used in the present situation is propaedeutic to any proposal of reform. We should not merely wish for an abstractly fairer information-retrieval system, but for a system that is fairer for the actual people who use it, with their biases and cognitive styles. Moreover, any research that concentrates on willful manipulation by active information producers cannot reach its aim if it does not take into account that such manipulations exploit existing biases. If, for example, the representativeness heuristic did not exist, political propagandists would have a harder time shaping opinions.

  3.

    'Informational utility' can be defined as the usefulness a piece of information has (or is perceived to have) for adapting to the environment. "An individual will select a mass media message when he estimates that the message reward value exceeds the expenditures incurred in obtaining it. Instrumental utility is one major component in reward value" (Atkin 1973). Informational utility helps to explain why, in certain cases, selective exposure can work against the avoidance of attitude-dissonant messages (Knobloch et al. 2003). For example, Iyengar et al. (2008) found that, before the 2000 US presidential election, Republican voters showed a preference for Republican media, whereas Democrats showed no comparable bias. Knobloch-Westerwick and Kleinman (2012) suggested that, since Democrats felt they were going to lose the election, they found it useful to expose themselves to messages from the other side, to learn about future politics, and to develop counter-arguments.

  4.

    When constructing the experimental design, it should be considered that formulating a hypothesis and then searching for the possible causes of a cluster of symptoms may differ greatly from searching for the prognosis and therapies of an already diagnosed condition; the cognitive processes involved could be very different. It is advisable to construct two separate sets of experiments, each dealing with one of these two kinds of activities independently.

  5.

    This is an issue, for example, for Wikipedia, where sources cited in support of an article sometimes actually refer to information taken from that very article.

  6.

    It is important to stress that in the first set of questions we ask whether the data retrieved by the search engine meet the criteria of good epistemological practice (whether they are correct and reliable, etc.). In the third set of questions, we ask whether participants, with the data at their disposal, are applying valid cognitive processes. As previously mentioned, the vast majority of studies on online search engines deal exclusively either with the search engine (thus questioning the quality of the information provided) or with the search activity of users (thus questioning their cognitive skills). However, both aspects are relevant; we should know whether the information given by the search engine is of the best quality we could wish for, and whether users are using whatever information they obtain in the best way they could.

References

  1. Atkin, C.K. 1973. Instrumental utilities and information seeking. In New models of communication research, ed. P. Clark, 205–242. Newbury Park, CA: Sage.

  2. Blanke, T. 2005. Ethical subjectification and search engines: Ethics reconsidered. International Review of Information Ethics 3: 33–38.

  3. Card, S.K., T.P. Moran, and A. Newell. 1983. The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

  4. Diaz, A. 2008. Through the Google goggles: Sociopolitical bias in search engine design. In Web Search, ed. A. Spink and M. Zimmer, 11–34. Dordrecht: Springer.

  5. Donsbach, W. 2009. Cognitive dissonance theory—A roller coaster career: How communication research adapted the theory of cognitive dissonance. In Media Choice, ed. T. Hartmann, 142–162. London: Routledge.

  6. Goldman, E. 2008. Search engine bias and the demise of search engine utopianism. In Web Search, ed. A. Spink and M. Zimmer, 121–133. Heidelberg: Springer.

  7. Goldman, E. 2011. Revisiting search engine bias. William Mitchell Law Review 38: 96.

  8. Halpern, S. 2011. Mind control and the internet. New York Review of Books.

  9. Hinman, L.M. 2005. Esse est indicato in Google: Ethical and political issues in search engines. International Review of Information Ethics 3(6): 19–25.

  10. Hu, Y., and S.S. Sundar. 2010. Effects of online health sources on credibility and behavioral intentions. Communication Research 37(1): 105–132.

  11. Iyengar, S., and K.S. Hahn. 2009. Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication 59(1): 19–39.

  12. Iyengar, S., S.H. Kyu, J.A. Krosnick, and J. Walker. 2008. Selective exposure to campaign communication: The role of anticipated agreement and issue public membership. The Journal of Politics 70(1): 186–200.

  13. Johnson, T.J., and B.K. Kaye. 2013. The dark side of the boon? Credibility, selective exposure and the proliferation of online sources of political information. Computers in Human Behavior 29(4): 1862–1871.

  14. Keselman, A., A.C. Browne, and D.R. Kaufman. 2008. Consumer health information seeking as hypothesis testing. Journal of the American Medical Informatics Association 15(4): 484–495.

  15. Kimmerle, J., M. Bientzle, U. Cress, D. Flemming, H. Greving, J. Grapendorf, C. Sassenrath, and K. Sassenberg. 2017. Motivated processing of health-related information in online environments. In Informational Environments, ed. J. Buder and F. Hesse, 75–96. Cham: Springer.

  16. Knobloch, S., F.D. Carpentier, and D. Zillmann. 2003. Effects of salience dimensions of informational utility on selective exposure to online news. Journalism & Mass Communication Quarterly 80(1): 91–108.

  17. Knobloch-Westerwick, S., and S.B. Kleinman. 2012. Preelection selective exposure: Confirmation bias versus informational utility. Communication Research 39(2): 170–193.

  18. Knobloch-Westerwick, S., B.K. Johnson, and A. Westerwick. 2014. Confirmation bias in online searches: Impacts of selective exposure before an election on political attitude strength and shifts. Journal of Computer-Mediated Communication 20(2): 171–187.

  19. Kuhn, D. 2001. How do people know? Psychological Science 12(1): 1–8.

  20. Meffert, M.F., S. Chung, A.J. Joiner, L. Waks, and J. Garst. 2006. The effects of negativity and motivated information processing during a political campaign. Journal of Communication 56(1): 27–51.

  21. Meric, F., E.V. Bernstam, N.Q. Mirza, K.K. Hunt, F.C. Ames, M.I. Ross, H.M. Kuerer, R.E. Pollock, M.A. Musen, and S.E. Singletary. 2002. Breast cancer on the world wide web: Cross sectional survey of quality of information and popularity of websites. BMJ 324(7337): 577–581.

  22. Metzger, M.J., and A.J. Flanagin. 2013. Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics 59: 210–220.

  23. Metzger, M., E. Flanagin, K. Eyal, D. Lemus, and R. McCann. 2003. Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. In Communication Yearbook, ed. P.J. Kalbfleisch, 293–335. London: Routledge.

  24. Morozov, E. 2011. Your Own Facts. The New York Times.

  25. Pariser, E. 2011. The Filter Bubble: What the Internet is Hiding from You. London: Penguin.

  26. Spink, A., and M. Zimmer. 2008. Web search: Multidisciplinary Perspectives. Berlin: Springer.

  27. Sundar, S. S. 2008. The MAIN model: A heuristic approach to understanding technology effects on credibility. In Digital Media, Youth, and Credibility, ed. M.J. Metzger and A.J. Flanagin, vol. 7300. Cambridge, MA: The MIT Press.

  28. Sutcliffe, A., and M. Ennis. 1998. Towards a cognitive theory of information retrieval. Interacting with Computers 10(3): 321–351.

  29. Tavani, H. 2016. Search Engines and Ethics. In The Stanford Encyclopedia of Philosophy (Fall 2016 Edition), ed. Edward N. Zalta.

  30. Thomm, E., and R. Bromme. 2016. How source information shapes lay interpretations of science conflicts: Interplay between sourcing, conflict explanation, source evaluation, and claim evaluation. Reading and Writing 29(8): 1629–1652.

  31. Tversky, A., and D. Kahneman. 1983. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90(4): 293.

  32. Westerwick, A., S.B. Kleinman, and S. Knobloch-Westerwick. 2013. Turn a blind eye if you care: Impacts of attitude consistency, importance, and credibility on seeking of political information and implications for attitudes. Journal of Communication 63(3): 432–453.

  33. White, R. 2013. Beliefs and biases in web search. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM.

Author information

Correspondence to Selena Russo.

Ethics declarations

Conflict of interest

No author has any conflicts of interest to disclose in relation to this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Russo, L., Russo, S. Search engines, cognitive biases and the man–computer interaction: a theoretical framework for empirical researches about cognitive biases in online search on health-related topics. Med Health Care and Philos (2020). https://doi.org/10.1007/s11019-020-09940-9

Keywords

  • Web search engine
  • Cognitive biases
  • Information retrieval
  • Hypothesis testing
  • Man–machine interaction