Four Perspectives on Human Bias in Visual Analytics


Abstract

Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this chapter we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.
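As a concrete illustration of this feedback loop, the following minimal sketch (the function, the divergence measure, and the threshold are our illustrative assumptions, not a method described in this chapter) shows one way a mixed-initiative system might monitor interaction histories: it compares the distribution of an attribute over the items a user has interacted with against that attribute's distribution in the full dataset, so a large divergence can flag potentially skewed exploration before it propagates into the system's models.

```python
from collections import Counter
from math import log2

def interaction_skew(all_items, touched_items, attribute):
    """Kullback-Leibler divergence between the attribute distribution of
    the items a user has interacted with and the full-data baseline.
    Larger values mean the user's focus strays further from the data."""
    base = Counter(item[attribute] for item in all_items)
    seen = Counter(item[attribute] for item in touched_items)
    n_base, n_seen = sum(base.values()), sum(seen.values())
    if n_seen == 0:
        return 0.0  # no interactions yet, nothing to measure
    divergence = 0.0
    for value, count in seen.items():
        p = count / n_seen        # user's interaction distribution
        q = base[value] / n_base  # dataset baseline distribution
        divergence += p * log2(p / q)
    return divergence

# Illustrative use: a user who clicks almost exclusively on type-A items.
data = [{"type": "A"}] * 50 + [{"type": "B"}] * 50
clicks = [{"type": "A"}] * 18 + [{"type": "B"}] * 2
if interaction_skew(data, clicks, "type") > 0.5:  # threshold is arbitrary
    print("Interaction pattern diverges strongly from the data baseline.")
```

A system receiving such a flag could diversify the views it recommends or simply surface the skew to the analyst, rather than silently folding the skewed implicit feedback into its models.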


Notes

  1. Commission errors are contrasted with automation omission errors, which occur when the human-machine team fails to respond to system irregularities, or when the system fails to provide an indicator of a problematic state. In visual analytics, an omission error could occur if a system “knows” an algorithm might be mismatched to a data type but does not alert the analyst; a sketch of a guard against this follows.
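As a hedged sketch of how a system might avoid this kind of omission error (the compatibility table, function name, and algorithm names are hypothetical, chosen only for illustration), the system can pair each analytic routine with the data types it supports and warn the analyst on a mismatch instead of failing silently:

```python
import warnings

# Hypothetical compatibility table: data types each method is designed for.
SUPPORTED_TYPES = {
    "pearson_correlation": {"numeric"},
    "chi_squared": {"categorical"},
}

def check_compatibility(algorithm, column_type):
    """Warn the analyst when an algorithm is mismatched to a data type,
    rather than letting the mismatch pass without any indicator."""
    if column_type not in SUPPORTED_TYPES.get(algorithm, set()):
        warnings.warn(f"{algorithm} is not designed for {column_type} data; "
                      "results may be misleading.")
        return False
    return True

check_compatibility("pearson_correlation", "categorical")  # emits a warning
```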


Acknowledgements

The research described in this document was sponsored by the U.S. Department of Defense. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.

Author information

Corresponding author

Correspondence to Emily Wall.



Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Wall, E., Blaha, L.M., Paul, C.L., Cook, K., Endert, A. (2018). Four Perspectives on Human Bias in Visual Analytics. In: Ellis, G. (ed) Cognitive Biases in Visualizations. Springer, Cham. https://doi.org/10.1007/978-3-319-95831-6_3


  • DOI: https://doi.org/10.1007/978-3-319-95831-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-95830-9

  • Online ISBN: 978-3-319-95831-6

  • eBook Packages: Computer Science, Computer Science (R0)
