
Attention, Perception, & Psychophysics, Volume 81, Issue 1, pp 47–60

Measuring the time course of selection during visual search

  • Evan M. Palmer
  • Michael J. Van Wert
  • Todd S. Horowitz
  • Jeremy M. Wolfe

Abstract

In visual search tasks, observers can guide their attention toward items in the visual field that share features with the target item. In this series of studies, we examined the time course of guidance toward a subset of items that have the same color as the target item. Landolt Cs were placed on 16 colored disks. Fifteen distractor Cs had gaps facing up or down, while one target C had a gap facing left or right. Observers searched for the target C and reported, as quickly as possible, which side contained the gap. In the absence of other information, observers must search at random through the Cs. However, during the trial, the disks changed colors: twelve disks took on one color and four disks took on another. Observers knew that the target C would always be in the smaller color set. The experimental question was how quickly observers could guide their attention to the smaller color set. Results indicate that observers could not make instantaneous use of color information to guide the search, even when they knew which two colors would appear on every trial. In each study, it took participants 200–300 ms to fully utilize the color information once it was presented. Control studies replicated the finding with more saturated colors and with colored C stimuli (rather than Cs on colored disks). We conclude that segregation of a display by color for the purposes of guidance takes 200–300 ms to fully develop.
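
The paradigm described above amounts to a simple trial-generation procedure: place one left/right target and fifteen up/down distractors on 16 disks, and at some delay after search onset split the disks into a 12-item majority-color set and a 4-item minority-color set that always contains the target. The sketch below (Python) is an illustrative reconstruction only, not the authors' experimental code; the color names, the candidate color-change delays, and all function and variable names are assumptions made for demonstration.

```python
import random

GAPS_DISTRACTOR = ("up", "down")   # 15 distractor Cs: gap faces up or down
GAPS_TARGET = ("left", "right")    # 1 target C: gap faces left or right
COLORS = ("red", "green")          # hypothetical color pair known to the observer
N_ITEMS = 16                       # Landolt Cs on 16 colored disks
SMALL_SET = 4                      # after the change, the target's color set has 4 disks


def make_trial(color_change_delay_ms):
    """Build one trial: 16 Landolt-C items plus the delay (relative to search
    onset) at which the disks split into a 12-item and a 4-item color set."""
    target_index = random.randrange(N_ITEMS)

    # After the color change, 4 disks (always including the target) carry the
    # minority color and the remaining 12 carry the majority color.
    minority_indices = {target_index}
    while len(minority_indices) < SMALL_SET:
        minority_indices.add(random.randrange(N_ITEMS))
    minority_color, majority_color = random.sample(COLORS, 2)

    items = []
    for i in range(N_ITEMS):
        is_target = (i == target_index)
        items.append({
            "is_target": is_target,
            "gap": random.choice(GAPS_TARGET if is_target else GAPS_DISTRACTOR),
            "color_after_change": minority_color if i in minority_indices else majority_color,
        })
    return {"items": items, "color_change_delay_ms": color_change_delay_ms}


if __name__ == "__main__":
    # Hypothetical delay values; the studies varied when the color information appeared.
    trial = make_trial(color_change_delay_ms=random.choice([0, 100, 200, 400]))
    target = next(it for it in trial["items"] if it["is_target"])
    print(f"Color change at {trial['color_change_delay_ms']} ms; "
          f"target gap faces {target['gap']} in the {target['color_after_change']} set.")
```

The measure of interest is how response time varies with the color-change delay: if color-based guidance were instantaneous, restricting search to the four-item set should pay off immediately once the colors appear, whereas the reported 200–300 ms lag reflects the time that color segmentation of the display takes to develop.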

Keywords

Visual search · Attention: Selective · Attention

Copyright information

© The Psychonomic Society, Inc. 2018

Authors and Affiliations

  • Evan M. Palmer 1, 2, 3
  • Michael J. Van Wert 1
  • Todd S. Horowitz 1, 2
  • Jeremy M. Wolfe 1, 2

  1. Visual Attention Lab, Brigham & Women’s Hospital, Boston, USA
  2. Departments of Radiology and Ophthalmology, Harvard Medical School, Boston, USA
  3. Department of Psychology, San José State University, San Jose, USA
