Part of the book series: Cognitive Systems Monographs (COSMOS, volume 30)

Abstract

Although all attention models serve the same purpose in principle, namely to highlight potentially relevant and thus interesting (that is, “salient”) data, they can differ substantially in which parts of the signal they mark as being of interest.
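
To make this concrete, here is a minimal, self-contained Python sketch (not code from this monograph) that applies two simple bottom-up saliency models to the same synthetic image: a crude center-surround contrast in the spirit of Itti et al. [51] and a spectral-residual-style map loosely following Hou and Zhang [49]. The test image, parameter values, and function names are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def center_surround_saliency(img, sigma_center=2.0, sigma_surround=12.0):
    # Local contrast: difference between a fine and a coarse Gaussian blur.
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_surround)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-12)

def spectral_residual_saliency(img, avg_size=3, sigma=3.0):
    # Keep what deviates from the locally averaged log-amplitude spectrum.
    spectrum = np.fft.fft2(img)
    log_amplitude = np.log(np.abs(spectrum) + 1e-12)
    phase = np.angle(spectrum)
    residual = log_amplitude - uniform_filter(log_amplitude, size=avg_size)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma)
    return saliency / (saliency.max() + 1e-12)

# Synthetic scene: a smooth gradient, one small bright blob, one large dim square.
img = np.tile(np.linspace(0.0, 0.3, 128), (128, 1))
img[20:26, 20:26] += 1.0      # small, high-contrast blob
img[70:110, 70:110] += 0.15   # large, low-contrast region

for name, saliency in [("center-surround", center_surround_saliency(img)),
                       ("spectral residual", spectral_residual_saliency(img))]:
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    print(f"{name}: most salient location near ({y}, {x})")

Both maps are normalized to [0, 1]; the point is only that two plausible definitions of saliency need not agree on which region of the same image stands out most.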

Notes

  1. Auditory scene analysis describes the process of segregating and grouping sounds from a mixture of sources to determine and represent relevant auditory streams or objects [Bre90].
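
As a toy illustration of this grouping step (a sketch under simplifying assumptions, not a model from this monograph), the following Python snippet assigns tones from a mixture to streams purely by frequency proximity; the A-B-A tone sequence and the three-semitone threshold are arbitrary choices.

import math

# (onset time in seconds, frequency in Hz) of tones in an A-B-A "galloping" pattern
tones = [(0.0, 440.0), (0.1, 660.0), (0.2, 440.0),
         (0.4, 440.0), (0.5, 660.0), (0.6, 440.0)]

def semitone_distance(f1, f2):
    return abs(12.0 * math.log2(f1 / f2))

streams = []  # each stream is a list of tones attributed to one putative source
for tone in tones:
    # Find the stream whose most recent tone is closest in frequency ...
    best = min(streams, key=lambda s: semitone_distance(s[-1][1], tone[1]), default=None)
    # ... and join it only if it is within 3 semitones; otherwise open a new stream.
    if best is not None and semitone_distance(best[-1][1], tone[1]) <= 3.0:
        best.append(tone)
    else:
        streams.append([tone])

for i, stream in enumerate(streams, start=1):
    print(f"stream {i}: frequencies {[f for _, f in stream]}")

With these settings the 440 Hz and 660 Hz tones end up in two separate streams, mirroring segregation by frequency proximity; real auditory scene analysis additionally exploits cues such as common onsets, harmonicity, timbre, and spatial location [Bre90].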

References

  1. Achanta, R., Hemami, S., Estrada, F., Süsstrunk, S.: Frequency-tuned salient region detection. In: Proceedings of the International Conference on Computer Vision Pattern Recognition (2009)

  2. Achanta, R., Süsstrunk, S.: Saliency detection using maximum symmetric surround. In: Proceedings of the International Conference on Image Processing (2010)

  3. Alexe, B., Deselaers, T., Ferrari, V.: What is an object? In: Proceedings of the International Conference on Computer Vision Pattern Recognition, pp. 73–80 (2010)

  4. Aloimonos, Y., Weiss, I., Bandopadhay, A.: Active vision. Int. J. Comput. Vis. 1(4), 333–356 (1988)

  5. Arnott, S.R., Binns, M.A., Grady, C.L., Alain, C.: Assessing the auditory dual-pathway model in humans. Neuroimage 22, 401–408 (2004)

  6. Avidan, S., Shamir, A.: Seam carving for content-aware image resizing. ACM Trans. Graph. 26(3) (2007)

  7. 3SMM: 3M visual attention service. http://solutions.3m.com/wps/portal/3M/en_US/VAS-NA?MDR=true

  8. Bangerter, A.: Using pointing and describing to achieve joint focus of attention in dialogue. Psychol. Sci. 15(6), 415–419 (2004)

  9. Bian, P., Zhang, L.: Biological plausibility of spectral domain approach for spatiotemporal visual saliency. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2009)

  10. Blausen.com staff: Blausen gallery 2014. Wikiversity J. Med. (2014)

  11. Borji, A.: Boosting bottom-up and top-down visual features for saliency estimation. In: Proceedings of the International Conference on Computer Vision Pattern Recognition (2012)

  12. Borji, A., Itti, L.: State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 185–207 (2013)

  13. Borji, A., Sihite, D.N., Itti, L.: Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Trans. Image Process. 22(1), 55–69 (2013)

  14. Breazeal, C., Scassellati, B.: A context-dependent attention system for a social robot. In: Proceedings of the International Joint Conference on Artificial Intelligence (1999)

  15. Bregman, A.S.: Auditory Scene Analysis: The Perceptual Organization of Sounds. MIT Press (1990)

  16. Bruce, N., Tsotsos, J.: Saliency, attention, and visual search: an information theoretic approach. J. Vis. 9(3), 1–24 (2009)

  17. Bundesen, C., Habekost, T.: Attention. In: Handbook of Cognition. Sage Publications (2005)

  18. Calvert, G.A., Bullmore, E., Brammer, M., Campbell, R., Williams, S.C., McGuire, P.K., Woodruff, P.W., Iversen, S.D., David, A.S.: Activation of auditory cortex during silent lipreading. Science 276, 593–596 (1997)

  19. Cashon, C., Cohen, L.: The construction, deconstruction, and reconstruction of infant face perception. In: The Development of Face Processing in Infancy and Early Childhood: Current Perspectives, pp. 55–68. NOVA Science Publishers (2003)

  20. Cerf, M., Harel, J., Einhäuser, W., Koch, C.: Predicting human gaze using low-level saliency combined with face detection. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2007)

  21. Cerf, M., Frady, E.P., Koch, C.: Faces and text attract gaze independent of the task: experimental data and computer model. J. Vis. 9 (2009)

  22. Chen, L.-Q., Xie, X., Fan, X., Ma, W.-Y., Zhang, H.-J., Zhou, H.-Q.: A visual attention model for adapting images on small displays. Multim. Syst. 9(4), 353–364 (2003)

  23. Cheng, M.-M., Zhang, G.-X., Mitra, N.J., Huang, X., Hu, S.-M.: Global contrast based salient region detection. In: Proceedings of the International Conference on Computer Vision Pattern Recognition (2011)

  24. Cherry, E.C.: Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979 (1953)

  25. Coensel, B.D., Botteldooren, D.: A model of saliency-based auditory attention to environmental sound. In: Proceedings of the International Congress on Acoustics (2010)

  26. Delano, P.H., Elgueda, D., Hamame, C.M., Robles, L.: Selective attention to visual stimuli reduces cochlear sensitivity in chinchillas. J. Neurosci. 27, 4146–4153 (2007)

  27. De Santis, L., Clarke, S., Murray, M.M.: Automatic and intrinsic auditory what and where processing in humans revealed by electrical neuroimaging. Cereb Cortex 17, 9–17 (2007)

  28. Einhäuser, W., Spain, M., Perona, P.: Objects predict fixations better than early saliency. J. Vis. 8(14) (2008)

  29. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html

  30. Eyezag: Eyezag—eye tracking in your hands. http://www.eyezag.com/

  31. Fei-Fei, L., Fergus, R., Perona, P.: A Bayesian approach to unsupervised one-shot learning of object categories. In: Proceedings of the International Conference on Computer Vision (2003)

  32. Fong, T., Nourbakhsh, I., Dautenhahn, K.: A survey of socially interactive robots. Robot. Auton. Syst. 42(3–4), 143–166 (2003)

  33. Frintrop, S.: VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. Lecture Notes in Computer Science. Springer (2006)

  34. Frintrop, S., Jensfelt, P.: Attentional landmarks and active gaze control for visual slam. IEEE Trans. Robot. 24(5), 1054–1065 (2008)

  35. Frintrop, S., Rome, E., Christensen, H.I.: Computational visual attention systems and their cognitive foundation: a survey. ACM Trans. Applied Percept. 7(1), 6:1–6:39 (2010)

  36. Fritz, J.B., Elhilali, M., David, S.V., Shamma, S.A.: Auditory attention-focusing the searchlight on sound. Curr. Opin. Neurobiol. 17(4), 437–455 (2007)

  37. Ghazanfar, A.A., Schroeder, C.E.: Is neocortex essentially multisensory? Trends Cogn. Sci. 10, 278–285 (2006)

  38. Goferman, S., Zelnik-Manor, L., Tal, A.: Context-aware saliency detection. In: Proceedings of the International Conference on Computer Vision Pattern Recognition (2010)

  39. Google: Eye-tracking studies: more than meets the eye. http://googleblog.blogspot.de/2009/02/eye-tracking-studies-more-than-meets.html

  40. Gould, S., Arfvidsson, J., Kaehler, A., Sapp, B., Messner, M., Bradski, G., Baumstarck, P., Chung, S., Ng, A.Y.: Peripheral-foveal vision for real-time object recognition and tracking in video. In: Proceedings of the International Joint Conference on Artificial Intelligence (2007)

  41. Guo, C., Zhang, L.: A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression. IEEE Trans. Image Process. 19, 185–198 (2010)

  42. Hadizadeh, H., Bajic, I.: Saliency-aware video compression. IEEE Trans. Image Process. (2013)

  43. Hafter, E.R., Sarampalis, A., Loui, P.: Auditory attention and filters. In: Auditory Perception of Sound Sources. Springer (2007)

  44. Haslinger, B., Erhard, P., Altenmuller, E., Schroeder, U., Boecker, H., Ceballos-Baumann, A.O.: Transmodal sensorimotor networks during action observation in professional pianists. J. Cogn. Neurosci. 17, 282–293 (2005)

  45. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2007)

  46. Hernandez-Peon, R., Scherrer, H., Jouvet, M.: Modification of electric activity in cochlear nucleus during attention in unanesthetized cats. Science 123, 331–332 (1956)

  47. Hobson, R.: What puts the jointness in joint attention? In: Joint Attention: Communication and Other Minds, pp. 185–204. Oxford University Press (2005)

  48. Hou, X., Harel, J., Koch, C.: Image signature: highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell. 34(1), 194–201 (2012)

  49. Hou, X., Zhang, L.: Saliency detection: a spectral residual approach. In: Proceedings of the International Conference on Computer Vision Pattern Recognition (2007)

  50. Itti, L., Baldi, P.: Bayesian surprise attracts human attention. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2006)

  51. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)

  52. Itti, L., Koch, C., Niebur, E.: Computational modelling of visual attention. Nat. Rev. Neurosci. 2(3), 194–203 (2001)

  53. Jaspers, H., Schauerte, B., Fink, G.A.: SIFT-based camera localization using reference objects for application in multi-camera environments and robotics. In: Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods (ICPRAM), Vilamoura, Algarve, Portugal (2012)

  54. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: Proceedings of the International Conference on Computer Vision (2009)

  55. Kadir, T., Brady, M.: Saliency, scale and image description. Int. J. Comput. Vis. 45(2), 83–105 (2001)

  56. Kahneman, D., Treisman, A., Gibbs, B.J.: The reviewing of object files: object-specific integration of information. Cogn. Psychol. 24(2), 175–219 (1992)

  57. Kalinli, O.: Biologically inspired auditory attention models with applications in speech and audio processing. Ph.D. dissertation, University of Southern California, Los Angeles, CA, USA (2009)

  58. Kayser, C., Petkov, C.I., Lippert, M., Logothetis, N.K.: Mechanisms for allocating auditory attention: an auditory saliency map. Curr. Biol. 15(21), 1943–1947 (2005)

  59. Kalinli, O.: Prominence detection using auditory attention cues and task-dependent high level information. IEEE Trans. Audio Speech Lang Proc. 17(5), 1009–1024 (2009)

  60. Kalinli, O., Narayanan, S.: A saliency-based auditory attention model with applications to unsupervised prominent syllable detection in speech. In: Proceedings of the Annual Conference of the International Speech Communication Association (2007)

  61. Klein, D.A., Frintrop, S.: Center-surround divergence of feature statistics for salient object detection. In: Proceedings of the International Conference on Computer Vision (2011)

  62. Klin, A., Jones, W., Schultz, R., Volkmar, F., Cohen, D.: Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Arch. Gen. Psychiatry 59(9), 809–816 (2002)

  63. Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227 (1985)

  64. Koester, D., Schauerte, B., Stiefelhagen, R.: Accessible section detection for visual guidance. In: IEEE/NSF Workshop on Multimodal and Alternative Perception for Visually Impaired People (2013)

  65. Kootstra, G., Nederveen, A., de Boer, B.: Paying attention to symmetry. In: Proceedings of the British Conference on Computer Vision (2008)

  66. Kühn, B., Schauerte, B., Stiefelhagen, R., Kroschel, K.: A modular audio-visual scene analysis and attention system for humanoid robots. In: Proceedings of the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan (2012)

  67. Kühn, B., Schauerte, B., Kroschel, K., Stiefelhagen, R.: Multimodal saliency-based attention: A lazy robot’s approach. In: Proceedings of the 25th International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, Vilamoura, Algarve, Portugal (2012)

  68. Kühn, B., Schauerte, B., Kroschel, K., Stiefelhagen, R.: Wow! Bayesian surprise for salient acoustic event detection. In: Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP). IEEE, Vancouver, Canada (2013)

  69. Laurienti, P., Burdette, J.H., Wallace, M.T., Yen, Y.F., Field, A.S., Stein, B.E.: Deactivation of sensory-specific cortex by cross-modal stimuli. J. Cogn. Neurosci. 14, 420–429 (2002)

  70. Li, J., Xu, D., Gao, W.: Removing label ambiguity in learning-based visual saliency estimation. IEEE Trans. Image Process. 21(4), 1513–1525 (2012)

  71. Liebal, K., Tomasello, M.: Infants appreciate the social intention behind a pointing gesture: commentary on "children’s understanding of communicative intentions in the middle of the second year of life" by T. Aureli, P. Perucchini and M. Genco. Cogn. Dev 24(1), 13–15 (2009)

  72. Lin, K.-H., Zhuang, X., Goudeseune, C., King, S., Hasegawa-Johnson, M., Huang, T.S.: Improving faster-than-real-time human acoustic event detection by saliency-maximized audio visualization. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (2012)

  73. Lin, K.-H., Zhuang, X., Goudeseune, C., King, S., Hasegawa-Johnson, M., Huang, T.S.: Towards attentive robots. Paladyn 2(2), 64–70 (2011)

  74. Liu, T., Sun, J., Zheng, N.-N., Tang, X., Shum, H.-Y.: Learning to detect a salient object. In: Proceedings of the International Conference on Computer Vision Pattern Recognition (2007)

  75. Lu, S., Lim, J.-H.: Saliency modeling from image histograms. In: Proceedings of the European Conference on Computer Vision (2012)

  76. Louwerse, M., Bangerter, A.: Focusing attention with deictic gestures and linguistic expressions. In: Proceedings of the Annual Conference of the Cognitive Science Society (2005)

  77. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)

  78. Marr, D.: Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman and Company (1982)

  79. Navalpakkam, V., Itti, L.: Search goal tunes visual features optimally. Neuron 53(4), 605–617 (2007)

  80. Martinez, M., Constantinescu, A., Schauerte, B., Koester, D., Stiefelhagen, R.: Cognitive evaluation of haptic and audio feedback in short range navigation tasks. In: Proceedings of the 14th International Conference on Computers Helping People with Special Needs (ICCHP). Springer, Paris, France (2014)

  81. Martinez, M., Schauerte, B., Stiefelhagen, R.: BAM! Depth-based body analysis in critical care. In: Proceedings of the 15th International Conference on Computer Analysis of Images and Patterns (CAIP). Springer, York, UK (2013)

  82. Martinez, M., Schauerte, B., Stiefelhagen, R.: How the distribution of salient objects in images influences salient object detection. In: Proceedings of the 20th International Conference on Image Processing (ICIP). IEEE, Melbourne, Australia (2013)

  83. Meger, D., Forssén, P.-E., Lai, K., Helmar, S., McCann, S., Southey, T., Baumann, M., Little, J.J., Lowe, D.J.: Curious george: an attentive semantic robot. Robot. Auton. Syst. 56(6), 503–511 (2008)

  84. Miau, F., Papageorgiou, C., Itti, L.: Neuromorphic algorithms for computer vision and attention. In: Bosacchi, B., Fogel, D.B., Bezdek, J.C. (eds.) Proceedings of the SPIE 46th Annual International Symposium on Optical Science and Technology, vol. 4479, pp. 12–23 (2001)

  85. Mundy, P., Newell, L.: Attention, joint attention, and social cognition. Curr. Dir. Psychol. Sci. 16(5), 269–274 (2007)

  86. Nass, C., Moon, Y.: Machines and mindlessness: social responses to computers. J. Soc. Issues 56(1), 81–103 (2000)

  87. Nickerson, S.B., Jasiobedzki, P., Wilkes, D., Jenkin, M., Milios, E., Tsotsos, J.K., Jepson, A., Bains, O.N.: The ARK project: autonomous mobile robots for known industrial environments. Robot. Auton. Syst. 25, 83–104 (1998)

  88. Onat, S., Libertus, K., König, P.: Integrating audiovisual information for the control of overt attention. J. Vis. 7(10) (2007)

  89. Oppenheim, A., Lim, J.: The importance of phase in signals. Proc. IEEE 69(5), 529–541 (1981)

  90. Ouerhani, N., Bracamonte, J., Hugli, H., Ansorge, M., Pellandini, F.: Adaptive color image compression based on visual attention. In: Proceedings of the International Conference on Image Analysis and Processing, pp. 416–421 (2001)

  91. Perez-Gonzalez, D., Malmierca, M.S., Covey, E.: Novelty detector neurons in the mammalian auditory midbrain. Eur. J. Neurosci. 22, 2879–2885 (2005)

  92. Riesenhuber, M., Poggio, T.: Hierarchical models of object recognition in cortex. Nat. Neurosci. 2, 1019–1025 (1999)

  93. Rubinstein, M., Shamir, A., Avidan, S.: Improved seam carving for video retargeting. In: Proceedings of the Annual Conference on Special Interest Group on Graphics and Interactive Techniques (2008)

  94. Rybok, L., Schauerte, B., Al-Halah, Z., Stiefelhagen, R.: Important stuff, everywhere! Activity recognition with salient proto-objects as context. In: Proceedings of the 14th IEEE Winter Conference on Applications of Computer Vision (WACV), Steamboat Springs, CO, USA (2014)

  95. Santella, A., Agrawala, M., DeCarlo, D., Salesin, D., Cohen, M.: Gaze-based interaction for semi-automatic photo cropping. In: Proceedings of the International Conference on Human Factors Computing Systems (CHI) (2006)

  96. Schauerte, B., Fink, G.A.: Focusing computational visual attention in multi-modal human-robot interaction. In: Proceedings of the 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI). ACM, Beijing, China (2010)

  97. Schauerte, B., Fink, G.A.: Web-based learning of naturalized color models for human-machine interaction. In: Proceedings of the 12th International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, Sydney, Australia (2010)

  98. Schauerte, B.: Multimodal computational attention for scene understanding. Ph.D. dissertation, Karlsruhe Institute of Technology (2014)

  99. Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: Way to Go! Detecting open areas ahead of a walking person. In: ECCV Workshop on Assistive Computer Vision and Robotics (ACVR). Springer (2014)

  100. Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: A web-based platform for interactive image sonification. In: Accessible Interaction for Visually Impaired People (AI4VIP) (2015)

  101. Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: Look at this! Learning to guide visual saliency in human-robot interaction. In: Proceedings of the International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ (2014)

  102. Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: On the distribution of salient objects in web images and its influence on salient object detection. PLoS ONE 10, 07 (2015)

  103. Schauerte, B., Kühn, B., Kroschel, K., Stiefelhagen, R.: Multimodal saliency-based attention for object-based scene analysis. In: Proceedings of the 24th International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, San Francisco, CA, USA (2011)

  104. Schauerte, B., Martinez, M., Constantinescu, A., Stiefelhagen, R.: An assistive vision system for the blind that helps find lost things. In: Proceedings of the 13th International Conference on Computers Helping People with Special Needs (ICCHP). Springer, Linz, Austria (2012)

  105. Schauerte, B., Plötz, T., Fink, G.A.: A multi-modal attention system for smart environments. In: Proceedings of the 7th International Conference on Computer Vision Systems (ICVS). Lecture Notes in Computer Science, vol. 5815. Springer, Liège (2009)

  106. Schauerte, B., Richarz, J., Fink, G.A.: Saliency-based identification and recognition of pointed-at objects. In: Proceedings of the 23rd International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, Taipei, Taiwan (2010)

  107. Schauerte, B., Richarz, J., Plötz, T., Thurau, C., Fink, G.A.: Multi-modal and multi-camera attention in smart environments. In: Proceedings of the 11th International Conference on Multimodal Interfaces (ICMI). ACM, Cambridge (2009)

  108. Schauerte, B., Stiefelhagen, R.: Learning robust color name models from web images. In: Proceedings of the 21st International Conference on Pattern Recognition (ICPR). IEEE, Tsukuba, Japan (2012)

  109. Schauerte, B., Stiefelhagen, R.: Predicting human gaze using quaternion DCT image signature saliency and face detection. In: Proceedings of the IEEE Workshop on the Applications of Computer Vision (WACV). IEEE, Breckenridge, CO, USA (2012)

  110. Schauerte, B., Stiefelhagen, R.: Quaternion-based spectral saliency detection for eye fixation prediction. In: Proceedings of the 12th European Conference on Computer Vision (ECCV). Springer, Firenze, Italy (2012)

  111. Schneider, T., Schauerte, B., Stiefelhagen, R.: Manifold alignment for person independent appearance-based gaze estimation. In: Proceedings of the 22nd International Conference on Pattern Recognition (ICPR). IEEE, Stockholm, Sweden (2014)

  112. Schauerte, B., Wörtwein, T., Stiefelhagen, R.: Color decorrelation helps visual saliency detection. In: Proceedings of the 22nd International Conference on Image Processing (ICIP). IEEE (2015)

  113. Schauerte, B., Zamfirescu, C.T.: Small k-pyramids and the complexity of determining k. J. Discrete Algorithms (JDA) (2014)

  114. Setlur, V., Lechner, T., Nienhaus, M., Gooch, B.: Retargeting images and video for preserving information saliency. IEEE Comput. Graph. Appl. 27(5), 80–88 (2007)

  115. Siagian, C., Itti, L.: Biologically inspired mobile robot vision localization. IEEE Trans. Robot. 25(4), 861–873 (2009)

  116. Simion, C., Shimojo, S.: Early interactions between orienting, visual sampling and decision making in facial preference. Vis. Res. 46(20), 3331–3335 (2006)

  117. SMIvision: Sensomotoric instruments gmbh. http://www.smivision.com/

  118. Spivey, M.J., Tyler, M.J., Eberhard, K.M., Tanenhaus, M.K.: Linguistically mediated visual search. Psychol. Sci. 12, 282–286 (2001)

  119. Suh, B., Ling, H., Bederson, B.B., Jacobs, D.W.: Automatic thumbnail cropping and its effectiveness. In: ACM Symposium on User Interface Software and Technology (2003)

  120. Sussman, E.S., Winkler, I.: Dynamic sensory updating in the auditory system. Cogn. Brain Res. 12, 431–439 (2001)

  121. Sussman, E.S.: Integration and segregation in auditory scene analysis. J. Acoust. Soc. Am. 117, 1285–1298 (2005)

  122. Treisman, A.M., Gormican, S.: Feature analysis in early vision: evidence from search asymmetries. Psychol. Rev. 95(1), 15–48 (1988)

  123. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12(1), 97–136 (1980)

  124. Triesch, J., Teuscher, C., Deák, G.O., Carlson, E.: Gaze following: why (not) learn it? Dev. Sci. 9(2), 125–147 (2006)

  125. University of British Columbia: Curious George Project. https://www.cs.ubc.ca/labs/lci/curious_george/. Accessed 3 April 2014

  126. Walther, D., Koch, C.: Modeling attention to salient proto-objects. Neural Netw. 19(9), 1395–1407 (2006)

  127. Weissman, D.H., Warner, L.M., Woldorff, M.G.: The neural mechanisms for minimizing cross-modal distraction. J. Neurosci. 24, 10941–10949 (2004)

  128. Wikimedia Common (Blausen.com staff): Blausen gallery 2014, ear anatomy. http://commons.wikimedia.org/wiki/File:Blausen_0328_EarAnatomy.png. 23 Feb 2015 (License CC BY 3.0)

  129. Wikimedia Common (Blausen.com staff): Blausen gallery 2014, the internal ear. http://commons.wikimedia.org/wiki/File:Blausen_0329_EarAnatomy_InternalEar.png. 23 Feb 2015 (License CC BY 3.0)

  130. Wikimedia Common (Oarih): Cochlea-crosssection. http://commons.wikimedia.org/wiki/File:Cochlea-crosssection.png. 23 Feb 2015 (License CC BY-SA 3.0)

  131. Winkler, I., Teder-Salejarvi, W.A., Horvath, J., Naatanen, R., Sussman, E.: Human auditory cortex tracks task-irrelevant sound sources. Neuroreport 14, 2053–2056 (2003)

  132. Winkler, I., Czigler, I., Sussman, E., Horvath, J., Balazs, L.: Preattentive binding of auditory and visual stimulus features. J. Cogn. Neurosci. 17, 320–339 (2005)

  133. Winkler, S., Subramanian, R.: Overview of eye tracking datasets. In: International Workshop on Quality of Multimedia Experience (2013)

  134. Woertwein, T., Chollet, M., Schauerte, B., Stiefelhagen, R., Morency, L.-P., Scherer, S.: Multimodal public speaking performance assessment. In: Proceedings of the 17th International Conference on Multimodal Interaction (ICMI). ACM (2015)

  135. Woertwein, T., Schauerte, B., Mueller, K., Stiefelhagen, R.: Interactive web-based image sonification for the blind. In: Proceedings of the 17th International Conference on Multimodal Interaction (ICMI). ACM (2015)

  136. Wolfe, J.M., Horowitz, T.S., Kenner, N., Hyle, M., Vasan, N.: How fast can you change your mind? The speed of top-down guidance in visual search. Vis. Res. 44, 1411–1426 (2004)

  137. Wolfe, J.M.: Guided search 2.0: a revised model of visual search. Psychon. Bull. Rev. 1, 202–238 (1994)

  138. Wolfe, J.M., Cave, K., Franzel, S.: Guided search: an alternative to the feature integration model for visual search. J. Exp. Psychol.: Hum. Percept. Perform. 15, 419–433 (1989)

  139. Woodruff, P.W., Benson, R.R., Bandettini, P.A., Kwong, K.K., Howard, R.J., Talavage, T., Belliveau, J., Rosen, B.R.: Modulation of auditory and visual cortex by selective attention is modality-dependent. Neuroreport 7, 1909–1913 (1996)

  140. Xu, T., Zhang, T., Kühnlenz, K., Buss, M.: Attentional object detection with an active multi-focal vision system. Int. J. Humanoid Robot. 7(2) (2010)

  141. Yarbus, A.L.: Eye Movements and Vision. Plenum Press (1967)

  142. Zhang, L., Tong, M.H., Marks, T.K., Shan, H., Cottrell, G.W.: SUN: a Bayesian framework for saliency using natural statistics. J. Vis. 8(7) (2008)

  143. Zhang, L., Tong, M.H., Marks, T.K., Shan, H., Cottrell, G.W.: Context-aware saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. (2012)

Author information

Corresponding author

Correspondence to Boris Schauerte.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Schauerte, B. (2016). Background. In: Multimodal Computational Attention for Scene Understanding and Robotics. Cognitive Systems Monographs, vol 30. Springer, Cham. https://doi.org/10.1007/978-3-319-33796-8_2

  • DOI: https://doi.org/10.1007/978-3-319-33796-8_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-33794-4

  • Online ISBN: 978-3-319-33796-8

  • eBook Packages: Engineering, Engineering (R0)
