Abstract
Although in principle all attention models serve the same purpose, namely to highlight potentially relevant and thus interesting ("salient") data, they can differ substantially in which parts of the signal they mark as being of interest.
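One of the models cited below, the spectral residual approach of Hou and Zhang, illustrates what such a saliency computation can look like: it marks image regions whose log-amplitude spectrum deviates from its local average. The following is a minimal NumPy sketch under simplifying assumptions (a fixed 3x3 averaging kernel and simple max-normalization; the published method additionally smooths the resulting map with a Gaussian filter, which is omitted here):

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral residual saliency sketch (after Hou & Zhang, CVPR 2007).

    img: 2D grayscale image as a float array.
    Returns a saliency map normalized to [0, 1].
    """
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-9)   # log-amplitude spectrum
    phase = np.angle(f)                  # phase spectrum, kept unchanged

    # Spectral residual: log-amplitude minus its 3x3 local average.
    h, w = log_amp.shape
    pad = np.pad(log_amp, 1, mode='edge')
    local_avg = sum(pad[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - local_avg

    # Back to the spatial domain: residual amplitude with original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()
```

A uniform region yields a flat spectrum and hence little residual, while a small distinctive patch produces a localized peak in the saliency map; this is one concrete instance of a model "marking a part of the signal as being of interest."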
Notes
1. Auditory scene analysis describes the process of segregating and grouping sounds from a mixture of sources to determine and represent relevant auditory streams or objects [Bre90].
References
Achanta, R., Hemami, S., Estrada, F., Süsstrunk, S.: Frequency-tuned salient region detection. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (2009)
Achanta, R., Süsstrunk, S.: Saliency detection using maximum symmetric surround. In: Proceedings of the International Conference on Image Processing (2010)
Alexe, B., Deselaers, T., Ferrari, V.: What is an object? In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 73–80 (2010)
Aloimonos, Y., Weiss, I., Bandopadhay, A.: Active vision. Int. J. Comput. Vis. 1(4), 333–356 (1988)
Arnott, S.R., Binns, M.A., Grady, C.L., Alain, C.: Assessing the auditory dual-pathway model in humans. Neuroimage 22, 401–408 (2004)
Avidan, S., Shamir, A.: Seam carving for content-aware image resizing. ACM Trans. Graph. 26(3) (2007)
3M: 3M visual attention service. http://solutions.3m.com/wps/portal/3M/en_US/VAS-NA?MDR=true
Bangerter, A.: Using pointing and describing to achieve joint focus of attention in dialogue. Psychol. Sci. 15(6), 415–419 (2004)
Bian, P., Zhang, L.: Biological plausibility of spectral domain approach for spatiotemporal visual saliency. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2009)
Blausen.com staff: Blausen gallery 2014. Wikiversity J. Med. (2014)
Borji, A.: Boosting bottom-up and top-down visual features for saliency estimation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (2012)
Borji, A., Itti, L.: State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 185–207 (2013)
Borji, A., Sihite, D.N., Itti, L.: Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Trans. Image Process. 22(1), 55–69 (2013)
Breazeal, C., Scassellati, B.: A context-dependent attention system for a social robot. In: Proceedings of the International Joint Conference on Artificial Intelligence (1999)
Bregman, A.S.: Auditory Scene Analysis: The Perceptual Organization of Sounds. MIT Press (1990)
Bruce, N., Tsotsos, J.: Saliency, attention, and visual search: an information theoretic approach. J. Vis. 9(3), 1–24 (2009)
Bundesen, C., Habekost, T.: Attention. In: Handbook of Cognition. Sage Publications (2005)
Calvert, G.A., Bullmore, E., Brammer, M., Campbell, R., Williams, S.C., McGuire, P.K., Woodruff, P.W., Iversen, S.D., David, A.S.: Activation of auditory cortex during silent lipreading. Science 276, 593–596 (1997)
Cashon, C., Cohen, L.: The development of face processing in infancy and early childhood. In: The Construction, Deconstruction, and Reconstruction of Infant Face Perception: Current Perspectives, pp. 55–68. NOVA Science Publishers (2003)
Cerf, M., Harel, J., Einhäuser, W., Koch, C.: Predicting human gaze using low-level saliency combined with face detection. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2007)
Cerf, M., Frady, E.P., Koch, C.: Faces and text attract gaze independent of the task: experimental data and computer model. J. Vis. 9 (2009)
Chen, L.-Q., Xie, X., Fan, X., Ma, W.-Y., Zhang, H.-J., Zhou, H.-Q.: A visual attention model for adapting images on small displays. Multim. Syst. 9(4), 353–364 (2003)
Cheng, M.-M., Zhang, G.-X., Mitra, N.J., Huang, X., Hu, S.-M.: Global contrast based salient region detection. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (2011)
Cherry, E.C.: Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979 (1953)
De Coensel, B., Botteldooren, D.: A model of saliency-based auditory attention to environmental sound. In: Proceedings of the International Congress on Acoustics (2010)
Delano, P.H., Elgueda, D., Hamame, C.M., Robles, L.: Selective attention to visual stimuli reduces cochlear sensitivity in chinchillas. J. Neurosci. 27, 4146–4153 (2007)
De Santis, L., Clarke, S., Murray, M.M.: Automatic and intrinsic auditory what and where processing in humans revealed by electrical neuroimaging. Cereb. Cortex 17, 9–17 (2007)
Einhäuser, W., Spain, M., Perona, P.: Objects predict fixations better than early saliency. J. Vis. 8(14) (2008)
Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html
Eyezag: Eyezag—eye tracking in your hands. http://www.eyezag.com/
Fei-Fei, L., Fergus, R., Perona, P.: A Bayesian approach to unsupervised one-shot learning of object categories. In: Proceedings of the International Conference on Computer Vision (2003)
Fong, T., Nourbakhsh, I., Dautenhahn, K.: A survey of socially interactive robots. Robot. Auton. Syst. 42(3–4), 143–166 (2003)
Frintrop, S.: VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. Lecture Notes in Computer Science. Springer (2006)
Frintrop, S., Jensfelt, P.: Attentional landmarks and active gaze control for visual slam. IEEE Trans. Robot. 24(5), 1054–1065 (2008)
Frintrop, S., Rome, E., Christensen, H.I.: Computational visual attention systems and their cognitive foundation: a survey. ACM Trans. Appl. Percept. 7(1), 6:1–6:39 (2010)
Fritz, J.B., Elhilali, M., David, S.V., Shamma, S.A.: Auditory attention-focusing the searchlight on sound. Curr. Opin. Neurobiol. 17(4), 437–455 (2007)
Ghazanfar, A.A., Schroeder, C.E.: Is neocortex essentially multisensory? Trends Cogn. Sci. 10, 278–285 (2006)
Goferman, S., Zelnik-Manor, L., Tal, A.: Context-aware saliency detection. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (2010)
Google: Eye-tracking studies: more than meets the eye. http://googleblog.blogspot.de/2009/02/eye-tracking-studies-more-than-meets.html
Gould, S., Arfvidsson, J., Kaehler, A., Sapp, B., Messner, M., Bradski, G., Baumstarck, P., Chung, S., Ng, A.Y.: Peripheral-foveal vision for real-time object recognition and tracking in video. In: Proceedings of the International Joint Conference on Artificial Intelligence (2007)
Guo, C., Zhang, L.: A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression. IEEE Trans. Image Process. 19, 185–198 (2010)
Hadizadeh, H., Bajic, I.: Saliency-aware video compression. IEEE Trans. Image Process. (2013)
Hafter, E.R., Sarampalis, A., Loui, P.: Auditory attention and filters. In: Auditory Perception of Sound Sources. Springer (2007)
Haslinger, B., Erhard, P., Altenmuller, E., Schroeder, U., Boecker, H., Ceballos-Baumann, A.O.: Transmodal sensorimotor networks during action observation in professional pianists. J. Cogn. Neurosci. 17, 282–293 (2005)
Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2007)
Hernandez-Peon, R., Scherrer, H., Jouvet, M.: Modification of electric activity in cochlear nucleus during attention in unanesthetized cats. Science 123, 331–332 (1956)
Hobson, R.: What puts the jointness in joint attention? In: Joint Attention: Communication and Other Minds, pp. 185–204. Oxford University Press (2005)
Hou, X., Harel, J., Koch, C.: Image signature: highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell. 34(1), 194–201 (2012)
Hou, X., Zhang, L.: Saliency detection: a spectral residual approach. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (2007)
Itti, L., Baldi, P.: Bayesian surprise attracts human attention. In: Proceedings of the Annual Conference on Neural Information Processing Systems (2006)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)
Itti, L., Koch, C., Niebur, E.: Computational modelling of visual attention. Nat. Rev. Neurosci. 2(3), 194–203 (2001)
Jaspers, H., Schauerte, B., Fink, G.A.: Sift-based camera localization using reference objects for application in multi-camera environments and robotics. In: Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods (ICPRAM), Vilamoura, Algarve, Portugal (2012)
Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: Proceedings of the International Conference on Computer Vision (2009)
Kadir, T., Brady, M.: Saliency, scale and image description. Int. J. Comput. Vis. 45(2), 83–105 (2001)
Kahneman, D., Treisman, A., Gibbs, B.J.: The reviewing of object files: object-specific integration of information. Cogn. Psychol. 24(2), 175–219 (1992)
Kalinli, O.: Biologically inspired auditory attention models with applications in speech and audio processing. Ph.D. dissertation, University of Southern California, Los Angeles, CA, USA (2009)
Kayser, C., Petkov, C.I., Lippert, M., Logothetis, N.K.: Mechanisms for allocating auditory attention: an auditory saliency map. Curr. Biol. 15(21), 1943–1947 (2005)
Kalinli, O.: Prominence detection using auditory attention cues and task-dependent high level information. IEEE Trans. Audio Speech Lang Proc. 17(5), 1009–1024 (2009)
Kalinli, O., Narayanan, S.: A saliency-based auditory attention model with applications to unsupervised prominent syllable detection in speech. In: Proceedings of the Annual Conference of the International Speech Communication Association (2007)
Klein, D.A., Frintrop, S.: Center-surround divergence of feature statistics for salient object detection. In: Proceedings of the International Conference on Computer Vision (2011)
Klin, A., Jones, W., Schultz, R., Volkmar, F., Cohen, D.: Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Arch. Gen. Psychiatry 59(9), 809–816 (2002)
Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227 (1985)
Koester, D., Schauerte, B., Stiefelhagen, R.: Accessible section detection for visual guidance. In: IEEE/NSF Workshop on Multimodal and Alternative Perception for Visually Impaired People (2013)
Kootstra, G., Nederveen, A., de Boer, B.: Paying attention to symmetry. In: Proceedings of the British Conference on Computer Vision (2008)
Kühn, B., Schauerte, B., Stiefelhagen, R., Kroschel, K.: A modular audio-visual scene analysis and attention system for humanoid robots. In: Proceedings of the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan (2012)
Kühn, B., Schauerte, B., Kroschel, K., Stiefelhagen, R.: Multimodal saliency-based attention: A lazy robot’s approach. In: Proceedings of the 25th International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, Vilamoura, Algarve, Portugal (2012)
Kühn, B., Schauerte, B., Kroschel, K., Stiefelhagen, R.: Wow! Bayesian surprise for salient acoustic event detection. In: Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP). IEEE, Vancouver, Canada (2013)
Laurienti, P., Burdette, J.H., Wallace, M.T., Yen, Y.F., Field, A.S., Stein, B.E.: Deactivation of sensory-specific cortex by cross-modal stimuli. J. Cogn. Neurosci. 14, 420–429 (2002)
Li, J., Xu, D., Gao, W.: Removing label ambiguity in learning-based visual saliency estimation. IEEE Trans. Image Process. 21(4), 1513–1525 (2012)
Liebal, K., Tomasello, M.: Infants appreciate the social intention behind a pointing gesture: commentary on "children’s understanding of communicative intentions in the middle of the second year of life" by T. Aureli, P. Perucchini and M. Genco. Cogn. Dev 24(1), 13–15 (2009)
Lin, K.-H., Zhuang, X., Goudeseune, C., King, S., Hasegawa-Johnson, M., Huang, T.S.: Improving faster-than-real-time human acoustic event detection by saliency-maximized audio visualization. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (2012)
Lin, K.-H., Zhuang, X., Goudeseune, C., King, S., Hasegawa-Johnson, M., Huang, T.S.: Towards attentive robots. Paladyn 2(2), 64–70 (2011)
Liu, T., Sun, J., Zheng, N.-N., Tang, X., Shum, H.-Y.: Learning to detect a salient object. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition (2007)
Lu, S., Lim, J.-H.: Saliency modeling from image histograms. In: Proceedings of the European Conference on Computer Vision (2012)
Louwerse, M., Bangerter, A.: Focusing attention with deictic gestures and linguistic expressions. In: Proceedings of the Annual Conference of the Cognitive Science Society (2005)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)
Marr, D.: Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman and Company (1982)
Navalpakkam, V., Itti, L.: Search goal tunes visual features optimally. Neuron 53(4), 605–617 (2007)
Martinez, M., Constantinescu, A., Schauerte, B., Koester, D., Stiefelhagen, R.: Cognitive evaluation of haptic and audio feedback in short range navigation tasks. In: Proceedings of the 14th Int. Conf. Computers Helping People with Special Needs (ICCHP). Springer, Paris, France (2014)
Martinez, M., Schauerte, B., Stiefelhagen, R.: BAM! Depth-based body analysis in critical care. In: Proceedings of the 15th International Conference on Computer Analysis of Images and Patterns (CAIP). Springer, York, UK (2013)
Martinez, M., Schauerte, B., Stiefelhagen, R.: How the distribution of salient objects in images influences salient object detection. In: Proceedings of the 20th International Conference on Image Processing (ICIP). IEEE, Melbourne, Australia (2013)
Meger, D., Forssén, P.-E., Lai, K., Helmer, S., McCann, S., Southey, T., Baumann, M., Little, J.J., Lowe, D.G.: Curious George: an attentive semantic robot. Robot. Auton. Syst. 56(6), 503–511 (2008)
Miau, F., Papageorgiou, C., Itti, L.: Neuromorphic algorithms for computer vision and attention. In: Bosacchi, B., Fogel, D.B., Bezdek, J.C. (eds.) Proceedings of the SPIE 46th Annual International Symposium on Optical Science and Technology, vol. 4479, pp. 12–23 (2001)
Mundy, P., Newell, L.: Attention, joint attention, and social cognition. Curr. Dir. Psychol. Sci. 16(5), 269–274 (2007)
Nass, C., Moon, Y.: Machines and mindlessness: social responses to computers. J. Soc. Issues 56(1), 81–103 (2000)
Nickerson, S.B., Jasiobedzki, P., Wilkes, D., Jenkin, M., Milios, E., Tsotsos, J.K., Jepson, A., Bains, O.N.: The ark project: autonomous mobile robots for known industrial environments. Robot. Auton. Syst. 25, 83–104 (1998)
Onat, S., Libertus, K., König, P.: Integrating audiovisual information for the control of overt attention. J. Vis. 7(10) (2007)
Oppenheim, A., Lim, J.: The importance of phase in signals. Proc. IEEE 69(5), 529–541 (1981)
Ouerhani, N., Bracamonte, J., Hugli, H., Ansorge, M., Pellandini, F.: Adaptive color image compression based on visual attention. In: Proceedings of the International Conference on Image Analysis and Processing, pp. 416–421 (2001)
Perez-Gonzalez, D., Malmierca, M.S., Covey, E.: Novelty detector neurons in the mammalian auditory midbrain. Eur. J. Neurosci. 22, 2879–2885 (2005)
Riesenhuber, M., Poggio, T.: Hierarchical models of object recognition in cortex. Nat. Neurosci. 2, 1019–1025 (1999)
Rubinstein, M., Shamir, A., Avidan, S.: Improved seam carving for video retargeting. In: Proceedings of the Annual Conference on Special Interest Group on Graphics and Interactive Techniques (2008)
Rybok, L., Schauerte, B., Al-Halah, Z., Stiefelhagen, R.: Important stuff, everywhere! Activity recognition with salient proto-objects as context. In: Proceedings of the 14th IEEE Winter Conference on Applications of Computer Vision (WACV), Steamboat Springs, CO, USA (2014)
Santella, A., Agrawala, M., DeCarlo, D., Salesin, D., Cohen, M.: Gaze-based interaction for semi-automatic photo cropping. In: Proceedings of the International Conference on Human Factors Computing Systems (CHI) (2006)
Schauerte, B., Fink, G.A.: Focusing computational visual attention in multi-modal human-robot interaction. In: Proceedings of the 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI). ACM, Beijing, China (2010)
Schauerte, B., Fink, G.A.: Web-based learning of naturalized color models for human-machine interaction. In: Proceedings of the 12th International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, Sydney, Australia (2010)
Schauerte, B.: Multimodal computational attention for scene understanding. Ph.D. dissertation, Karlsruhe Institute of Technology (2014)
Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: Way to Go! Detecting open areas ahead of a walking person. In: ECCV Workshop on Assistive Computer Vision and Robotics (ACVR). Springer (2014)
Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: A web-based platform for interactive image sonification. In: Accessible Interaction for Visually Impaired People (AI4VIP) (2015)
Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: Look at this! Learning to guide visual saliency in human-robot interaction. In: Proceedings of the International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ (2014)
Schauerte, B., Koester, D., Martinez, M., Stiefelhagen, R.: On the distribution of salient objects in web images and its influence on salient object detection. PLoS ONE 10(7) (2015)
Schauerte, B., Kühn, B., Kroschel, K., Stiefelhagen, R.: Multimodal saliency-based attention for object-based scene analysis. In: Proceedings of the 24th International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, San Francisco, CA, USA (2011)
Schauerte, B., Martinez, M., Constantinescu, A., Stiefelhagen, R.: An assistive vision system for the blind that helps find lost things. In: Proceedings of the 13th International Conference on Computers Helping People with Special Needs (ICCHP). Springer, Linz, Austria (2012)
Schauerte, B., Plötz, T., Fink, G.A.: A multi-modal attention system for smart environments. In: Proceedings of the 7th International Conference on Computer Vision Systems (ICVS). Lecture Notes in Computer Science, vol. 5815. Springer, Liège (2009)
Schauerte, B., Richarz, J., Fink, G.A.: Saliency-based identification and recognition of pointed-at objects. In: Proceedings of the 23rd International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ, Taipei, Taiwan (2010)
Schauerte, B., Richarz, J., Plötz, T., Thurau, C., Fink, G.A.: Multi-modal and multi-camera attention in smart environments. In: Proceedings of the 11th International Conference on Multimodal Interfaces (ICMI). ACM, Cambridge (2009)
Schauerte, B., Stiefelhagen, R.: Learning robust color name models from web images. In: Proceedings of the 21st International Conference on Pattern Recognition (ICPR). IEEE, Tsukuba, Japan (2012)
Schauerte, B., Stiefelhagen, R.: Predicting human gaze using quaternion DCT image signature saliency and face detection. In: Proceedings of the IEEE Workshop on the Applications of Computer Vision (WACV). IEEE, Breckenridge, CO, USA (2012)
Schauerte, B., Stiefelhagen, R.: Quaternion-based spectral saliency detection for eye fixation prediction. In: Proceedings of the 12th European Conference on Computer Vision (ECCV). Springer, Firenze, Italy (2012)
Schneider, T., Schauerte, B., Stiefelhagen, R.: Manifold alignment for person independent appearance-based gaze estimation. In: Proceedings of the 22nd International Conference on Pattern Recognition (ICPR). IEEE, Stockholm, Sweden (2014)
Schauerte, B., Wörtwein, T., Stiefelhagen, R.: Color decorrelation helps visual saliency detection. In: Proceedings of the 22nd International Conference on Image Processing (ICIP). IEEE (2015)
Schauerte, B., Zamfirescu, C.T.: Small k-pyramids and the complexity of determining k. J. Discrete Algorithms (JDA) (2014)
Setlur, V., Lechner, T., Nienhaus, M., Gooch, B.: Retargeting images and video for preserving information saliency. IEEE Comput. Graph. Appl. 27(5), 80–88 (2007)
Siagian, C., Itti, L.: Biologically inspired mobile robot vision localization. IEEE Trans. Robot. 25(4), 861–873 (2009)
Simion, C., Shimojo, S.: Early interactions between orienting, visual sampling and decision making in facial preference. Vis. Res. 46(20), 3331–3335 (2006)
SMIvision: Sensomotoric instruments gmbh. http://www.smivision.com/
Spivey, M.J., Tyler, M.J., Eberhard, K.M., Tanenhaus, M.K.: Linguistically mediated visual search. Psychol. Sci. 12, 282–286 (2001)
Suh, B., Ling, H., Bederson, B.B., Jacobs, D.W.: Automatic thumbnail cropping and its effectiveness. In: ACM Symposium on User interface Software and Technology (2003)
Sussman, E.S., Winkler, I.: Dynamic sensory updating in the auditory system. Cogn. Brain Res. 12, 431–439 (2001)
Sussman, E.S.: Integration and segregation in auditory scene analysis. J. Acoust. Soc. Am. 117, 1285–1298 (2005)
Treisman, A.M., Gormican, S.: Feature analysis in early vision: evidence from search asymmetries. Psychol. Rev. 95(1), 15–48 (1988)
Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12(1), 97–136 (1980)
Triesch, J., Teuscher, C., Deák, G.O., Carlson, E.: Gaze following: why (not) learn it? Dev. Sci. 9(2), 125–147 (2006)
University of British Columbia: Curious George Project. https://www.cs.ubc.ca/labs/lci/curious_george/. Accessed 3 April 2014
Walther, D., Koch, C.: Modeling attention to salient proto-objects. Neural Netw. 19(9), 1395–1407 (2006)
Weissman, D.H., Warner, L.M., Woldorff, M.G.: The neural mechanisms for minimizing cross-modal distraction. J. Neurosci. 24, 10941–10949 (2004)
Wikimedia Commons (Blausen.com staff): Blausen gallery 2014, ear anatomy. http://commons.wikimedia.org/wiki/File:Blausen_0328_EarAnatomy.png. Accessed 23 Feb 2015 (License CC BY 3.0)
Wikimedia Commons (Blausen.com staff): Blausen gallery 2014, the internal ear. http://commons.wikimedia.org/wiki/File:Blausen_0329_EarAnatomy_InternalEar.png. Accessed 23 Feb 2015 (License CC BY 3.0)
Wikimedia Commons (Oarih): Cochlea-crosssection. http://commons.wikimedia.org/wiki/File:Cochlea-crosssection.png. Accessed 23 Feb 2015 (License CC BY-SA 3.0)
Winkler, I., Teder-Sälejärvi, W.A., Horváth, J., Näätänen, R., Sussman, E.: Human auditory cortex tracks task-irrelevant sound sources. Neuroreport 14, 2053–2056 (2003)
Winkler, I., Czigler, I., Sussman, E., Horváth, J., Balázs, L.: Preattentive binding of auditory and visual stimulus features. J. Cogn. Neurosci. 17, 320–339 (2005)
Winkler, S., Subramanian, R.: Overview of eye tracking datasets. In: International Workshop on Quality of Multimedia Experience (2013)
Woertwein, T., Chollet, M., Schauerte, B., Stiefelhagen, R., Morency, L.-P., Scherer, S.: Multimodal public speaking performance assessment. In: Proceedings of the 17th International Conference on Multimodal Interaction (ICMI). ACM (2015)
Woertwein, T., Schauerte, B., Mueller, K., Stiefelhagen, R.: Interactive web-based image sonification for the blind. In: Proceedings of the 17th International Conference on Multimodal Interaction (ICMI). ACM (2015)
Wolfe, J.M., Horowitz, T.S., Kenner, N., Hyle, M., Vasan, N.: How fast can you change your mind? the speed of top-down guidance in visual search. Vis. Res. 44, 1411–1426 (2004)
Wolfe, J.M.: Guided search 2.0: a revised model of visual search. Psychon. Bull. Rev. 1, 202–238 (1994)
Wolfe, J.M., Cave, K., Franzel, S.: Guided search: an alternative to the feature integration model for visual search. J. Exp. Psychol.: Hum. Percept. Perform. 15, 419–433 (1989)
Woodruff, P.W., Benson, R.R., Bandettini, P.A., Kwong, K.K., Howard, R.J., Talavage, T., Belliveau, J., Rosen, B.R.: Modulation of auditory and visual cortex by selective attention is modality-dependent. Neuroreport 7, 1909–1913 (1996)
Xu, T., Zhang, T., Kühnlenz, K., Buss, M.: Attentional object detection with an active multi-focal vision system. Int. J. Humanoid Robot. 7(2) (2010)
Yarbus, A.L.: Eye Movements and Vision. Plenum Press (1967)
Zhang, L., Tong, M.H., Marks, T.K., Shan, H., Cottrell, G.W.: SUN: a Bayesian framework for saliency using natural statistics. J. Vis. 8(7) (2008)
Zhang, L., Tong, M.H., Marks, T.K., Shan, H., Cottrell, G.W.: Context-aware saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. (2012)
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this chapter
Schauerte, B. (2016). Background. In: Multimodal Computational Attention for Scene Understanding and Robotics. Cognitive Systems Monographs, vol 30. Springer, Cham. https://doi.org/10.1007/978-3-319-33796-8_2
Print ISBN: 978-3-319-33794-4
Online ISBN: 978-3-319-33796-8