Audio Visual Attention Models in the Mobile Robots Navigation

  • Chapter
New Approaches in Intelligent Image Analysis

Part of the book series: Intelligent Systems Reference Library (ISRL, volume 108)

Abstract

Mobile robots are equipped with sensitive audio-visual sensors, usually microphone arrays and video cameras, which serve as the main sources of audio-visual information for navigation tasks modeled on human audio-visual perception. The results of audio and visual perception algorithms are widely used, separately or in combination (audio-visual perception), in mobile robot navigation, for example to control robot motion in applications such as people and object tracking or surveillance. The effectiveness and precision of audio-visual perception methods in mobile robot navigation can be enhanced by combining audio-visual perception with audio-visual attention. Substantial knowledge exists describing the phenomena of human audio and visual attention. The corresponding approaches usually rest on extensive physiological, psychological, medical, and technical experimental investigations relating human audio and visual attention to human audio and visual perception, with brain activity playing the leading role. The results of these investigations are important but not sufficient for modeling audio-visual attention in mobile robots, mainly because a robot's audio-visual perception system has no brain. This chapter therefore proposes to adapt the existing definitions and models of human audio and visual attention to models of mobile robot audio and visual attention, and to combine them with the results of mobile robot audio and visual perception in navigation tasks.
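As a rough illustration of the combination the abstract proposes, the following sketch (not the authors' method) fuses a visual saliency cue with an auditory direction-of-arrival cue into a single steering heading. It assumes a grayscale camera frame and a two-microphone array; the center-surround saliency (only loosely in the spirit of the Itti-Koch model), the 0.2 m microphone spacing, the 60° field of view, and the fusion weight are all hypothetical placeholders.

```python
# Minimal illustrative sketch, NOT the chapter's algorithm: fuse a visual
# attention (saliency) cue with an auditory attention (direction-of-arrival)
# cue into one steering direction. All parameters are placeholders.
import numpy as np
from scipy.ndimage import uniform_filter

def visual_saliency(gray):
    """Simplified center-surround contrast, loosely Itti-Koch style."""
    img = gray.astype(float)
    center = uniform_filter(img, size=9)      # fine local average
    surround = uniform_filter(img, size=65)   # broad context average
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)           # normalize to [0, 1]

def audio_doa(left, right, fs, mic_dist=0.2, c=343.0):
    """Direction of arrival (radians, 0 = straight ahead) from the
    inter-microphone delay, via plain cross-correlation (a GCC
    simplification); mic_dist is an assumed 0.2 m spacing."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # delay in samples
    tau = lag / fs
    # Clip to the physically possible range before taking arcsin.
    return np.arcsin(np.clip(tau * c / mic_dist, -1.0, 1.0))

def attention_heading(gray, left, right, fs,
                      hfov=np.deg2rad(60), w_audio=0.5):
    """Blend the bearing of the most salient image column with the
    audio DOA into a single heading command."""
    sal = visual_saliency(gray)
    col = np.argmax(sal.sum(axis=0))                    # salient column
    visual_angle = (col / gray.shape[1] - 0.5) * hfov   # column -> bearing
    return (1.0 - w_audio) * visual_angle \
        + w_audio * audio_doa(left, right, fs)
```

In a real system the returned heading would feed the robot's motion controller, and the fixed fusion weight `w_audio` would be tuned per task or made adaptive, for instance favoring the audio cue when the sound source lies outside the camera's field of view.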



Author information

Corresponding author

Correspondence to Snejana Pleshkova.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Pleshkova, S., Bekiarski, A. (2016). Audio Visual Attention Models in the Mobile Robots Navigation. In: Kountchev, R., Nakamatsu, K. (eds) New Approaches in Intelligent Image Analysis. Intelligent Systems Reference Library, vol 108. Springer, Cham. https://doi.org/10.1007/978-3-319-32192-9_8

  • DOI: https://doi.org/10.1007/978-3-319-32192-9_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-32190-5

  • Online ISBN: 978-3-319-32192-9

  • eBook Packages: Engineering, Engineering (R0)
