
From Human Eye Fixation to Human-like Autonomous Artificial Vision

  • Viachaslau Kachurka (email author)
  • Kurosh Madani
  • Christophe Sabourin
  • Vladimir Golovko
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9094)

Abstract

Matching the skills of natural vision is an appealing prospect for artificial vision systems, especially in robotic applications where visual perception of the surrounding environment is a key requirement. Focusing on the visual-attention dilemma in autonomous visual perception, we propose a model of artificial visual attention that combines a statistical foundation of visual saliency with genetic optimization. Computationally, the model relies on center-surround statistical feature calculations and a nonlinear fusion of the resulting maps. Its statistical foundation and bottom-up nature also allow it to operate without prior information while resting on a comprehensive, solid theoretical basis. The eye-fixation paradigm, with the MIT1003 and Toronto image datasets, serves as the benchmark for experimental validation. The reported results show scores that challenge the current best algorithms in the field, while our approach executes faster.
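The abstract names the computational ingredients (center-surround statistical features, nonlinear fusion of maps, genetic tuning) without giving formulas. The Python fragment below is a minimal illustrative sketch of that kind of pipeline, not the paper's actual method: Gaussian center-surround differences stand in for the statistical features, and a weighted power-law combination stands in for the nonlinear fusion. The `weights` and `exponents` parameters are hypothetical names for the sort of genome a genetic algorithm could optimize against eye-fixation data.

```python
# Illustrative sketch only: Gaussian center-surround contrast and power-law
# fusion are stand-ins; the paper's actual statistical features and fusion
# rule are not specified in the abstract.
import numpy as np
from scipy import ndimage


def center_surround(channel, center_sigma=2.0, surround_sigma=8.0):
    """Center-surround contrast of one feature channel: the absolute
    difference between a fine-scale and a coarse-scale Gaussian blur."""
    center = ndimage.gaussian_filter(channel, sigma=center_sigma)
    surround = ndimage.gaussian_filter(channel, sigma=surround_sigma)
    return np.abs(center - surround)


def fuse(maps, weights, exponents):
    """Nonlinear fusion: weighted sum of per-channel maps, each raised to
    its own exponent, then normalized to [0, 1]. The weights and exponents
    are the kind of parameters a genetic algorithm could tune."""
    fused = sum(w * m ** e for m, w, e in zip(maps, weights, exponents))
    fused -= fused.min()
    return fused / (fused.max() + 1e-12)


# Toy usage: treat the RGB planes of a random "image" as feature channels.
rgb = np.random.rand(240, 320, 3)
maps = [center_surround(rgb[..., c]) for c in range(3)]
saliency = fuse(maps, weights=[0.5, 0.3, 0.2], exponents=[1.0, 2.0, 1.5])
```

Evaluation under the eye-fixation paradigm would then compare such a saliency map against recorded human fixation maps, typically via an ROC/AUC-style score, which is the standard protocol on the MIT1003 and Toronto datasets.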

Keywords

Autonomous vision · Center-surround saliency · Evolutionary optimization · Eye fixation · Human-like visual attention



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Viachaslau Kachurka (1, 2) (email author)
  • Kurosh Madani (1)
  • Christophe Sabourin (1)
  • Vladimir Golovko (2)
  1. LISSI / EA 3956 Laboratory, Senart-FB Institute of Technology, University Paris-Est Creteil, Lieusaint, France
  2. Neural Networks Laboratory, Intelligent Information Technologies Department, Brest State Technical University, Brest, Belarus
