Asymmetry as a Measure of Visual Saliency

  • Ali Alsam
  • Puneet Sharma
  • Anette Wrålsen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7944)

Abstract

A salient feature is a part of a scene that stands out relative to its neighbours; that is, a human observer would experience it as more prominent. It is important, however, to quantify saliency as a mathematical quantity that lends itself to measurement. Several metrics have been shown to correlate with human fixation data, including contrast, brightness and orientation gradients calculated at different image scales.

In this paper, we show that these metrics can be grouped under transformations pertaining to the dihedral group D4, which is the symmetry group of the square image grid. Our results show that salient features can be defined as the image features that are most asymmetric in their surrounds.
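The paper's own saliency computation is not reproduced here, but the core idea of the abstract can be sketched: the dihedral group D4 has eight elements (four rotations and four reflections of a square), and a square image patch can be scored by how far it deviates from its average over those eight transformations. The helper names `d4_transforms` and `asymmetry_score` below are illustrative, not from the paper.

```python
import numpy as np

def d4_transforms(patch):
    """Return the 8 images of a square patch under the dihedral group D4:
    the 4 rotations by multiples of 90 degrees, plus their reflections."""
    rots = [np.rot90(patch, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def asymmetry_score(patch):
    """Mean squared deviation of the D4 orbit from its orbit average.
    A patch that is invariant under every D4 transformation scores 0;
    the more the patch breaks the square grid's symmetries, the higher
    the score."""
    orbit = np.stack(d4_transforms(patch.astype(float)))
    return float(np.mean((orbit - orbit.mean(axis=0)) ** 2))
```

Under this sketch, a uniform patch scores exactly zero, while a patch containing an oriented edge or gradient scores positively, matching the abstract's claim that saliency corresponds to asymmetry under D4.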

Keywords

Saliency · Dihedral group D4 · Asymmetry


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Ali Alsam¹
  • Puneet Sharma¹
  • Anette Wrålsen¹

  1. Department of Informatics & e-Learning (AITeL), Sør-Trøndelag University College (HiST), Trondheim, Norway
