Object of Interest Detection by Saliency Learning

  • Pattaraporn Khuwuthyakorn
  • Antonio Robles-Kelly
  • Jun Zhou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6312)

Abstract

In this paper, we present a method for object of interest detection. The method is statistical in nature and hinges on a model that combines salient features using a mixture of linear support vector machines. It exploits a divide-and-conquer strategy by partitioning the feature space into sub-regions of linearly separable data points. This yields a structured learning approach in which we learn a linear support vector machine for each region, the mixture weights, and the combination parameters for each of the salient features at hand. The method thus learns a combination of salient features such that a mixture of classifiers can be used to recover objects of interest in the image. We illustrate the utility of the method by applying our algorithm to the MSRA Salient Object Database.
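As a rough illustration of the mixture-of-linear-SVMs idea above, the sketch below partitions the feature space, trains one linear SVM per sub-region, and combines the experts with soft gating weights. It is a minimal sketch under assumptions not taken from the paper: k-means plays the role of the gating function in place of the EM-learned mixture weights and feature-combination parameters, scikit-learn's LinearSVC serves as the per-region classifier, and feature extraction from the image is left abstract.

# Minimal sketch: mixture of linear SVM experts over a partitioned feature space.
# Assumptions (not from the paper): k-means gating stands in for the EM-learned
# mixture weights; every sub-region is assumed to contain both classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

class MixtureOfLinearSVMs:
    def __init__(self, n_experts=4, temperature=1.0):
        self.n_experts = n_experts
        self.temperature = temperature

    def fit(self, X, y):
        # Divide and conquer: partition the feature space into sub-regions.
        self.gate = KMeans(n_clusters=self.n_experts, n_init=10).fit(X)
        # One linear SVM expert per sub-region.
        self.experts = []
        for k in range(self.n_experts):
            mask = self.gate.labels_ == k
            self.experts.append(LinearSVC(C=1.0).fit(X[mask], y[mask]))
        return self

    def decision_function(self, X):
        # Soft gating weights from distances to the sub-region centres.
        d = self.gate.transform(X)  # shape (n_samples, n_experts)
        w = np.exp(-d / self.temperature)
        w /= w.sum(axis=1, keepdims=True)
        # Weighted combination of the per-expert SVM scores.
        scores = np.stack([e.decision_function(X) for e in self.experts], axis=1)
        return (w * scores).sum(axis=1)

    def predict(self, X):
        return (self.decision_function(X) > 0).astype(int)

Given per-pixel or per-region saliency feature vectors X and binary object/background labels y, fit() trains the experts and decision_function() returns a combined saliency score that can be thresholded to recover the object of interest.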

Keywords

Salient Object, Conditional Random Field, Salient Region, Saliency Detection, Visual Saliency


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Pattaraporn Khuwuthyakorn (1, 3)
  • Antonio Robles-Kelly (1, 2)
  • Jun Zhou (1, 2)
  1. RSISE, Australian National University, Canberra, Australia
  2. National ICT Australia (NICTA), Canberra, Australia
  3. Cooperative Research Centre for National Plant Biosecurity, Canberra, Australia