
Automatic Image Annotation Based on Multi-scale Salient Region

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 238)

Abstract

Automatic image annotation is a challenging problem in the field of image understanding. Existing models extract visual features directly from segmented image regions. Since a segmented region may still contain multiple objects, the extracted visual features may not describe the corresponding region effectively. To overcome this problem, an image annotation model based on multi-scale salient regions is proposed. In this model, each image is first segmented with a multi-scale grid-based segmentation method. Second, a global contrast-based method is used to extract a saliency map from each image region. Third, visual features are extracted from each salient region. Finally, the multi-scale visual features of the image regions are fused and applied to automatic image annotation. The proposed model improves the object descriptions of images and image regions. Experimental results on the Corel 5K dataset verify the effectiveness of the proposed model.
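The abstract describes a four-stage pipeline: multi-scale grid segmentation, global contrast-based saliency extraction, feature extraction from salient regions, and fusion of the multi-scale features. The Python sketch below illustrates one simplified reading of that pipeline; the grid scales, colour-histogram features, histogram-style approximation of global-contrast saliency, saliency threshold, concatenation-based fusion, and all function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the multi-scale salient-region pipeline described above.
# All names, parameters, and the concatenation fusion are assumptions for
# illustration only; they do not reproduce the paper's actual method.
import numpy as np


def multiscale_grid_segment(image, scales=(2, 4, 8)):
    """Split an H x W x 3 image into regular grid cells at several scales."""
    h, w = image.shape[:2]
    regions = []
    for n in scales:  # n x n grid at each scale (assumed scales)
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                regions.append(image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]])
    return regions


def global_contrast_saliency(region, bins=8):
    """Crude global-contrast saliency: a pixel is salient when its quantised
    colour is far, on average, from the other colours in the region,
    weighted by how often those colours occur."""
    pixels = region.reshape(-1, 3).astype(float)
    quant = (pixels // (256 // bins)).astype(int)
    codes = quant[:, 0] * bins * bins + quant[:, 1] * bins + quant[:, 2]
    uniq, inverse, counts = np.unique(codes, return_inverse=True, return_counts=True)
    centers = np.stack([pixels[inverse == k].mean(axis=0) for k in range(len(uniq))])
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    sal_per_color = (dist * counts[None, :]).sum(axis=1) / counts.sum()
    saliency = sal_per_color[inverse].reshape(region.shape[:2])
    return saliency / (saliency.max() + 1e-8)


def salient_region_feature(region, bins=8, threshold=0.5):
    """Colour histogram computed only over sufficiently salient pixels."""
    saliency = global_contrast_saliency(region, bins)
    mask = saliency >= threshold
    salient_pixels = region.reshape(-1, 3)[mask.reshape(-1)]
    if len(salient_pixels) == 0:  # uniform region: fall back to all pixels
        salient_pixels = region.reshape(-1, 3)
    hist, _ = np.histogramdd(salient_pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.flatten() / hist.sum()


def image_descriptor(image, scales=(2, 4, 8)):
    """Fuse per-region salient features by simple concatenation (assumed)."""
    feats = [salient_region_feature(r) for r in multiscale_grid_segment(image, scales)]
    return np.concatenate(feats)


if __name__ == "__main__":
    demo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    print(image_descriptor(demo).shape)
```

In such a pipeline, the fused descriptor would then be fed to the annotation stage (e.g. a relevance model or classifier); the abstract does not specify that stage, so it is omitted here.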

Keywords

Visual Feature; Image Region; Salient Object; Image Annotation; Salient Region

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
