
Mining Multiple Visual Appearances of Semantics for Image Annotation

  • Conference paper
Advances in Multimedia Modeling (MMM 2007)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4351)


Abstract

This paper investigates the problem of learning the visual semantics of keyword categories for automatic image annotation. Supervised learning algorithms that learn only a single concept per category are limited in their effectiveness for image annotation. We propose to use data mining techniques to mine multiple concepts, where each concept may consist of one or more visual parts, to capture the diverse visual appearances of a single keyword category. For training, we use the Apriori principle to efficiently mine a set of frequent blobsets that capture the semantics of a rich and diverse visual category. Each concept is ranked by a discriminative or diverse density measure. For testing, we propose level-sensitive matching to rank words given an unannotated image. Our approach is effective, scales well in both training and testing, and is efficient in terms of learning and annotation.
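The frequent-blobset mining the abstract describes follows the standard Apriori principle: every subset of a frequent blobset must itself be frequent, so level-(k+1) candidates are generated only from frequent level-k sets and pruned early. The sketch below is a minimal, self-contained illustration of that principle only; the blob IDs, image sets, and support threshold are invented, and the paper's actual segmentation, blob quantization, and ranking steps are described in the full text.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Mine frequent itemsets with the Apriori principle: candidates at
    level k+1 are joined from frequent level-k sets and kept only if
    every k-subset is itself frequent."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        # Fraction of transactions (images) containing the whole itemset.
        return sum(1 for t in transactions if itemset <= t) / n

    # Level 1: frequent single blobs.
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support}
    result, k = {}, 1
    while current:
        for s in current:
            result[s] = support(s)
        # Join level-k sets, then prune any candidate that has an
        # infrequent k-subset (the Apriori pruning step).
        candidates = set()
        for a in current:
            for b in current:
                u = a | b
                if len(u) == k + 1 and all(
                        frozenset(c) in current for c in combinations(u, k)):
                    candidates.add(u)
        current = {c for c in candidates if support(c) >= min_support}
        k += 1
    return result

# Each "transaction" is the set of blob labels appearing in one training
# image of a keyword category (blob IDs here are purely illustrative).
images = [{1, 2, 3}, {1, 2}, {1, 2, 4}, {2, 3}]
blobsets = apriori(images, min_support=0.5)
```

With these toy images, the blobset {1, 2} is frequent (3 of 4 images) while {1, 2, 3} is pruned because its subset {1, 3} occurs in only one image, showing how the pruning keeps candidate generation cheap.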





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tan, HK., Ngo, CW. (2006). Mining Multiple Visual Appearances of Semantics for Image Annotation. In: Cham, TJ., Cai, J., Dorai, C., Rajan, D., Chua, TS., Chia, LT. (eds) Advances in Multimedia Modeling. MMM 2007. Lecture Notes in Computer Science, vol 4351. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69423-6_27


  • DOI: https://doi.org/10.1007/978-3-540-69423-6_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-69421-2

  • Online ISBN: 978-3-540-69423-6

  • eBook Packages: Computer Science (R0)
