Abstract
This paper investigates the problem of learning the visual semantics of keyword categories for automatic image annotation. Supervised learning algorithms that learn only a single concept per category are of limited effectiveness for image annotation. We propose to use data mining techniques to mine multiple concepts, where each concept may consist of one or more visual parts, to capture the diverse visual appearances of a single keyword category. For training, we use the Apriori principle to efficiently mine a set of frequent blobsets that capture the semantics of a rich and diverse visual category. Each concept is ranked by a discriminative or diverse density measure. For testing, we propose level-sensitive matching to rank words given an unannotated image. Our approach is effective, scales well during both training and testing, and is efficient in terms of learning and annotation.
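For illustration, the Apriori-based mining step mentioned above amounts to frequent-itemset mining where each "transaction" is the set of blob labels in one training image, and a frequent blobset is an itemset whose support exceeds a threshold. The sketch below is a minimal, generic Apriori implementation under that reading; the function name, inputs, and threshold are illustrative assumptions, not the paper's actual code.

```python
from itertools import combinations

def apriori_frequent_itemsets(transactions, min_support):
    """Return every itemset with support >= min_support.

    transactions: list of sets (here: the blob labels of one image each).
    Illustrative sketch only -- not the paper's implementation.
    """
    n = len(transactions)
    # Count frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    all_frequent = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation from items appearing in frequent (k-1)-itemsets.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)]
        # Apriori pruning: every (k-1)-subset of a candidate must be frequent.
        candidates = [c for c in candidates
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t)
                  for c in candidates}
        frequent = {c: cnt / n for c, cnt in counts.items()
                    if cnt / n >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent
```

For example, over four images with blob sets {sky, grass}, {sky, grass, tree}, {sky}, {grass, tree} and a support threshold of 0.5, the blobset {sky, grass} is frequent (support 0.5) while {sky, tree} is not.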
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Tan, HK., Ngo, CW. (2006). Mining Multiple Visual Appearances of Semantics for Image Annotation. In: Cham, TJ., Cai, J., Dorai, C., Rajan, D., Chua, TS., Chia, LT. (eds) Advances in Multimedia Modeling. MMM 2007. Lecture Notes in Computer Science, vol 4351. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69423-6_27
DOI: https://doi.org/10.1007/978-3-540-69423-6_27
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-69421-2
Online ISBN: 978-3-540-69423-6