Quantifying Visual-Representativeness of Social Image Tags Using Image Tag Clarity

  • Aixin Sun
  • Sourav S. Bhowmick

Abstract

Tags associated with images on social media sharing websites are a valuable source of information for improving image retrieval. Due to the free-form nature of tagging, however, many tags associated with images are not visually descriptive. In this chapter, we propose Image Tag Clarity, a measure of how effectively a tag describes the visual content of the images it annotates, also known as the tag's visual-representativeness. It is computed as the zero-mean normalized distance between the tag language model, estimated from the images annotated with the tag, and the collection language model; both language models are derived from the bag-of-visual-words local content features of the images. Visual-representative tags, i.e., tags commonly used to annotate visually similar images, receive high tag clarity scores. Evaluated on a large real-world dataset containing more than 269K images and their associated tags, we show that the image tag clarity score effectively identifies the visual-representative tags among all user-contributed tags. Based on the tag clarity scores, we also make several observations that could support many tag-based applications.
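The measure described above adapts the query-clarity idea from information retrieval to image tags: estimate a language model over visual words for the images carrying a tag, compare it with the collection-wide model, and normalize the distance to zero mean. The sketch below is an illustrative interpretation only, not the authors' exact implementation; the function names, the use of KL divergence as the distance, the smoothing constant `mu`, and the random-sampling normalization are assumptions. Images are represented as dictionaries mapping visual-word IDs to counts.

```python
import math
import random
from collections import Counter

def language_model(image_histograms, vocab):
    """Maximum-likelihood distribution over visual words for a set of images."""
    counts = Counter()
    for hist in image_histograms:
        counts.update(hist)
    total = sum(counts.values())
    return {w: counts[w] / total for w in vocab}

def kl_divergence(p, q, vocab, mu=0.01):
    """KL(p || q), with p linearly smoothed toward q to avoid zero probabilities."""
    score = 0.0
    for w in vocab:
        pw = (1 - mu) * p.get(w, 0.0) + mu * q[w]
        if pw > 0 and q[w] > 0:
            score += pw * math.log(pw / q[w])
    return score

def tag_clarity(tagged, all_images, vocab, samples=10, seed=0):
    """Zero-mean normalized clarity of a tag (illustrative sketch).

    The raw KL score of the tag language model against the collection model
    is normalized by the mean and standard deviation of KL scores obtained
    from random image sets of the same size, so a score near zero means
    'no more coherent than a random tag'.
    """
    coll = language_model(all_images, vocab)
    raw = kl_divergence(language_model(tagged, vocab), coll, vocab)
    rng = random.Random(seed)
    rand = [
        kl_divergence(language_model(rng.sample(all_images, len(tagged)), vocab),
                      coll, vocab)
        for _ in range(samples)
    ]
    mean = sum(rand) / len(rand)
    std = (sum((r - mean) ** 2 for r in rand) / len(rand)) ** 0.5 or 1.0
    return (raw - mean) / std
```

A tag whose images concentrate on a few visual words yields a tag model far from the collection model (large KL), and hence a high clarity score after normalization; a tag spread uniformly over the collection scores near zero.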

Keywords

Language model · Visual content · Annotated image · Clarity score · Query language model

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Chua, T.-S., Tang, J., Hong, R., Li, H., Luo, Z., Zheng, Y.-T.: NUS-WIDE: A real-world web image database from National University of Singapore. In: Proc. of ACM CIVR, Santorini, Greece (2009)
  2. Cronen-Townsend, S., Zhou, Y., Croft, W.B.: Predicting query performance. In: Proc. of SIGIR, Tampere, Finland, pp. 299–306 (2002)
  3. Elsas, J.L., Arguello, J., Callan, J., Carbonell, J.G.: Retrieval and feedback models for blog feed search. In: Proc. of SIGIR, Singapore, pp. 347–354 (2008)
  4. Golder, S.A., Huberman, B.A.: Usage patterns of collaborative tagging systems. J. Inf. Sci. 32(2), 198–208 (2006)
  5. Hauff, C., Murdock, V., Baeza-Yates, R.: Improved query difficulty prediction for the web. In: Proc. of CIKM, Napa Valley, CA, pp. 439–448 (2008)
  6. Knorr, E.M., Ng, R.T.: Algorithms for mining distance-based outliers in large datasets. In: Proc. of VLDB, pp. 392–403. Morgan Kaufmann, San Mateo (1998)
  7. Li, X., Snoek, C.G.M., Worring, M.: Learning tag relevance by neighbor voting for social image retrieval. In: Proc. of MIR, pp. 180–187 (2008)
  8. Liu, D., Hua, X.-S., Yang, L., Wang, M., Zhang, H.-J.: Tag ranking. In: Proc. of WWW, Madrid, Spain, pp. 351–360 (2009)
  9. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
  10. Lu, Y., Zhang, L., Tian, Q., Ma, W.-Y.: What are the high-level concepts with small semantic gaps? In: Proc. of CVPR, Alaska, USA (2008)
  11. Sun, A., Bhowmick, S.S.: Image tag clarity: In search of visual-representative tags for social images. In: Proc. of ACM SIGMM Workshop on Social Media (WSM) with ACM Multimedia, Beijing, China (2009)
  12. Sun, A., Bhowmick, S.S.: Quantifying tag representativeness of visual content of social images. In: Proc. of ACM Multimedia (MM), Firenze, Italy (2010)
  13. Sun, A., Datta, A.: On stability, clarity, and co-occurrence of self-tagging. In: Proc. of ACM WSDM (Late Breaking Results) (2009)
  14. Sun, A., Bhowmick, S.S., Liu, Y.: iAVATAR: An interactive tool for finding and visualizing visual-representative tags in image search. Proc. VLDB Endow. 3(2) (2010)
  15. Teevan, J., Dumais, S.T., Liebling, D.J.: To personalize or not to personalize: Modeling queries with variation in user intent. In: Proc. of SIGIR, Singapore, pp. 163–170 (2008)
  16. Weinberger, K., Slaney, M., van Zwol, R.: Resolving tag ambiguity. In: Proc. of ACM Multimedia (MM), Vancouver, Canada (2008)
  17. Wu, L., Hua, X.-S., Yu, N., Ma, W.-Y., Li, S.: Flickr distance. In: Proc. of ACM Multimedia (MM), Vancouver, Canada, pp. 31–40 (2008)
  18. Wu, L., Yang, L., Yu, N., Hua, X.-S.: Learning to tag. In: Proc. of WWW, Madrid, Spain, pp. 361–370 (2009)
  19. Yom-Tov, E., Fine, S., Carmel, D., Darlow, A.: Learning to estimate query difficulty: Including applications to missing content detection and distributed information retrieval. In: Proc. of SIGIR, Salvador, Brazil, pp. 512–519 (2005)
  20. Zhou, Y., Croft, W.B.: Query performance prediction in web search environments. In: Proc. of SIGIR, Amsterdam, pp. 543–550 (2007)

Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  1. School of Computer Engineering, Nanyang Technological University, Singapore