
Quantifying Visual-Representativeness of Social Image Tags Using Image Tag Clarity

  • Chapter
Social Media Modeling and Computing

Abstract

Tags associated with images on social media sharing web sites are a valuable source of information for improving image retrieval. Due to the nature of tagging, however, many tags associated with images are not visually descriptive. In this chapter, we propose Image Tag Clarity to evaluate how effectively a tag describes the visual content of the images it annotates, also known as the tag's visual-representativeness. It is measured by computing the zero-mean normalized distance between the tag language model, estimated from the images annotated with the tag, and the collection language model. Both language models are derived from bag-of-visual-words local content features of the images. Visually representative tags, which are commonly used to annotate visually similar images, receive high tag clarity scores. Evaluated on a large real-world dataset containing more than 269K images and their associated tags, we show that the image tag clarity score can effectively identify visually representative tags among all user-contributed tags. Based on the tag clarity scores, we make a few interesting observations that could support many tag-based applications.
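As a rough illustration of the idea described above (a sketch, not the chapter's exact implementation), tag clarity can be computed as the KL divergence between a smoothed tag language model and the collection language model over a visual-word vocabulary. All function names, the Dirichlet smoothing choice, and the parameter `mu` are illustrative assumptions:

```python
import numpy as np

def collection_lm(counts):
    """Maximum-likelihood unigram model over the visual-word vocabulary."""
    return counts / counts.sum()

def tag_lm(tag_counts, p_collection, mu=100.0):
    """Dirichlet-smoothed language model of the images carrying the tag."""
    return (tag_counts + mu * p_collection) / (tag_counts.sum() + mu)

def tag_clarity(tag_counts, collection_counts, mu=100.0):
    """KL divergence (in bits) between the tag LM and the collection LM.
    Tags whose images concentrate on a few visual words score higher."""
    p_c = collection_lm(collection_counts)
    p_t = tag_lm(tag_counts, p_c, mu)
    return float(np.sum(p_t * np.log2(p_t / p_c)))
```

A tag whose images share many visual words (e.g. "sunset") scores well above zero, while a tag whose images look like a random sample of the collection (e.g. "2009") yields a tag LM close to the collection LM and hence a near-zero score.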

This chapter is an extended version of the paper [11] presented at the first ACM SIGMM Workshop on Social Media (WSM), held in conjunction with ACM Multimedia, 2009.


Notes

  1. http://www.flickr.com.

  2. Nevertheless, we believe that the image tag clarity score is generic and can be computed using other feature representations.

  3. Recall that both τ(t) and τ(t′) are computed purely from the visual content features of their tagged images.

  4. Note that the way we bin the tag frequencies, and the number of dummy tags used to estimate the expected tag clarity scores and their standard deviations, differ from those in [11]. This leads to differences in the normalized tag clarity scores reported in Sects. 4 and 5.

  5. http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm. Accessed June 2009.

  6. The number reported here differs slightly from that reported in [1], probably due to different pre-processing. Nevertheless, the tag distribution remains similar.

  7. One image may be annotated by multiple visual or non-visual tags.
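Note 4 above refers to the normalization step: a tag's raw clarity score is compared against the clarity scores of "dummy" tags of the same frequency, i.e. random image subsets of the same size, yielding a zero-mean normalized score. A minimal self-contained sketch; the sampling scheme, function names, and defaults here are assumptions rather than the chapter's exact procedure:

```python
import numpy as np

def normalized_clarity(raw_score, tag_freq, n_images, clarity_fn,
                       n_dummy=100, seed=0):
    """Zero-mean normalize a raw tag clarity score against dummy tags.
    Each dummy tag is `tag_freq` image indices drawn uniformly at random;
    `clarity_fn` maps a set of image indices to a raw clarity score."""
    rng = np.random.default_rng(seed)
    dummy_scores = [
        clarity_fn(rng.choice(n_images, size=tag_freq, replace=False))
        for _ in range(n_dummy)
    ]
    mu = float(np.mean(dummy_scores))
    sigma = float(np.std(dummy_scores))
    return (raw_score - mu) / sigma
```

Scores well above zero then indicate a tag that is far more visually coherent than a random subset of images of the same size, which is what makes scores comparable across tags of different frequencies.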

References

  1. Chua, T.-S., Tang, J., Hong, R., Li, H., Luo, Z., Zheng, Y.-T.: NUS-WIDE: A real-world web image database from National University of Singapore. In: Proc. of ACM CIVR, Santorini, Greece, July 2009

  2. Cronen-Townsend, S., Zhou, Y., Croft, W.B.: Predicting query performance. In: Proc. of SIGIR, Tampere, Finland, pp. 299–306 (2002)

  3. Elsas, J.L., Arguello, J., Callan, J., Carbonell, J.G.: Retrieval and feedback models for blog feed search. In: Proc. of SIGIR, Singapore, pp. 347–354 (2008)

  4. Golder, S.A., Huberman, B.A.: Usage patterns of collaborative tagging systems. J. Inf. Sci. 32(2), 198–208 (2006)

  5. Hauff, C., Murdock, V., Baeza-Yates, R.: Improved query difficulty prediction for the web. In: Proc. of CIKM, Napa Valley, CA, pp. 439–448 (2008)

  6. Knorr, E.M., Ng, R.T.: Algorithms for mining distance-based outliers in large datasets. In: Proc. of VLDB, pp. 392–403. Morgan Kaufmann, San Mateo (1998)

  7. Li, X., Snoek, C.G.M., Worring, M.: Learning tag relevance by neighbor voting for social image retrieval. In: Proc. of MIR, pp. 180–187 (2008)

  8. Liu, D., Hua, X.-S., Yang, L., Wang, M., Zhang, H.-J.: Tag ranking. In: Proc. of WWW, Madrid, Spain, pp. 351–360 (2009)

  9. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)

  10. Lu, Y., Zhang, L., Tian, Q., Ma, W.-Y.: What are the high-level concepts with small semantic gaps? In: Proc. of CVPR, Alaska, USA (2008)

  11. Sun, A., Bhowmick, S.S.: Image tag clarity: In search of visual-representative tags for social images. In: Proc. of ACM SIGMM Workshop on Social Media (WSM) with ACM Multimedia, Beijing, China, Oct 2009

  12. Sun, A., Bhowmick, S.S.: Quantifying tag representativeness of visual content of social images. In: Proc. of ACM Multimedia (MM), Firenze, Italy, Oct 2010

  13. Sun, A., Datta, A.: On stability, clarity, and co-occurrence of self-tagging. In: Proc. of ACM WSDM (Late Breaking Results) (2009)

  14. Sun, A., Bhowmick, S.S., Liu, Y.: iAVATAR: An interactive tool for finding and visualizing visual-representative tags in image search. Proceedings of the VLDB Endowment (PVLDB) 3(2), Sep 2010

  15. Teevan, J., Dumais, S.T., Liebling, D.J.: To personalize or not to personalize: Modeling queries with variation in user intent. In: Proc. of SIGIR, Singapore, pp. 163–170 (2008)

  16. Weinberger, K., Slaney, M., van Zwol, R.: Resolving tag ambiguity. In: Proc. of ACM Multimedia (MM), Vancouver, Canada (2008)

  17. Wu, L., Hua, X.-S., Yu, N., Ma, W.-Y., Li, S.: Flickr distance. In: Proc. of ACM Multimedia (MM), Vancouver, Canada, pp. 31–40 (2008)

  18. Wu, L., Yang, L., Yu, N., Hua, X.-S.: Learning to tag. In: Proc. of WWW, Madrid, Spain, pp. 361–370 (2009)

  19. Yom-Tov, E., Fine, S., Carmel, D., Darlow, A.: Learning to estimate query difficulty: Including applications to missing content detection and distributed information retrieval. In: Proc. of SIGIR, Salvador, Brazil, pp. 512–519 (2005)

  20. Zhou, Y., Croft, W.B.: Query performance prediction in web search environments. In: Proc. of SIGIR, Amsterdam, pp. 543–550 (2007)


Author information

Correspondence to Aixin Sun.

Copyright information

© 2011 Springer-Verlag London Limited

About this chapter

Cite this chapter

Sun, A., Bhowmick, S.S. (2011). Quantifying Visual-Representativeness of Social Image Tags Using Image Tag Clarity. In: Hoi, S., Luo, J., Boll, S., Xu, D., Jin, R., King, I. (eds) Social Media Modeling and Computing. Springer, London. https://doi.org/10.1007/978-0-85729-436-4_1


  • DOI: https://doi.org/10.1007/978-0-85729-436-4_1

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-435-7

  • Online ISBN: 978-0-85729-436-4

  • eBook Packages: Computer Science (R0)
