Abstract
Tags associated with images on social media sharing web sites are a valuable information source for improving image retrieval. Due to the nature of tagging, however, many tags associated with images are not visually descriptive. In this chapter, we propose Image Tag Clarity to evaluate how effectively a tag describes the visual content of its annotated images, also known as the tag's visual-representativeness. It is measured by computing the zero-mean normalized distance between the tag language model, estimated from the images annotated with the tag, and the collection language model. Both language models are derived from the bag-of-visual-word local content features of the images. Visual-representative tags, i.e., tags commonly used to annotate visually similar images, receive high tag clarity scores. Evaluated on a large real-world dataset containing more than 269K images and their associated tags, we show that the image tag clarity score can effectively identify the visual-representative tags among all tags contributed by users. Based on the tag clarity scores, we make a few interesting observations that could support many tag-based applications.
This chapter is an extended version of the paper [11] presented at the first ACM SIGMM Workshop on Social Media (WSM), held in conjunction with ACM Multimedia, 2009.
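The clarity measure summarized above can be sketched in a few lines. The sketch below is a minimal illustration under our own assumptions, not the authors' implementation: it represents each image as a bag-of-visual-word count histogram, builds smoothed tag and collection language models, scores a tag by the KL divergence of its model from the collection model, and zero-mean normalizes that score against random pseudo-tags of the same size (all function names, the smoothing parameter `mu`, and the synthetic data are ours).

```python
import math
import random
from collections import Counter

def language_model(histograms, vocab_size, mu=100.0):
    """Aggregate visual-word counts over a set of images into a
    smoothed unigram language model (Dirichlet-style smoothing)."""
    counts = Counter()
    for h in histograms:
        counts.update(h)
    total = sum(counts.values())
    return {w: (counts.get(w, 0) + mu / vocab_size) / (total + mu)
            for w in range(vocab_size)}

def kl_divergence(p, q):
    """KL divergence of model p from model q over a shared vocabulary."""
    return sum(pw * math.log(pw / q[w]) for w, pw in p.items() if pw > 0)

def tag_clarity(tagged, collection, vocab_size, samples=50, rng=None):
    """Raw clarity is KL(tag model || collection model); the score is
    zero-mean normalized against pseudo-tags of the same size drawn
    uniformly at random from the collection."""
    rng = rng or random.Random(0)
    coll_model = language_model(collection, vocab_size)
    raw = kl_divergence(language_model(tagged, vocab_size), coll_model)
    baseline = [kl_divergence(
                    language_model(rng.sample(collection, len(tagged)),
                                   vocab_size),
                    coll_model)
                for _ in range(samples)]
    mean = sum(baseline) / samples
    std = (sum((b - mean) ** 2 for b in baseline) / samples) ** 0.5 or 1.0
    return (raw - mean) / std

# Synthetic demo: a visually coherent tag (images share a narrow
# sub-vocabulary of visual words) vs. a visually noisy tag.
rng = random.Random(42)
vocab = 50
collection = [Counter(rng.choices(range(vocab), k=100)) for _ in range(200)]
coherent_images = [Counter(rng.choices(range(5), k=100)) for _ in range(20)]
noisy_images = rng.sample(collection, 20)

score_coherent = tag_clarity(coherent_images, collection, vocab)
score_noisy = tag_clarity(noisy_images, collection, vocab)
```

With this setup the coherent pseudo-tag receives a clarity score far above the random baseline, while the noisy pseudo-tag stays near zero, mirroring how visual-representative tags are separated from the rest.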
Notes
1.
2. Nevertheless, we believe that the image tag clarity score is generic and can be computed using other feature representations.
3. Recall that both τ(t) and τ(t′) are computed purely from visual content features of their tagged images.
4.
5. http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm. Accessed June 2009.
6. The number reported here is slightly different from that reported in [1], probably due to different pre-processing. Nevertheless, the tag distribution remains similar.
7. One image may be annotated by multiple visual or non-visual tags.
References
Chua, T.-S., Tang, J., Hong, R., Li, H., Luo, Z., Zheng, Y.-T.: NUS-WIDE: A real-world web image database from National University of Singapore. In: Proc. of ACM CIVR, Santorini, Greece (2009)
Cronen-Townsend, S., Zhou, Y., Croft, W.B.: Predicting query performance. In: Proc. of SIGIR, Tampere, Finland, pp. 299–306 (2002)
Elsas, J.L., Arguello, J., Callan, J., Carbonell, J.G.: Retrieval and feedback models for blog feed search. In: Proc. of SIGIR, Singapore, pp. 347–354 (2008)
Golder, S.A., Huberman, B.A.: Usage patterns of collaborative tagging systems. J. Inf. Sci. 32(2), 198–208 (2006)
Hauff, C., Murdock, V., Baeza-Yates, R.: Improved query difficulty prediction for the web. In: Proc. of CIKM, Napa Valley, CA, pp. 439–448 (2008)
Knorr, E.M., Ng, R.T.: Algorithms for mining distance-based outliers in large datasets. In: Proc. of VLDB, pp. 392–403. Morgan Kaufmann, San Mateo (1998)
Li, X., Snoek, C.G.M., Worring, M.: Learning tag relevance by neighbor voting for social image retrieval. In: Proc. of MIR, pp. 180–187 (2008)
Liu, D., Hua, X.-S., Yang, L., Wang, M., Zhang, H.-J.: Tag ranking. In: Proc. WWW, Madrid, Spain, pp. 351–360 (2009)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
Lu, Y., Zhang, L., Tian, Q., Ma, W.-Y.: What are the high-level concepts with small semantic gaps? In: Proc. of CVPR, Alaska, USA (2008)
Sun, A., Bhowmick, S.S.: Image tag clarity: In search of visual-representative tags for social images. In: Proc. of ACM SIGMM Workshop on Social Media (WSM), in conjunction with ACM Multimedia, Beijing, China (2009)
Sun, A., Bhowmick, S.S.: Quantifying tag representativeness of visual content of social images. In: Proc. of ACM Multimedia (MM), Firenze, Italy (2010)
Sun, A., Datta, A.: On stability, clarity, and co-occurrence of self-tagging. In: Proc. of ACM WSDM (Late Breaking-Results) (2009)
Sun, A., Bhowmick, S.S., Liu, Y.: iAVATAR: An interactive tool for finding and visualizing visual-representative tags in image search. Proc. VLDB Endow. (PVLDB) 3(2) (2010)
Teevan, J., Dumais, S.T., Liebling, D.J.: To personalize or not to personalize: modeling queries with variation in user intent. In: Proc. of SIGIR, Singapore, pp. 163–170 (2008)
Weinberger, K., Slaney, M., van Zwol, R.: Resolving tag ambiguity. In: Proc. of ACM Multimedia (MM), Vancouver, Canada (2008)
Wu, L., Hua, X.-S., Yu, N., Ma, W.-Y., Li, S.: Flickr distance. In: Proc. of ACM Multimedia (MM), Vancouver, Canada, pp. 31–40 (2008)
Wu, L., Yang, L., Yu, N., Hua, X.-S.: Learning to tag. In: Proc. WWW, Madrid, Spain, pp. 361–370 (2009)
Yom-Tov, E., Fine, S., Carmel, D., Darlow, A.: Learning to estimate query difficulty: including applications to missing content detection and distributed information retrieval. In: Proc. of SIGIR, Salvador, Brazil, pp. 512–519 (2005)
Zhou, Y., Croft, W.B.: Query performance prediction in web search environments. In: Proc. of SIGIR, Amsterdam, pp. 543–550 (2007)
Copyright information
© 2011 Springer-Verlag London Limited
Cite this chapter
Sun, A., Bhowmick, S.S. (2011). Quantifying Visual-Representativeness of Social Image Tags Using Image Tag Clarity. In: Hoi, S., Luo, J., Boll, S., Xu, D., Jin, R., King, I. (eds) Social Media Modeling and Computing. Springer, London. https://doi.org/10.1007/978-0-85729-436-4_1
DOI: https://doi.org/10.1007/978-0-85729-436-4_1
Publisher Name: Springer, London
Print ISBN: 978-0-85729-435-7
Online ISBN: 978-0-85729-436-4
eBook Packages: Computer Science (R0)