Effect of Junk Images on Inter-concept Distance Measurement: Positive or Negative?

  • Yusuke Nagasawa
  • Kazuaki Nakamura
  • Naoko Nitta
  • Noboru Babaguchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10133)

Abstract

In this paper, we focus on the problem of inter-concept distance measurement (ICDM), the task of computing the distance between two concepts. ICDM is generally achieved by constructing a visual model of each concept and calculating a dissimilarity score between the two visual models. The process of visual concept modeling often suffers from junk images, i.e., images whose visual content is unrelated to their text-tags. It is therefore naively expected that junk images also have a negative effect on the performance of ICDM. On the other hand, junk images might be related to their text-tags in some non-visual sense, because the text-tags are assigned by humans rather than by automated systems. Hence, the following question arises: is the effect of junk images on the performance of ICDM positive or negative? In this paper, we aim to answer this non-trivial question experimentally, using a unified framework for ICDM and junk image detection. Surprisingly, our experimental results indicate that junk images have a positive effect on the performance of ICDM.
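The general ICDM pipeline described above (model each concept visually, then score the dissimilarity between the two models) can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the paper: each concept is modeled as a normalized feature histogram aggregated over its tagged images, and dissimilarity is measured with the Jensen-Shannon divergence. All function names here are illustrative.

```python
# Minimal ICDM sketch: concept = aggregated, normalized feature histogram;
# distance = Jensen-Shannon divergence between the two concept models.
import math

def normalize(hist):
    """Turn a non-negative histogram into a probability distribution."""
    total = sum(hist)
    return [h / total for h in hist]

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q), with the 0 * log(0/q) = 0 convention."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded in [0, 1] with base-2 logs."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def concept_distance(images_a, images_b):
    """Aggregate per-image histograms into one model per concept, then compare."""
    model_a = normalize([sum(col) for col in zip(*images_a)])
    model_b = normalize([sum(col) for col in zip(*images_b)])
    return js_divergence(model_a, model_b)

# Toy example: 3-bin "visual feature" histograms for images of three concepts.
cat_images = [[8, 1, 1], [7, 2, 1]]
dog_images = [[6, 3, 1], [7, 2, 1]]
car_images = [[1, 1, 8], [0, 2, 8]]
print(concept_distance(cat_images, dog_images))  # small: visually similar concepts
print(concept_distance(cat_images, car_images))  # larger: visually distant concepts
```

A junk image in this sketch would be, say, a histogram like `car_images[0]` tagged "cat": it shifts the aggregated cat model and thereby perturbs every distance computed from it, which is exactly the effect the paper investigates.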

Keywords

Inter-concept distance · Junk image detection · Image reliableness · Iterative calculation

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Yusuke Nagasawa
    • 1
  • Kazuaki Nakamura
    • 1
  • Naoko Nitta
    • 1
  • Noboru Babaguchi
    • 1
  1. Graduate School of Engineering, Osaka University, Suita, Japan