
Multimedia Tools and Applications, Volume 78, Issue 14, pp 19641–19662

Exploiting label semantic relatedness for unsupervised image annotation with large free vocabularies

  • Luis Pellegrin
  • Hugo Jair Escalante
  • Manuel Montes-y-Gómez
  • Fabio A. González

Abstract

Automatic Image Annotation (AIA) is the task of assigning keywords to images in order to describe their visual content. Recently, unsupervised approaches have been used to tackle this task. Unsupervised AIA (UAIA) methods use reference collections consisting of textual documents that contain images, and aim to extract words from the reference collection to assign to images. By using an unsupervised approach it is possible to include large vocabularies, because any word in the reference collection can be extracted. However, a greater diversity of candidate labels entails a larger number of wrong annotations, owing to the increasing difficulty of assigning a correct relevance to each label. With this problem in mind, this paper presents a general strategy for UAIA methods that reranks assigned labels. The proposed method exploits the semantic-relatedness information among labels in order to assign them an appropriate relevance for describing images. Experimental results on several benchmark datasets show the flexibility of our method in handling assignments from free vocabularies, and its effectiveness in improving the initial annotation performance of different UAIA methods. Moreover, we found that (1) when the semantic-relatedness information among the assigned labels is considered, the initial ranking provided by a UAIA method improves in most cases; and (2) the proposed method is robust enough to be applied on top of different UAIA methods, which allows extending the capabilities of state-of-the-art UAIA methods.
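The reranking idea described above can be sketched in a few lines: each assigned label's initial relevance score is blended with its mean semantic relatedness to the other labels assigned to the same image, so that labels coherent with the rest of the annotation rise and isolated ones sink. The sketch below is illustrative only and does not reproduce the paper's exact formulation; the toy embedding table `EMB`, the `alpha` mixing weight, and the cosine relatedness measure are all assumptions standing in for whatever relatedness estimator (e.g. word2vec or WordNet) a real system would use.

```python
import numpy as np

# Hypothetical toy word embeddings; a real system would load word2vec or a
# WordNet-based relatedness measure instead of this hand-made table.
EMB = {
    "beach": np.array([0.9, 0.1, 0.0]),
    "sea":   np.array([0.8, 0.2, 0.1]),
    "sand":  np.array([0.7, 0.3, 0.0]),
    "car":   np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity, used here as the semantic-relatedness measure."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rerank(labels, scores, alpha=0.5):
    """Blend each label's initial UAIA relevance with its mean relatedness
    to the other assigned labels, then sort labels by the blended score."""
    blended = {}
    for w in labels:
        others = [cosine(EMB[w], EMB[v]) for v in labels if v != w]
        coherence = sum(others) / len(others) if others else 0.0
        blended[w] = alpha * scores[w] + (1.0 - alpha) * coherence
    return sorted(labels, key=lambda w: blended[w], reverse=True)
```

With an initial annotation where the semantically isolated label scores highest, e.g. `rerank(["car", "beach", "sea", "sand"], {"car": 0.9, "beach": 0.6, "sea": 0.5, "sand": 0.4})`, the coherent beach-scene labels move ahead of "car", which falls to the last position.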

Keywords

Unsupervised image annotation · Label relevance · Semantic-relatedness estimation

Notes

Acknowledgements

This work was supported by CONACYT under project grant CB-2014-241306 (Image classification and retrieval using text mining techniques). The first author was supported by CONACYT under scholarship No. 214764.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Sciences, Universidad Autónoma de Baja California (UABC), Ensenada, Mexico
  2. Computer Science Department, Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), San Andrés Cholula, Mexico
  3. Computing Systems and Industrial Engineering Department, Universidad Nacional de Colombia, Bogotá, Colombia
