A Flexible Framework for the Evaluation of Unsupervised Image Annotation

  • Luis Pellegrin
  • Hugo Jair Escalante
  • Manuel Montes-y-Gómez
  • Mauricio Villegas
  • Fabio A. González
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10657)

Abstract

Automatic Image Annotation (AIA) consists of assigning keywords to images that describe their visual content. A prevalent way to address the AIA task is supervised learning. However, the unsupervised approach is an alternative that makes sense when no manually labeled images are available to train supervised techniques. AIA methods are typically evaluated with supervised learning performance measures, yet applying this kind of measure to unsupervised methods is difficult and unfair. The main restriction is that unsupervised methods use an unrestricted annotation vocabulary, whereas supervised methods use a restricted one. To alleviate this unfair evaluation, in this paper we propose a flexible evaluation framework that allows us to compare the coverage and relevance of the words assigned by unsupervised automatic image annotation (UAIA) methods. We show the robustness of our framework through a set of experiments in which we evaluated the output of both unsupervised and supervised methods.
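The abstract mentions comparing the coverage and relevance of assigned words but does not spell out how they are computed. The snippet below is only a minimal sketch of one plausible soft-matching formulation, not the paper's actual framework: a predicted word counts toward relevance when it is semantically close to some ground-truth label, so an unrestricted-vocabulary annotator is not penalised for producing synonyms of the gold labels. All names here (soft_relevance, coverage, the similarity plug-in, and the threshold) are illustrative assumptions.

```python
from typing import Callable, Iterable, List

def _matches(word: str, labels: List[str],
             similarity: Callable[[str, str], float],
             threshold: float) -> bool:
    """True if `word` is close to any label under a (symmetric) similarity."""
    return any(similarity(word, label) >= threshold for label in labels)

def soft_relevance(predicted: Iterable[str], gold: Iterable[str],
                   similarity: Callable[[str, str], float],
                   threshold: float = 0.8) -> float:
    """Fraction of predicted words that match at least one gold label."""
    predicted, gold = list(predicted), list(gold)
    if not predicted:
        return 0.0
    return sum(_matches(p, gold, similarity, threshold)
               for p in predicted) / len(predicted)

def coverage(predicted: Iterable[str], gold: Iterable[str],
             similarity: Callable[[str, str], float],
             threshold: float = 0.8) -> float:
    """Fraction of gold labels matched by at least one predicted word."""
    predicted, gold = list(predicted), list(gold)
    if not gold:
        return 0.0
    return sum(_matches(g, predicted, similarity, threshold)
               for g in gold) / len(gold)

# Toy run with exact string matching; in practice one could plug in a
# word-embedding cosine similarity or a WordNet-based measure, so that
# "lawn" vs. "grass" would count as a soft match.
exact = lambda a, b: 1.0 if a == b else 0.0
preds, truth = ["dog", "grass", "frisbee"], ["dog", "lawn"]
print(soft_relevance(preds, truth, exact))  # 0.333... (only "dog" matches)
print(coverage(preds, truth, exact))        # 0.5 ("lawn" is uncovered)
```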

Notes

Acknowledgment

This work was supported by CONACYT under project grant CB-2014-241306 (Clasificación y recuperación de imágenes mediante técnicas de minería de textos; in English, "Classification and retrieval of images using text mining techniques"), and by CONACYT scholarship No. 214764.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Computer Science Department, Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Tonantzintla, Mexico
  2. SearchInk, Berlin, Germany
  3. Computing Systems and Industrial Engineering Department, MindLab, Universidad Nacional de Colombia, Bogotá D.C., Colombia
