© 2010


ImageCLEF: Experimental Evaluation in Visual Information Retrieval

  • Henning Müller
  • Paul Clough
  • Thomas Deselaers
  • Barbara Caputo

Part of The Information Retrieval Series book series (INRE, volume 32)

Table of contents

  1. Front Matter
    Pages I-XXVIII
  2. Introduction

    1. Front Matter
      Pages 1-2
    2. Paul Clough, Henning Müller, Mark Sanderson
      Pages 3-18
    3. Michael Grubinger, Stefanie Nowak, Paul Clough
      Pages 19-43
    4. Jayashree Kalpathy-Cramer, Steven Bedrick, William Hersh
      Pages 63-80
    5. Adrien Depeursinge, Henning Müller
      Pages 95-114
  3. Track Reports

    1. Front Matter
      Pages 115-116
    2. Jussi Karlgren, Julio Gonzalo
      Pages 117-139
    3. Monica Lestari Paramita, Michael Grubinger
      Pages 141-162
    4. Theodora Tsikrika, Jana Kludas
      Pages 163-183
    5. Andrzej Pronobis, Barbara Caputo
      Pages 185-198
    6. Stefanie Nowak, Allan Hanbury, Thomas Deselaers
      Pages 199-219
    7. Tatiana Tommasi, Thomas Deselaers
      Pages 221-238
    8. Henning Müller, Jayashree Kalpathy-Cramer
      Pages 239-257
  4. Participant Reports

    1. Front Matter
      Pages 259-260
    2. Teerapong Leelanupab, Guido Zuccon, Joemon M. Jose
      Pages 277-294
    3. Manuel Carlos Díaz-Galiano, Miguel Ángel García-Cumbreras, María Teresa Martín-Valdivia, Arturo Montejo-Ráez
      Pages 295-313

About this book


The creation and consumption of content, especially visual content, is ingrained in our modern world. This book is a collection of texts centered on the evaluation of image retrieval systems. Reproducible evaluation requires standardized benchmarks and evaluation methodologies. The individual chapters highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular, they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems that has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003.
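As a concrete illustration (a minimal sketch, not taken from the book), benchmark-style evaluation typically scores each system's ranked result list against shared relevance judgments. The short Python example below computes mean average precision, one of the measures commonly reported in such campaigns; the data layout (a run as ranked image IDs per query, qrels as sets of relevant IDs) is an assumption made for the example.

    # Illustrative sketch: scoring a ranked retrieval run against benchmark
    # relevance judgments. Not code from the book; the data layout is assumed.

    def average_precision(ranked_ids, relevant_ids):
        """Mean of the precision values at the ranks where a relevant image appears."""
        relevant = set(relevant_ids)
        hits, precisions = 0, []
        for rank, image_id in enumerate(ranked_ids, start=1):
            if image_id in relevant:
                hits += 1
                precisions.append(hits / rank)
        return sum(precisions) / len(relevant) if relevant else 0.0

    def mean_average_precision(run, qrels):
        """run: {query_id: ranked list of image IDs}; qrels: {query_id: set of relevant IDs}."""
        scores = [average_precision(ranked, qrels.get(qid, set())) for qid, ranked in run.items()]
        return sum(scores) / len(scores) if scores else 0.0

    # Toy example: one query with two relevant images, retrieved at ranks 1 and 3.
    print(mean_average_precision({"q1": ["a", "b", "c"]}, {"q1": {"a", "c"}}))  # 0.833...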

To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally.

Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information – a multimodal approach – for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
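As a rough sketch of what such a multimodal combination can look like (illustrative only, not a method prescribed by the book; the input scores and the weight are assumptions), textual and visual retrieval scores can be normalized per modality and merged with a weighted late fusion:

    # Illustrative late-fusion sketch: combine text-based and content-based
    # (visual) retrieval scores per image. Score dicts and alpha are assumed.

    def min_max_normalize(scores):
        """Rescale scores to [0, 1] so the two modalities are comparable."""
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {image_id: (s - lo) / span for image_id, s in scores.items()}

    def late_fusion(text_scores, visual_scores, alpha=0.6):
        """Weighted sum of normalized text and visual scores, highest first."""
        text = min_max_normalize(text_scores)
        visual = min_max_normalize(visual_scores)
        fused = {
            image_id: alpha * text.get(image_id, 0.0) + (1 - alpha) * visual.get(image_id, 0.0)
            for image_id in set(text) | set(visual)
        }
        return sorted(fused.items(), key=lambda item: item[1], reverse=True)

    # Toy example: hypothetical scores from a text run and a visual run.
    print(late_fusion({"img1": 2.0, "img2": 1.0}, {"img2": 0.9, "img3": 0.4}))

The weight alpha simply trades off how much the textual evidence counts against the visual evidence; in practice it would be tuned on held-out topics.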


Keywords

Annotation, Image Retrieval, Medical Image Processing, Multimedia Retrieval, Performance Evaluation, Robot Vision, Text Retrieval, classification, information retrieval, media retrieval, performance

Editors and affiliations

  • Henning Müller (1)
  • Paul Clough (2)
  • Thomas Deselaers (3)
  • Barbara Caputo (4)

  1. HES-SO Business Information Systems, Sierre, Switzerland
  2. Dept. of Information Studies, University of Sheffield, Sheffield, United Kingdom
  3. Computer Vision Lab / ETF-C 113.2, ETH Zürich, Zürich, Switzerland
  4. Idiap Research Institute, Martigny, Switzerland
