Journal of Database Technologies and Information Retrieval

ISSN: 1618-2162 (Print) 1610-1995 (Online)


Datenbank-Spektrum is the official organ of the Special Interest Group on Databases and Information Retrieval of the Gesellschaft für Informatik (GI) e.V. The journal is devoted to databases, database applications, and information retrieval. It provides well-founded coverage of current standards and technologies, their use, and their commercial relevance.

In addition to foundational articles, tutorials, scientific contributions, and current research results, each issue also carries information on the activities of the special interest groups, on conferences and workshops, and on new products and books. A renowned editorial board drawn from academia and industry ensures the quality and technical competence of the contributions.


Upcoming special topics:


Data and Repeatability

What is common practice in most natural sciences has only recently entered the database field: ensuring the repeatability (or reproducibility) of experiments in order to validate scientific results and enable experimental comparisons of methods. In our field, the ability to reproduce and repeat experiments has two main ingredients: first, the software or a sufficient description of the method; second, the data to run the experiments on. This special issue on data and repeatability focuses on the second, arguably more challenging part.

Providing data for experimentation must overcome many obstacles. For instance, the data must be non-private and non-proprietary; for many types of experiments, data must be properly labeled or accompanied by a gold standard; in many cases, data is “massaged” before entering experiments; special properties of the data, such as distributions or size, must be known or even adaptable. Sometimes data is varied to fit the specific needs of an experiment, e.g., through upsampling and augmentation. In addition, “input data” for experiments may refer to many different things: from raw data to cleaned data, from sets of non-integrated CSV files to a fully integrated relational database, etc.
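To make the point about varying data concrete, here is a minimal sketch (not part of the call; the function name, row format, and fixed seed are illustrative assumptions) of one common form of such variation: bootstrap upsampling, i.e., growing a dataset to a target size by resampling existing rows with replacement.

```python
import random

def upsample(rows, target_size, seed=42):
    """Illustrative only: grow a dataset to target_size by sampling
    existing rows with replacement (bootstrap upsampling). A fixed
    seed keeps the resampling itself repeatable."""
    rng = random.Random(seed)
    extra = [rng.choice(rows) for _ in range(target_size - len(rows))]
    return rows + extra

# Toy dataset: 100 rows with a binary label.
rows = [{"id": i, "label": i % 2} for i in range(100)]
bigger = upsample(rows, 1000)
print(len(bigger))  # 1000
```

Even such a simple transformation illustrates why the provenance of “input data” matters: an experiment run on `bigger` measures something different from one run on `rows`, and repeating it requires knowing both the procedure and the seed.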

We are calling for non-typical database contributions that report on:

  • Experiences in handling data for scientific and industrial purposes
  • Experiences in handling data in data science/ML/AI workflows
  • Efforts to create, evaluate, or use data for benchmarking
  • Data preparation/cleaning and data-quality war stories
  • Data life-cycle management
  • Long-term data preservation and curation
  • Data hubs and repositories
  • Descriptions of datasets of general interest and open data
  • Data and the law – legally managing data
  • Possible impacts on our publishing culture

Submissions can range from a single page, for instance to introduce a dataset, to full-fledged scientific contributions, for instance an experimental analysis of data-cleaning methods or war stories.

Expected size of the paper: 8–10 pages, double-column (cf. the author guidelines). Contributions in either German or English are welcome.

Deadline for submissions: Feb. 1st, 2019

Issue delivery: DASP-2-2019 (July 2019)

Guest editors:

Jens Dittrich, Universität des Saarlandes

Felix Naumann, Hasso Plattner Institut, Universität Potsdam

Norbert Ritter, Universität Hamburg


Best Workshop Papers of BTW 2019

This special issue of Datenbank-Spektrum is dedicated to the best papers of the workshops held at BTW 2019 at the University of Rostock. The selected workshop contributions should be extended to match the format of regular DASP papers.

Paper format: 8–10 pages, double-column

Selection of the best papers by the workshop chairs and the guest editor: April 15th, 2019

Deadline for submissions: June 1st, 2019

Issue delivery: DASP-3-2019 (November 2019)

Guest editor:

Theo Härder, University of Kaiserslautern



Trends in Information Retrieval Evaluation

Evaluation is a central aspect of the research and development of information retrieval systems. In academia, the quantitative evaluation of such systems is mostly known under the term “Cranfield paradigm”. This research method has been established for more than 25 years in international evaluation campaigns such as the Text REtrieval Conference (TREC) or the Conference and Labs of the Evaluation Forum (CLEF). Meanwhile, industrial research has taken a quite different approach: many companies can access a large number of users and their interactions, which can be recorded and evaluated. These infrastructures allow alternative evaluations such as large-scale A/B experiments or other online methods. In recent years, different approaches that go beyond TREC-style evaluations have emerged to close this gap and to bring academic and industrial evaluation together.
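For readers less familiar with the Cranfield paradigm mentioned above, an offline TREC-style evaluation boils down to scoring a system's ranked result list against gold-standard relevance judgments (qrels). A minimal sketch, with illustrative function names and toy data (the specific metrics shown, precision@k and average precision, are standard but not prescribed by the call):

```python
def precision_at_k(ranked_doc_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

def average_precision(ranked_doc_ids, relevant_ids):
    """Mean of the precision values at each rank where a relevant
    document appears; 0.0 if no documents are judged relevant."""
    if not relevant_ids:
        return 0.0
    hits, score = 0, 0.0
    for rank, d in enumerate(ranked_doc_ids, start=1):
        if d in relevant_ids:
            hits += 1
            score += hits / rank
    return score / len(relevant_ids)

run = ["d3", "d1", "d7", "d2"]          # one system's ranking for one query
qrels = {"d1", "d2"}                     # judged-relevant documents
print(precision_at_k(run, qrels, k=4))   # 0.5
print(average_precision(run, qrels))     # 0.5
```

Online methods such as A/B tests replace the fixed qrels with recorded user interactions, which is precisely the gap between academic and industrial evaluation that this special issue addresses.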

We are calling for articles that report on novel evaluation efforts, such as:

  • Living Labs
  • Evaluation as a service
  • Large-scale A/B tests
  • Interactive retrieval evaluation
  • Session-based evaluation
  • User-centered evaluation
  • Counterfactual evaluation
  • Novel evaluations in application domains such as cultural heritage, digital libraries, social media, expert search, health information, etc.
  • Other evaluations that go beyond TREC

Expected size of the paper: 8–10 pages, double-column (cf. the author guidelines). Contributions in either German or English are welcome.

Deadline for submissions: Oct. 1st, 2019

Issue delivery: DASP-1-2020 (March 2020)

Guest editors:

Philipp Schaer, Technische Hochschule Köln

Klaus Berberich, Hochschule für Technik und Wirtschaft des Saarlandes


Latest Articles

1. Datenbankgruppen vorgestellt: “Die Arbeitsgruppe Datenbanken und Informationssysteme an der TU Dortmund”, Jens Teubner (February 2019)

2. Kurz Erklärt: “Wie funktioniert die Blockchain?”, Klaus Meyer-Wegener (February 2019)