The Beauty of Small Data: An Information Retrieval Perspective

  • Martin Braschler


This chapter focuses on a class of Data Science problems that we will refer to as "Small Data" problems. Over the past 20 years, we have accumulated considerable experience building Information Retrieval applications that provide effective search on collections of no more than tens or hundreds of thousands of documents. In this chapter we highlight a number of lessons learned in dealing with such document collections.

The better-known term "Big Data" has in recent years generated a lot of buzz, but also frequent misunderstandings. To use a provocative simplification, the magic of Big Data often lies in the fact that sheer volume of data necessarily brings redundancy, which can be detected in the form of patterns. Algorithms can then be trained to recognize and process these repeated patterns in the data streams.

Conversely, "Small Data" approaches do not operate on volumes of data large enough to exploit repetitive patterns successfully. While there have been spectacular applications of Big Data technology, we are convinced that there are, and will remain, countless equally exciting "Small Data" tasks across all industrial and public sectors, as well as in private applications. These have to be approached very differently from Big Data problems. In this chapter, we first argue that the task of retrieving documents from large text collections (often termed "full text search") can become easier as the document collection grows. We then present two exemplary "Small Data" retrieval applications and discuss the best practices that can be derived from them.
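The ranked full-text retrieval discussed above can be illustrated with a minimal sketch. The toy collection, whitespace tokenizer, and plain TF-IDF scoring below are illustrative assumptions for this example only; they are not the systems or algorithms described in the chapter (which draw on richer models such as BM25 and divergence from randomness).

```python
import math
from collections import Counter

def tokenize(text):
    # Illustrative assumption: naive whitespace tokenization, lowercased.
    return text.lower().split()

def build_index(docs):
    """Compute per-document term frequencies and collection-wide document frequencies."""
    tfs = [Counter(tokenize(d)) for d in docs]
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())  # each document counts once per term
    return tfs, df

def rank(query, tfs, df, n_docs):
    """Rank documents by a simple TF-IDF score for the query terms."""
    q_terms = tokenize(query)
    scores = []
    for i, tf in enumerate(tfs):
        s = sum(tf[t] * math.log(n_docs / df[t])
                for t in q_terms if t in tf)
        scores.append((i, s))
    return sorted(scores, key=lambda x: x[1], reverse=True)

# Hypothetical three-document "Small Data" collection.
docs = [
    "small data collections reward careful curation",
    "big data pipelines exploit redundancy and repeated patterns",
    "full text search ranks documents by term statistics",
]
tfs, df = build_index(docs)
ranking = rank("full text search", tfs, df, len(docs))
print(ranking[0][0])  # → 2: the document containing all three query terms
```

On a collection this small, no query term repeats often enough for frequency statistics to be reliable, which is one intuition behind why "Small Data" retrieval calls for different care than large-scale search.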





The retrieval applications "Stiftung Schweiz" and "Expert Match" were partially funded by the Swiss funding agency CTI under grants no. 15666.1 and no. 13235.1.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. ZHAW Zurich University of Applied Sciences, Winterthur, Switzerland
