
On the Selection of the Best Retrieval Result Per Query –An Alternative Approach to Data Fusion–

  • Conference paper
Flexible Query Answering Systems (FQAS 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5822)


Abstract

Recent work has shown that a perfect per-query selection of the best IR system could lead to a significant improvement in retrieval performance. Motivated by this fact, in this paper we focus on the automatic selection of the best retrieval result from a given set of result lists generated by different IR systems. In particular, we propose five heuristic measures for evaluating the relative relevance of each result list, which take into account the redundancy and ranking of documents across the lists. Preliminary results on three data sets, covering 216 queries, are encouraging. They show that the proposed approach can slightly outperform the best individual IR system in two out of three collections and that it significantly improves on the average results of the individual systems in all three data sets. In addition, the achieved results indicate that our approach is a competitive alternative to traditional data fusion methods.
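
The five measures themselves are not spelled out in the abstract, so the sketch below is only an assumed illustration of the general idea it describes: score each result list by how strongly its documents are corroborated by the other lists (redundancy) and how highly those corroborated documents are ranked, then keep the best-scoring list for that query. The function names, the scoring formula, and the toy data are hypothetical and are not the authors' actual measures.

```python
# Hypothetical sketch of a redundancy/rank heuristic for choosing one result
# list per query, in the spirit of the abstract. It is not a reproduction of
# the paper's five measures; it only illustrates the idea that documents
# retrieved (and highly ranked) by many systems make a list look more relevant.

from typing import Dict, List


def redundancy_rank_score(target: List[str], others: List[List[str]]) -> float:
    """Score one ranked list by how well its documents are corroborated
    by the remaining lists (higher = more corroborated)."""
    score = 0.0
    for rank, doc in enumerate(target, start=1):
        # How many other systems also retrieved this document?
        support = sum(1 for lst in others if doc in lst)
        # Reward redundant documents more when they appear near the top.
        score += support / rank
    return score


def select_best_list(runs: Dict[str, List[str]]) -> str:
    """Pick the system whose result list is most corroborated by the rest."""
    best_system, best_score = None, float("-inf")
    for name, ranking in runs.items():
        others = [r for s, r in runs.items() if s != name]
        s = redundancy_rank_score(ranking, others)
        if s > best_score:
            best_system, best_score = name, s
    return best_system


# Toy usage: three hypothetical systems' result lists for a single query.
runs = {
    "sysA": ["d1", "d2", "d3", "d9"],
    "sysB": ["d2", "d1", "d4", "d5"],
    "sysC": ["d7", "d2", "d1", "d8"],
}
print(select_best_list(runs))  # the list most corroborated by the other two
```

Dividing the support count by the rank is one simple way to make agreement near the top of a list count more than agreement near the bottom; any measure combining redundancy and rank position could be substituted in its place.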






Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Juárez-González, A., Montes-y-Gómez, M., Villaseñor-Pineda, L., Ortíz-Arroyo, D. (2009). On the Selection of the Best Retrieval Result Per Query –An Alternative Approach to Data Fusion–. In: Andreasen, T., Yager, R.R., Bulskov, H., Christiansen, H., Larsen, H.L. (eds) Flexible Query Answering Systems. FQAS 2009. Lecture Notes in Computer Science, vol. 5822. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04957-6_10


  • DOI: https://doi.org/10.1007/978-3-642-04957-6_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04956-9

  • Online ISBN: 978-3-642-04957-6

