Abstract
Identifying relevant studies for inclusion in systematic reviews requires significant effort from human experts, who manually screen large numbers of candidate studies. The growing volume of medical literature makes the problem harder, and Information Retrieval techniques have proved useful for reducing this workload. Reviewers are often interested in particular types of evidence, such as Diagnostic Test Accuracy studies. This paper explores the use of query adaptation to identify particular types of evidence and thereby reduce the workload placed on reviewers. A simple retrieval system that ranks studies using TF.IDF-weighted cosine similarity was implemented. The log-likelihood, chi-squared and odds-ratio lexical statistics, together with relevance feedback, were used to generate sets of terms that indicate evidence relevant to Diagnostic Test Accuracy reviews. Experiments on a set of 80 systematic reviews from the CLEF 2017 and CLEF 2018 eHealth tasks demonstrate that the approach improves retrieval performance.
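The pipeline described in the abstract combines TF.IDF-weighted cosine ranking with a keyness statistic for selecting query-adaptation terms. The sketch below illustrates both pieces under stated assumptions: it is not the authors' implementation, the function names (`tfidf_vectors`, `cosine`, `log_likelihood`) are illustrative, and the log-likelihood score shown is the standard two-corpus keyness form (Dunning 1993; Rayson 2008); the chi-squared and odds-ratio statistics mentioned in the abstract would slot in analogously.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each document (a list of tokens) to a sparse TF.IDF vector."""
    n = len(docs)
    df = Counter()                     # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)              # raw term frequency
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts of term -> weight)."""
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def log_likelihood(a, b, c, d):
    """Log-likelihood keyness of a term.

    a = term frequency in the target corpus (e.g. relevant studies)
    b = term frequency in the reference corpus (e.g. non-relevant studies)
    c = target corpus size, d = reference corpus size
    """
    e1 = c * (a + b) / (c + d)         # expected frequency in target
    e2 = d * (a + b) / (c + d)         # expected frequency in reference
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

# Query adaptation, in outline: score every term by its keyness in
# relevant vs. non-relevant studies, append the highest-scoring terms
# to the original query, then re-rank the collection by cosine
# similarity between each document vector and the adapted query vector.
```

A term that is frequent among relevant studies but rare elsewhere (e.g. "sensitivity" in Diagnostic Test Accuracy abstracts) receives a high keyness score and is a good candidate for expanding the query, which is the intuition behind the adaptation step.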
© 2019 Springer Nature Switzerland AG
Cite this paper
Alharbi, A., Stevenson, M. (2019). Improving Ranking for Systematic Reviews Using Query Adaptation. In: Crestani, F., et al. Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2019. Lecture Notes in Computer Science(), vol 11696. Springer, Cham. https://doi.org/10.1007/978-3-030-28577-7_9
Print ISBN: 978-3-030-28576-0
Online ISBN: 978-3-030-28577-7