
Evaluating medical conferences: the emerging need for a quality metric

  • Raynell Lang
  • Kholoud Porter
  • Hartmut B. Krentz
  • M. John Gill

Abstract

Scientific medical conferences have proliferated in recent years, but few data are available to assess their effectiveness in achieving their commonly stated aims “to educate, advance science, and establish evidence-based policy”. The recent expansion of what has been labeled ‘predatory academia’ has heightened concerns about the quality of both published and conference “science”. A journal’s impact factor (JIF) has become an accepted metric for the quality of published science, but no comparable indicator, such as a conference impact factor (CIF), exists for medical scientific conferences. To explore the feasibility of implementing a CIF metric for such conferences, we tested a tool that establishes a ranking system to help both attendees and funders identify quality. Using abstracts presented from 2013 to 2016 at an annual meeting (the International Workshop on HIV/Hepatitis Observational Databases), we determined how many were subsequently published in peer-reviewed journals. We then calculated a CIF for each conference by dividing the number of peer-reviewed published papers by the number of abstracts presented, and multiplying the result by the median JIF of the publishing journals. Although limited in scope, a CIF can act as a tool for attendees and funders to prioritize their time and resources when evaluating the quality of a scientific conference.
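
As a rough illustration of the calculation described above, the following sketch (written in Python) shows how a CIF could be computed from a conference's publication record. The counts and JIF values used here are purely hypothetical placeholders, not IWHOD data.

# Sketch of the conference impact factor (CIF) calculation described in the abstract:
# CIF = (published papers / presented abstracts) * median JIF of the publishing journals.
# All numbers below are illustrative placeholders, not IWHOD data.
from statistics import median

def conference_impact_factor(n_published, n_presented, journal_jifs):
    publication_rate = n_published / n_presented
    return publication_rate * median(journal_jifs)

# Example with hypothetical values: 40 of 100 abstracts published,
# in journals with JIFs of 2.5, 3.3, 4.1 and 6.0 (median 3.7).
cif = conference_impact_factor(n_published=40, n_presented=100,
                               journal_jifs=[2.5, 3.3, 4.1, 6.0])
print(round(cif, 2))  # 0.40 * 3.7 = 1.48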

Keywords

Conference impact factor · Predatory journals · Predatory conferences · Medical education · Continuing professional development

Abbreviations

JIF: Journal impact factor
CIF: Conference impact factor
IWHOD: International Workshop on HIV/Hepatitis Observational Databases
CPD/CME: Continuing professional or medical development credits
RCR: Relative citation ratio

Notes

Acknowledgements

We would like to acknowledge and thank Andrea Cartier, the IWHOD secretariat, for her critical contributions to this work. We also thank all the authors of the abstracts presented at IWHOD for their responses to our requests; their contributions made this work possible.

Authors’ contributions

All authors contributed equally to the development and construction of the study and the manuscript. All authors read and approved the final manuscript.

Funding

No funding was received for this work.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no competing interests.

Availability of data and materials

The datasets generated during this study are not publicly available due to confidentiality requirements and the potentially identifying nature of the data.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2019

Authors and Affiliations

  1. Department of Medicine, Southern Alberta Clinic, University of Calgary, Calgary, Canada
  2. University College London, London, UK
  3. Southern Alberta Clinic, University of Calgary, Calgary, Canada
