Scientometrics, Volume 114, Issue 2, pp 449–461

Toward predicting research proposal success

  • Kevin W. Boyack
  • Caleb Smith
  • Richard Klavans

Abstract

We apply citation analysis and discourse analysis to 369 NIH R01 proposals to identify possible predictors of proposal success. We focus on two issues: the Matthew effect in science (Merton's claim that eminent scientists have an inherent advantage in the competition for funds) and the clarity, or quality of writing, of the proposal itself. Our results suggest that a clearly articulated proposal is more likely to be funded than one whose discourse is of lower quality. We also find that proposal success is correlated with a high level of topical overlap between the proposal's references and the applicant's prior publications. Implications of analyzing proposal data are discussed.
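
The two predictors above lend themselves to simple quantitative proxies. The following sketch (Python) is illustrative only, not the authors' method: it assumes each publication has already been assigned a topic label (hypothetical identifiers such as "T12"), measures topical overlap as the Jaccard index between two topic sets, and uses the standard Flesch Reading Ease formula as a crude stand-in for clarity of discourse.

    import re

    def topical_overlap(proposal_ref_topics, applicant_pub_topics):
        """Jaccard overlap between the topics of a proposal's references and
        the topics of the applicant's prior publications (one plausible
        overlap measure; the topic labels themselves are assumed given)."""
        a, b = set(proposal_ref_topics), set(applicant_pub_topics)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def flesch_reading_ease(text):
        """Flesch Reading Ease score (higher = easier to read), with
        syllables approximated by counting vowel groups per word."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w)))
                        for w in words)
        return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

    # Usage with made-up data:
    print(topical_overlap({"T12", "T40", "T77"}, {"T12", "T77", "T91"}))  # 0.5
    print(flesch_reading_ease("We propose a clear, testable aim. It is simple."))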

Keywords

Research proposal analytics · Funding success prediction · Discourse analysis · Citation analysis

References

  1. Biddle, C., & Aker, J. (1996). How does the peer review process influence AANA Journal article readability? Journal of the American Association of Nurse Anesthetists, 64(1), 65–68.
  2. Bornmann, L., & Daniel, H.-D. (2005). Selection of research fellowship recipients by committee peer review. Reliability, fairness and predictive validity of Board of Trustees decisions. Scientometrics, 63(2), 297–320.
  3. Bornmann, L., & Daniel, H.-D. (2006). Selecting scientific excellence through committee peer review—A citation analysis of publications previously published to approval or rejection of post-doctoral research fellowship applicants. Scientometrics, 68(3), 427–440.
  4. Bornmann, L., Leydesdorff, L., & van den Besselaar, P. (2010). A meta-evaluation of scientific research proposals: Different ways of comparing rejected to awarded applications. Journal of Informetrics, 4, 211–220.
  5. Bornmann, L., Wallon, G., & Ledin, A. (2008). Does the committee peer review select the best applicants for funding? An investigation of the selection process for two European Molecular Biology Organization programmes. PLoS ONE, 3(10), e3480.
  6. Cabezas-Clavijo, A., Robinson-Garcia, N., Escabias, M., & Jimenez-Contreras, E. (2013). Reviewers’ ratings and bibliometric indicators: Hand in hand when assessing over research proposals? PLoS ONE, 8(6), e68258.
  7. Cole, S., Cole, J. R., & Simon, G. A. (1981). Chance and consensus in peer review. Science, 214, 881–886.
  8. Cole, S., Rubin, L., & Cole, J. R. (1978). Peer review in the National Science Foundation: Phase one of a study. Washington, DC: The National Academies Press. https://doi.org/10.17226/20041.
  9. Enger, S. G., & Castellacci, F. (2016). Who gets Horizon 2020 research grants? Propensity to apply and probability to succeed in a two-step analysis. Scientometrics, 109, 1611–1638.
  10. Fang, F. C., Bowen, A., & Casadevall, A. (2016). NIH peer review percentile scores are poorly predictive of grant productivity. eLife, 5, e13323.
  11. Gallo, S. G., Carpenter, A. S., Irwin, D., McPartland, C. D., Travis, J., Reynders, S., et al. (2014). The validation of peer review through research impact measures and the implications for funding strategies. PLoS ONE, 9(9), e106474.
  12. Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through association of ideas. Science, 122, 108–111.
  13. Graves, N., Barnett, A. G., & Clarke, P. (2011). Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel. British Medical Journal, 343, d4797.
  14. Herbert, D. L., Barnett, A. G., Clarke, P., & Graves, N. (2013). On the time spent preparing grant proposals: An observational study of Australian researchers. BMJ Open, 3, e002800.
  15. Hörlesberger, M., Roche, I., Besagni, D., Scherngell, T., Francois, C., Cuxac, P., et al. (2013). A concept for inferring ‘frontier research’ in grant proposals. Scientometrics, 97, 129–148.
  16. Hornbostel, S., Böhmer, S., Klingsporn, B., Neufeld, J., & Von Ins, M. (2009). Funding of young scientist and scientific excellence. Scientometrics, 79(1), 171–190.
  17. Jacob, B. A., & Lefgren, L. (2011). The impact of research grant funding on scientific productivity. Journal of Public Economics, 95(9), 1168–1177.
  18. Johnson, V. E. (2008). Statistical analysis of the National Institutes of Health peer review system. Proceedings of the National Academy of Sciences of the USA, 105, 11076–11080.
  19. Klavans, R., & Boyack, K. W. (2017). Research portfolio analysis and topic prominence. Journal of Informetrics, 11, 1158–1174.
  20. Li, D., & Agha, L. (2015). Big names or big ideas: Do peer-review panels select the best science proposals? Science, 348, 434–438.
  21. Lindner, M. D., & Nakamura, R. K. (2015). Examining the predictive validity of NIH peer review scores. PLoS ONE, 10(6), e0126938.
  22. Melin, G., & Danell, R. (2006). The top eight percent: Development of approved and rejected applicants for a prestigious grant in Sweden. Science and Public Policy, 33(10), 702–712.
  23. Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56–63.
  24. Mintzberg, H., & Waters, J. A. (1985). Of strategies, deliberate and emergent. Strategic Management Journal, 6, 257–272.
  25. Mutz, R., Bornmann, L., & Daniel, H.-D. (2015). Testing for the fairness and predictive validity of funding decisions: A multilevel multiple imputation for missing data approach using ex-ante and ex-post evaluation data from the Austrian Science Fund. Journal of the Association for Information Science and Technology, 66(11), 2321–2339.
  26. Neufeld, J., & Hornbostel, S. (2012). Funding programmes for young scientists—Do the ‘best’ apply? Research Evaluation, 21, 270–279.
  27. Neufeld, J., Huber, N., & Wegner, A. (2013). Peer review-based selection decisions in individual research funding, applicants’ publication strategies and performance: The case of ERC Starting Grants. Research Evaluation, 22, 237–247.
  28. Nicholson, J. M., & Ioannidis, J. P. A. (2012). Conform and be funded. Nature, 492(7427), 34–36.
  29. Reinhart, M. (2009). Peer review of grant applications in biology and medicine: Reliability, fairness and validity. Scientometrics, 81(3), 789–809.
  30. Roberts, J. C., Fletcher, R. H., & Fletcher, S. W. (1994). Effects of peer review and editing on the readability of articles published in Annals of Internal Medicine. Journal of the American Medical Association, 272(2), 119–121.
  31. Sarewitz, D., & Pielke, R. A., Jr. (2007). The neglected heart of science policy: Reconciling supply of and demand for science. Environmental Science & Policy, 10, 5–16.
  32. Saygitov, R. T. (2014). The impact of funding through the RF President’s Grants for Young Scientists (the field—Medicine) on research productivity: A quasi-experimental study and a brief systematic review. PLoS ONE, 9(1), e86969.
  33. Swales, J. (1986). Citation analysis and discourse analysis. Applied Linguistics, 7(1), 39–56.
  34. Teufel, S. (2010). The structure of scientific articles: Applications to citation indexing and summarization. Stanford, CA: CSLI Publications.
  35. Teufel, S., Siddharthan, A., & Batchelor, C. (2009). Towards discipline-independent argumentative zoning: Evidence from chemistry and computational linguistics. In Proceedings of the 2009 conference on empirical methods in natural language processing (pp. 1493–1502). Singapore.
  36. Van den Besselaar, P., & Leydesdorff, L. (2009). Past performance, peer review and project selection: A case study in the social and behavioral sciences. Research Evaluation, 18(4), 273–288.
  37. Van den Besselaar, P., & Sandström, U. (2015). Early career grants, performance, and careers: A study on predictive validity of grant decisions. Journal of Informetrics, 9, 826–838.
  38. Van den Besselaar, P., & Sandström, U. (2017). Influence of cognitive distance on grant decisions. In Science, technology and innovation indicators 2017. Paris, France.
  39. Van Leeuwen, T. N., & Moed, H. (2012). Funding decisions, peer review, and scientific excellence in physical sciences, chemistry, and geosciences. Research Evaluation, 21, 189–198.
  40. Viner, N., Powell, P., & Green, R. (2004). Institutionalized biases in the award of research grants: A preliminary analysis revisiting the principle of accumulative advantage. Research Policy, 33(3), 443–454.
  41. Von Hippel, T., & Von Hippel, C. (2015). To apply or not to apply: A survey analysis of grant writing costs and benefits. PLoS ONE, 10(3), e0118494.
  42. Zuckerman, H. (1967). Nobel laureates in science: Patterns of productivity, collaboration, and authorship. American Sociological Review, 32(3), 391–403.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2017

Authors and Affiliations

  • Kevin W. Boyack (1)
  • Caleb Smith (2)
  • Richard Klavans (3)

  1. SciTech Strategies, Inc., Albuquerque, USA
  2. University of Michigan Medical School, Ann Arbor, USA
  3. SciTech Strategies, Inc., Wayne, USA
