
Scientometrics, Volume 121, Issue 3, pp 1685–1705

The optimal amount of information to provide in an academic manuscript

  • J. A. Garcia
  • Rosa Rodriguez-Sánchez
  • J. Fdez-Valdivia

Abstract

Authors may believe that making more information about their research available helps reviewers make better recommendations. However, too much information in a manuscript may create problems for reviewers and lead them to poorer recommendations. Information overload on the part of the reviewer is a state in which the accumulation of manuscript informational cues inhibits her ability to determine the best possible recommendation on the acceptance or rejection of the manuscript. The author therefore wants to determine the number of manuscript attributes to present to reviewers. With this goal in mind, we show that there is an intermediate number of manuscript attributes that maximizes the probability of acceptance. If too much research information is provided, some of it is not as useful for recommending acceptance, the average informativeness per attribute evaluation is too low, and reviewers end up recommending rejection. If too little information is provided, reviewers may lack sufficient detail to recommend acceptance. We also show that authors should provide more information to reviewers with a more favorable initial valuation of the research; for reviewers with a less favorable prior attitude, the author should provide only the most important manuscript attributes. Given that expert reviewers face less load than potential readers, the author's optimal strategy with respect to the target audience is likewise to trade off the amount of research information provided in the manuscript against the average informativeness of those items by selecting an intermediate number of attributes.
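The intermediate-optimum argument in the abstract can be illustrated with a small toy model. This is not the paper's formal model: the geometric decay of attribute informativeness, the fixed per-attribute evaluation load, and the logistic acceptance probability are all illustrative assumptions chosen only to reproduce the qualitative trade-off (diminishing informativeness versus growing load).

```python
import math

def acceptance_probability(n, prior=0.0, v0=1.0, decay=0.6, load_cost=0.15):
    """Toy acceptance probability when a reviewer evaluates the n most
    important manuscript attributes (hypothetical parameterization).

    Each attribute i contributes geometrically decaying informativeness
    v0 * decay**i, while every evaluated attribute adds a fixed load
    cost; acceptance probability is a logistic function of the net signal.
    """
    signal = sum(v0 * decay**i for i in range(n))  # diminishing returns
    return 1.0 / (1.0 + math.exp(-(prior + signal - load_cost * n)))

# Acceptance probability first rises, then falls as attributes are added:
probs = [acceptance_probability(n) for n in range(21)]
best_n = max(range(21), key=lambda n: probs[n])
```

With these parameters the marginal informativeness of the next attribute (decay**n) drops below the per-attribute load cost after a few items, so the acceptance probability peaks at an intermediate number of attributes rather than at zero or at the maximum.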

Keywords

Authors · Manuscript attributes · Average informativeness · Reviewers · Acceptance probability · Targeted submission

Notes

Acknowledgements

This research was sponsored by the Spanish Board for Science, Technology, and Innovation under Grant TIN2017-85542-P, and co-financed with European FEDER funds. Sincere thanks are due to the reviewers for their constructive suggestions and help, in particular the comments on the biased weighting of the initial valuation by the expert.


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2019

Authors and Affiliations

  1. Departamento de Ciencias de la Computación e I. A., CITIC-UGR, Universidad de Granada, Granada, Spain
