Publishing

  • Gideon J. Mellenbergh
Chapter

Abstract

Research is published in journals, books, dissertations, reports, blogs, and other media. The most prestigious outlets in the behavioral sciences are international peer-reviewed journals. Journal editors decide whether to accept or reject a manuscript, usually assisted by reviewers. The process is affected by factors that are irrelevant to the decision, and errors are made: suitable manuscripts may be rejected and unsuitable manuscripts accepted. Two types of factors are discussed. The first is publication bias, which means that the decision to accept a manuscript is affected by the results of the study; for example, manuscripts that report statistically significant results have a higher acceptance rate than manuscripts that report nonsignificant results. The second is that original studies have a higher acceptance rate than replications. Replications necessarily deviate from the original study, if only because they are conducted at a later time. It is therefore proposed to plan a replication as a test of a hypothesis about the elements of the original study that are modified in the replication. These replication hypotheses are tested with linear contrasts of the original and replication study outcomes: the usual null hypothesis testing methods apply if the replication hypothesis states that the results of the original study and the replication differ, and equivalence testing applies if it states that they do not differ. Moreover, a framework is proposed that gives guidelines for conducting replication research. Proposals to improve the publication process are described. Publication bias is revealed by preregistration of planned studies and is prevented by blinding editors and reviewers to the results and conclusions of a study. Replication research is fostered by requiring students to replicate original studies, publishing special issues and brief reports on replications, making data and materials available to other researchers, and collaboration among researchers to replicate studies. Adversarial collaboration is a way to settle a debate on a hypothesis.
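
The contrast-based testing of replication hypotheses can be illustrated with a minimal sketch. The following is a hypothetical example, not the chapter's own procedure: it treats the original and replication effect estimates as approximately normal with known standard errors, forms the contrast (replication estimate minus original estimate), and applies a two-sided z test when the replication hypothesis states that the results differ, and a two one-sided tests (TOST) equivalence procedure when it states that they do not differ. The effect sizes, standard errors, and equivalence margin below are invented for illustration.

```python
# Sketch: a replication hypothesis as a linear contrast of the original
# and replication effect estimates. Assumes approximately normal sampling
# distributions with known standard errors; all numbers are hypothetical.
from scipy.stats import norm

def contrast_difference_test(est_orig, se_orig, est_rep, se_rep):
    """Two-sided z test of H0: theta_rep - theta_orig = 0 ("results differ")."""
    contrast = est_rep - est_orig
    se = (se_orig**2 + se_rep**2) ** 0.5  # SE of the contrast
    z = contrast / se
    p = 2 * norm.sf(abs(z))
    return contrast, z, p

def contrast_equivalence_test(est_orig, se_orig, est_rep, se_rep, margin):
    """TOST: declare equivalence if the contrast is significantly
    greater than -margin AND significantly less than +margin."""
    contrast = est_rep - est_orig
    se = (se_orig**2 + se_rep**2) ** 0.5
    p_lower = norm.sf((contrast + margin) / se)   # H0: contrast <= -margin
    p_upper = norm.cdf((contrast - margin) / se)  # H0: contrast >= +margin
    return contrast, max(p_lower, p_upper)        # equivalence if max p < alpha

# Hypothetical standardized mean differences from an original study (0.45,
# SE 0.12) and a replication (0.30, SE 0.08), with equivalence margin 0.2.
print(contrast_difference_test(0.45, 0.12, 0.30, 0.08))
print(contrast_equivalence_test(0.45, 0.12, 0.30, 0.08, margin=0.2))
```

In this framing, the choice of test follows from the replication hypothesis stated in advance: a "generalization" or "precision" hypothesis that predicts no difference is tested with the equivalence procedure, whereas a hypothesis that predicts a difference is tested with the ordinary two-sided test.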

Keywords

Adversarial collaboration · Correctness replication hypothesis · Equivalence testing · File drawer problem · Generalization replication hypothesis · Precision replication hypothesis · Preregistration · Publication bias · Registered reports · Transparency


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Emeritus Professor of Psychological Methods, Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands