Academic publishing places many pressures on editors. Those pressures continue to mount, with one result being that this journal’s Editor now routinely declines presubmission inquiries and asks authors to formally submit their manuscripts for proper evaluation. This editorial explains why the Editor discourages preliminary inquiries: they are problematic, counterproductive, and unnecessary. To assist authors, this editorial also highlights factors that lead to “desk rejects” and suggests that authors consider them as they seek review. The highlighted factors reveal potential pivot points in editorial decisions that, together, suggest the need for authors to use presubmission inquiries carefully and sparingly.

The Nature and Use of Publishing Inquiries

Presubmission inquiries have many goals and take different forms. Authors typically ask editors to determine whether manuscripts fall within their journals’ mission. Some ask whether editors expect favorable peer review. Some authors submit titles, others submit abstracts, others submit drafts, while others provide vague statements of what they intend to study. Responses can help authors gauge whether a journal provides a potentially good fit for their work, which saves them time in the preparation of manuscripts and in the review process. Inquiries also can help develop manuscripts in useful directions, which explains why some journals not only invite but require them. Just as responses signal to authors where to allocate their time and resources in developing manuscripts for a specific journal, they can help journals guide the development of targeted manuscripts.

Presubmission inquiries may have a place in publishing, but they also can be inherently problematic. Inquiries are difficult to evaluate effectively; the reality is that it is impossible to evaluate a study without reading it. Given that inability, authors may receive inappropriate advice. Equally problematic, relying on presubmission inquiries suggests that authors may not have properly engaged their areas of research. Authors who rely on inquiries risk signaling that they lack an understanding of the field, particularly its research gaps and what is worth addressing. Editors know that the best way to know an area of research is to study it, which includes knowing where and how its research is produced.

Despite the ideal that manuscripts are evaluated at face value, they actually are not, which makes cursory evaluations particularly problematic. Articles are never considered by themselves, and the additional factors that shape editorial decisions are best addressed through submissions processed through electronic managing systems. Notably, editors routinely check for fraud, which is best done by thoroughly analyzing the manuscript through the electronic manager (e.g., plagiarism checks, cite-checks, data analyses checks, and other checks are done at the click of a button). Editors also consider the contributions and qualifications of all authors, something that now is routine given the need to combat, for example, academic fraud and obvious conflicts of interest (see, e.g., Levesque 2012a). That, too, is best done through the electronic managing system, which permits, for example, googling authors as well as manuscripts on similar topics. That analysis also can serve other purposes, such as revealing the manuscript’s potential contribution, audience, and reviewers. These are just some of the editorial burdens that may not be fully appreciated but now are both routine and expected (see also Levesque 2011, 2017a). The conclusion is that a fair analysis of any submission most readily comes from a rigorous analysis of the information required by an editorial managing system. These editorial checks often are much more revealing than authors might expect.

The Editor discourages presubmission inquiries not only because they are problematic but also because authors submitting to this journal do not need them. Electronically submitted manuscripts can be evaluated quickly and effectively. The journal adopts a two-step peer-review process. An internal review focuses on a manuscript’s potential success in the review process. If that potential is low, then the manuscript is “desk rejected”. That potential is determined by the Editor (sometimes in consultation with a colleague), and the decision typically is made within 24 hours, often less. Full review is reserved for a minority of manuscripts.

Quick internal reviews may appear arbitrary and even draconian. But the criteria for meeting the standard for initial review are obvious from a brief look at the journal’s content (and are articulated below; see more generally Levesque 2006). Equally importantly, the Editor routinely invites authors to offer revisions that could lead to sending their manuscripts out for full review. Typically, such requests occur when manuscripts have gaps that could be readily addressed. Those suggested revisions, again, can come only after a close reading of a reviewable manuscript and its supporting materials.

Manuscripts Rejected without External Review

Researchers have a tendency to sprint to the finish line. It is remarkable how much time, thought, and energy, and indeed how much of one’s life, can go into particular studies. Yet, at the critical point where that deliberate approach is most needed, the decision to prepare and submit a manuscript for publication, authors have a tendency to become risk takers, take incredible shortcuts, and lose the sense of perspective that they apparently maintained up to that point. There may be a place for randomness in research, and there may be times to skip “homework”, but not during the decision to submit to a particular journal. Academic journals all have specific missions, and all fields of study have established standards worth considering before submitting presubmission inquiries and manuscripts.

Potential authors are encouraged to examine the journal’s webpage. There, authors will learn that the journal no longer supports qualitative work, as that work needs an entirely different editorial board (given that rationale, exceptions are made with journal special issues; see Levesque 2012b). In addition, a look at published articles reveals that the journal also has never supported brief reports, mainly because other journals rely on them and brief articles cannot further our multidisciplinary mission (given that rationale, many studies are rejected when their data sets do not support robust analyses that consider multiple dimensions of constructs). Articles that focus on the development of measures also are rejected, as the journal centers on substantive rather than methodological advances (however, such advances are appropriate for our journal when they are couched in substantive advances, such as in the conceptualization of a phenomenon so that we understand the phenomenon better rather than just have a way of indexing it). The journal also no longer supports reviews of the literature, on the grounds that we seek to publish original empirical work (given that rationale, we publish meta-analyses, as they provide new empirical analyses; also given that rationale, we created another journal, Adolescent Research Review, to support reviews; see Levesque 2016a, b; however, given this journal’s multidisciplinary reach, we continue to support book review essays, see Levesque 2007). And simply drawing from adolescent samples does not suffice; studies must address developmental issues (given that rationale, participants can be preadolescents or even adults as long as studying them directly links to developmental concerns relating to adolescents).

In addition to the above broad reasons for rejecting manuscripts without full review, here are others typical for any empirical study:

Ethical, research and publication misconduct: Studies are rejected if their conduct/reporting failed to comply with the ethical standards of their institutional and/or national research committee and/or with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards (e.g., participants did not appropriately provide informed consent; the study lacked necessary approval by, or a waiver from, a research ethics committee/institutional review board). Importantly, if the study is unclear about ethical issues or involved possible research misconduct, the Editor usually asks for clarification and explanation rather than simply rejecting. The journal relies on the guidelines of the Committee on Publication Ethics (COPE).

Ineffective research questions/hypotheses: Studies are rejected if they failed to identify a research gap worth addressing and/or failed to detail an effective way to address that gap. We do not pursue manuscripts when the research question/hypotheses and answers (conclusions) are unlikely to improve the developmental understanding of adolescence—studies must address research relating to the development of adolescents as well as formal and informal responses to adolescents (education, family life, clinical practice, public health, health policy, juvenile justice, etc.) (see Levesque 2006).

Insufficiently substantive: Studies are not sent out for full, external review if they lack an original, relevant, or important overall message. Studies are desk rejected when their findings do not add to previous research or when more robust studies already have reported similar findings. Relatedly, to be substantive, the study must be of relevance to non-specialists, useful to readers beyond the particular study’s participants, and the type that others are likely to use/cite. The study needs to evince the promise of impact and have a compelling purpose (see Levesque 2006).

Weak methods and analyses: Studies may address important gaps in research, but they still must be robust. Insufficient statistical power tends to be a major reason for rejecting manuscripts. Lack of power need not always be related to sample size, as authors have many ways to increase power, such as avoiding ineffective measures and using high quality data. Regardless, studies fail when they do not present enough information to be understood or replicated (see Levesque 2015). Sensitivity analyses are expected.

Unpersuasive narratives: Studies are rejected when they fail to have an effective, sustaining narrative. Effective studies give readers a sense of the research questions/hypotheses, why they are important, and how they contribute to the developmental understanding of adolescents. Effective narratives make those factors clear in their abstracts, develop them in their introductory narratives, present compelling “current study” sections, detail methods effectively, conduct appropriate analyses that include needed sensitivity analyses, properly present results, and effectively discuss findings in the context of current and future research relating to the developmental understanding of adolescence. Sometimes simple things matter. The failure to develop an appropriate title typically means that the study will not get sent out for review, as reviewers rely on titles when deciding to review. (The challenge in getting reviewers to review relates to the manuscript’s potential audience; small audiences typically mean that the manuscript should not be pursued.) The failure to develop an effective abstract likely leads to a desk reject, as the abstract reveals how much care the author placed in writing the study and, by implication, conducting the study in the first place.

Ignores reviewers’ comments: For authors who submit revisions, the major reason for rejection is their failure to consider reviewers’ comments. The letter written in response to reviews is critical, but even more critical is the need to ensure that the stated revisions actually were made. Rejected revisions typically are those whose authors simply stated that the revisions were done when they actually were not. Sometimes authors are given an opportunity to revise the revisions that they declared they made, but sometimes the lack of candor diminishes trust in the authors and in how their study was conducted. Notably, we typically ask for revisions only when quite certain that the revisions will result in a positive editorial decision and that the first revision will need only minor edits. There are many reasons for that approach, with the major one being that we need to trust the integrity of the authors and studies that we support.

The above list of possible pivot points that could lead to rejection reveals more than the challenges that authors face in getting published. It confirms the proposition that preliminary inquiries are of little utility. No study is perfect, and positive editorial decisions necessarily balance manuscripts’ strengths and weaknesses. Errors can be made in the review process, but the point at which an editorial decision is made is certainly a time when the more information editors and reviewers have, the better.

Conclusion

Before submitting a presubmission inquiry to our journal, authors are encouraged to ask themselves whether their inquiry and study are going to ask too much of the editorial board and whether they are asking too much of the peer review process. The journal is rightly known for being supportive of authors, especially emerging scholars (see Levesque 2016c, 2017b, 2018), but it is difficult to be supportive of authors who fail in their due diligence before submitting manuscripts. Authors who submit presubmission inquiries tend to fail in that analysis as well as in the review process itself. As a result, I encourage all authors to take the time needed to learn about the journal, including its field, and to adjust their manuscripts accordingly before submitting anything about their work. Quality results take quality time. I pledge to be efficient and effective in the use of that time.