Introduction

In the peer review process, two “strategic” questions have to be considered. On one hand, for editors: what is the load due to the number (and the relative frequency) of papers submitted at a given time of the year? On the other hand, for authors: is there any bias in the probability of acceptance of a (by assumption, high quality) paper submitted in a given month, because of the (to be polite) mood of editors and/or reviewers? A study of such time concentration (and dispersion) of submitted papers and of their subsequent acceptance (or rejection) seems appropriate from a scientometrics point of view, in line with recent “effects” found and reported in the media, like coercive citations or faked research reports.

In fact, the mentioned question of paper submission timing is of renewed interest nowadays in informetrics and bibliometrics, due to the flurry of new journals published by electronic means. Moreover, the paper acceptance rate is of great concern to authors, who may perceive some bias at certain times. No need to say that the peer review process is sometimes slow, with reasons found in the editors’ and reviewers’ workload, whence a difficulty in finding reviewers. Tied to such questions are, on one hand, the open access policy and the submission fees imposed by publishers, but also doubts or constraints about the efficiency of managing the peer review of scientific manuscripts, from the perspective of editors (Nedič et al. 2018) and of authors (Drvenica et al. 2019). Thus, one may wonder if there is some “seasonal” or “day of the week” effect.

Very recently, Boja et al. (2018), in this journal, showed that “the day of the week when a paper is submitted to a peer reviewed journal correlates with whether that paper is accepted”, when looking at a huge set of cases for high Impact Factor journals. However, there was no study of rejected papers.

From the seasonal point of view, Shalvi et al. (2010) earlier discussed the monthly frequency of electronic submissions to two psychology journals, Psychological Science (PS) and Personality and Social Psychology Bulletin (PSPB), over 4 and 3 years respectively. Shalvi et al. (2010) found a discrepancy between the “submission-per-month” and “acceptance-per-month” patterns for PS—but not for PSPB. More papers were submitted to PS during the “summer months”, but no seasonal bias effect [based on a \( \chi ^{2}_{(11)} \) test for percentages] was found in the subsequent acceptance; nevertheless, the percentage of accepted papers submitted in Nov. and Dec. was found to be very low. In contrast, many papers were submitted to PSPB during the “winter months”, followed by a dip in April, but the percentage of published papers was found to be greater if the submission to PSPB occurred in Aug.–Sept.–Oct. Moreover, a marked “acceptance success dip” occurred if the submission was in the “winter months”. The main difference between such patterns was conjectured to stem from different rejection policies, i.e. employing desk rejection or not.

Later, Schreiber (2012) examined submissions to a specialized journal, Europhysics Letters (EPL), over 12 years. He observed that the number of submitted manuscripts had been steadily increasing while the number of accepted manuscripts had grown more slowly. He claimed to find no statistical effect. However, from Table 2 in Schreiber (2012), there is a clearly visible maximum in the number of submissions in July, more than 10% above the yearly mean, and a marked dip in submissions in February—even taking into account the short length of that month. Examining the acceptance rate (roughly ranging between 45 and 55%, according to the month of submission), he concluded that strong fluctuations can be seen between different months. One detects a maximum in July and a minimum in January for the most recent years.

Alikhan et al. (2011) had a similar concern: they compiled submissions, in 2008, to 20 journals pertaining to dermatology. It was found that May was the least popular month, while July was the most popular month. We have estimated a \( \chi ^{2} \simeq 36.27 \) from the Fig. 1 data in Alikhan et al. (2011), thereby suggesting a far from uniform distribution. There is no information on the acceptance rate in Alikhan et al. (2011).

Other papers have appeared purporting to discuss seasonal or similar effects, drawing conclusions from fluctuations but finding no effect, on the basis of standard deviation arguments—instead of \(\chi ^{2}\) tests. Yet, it should be obvious to the reader that a \(\chi ^{2}\) test performs better for finding whether a distribution is uniform or not—our research question. In contrast, a test based on the standard deviation and the confidence interval only allows some claim about the percentage deviation of (month) outliers; furthermore, such studies tacitly assume normality of the (submission or acceptance) fluctuation distributions—which is far from being the case. The skewness and kurtosis of the distributions, which are mandatory complements, are usually not provided in such “fluctuation studies”.

In order to contribute answers to the question of a “monthly bias”, we have been fortunate to get access to data on submitted, and later either accepted or rejected, papers for a specialized (chemistry) scientific journal and for a multidisciplinary journal. Two coauthors of the present report, ON and AD, are Sub-Editor and Manager of the Journal of the Serbian Chemical Society (JSCS). One coauthor, MA, is a member of the editorial board of Entropy. It will be seen that comparing features of these two journals allows one to lift some of the veil on the apparent discrepancies reported in other cases.

Thus, here below, we explore the fate of papers submitted to these two journals for peer review during a given month, together with their publication fate. We find that, in the cases at hand, fluctuations of course occur from one year to another. However, for JSCS, submission peaks do occur in July and September, while far fewer papers are submitted in May and December. A marked dip in submissions occurs in August for Entropy, while the largest numbers of submissions occur in October and December.

However, while the number of submitted papers is relevant for editors and handling machinery, the probability of acceptance (and rejection) is of much concern to authors. Relative to the number of submitted papers, it is shown that more papers are accepted for publication if they are submitted in January (and February)—but fewer so if submitted in December, for JSCS; the highest rejection rates occur for papers submitted in December and March. For Entropy, the acceptance rate is lowest in June and December, but high for papers submitted during the spring months, February to May. Statistical tests, e.g. a \(\chi ^{2}\) test and confidence intervals, are provided to ensure the validity of the findings.

Due to the different desk rejection policies, and in order to discuss the effect of such policies as in Alikhan et al. (2011), we discuss a possible specific determinant of the JSCS data: possible effects due to a religious or holiday bias (in Serbia) are commented upon.

Comments on the present study limitations and suggestions for further research (requesting data availability) are found in the conclusion section.

Data

The JSCS and Entropy peer review processes are both mainly managed electronically—whence the editorial work is only weakly tied to the editors’ working days.Footnote 1

The Journal of the Serbian Chemical Society

JSCS contains 14 sub-sections and many sub-editors, as can be seen on the journal website http://shd.org.rs/JSCS/.

The (36 data point) time series of the monthly submissions \(N_s^{(m,y)}\) to JSCS in a given month (\(m= 1,\dots ,12\)) of year \(y\) (2012, 2013, and 2014) is shown in Fig. 1. The total number of submissions (\(T_s^{(y)}=\sum _m N_s^{(m,y)}\)) decreased by \(\sim 17\%\) from \(y=2012\) or 2013 to \(y=2014\): 317 or \(322\rightarrow 274\).

Next, let us denote the number of papers later accepted by \(N_a^{(m,y)}\) and the number rejected by \(N_r^{(m,y)}\). Among the \(T_s=\sum _y T_s^{(y)} = 913\) submitted papers, \(T_a= 424\ (= 162 + 146 + 116)\) were finally accepted for publication. In view of further discussion, let it be pointed out that among the total number \(T_r = 474\ (= 149 + 172 +153)\) of (peer and subsequently editor) rejected papers, i.e. 52%, \(T_{dr} = ( 42 + 81 + 79=)\ 202\) papers were desk rejected, without going to the peer review process, i.e. 22.1%. For completeness, let it be recorded that several papers were rejected because the authors did not reply to the reviewers’ remarks in due time, and a few submissions were withdrawn (thus, \(T_a+ T_r \ne T_s\): \(424 + 474 \ne 913\)).

The time series of the positive fate, thus acceptance, of submitted papers for a specific month submission is also shown in Fig. 1.

Fig. 1

Time series of (left) the number of submitted papers and (right) of the number of accepted papers when submitted to JSCS during a given month (m) in 2012, 2013 and 2014

The statistical characteristicsFootnote 2 of the \(N_s{^{(m,y)}}\), \(N_a{^{(m,y)}}\), \(N_r{^{(m,y)}}\), and \(N_{dr}{^{(m,y)}}\) distributions for JSCS are given in Tables 1, 2, 3 and 4.

Table 1 Statistical characteristics of the distribution of the number \(N_s^{(m,y)}\) of papers submitted in a given month (m) to JSCS in 2012, in 2013, and in 2014, of \(C_s^{(m)}\) over 2012–2014, and of \(N_s^{(m,y)}\) over the whole span (2012–2014); notice that \(C_s^{(m)}\) is obtained after monthly summing. Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively
Table 2 Statistical characteristics of the distribution of the number \(N_a^{(m,y)}\) of accepted papers if submitted in a given month (m) to JSCS in 2012, in 2013, and in 2014, of \(C_a^{(m)}\) over 2012–2014, and of \(N_a^{(m,y)}\) over the whole span (2012–2014); \(C_a^{(m)}\) is obtained after monthly summing. Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively
Table 3 Statistical characteristics of the distribution of the number \(N_r^{(m,y)}\) of rejected papers if submitted in a given month (m) to JSCS in 2012, in 2013, and in 2014, of \(C_r^{(m)}\) (obtained after monthly summing), and of \(N_r^{(m,y)}\) over the whole span (2012–2014). Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively
Table 4 Statistical characteristics of the distribution of the number \(N_{dr}^{(m,y)}\) of desk rejected papers if submitted in a given month (m) to JSCS in 2012, in 2013, and in 2014, of \(C_{dr}^{(m)}\) (obtained after monthly summing), and of \(N_{dr}^{(m,y)}\) over the whole span (2012–2014). Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively

Entropy

Entropy covers research on all aspects of entropy and information studies. The journal home page is http://www.mdpi.com/journal/entropy.

The (36 data point) time series of the monthly submissions to Entropy over the years 2014, 2015, and 2016 is shown in Fig. 2. The number of submissions increased by \(\sim 60\%\) from 2014 to 2015: \(604\rightarrow 961\), but not by much (\(\sim 5\%\)) between 2015 and 2016: \(961\rightarrow 1008\).

Fig. 2

Time series of (left) the number of submitted papers and (right) of the number of accepted papers when submitted to Entropy during a given month in 2014, 2015, and 2016

Among the \(T_s=2573\) submitted papers, \(T_a= 1250 \) were finally accepted for publication. The time series of the positive fate, thus acceptance, of submitted papers after a specific month submission, is also shown in Fig. 2.

In view of further discussion below, let it be pointed out that there were \((110 + 162 + 246 =)\ 518\) peer review rejected papers, i.e. 20.1%; \(T_{dr} = (158 + 332 + 315 =)\ 805\) papers were desk rejected at submission, i.e. 31.2%.

The statistical characteristics of the \(N_{s}^{(m,y)}\), \(N_{a}^{(m,y)}\), \(N_{r}^{(m,y)}\), and \(N_{dr}^{(m,y)}\) distributions for Entropy are given in Tables 5, 6, 7 and 8.

Table 5 Statistical characteristics of the distribution of the number \(N_s^{(m,y)}\) of papers submitted in a given month (m) to Entropy in year (\(y=\)) 2014, 2015, and 2016, of \(C_s^{(m)}\) (obtained after monthly summing), and over the whole span (2014–2016). Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively
Table 6 Statistical characteristics of the distribution of the number \(N_a^{(m,y)}\) of accepted papers if submitted in a given month to Entropy in 2014, in 2015, and in 2016, of \(C_a^{(m)}\) (obtained after monthly summing), and over the whole span (2014–2016). Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively
Table 7 Statistical characteristics of the distribution of the number \(N_r^{(m,y)}\) of rejected papers if submitted in a given month to Entropy in 2014, in 2015, and in 2016, of \(C_r^{(m)}\) (obtained after monthly summing), and over the whole span (2014–2016). Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively
Table 8 Statistical characteristics of the distribution of the number \(N_{dr}^{(m,y)}\) of desk rejected papers if submitted in a given month (m) to Entropy in 2014, in 2015, and in 2016, of \(C_{dr}^{(m)}\) (obtained after monthly summing), and over the whole span (2014–2016). Therefore, the statistical characteristics in the last two columns slightly differ from each other, because the time span is taken as \(N.mo=\) 12 or 36 months, respectively

Data analysis

The most important value to discuss is the calculated \(\chi ^2\), which checks whether or not the distribution is uniform over the whole year.
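
As an illustration of the procedure, the following minimal sketch (in Python, using synthetic monthly counts rather than the actual journal data) tests the uniformity of a 12-month distribution via \(\chi ^2\):

```python
import numpy as np
from scipy.stats import chisquare, chi2

# Hypothetical monthly submission counts (Jan..Dec); NOT the actual data.
counts = np.array([30, 22, 25, 24, 18, 20, 35, 26, 33, 27, 23, 17])

# H0: submissions are uniform over the year (each month expects the mean).
stat, p_value = chisquare(counts)               # uniform expected frequencies
critical = chi2.ppf(0.95, df=counts.size - 1)   # 95% critical value, 11 d.o.f.

print(f"chi2 = {stat:.2f}, 95% critical value = {critical:.2f}, p = {p_value:.4f}")
if stat > critical:
    print("Reject H0: the monthly distribution is not uniform.")
```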

Notice that we can discuss the data not only by comparing different years, but also through the cumulated data: \(C_s^{(m)}=\sum _y N_s^{(m,y)}\), and similarly for \(C_a^{(m)}\), \(C_r^{(m)}\), and \(C_{dr}^{(m)}\), as if all years were “equivalent”. For further analysis, we provide the statistical characteristics of the cumulated distributions in Tables 1, 2, 3, 4, 5, 6, 7 and 8.

We have also taken into account that months have different numbers of days, by normalizing all months as if they were 31 days long (including the special case of February in 2016). The fact that the number of papers then appears not to be an integer is not a drastic point; more importantly, such a data manipulation does not at all alter our conclusions below. Thus, we do not report results from such a “data normalization”.
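
A minimal sketch of this normalization, assuming hypothetical monthly counts, is:

```python
import numpy as np

DAYS = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], dtype=float)

def normalize_to_31_days(counts, leap_february=False):
    """Rescale monthly counts as if every month were 31 days long."""
    days = DAYS.copy()
    if leap_february:             # e.g. February 2016 has 29 days
        days[1] = 29.0
    return counts * 31.0 / days   # the rescaled counts need not be integers

# Hypothetical counts, for illustration only.
counts = np.array([30, 22, 25, 24, 18, 20, 35, 26, 33, 27, 23, 17], dtype=float)
print(normalize_to_31_days(counts, leap_february=True).round(2))
```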

JSCS data analysis

In all JSCS cases, the mean of each distribution decreases from 2013 to 2014; so does the standard deviation \(\sigma \). This also holds for the cumulated time series, \(C^{(m)} = N^{(m,2012)} + N^{(m,2013)} + N^{(m,2014)} \), data which necessarily differ from \(N^{(m,[2012-2014])}\). The coefficient of variation (CV \( \equiv \sigma /\mu \)) is always quite small, indicating that the data are reliable beyond statistical sampling errors. For either \(C_s\) or \(C_a\), the coefficient of variationFootnote 3 of the cumulated data is lower than the other CVs, a posteriori pointing to the (statistical) interest of accumulating data for each month over different years—besides looking at the more dispersed data over a long time span.

Next, observe the summary of statistical characteristics in Tables 1, 2, 3 and  4; they show that the distributions are positively skewed, except those for the submitted papers which are negatively skewed. The kurtosis of each distribution is usually negative, except for the anomalous cases, \(N_r^{(m,2014)}\) and \(N_{dr}^{(m,2014)}\). It can be concluded that the distributions are quite asymmetric, far from a Gaussian, but rather peaked.

Almost all measured values fall within the classical confidence interval \( ] \mu -2\sigma ,\mu +2\sigma [\). However, in five cases, a few extreme values fall above the upper limit, as can be deduced from the Tables.
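
The characteristics discussed above (mean, \(\sigma \), CV, skewness, kurtosis, and the values outside \( ] \mu -2\sigma ,\mu +2\sigma [ \)) can be computed along the following lines; the counts are again hypothetical, for illustration only:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def characteristics(counts):
    """Mean, sigma, CV, skewness, excess kurtosis, and the values falling
    outside the open interval ]mu - 2*sigma, mu + 2*sigma[."""
    mu, sigma = counts.mean(), counts.std(ddof=1)
    outliers = counts[(counts <= mu - 2 * sigma) | (counts >= mu + 2 * sigma)]
    return {"mean": mu, "sigma": sigma, "CV": sigma / mu,
            "skewness": skew(counts),
            "kurtosis": kurtosis(counts),  # excess kurtosis: 0 for a Gaussian
            "outliers": outliers}

# Hypothetical monthly counts, for illustration only.
counts = np.array([30, 22, 25, 24, 18, 20, 35, 26, 33, 27, 23, 17], dtype=float)
print(characteristics(counts))
```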

Finally, notice that all \(\chi ^2\) values reported in Tables 1, 2, 3 and 4 are much larger than the 95% critical value: they markedly allow one to reject the null hypothesis, i.e. a uniform distribution, for each examined case. Thus a monthly effect exists beyond statistical errors for all \(N_s\), \(N_a\), \(N_r\) and \(N_{dr}\) cases.

Entropy data analysis

In the case of the Entropy data, the CV is usually low—and much lower than in the case of JSCS. The skewness and kurtosis are not systematically positive or negative. The number of outliers outside the confidence interval is also “not negligible”; this is hinted at by the number of maximum and minimum values falling outside the confidence interval, yet “not too far” from the relevant interval border. This implies that the distribution behaviors are influenced by the number of data points, to a larger extent for Entropy than for JSCS.

Nevertheless, notice that all \(\chi ^2\) values reported in Tables 5, 6, 7 and 8 are also much larger than the 95% critical value: they markedly allow one to reject the null hypothesis, i.e. a uniform distribution, for each examined case. A monthly anomaly effect exists beyond statistical errors for all \(N_s\) and \(N_a\); it is weaker for the \(N_r\) and \(N_{dr}\) cases. The large \(\chi ^2\) values obviously point to distinguishable peaks and dips, thereby markedly supporting the view of a monthly bias effect for \(N_s\) and \(N_a\).

Discussion

Let us first recall that the journals examined here have different aims: one is a specialized journal, the other an interdisciplinary journal. To our knowledge, this is the first time that a journal with such “broadness” is considered within the question of monthly bias. One should expect an averaging effect, due to the varied constraints on research schedules pertaining to different topics and data sources. One subquestion is whether focussed research influences the timing of paper submission, and later acceptance (or rejection). One would expect more bias in the JSCS case than in the Entropy case. Comparing journals (in psychology) with different “specializations”, Shalvi et al. (2010) had indeed found different behaviors. Let us observe what anomalies are found in the present cases.

JSCS

Comparing months in 2012, 2013 and 2014, it can be noticed that the most similar months (those changing position least in the decreasing order of “importance”) are Dec., May, and June for the lowest submission rate, while Sept. and July remain at the top of the month list, for the highest submission rate; see figures. A specific deduction seems to be implied: there is a steady academic production of papers just before and after holidays, but a quieter (production and) submission of papers before holidays. This relevance of July production is rather similar to that found for most other journals—except PSPB (Shalvi et al. 2010).

Concerning the May dip anomaly, one may recall that in most countries (including Serbia) lectures and practical work at faculties end by June; since many authors (professors, assistants) are very engaged with students at that time, May is probably not a month when they are focused on writing papers; they rather “prefer” finishing regular duties. In fact, corroborating this remark, it has been observed that most papers submitted to JSCS are from academic researchers (Nedič and Dekanski 2015).

The huge peak in January 2013 is intriguing. We searched for whether something special occurred around January 2013: it was checked that the submission system worked properly, and there was no special clogging a month before. Moreover, there were no special invitations or collections of articles for a special issue. Therefore, the peak can be correlated with that found for PS. From such a concordance, it seems that more quantitative correlation aspects could be searched for through available data.

Notice that on a month rank basis, for 2013 and 2014, the Kendall \(\tau \) coefficient \(\simeq -0.0303\) for submitted papers, but \(\simeq -0.3030\) for accepted papers; concerning the correlation between the cumulated \(N_s\) and \(N_a\), the Kendall \(\tau \) coefficient \(\simeq -0.2121\).
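
For readers wishing to reproduce such rank correlations, a minimal sketch, with hypothetical monthly counts, is:

```python
from scipy.stats import kendalltau

# Hypothetical monthly counts for two consecutive years (illustration only).
n_2013 = [30, 22, 25, 24, 18, 20, 35, 26, 33, 27, 23, 17]
n_2014 = [21, 24, 20, 23, 15, 18, 28, 22, 30, 24, 26, 23]

# kendalltau works on the implied month ranks; a tau close to 0 means the
# ordering of the months is not preserved from one year to the next.
tau, p_value = kendalltau(n_2013, n_2014)
print(f"Kendall tau = {tau:.4f} (p = {p_value:.3f})")
```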

Two other points related to JSCS are discussed in the “Seasonal desk rejection by editor” and “Optimal submission month for later paper acceptance” sections: (1) the possible influence of the desk rejection policy, a conjecture of Shalvi et al. (2010) for distinguishing patterns, and (2) the acceptance and rejection rates, which are tied to the submission patterns, but also pertain to the “entrance barrier” (editor load) conjecture proposed by Schreiber (2012).

Entropy

In the case of Entropy, the cumulated measure (over the 3 years examined here) points to more frequent submissions in December, and a big dip in August. From a more general viewpoint, more papers are submitted during the last 3 months of the year. A marked contrast occurs for the accepted papers, for which a wide dip exists over 4 months, from June till September. The discussions of desk rejection and of the better chances for acceptance are also found in the “Seasonal desk rejection by editor” and “Optimal submission month for later paper acceptance” sections.

Notice that for the correlation between the cumulated \(N_s\) and \(N_a\), the Kendall \(\tau \) coefficient \(\simeq 0.4242\).

Finally, comparing the cumulated numbers of submitted and accepted papers to JSCS and to Entropy, and ranking the months accordingly, the Kendall \(\tau \) coefficient is weakly negative: \(\simeq \) − 0.333 and − 0.1818, respectively.

Constraint determinants

Seasonal desk rejection by editor

Often controversial or frowned upon, desk rejection patterns can now be discussed. Tables 4 and 8 provide the relevant data for JSCS and Entropy, respectively. Notice that, for either JSCS or Entropy, we do not discuss the reasons why editors (and reviewers) reject papers; these reasons are outside the present considerations; see (Callaham et al. 1998; Cole et al. 1981; Hargens 1968) for some information.

Let us consider JSCS first. It can be observed that “only” (160/596) \(\simeq \) 27% of papers are desk rejected (numbers referring to 2013–2014); this is interestingly compared to the (“many”) papers rejected after peer review: 325/596 \(\simeq 0.55\), for JSCS; the ratio is \(\sim 1/2\). The highest desk rejection rate occurs for papers submitted in Nov., while the lowest is for those submitted in May; see Fig. 3. Distinguishing years, a high rejection rate occurs for papers submitted in Nov. 2014 and Aug. 2013, while a low rejection rate occurred for papers submitted in February and May 2013.

Fig. 3

Aggregated distribution of the number of desk rejected papers when submitted (left) to JSCS during a given month, in 2012, 2013, or 2014, and (right) to Entropy during a given month, in 2014, 2015, or 2016

There are no apparent month correlations. For example, the month with the greatest number of submissions (overall) is Sept.; the rejection rate in Sept. 2013 was 0.469, out of which 0.250 were desk rejections. In Sept. 2014, these percentages were 0.555 and 0.333. On the other hand, the month with the lowest number of submissions is May. In May 2013, the rejection rate was 0.500, but the desk rejection rate was only 0.111. In May 2014, the rejection rate was 0.562, and the desk rejection rate was 0.250.

For completeness in arguing, let it be known that official holidays in Serbia fall on Jan. 1–2 and 7 (Christmas Day), Feb. 15–16, in April (usually) one Friday and one Monday (Easter holiday), May 1–2, and Nov. 11—at which times one does not expect editors to be on duty for desk rejection.

Next, concerning Entropy: (805/2573) \(\simeq \) 31% of papers are desk rejected at submission, many more than those rejected by the reviewers (and the editor), i.e. (518/2573) \(\simeq \) 20%. The greatest desk rejection numbers occur in December and January—the lowest in February, May, and August. However, in terms of the percentage of desk rejections with respect to the number of submitted papers, the months of December, September and June are the most likely, while in February and May the editors seem softer.

Conclusions: there seems to be no effect of holidays on the editorial workflow, as the months most often containing holidays (January, July and August) exhibit no special statistical anomaly—with respect to either submission or decision rate as compared to other months, for JSCS. Yet, the \(\chi ^2\) is quite large (\(\sim \)16.55; see Table 4). Thus, the seasonal effect might have another origin. The Entropy \(N_{dr}\) data distribution is even more uniform (\(\chi ^2 \sim \) 6.52; see Table 8). If any, some seasonal effect on \(N_{dr}\) might occur during winter time.

Entrance barrier editor load effect

Schreiber (2012) considers that an entrance barrier can be set up by editors due to their workload. We understand such a bias as resulting from an accumulation of submitted papers at some time, thereafter correlated with a large rate of desk rejection. One can, without much restriction, assume that the correlation has to be observed at zero month time lag, since both journals usually reply quickly to authors.

A visual comparison of the correlation between the number of desk rejected papers and the number of submitted papers, to JSCS during a given month (distinguishing 2012 and 2013 from 2014) and to Entropy during a given month (in 2014, 2015, or 2016), is shown in Fig. 4. For JSCS, the number of desk rejected papers is roughly proportional to \(N_s\) during a given month, \(\simeq 25\%\), a value already noticed—except at \(N_{s}\sim 30\), when \(N_{dr}\) can be as large as 30–50%. However, both in the fall of 2013 and in the spring–summer of 2014, there are months for which \(N_s\) is large but \(N_{dr}\) is low, casting doubt on an editor load barrier effect.

For Entropy, there are two clusters, separated by the borders \(N_s \sim 70\) and \(N_{dr} \sim 20\). When \(N_s \ge 70\), the number of desk rejected papers increases much more than proportionally. That was surely the case in 2015.
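
A minimal sketch of such an entrance barrier check, on hypothetical \((N_s, N_{dr})\) pairs and with the \(N_s \sim 70\) border taken from the Entropy discussion above, is:

```python
import numpy as np

# Hypothetical paired monthly values (N_s, N_dr), for illustration only.
n_s  = np.array([55, 60, 72, 80, 65, 90, 75, 40, 85, 95, 70, 100], dtype=float)
n_dr = np.array([12, 13, 20, 25, 14, 30, 22,  8, 27, 33, 18,  36], dtype=float)

rate = n_dr / n_s                   # monthly desk rejection rate
r = np.corrcoef(n_s, n_dr)[0, 1]    # linear (Pearson) correlation

# A load effect would show up as a higher desk rejection rate in
# high-submission months, e.g. above a border such as N_s ~ 70.
high = n_s >= 70
print(f"Pearson r = {r:.2f}")
print(f"mean rate, N_s < 70:  {rate[~high].mean():.2f}")
print(f"mean rate, N_s >= 70: {rate[high].mean():.2f}")
```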

Conclusions: JSCS or Entropy editors may raise some entrance barrier due to overload, whatever the season.

Fig. 4

Entrance barrier load conjecture effect. Visual correlation between the number of desk rejected papers and the number of submitted papers to (left) JSCS during a given month, between 2012 and 2014 or to (right) Entropy during a given month between 2014 and 2016

Optimal submission month for later paper acceptance

The above data and discussion on the number of papers are relevant for editors and for the automatic handling of papers. Of course, this also holds partially true for authors, who do not want to overload editorial desks at a given time, since they expect a rather quick (and positive) decision on their submission. However, another point is of great interest for authors, bearing somewhat on the reviewer and desk editor mood. The most relevant question on a possible seasonal bias, for authors, is whether a paper has a greater chance of being accepted if submitted during a given month. Thus, the probability of acceptance, the so-called “acceptance rate”, is a relevant variable to be studied.

The relative number (i.e. the monthly percentage) of papers accepted or rejected, \( p^{(m,y)}_{a,s}= N^{(m,y)}_a/N^{(m,y)}_s\) or \(p^{(m,y)}_{r,s}=N^{(m,y)}_r/N^{(m,y)}_s\), after submission in a specific month, is easily obtained from the figures. The months (mo) can be ranked, e.g. in decreasing order of importance, according to such a relative probability (hereafter called \(p_a\)) of having a paper accepted if submitted in a given month (m) to JSCS or to Entropy in given years; see Table 9. One can easily obtain the corresponding probability \(p_r\) for rejected papers; see Table 10.

Table 9 Months (mo) ranked in decreasing order of importance according to the probability \(p_a = N_a^{(m)} /N_s^{(m)}\) of having a paper accepted if submitted in a given month (m) to JSCS or to Entropy in given years
Table 10 Months (mo) ranked in decreasing order of importance according to the probability \(p_r = N_r^{(m)} /N_s^{(m)}\) of having a paper rejected if submitted in a given month (m) to JSCS or to Entropy in given years

This holds true for any yearly time series leading to some \(p_a \equiv p^{(m,y)}_{a,s}= N^{(m,y)}_a/N^{(m,y)}_s\), whence allowing one to compare journals according to

$$\begin{aligned} p_a-p_r = \sum _y [p^{(m,y)}_{a,s}-p^{(m,y)}_{r,s} ]\equiv \sum _y \left[ \frac{N^{(m,y)}_a}{N^{(m,y)}_s}-\frac{N^{(m,y)}_r}{N^{(m,y)}_s}\right] . \end{aligned}$$
(6.1)

One could also consider

$$\begin{aligned} q_a-q_r = \left[ \frac{\sum _y N^{(m,y)}_a}{\sum _y N^{(m,y)}_s}- \frac{\sum _y N^{(m,y)}_r}{\sum _y N^{(m,y)}_s} \right] \equiv \left[ \frac{C^{(m)}_a}{C^{(m)}_s}- \frac{C^{(m)}_r}{C^{(m)}_s} \right] \end{aligned}$$
(6.2)

for the corresponding cumulated data over each specific time interval. A comment on the matter is postponed to the "Appendix".
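
To make the distinction between Eqs. (6.1) and (6.2) concrete, here is a minimal sketch, on randomly generated (purely illustrative) counts, computing both measures; a sum of yearly ratios is not the ratio of the cumulated sums unless the yearly denominators coincide:

```python
import numpy as np

# Randomly generated counts for 3 years (rows) x 12 months (columns);
# purely illustrative, NOT the actual JSCS or Entropy data.
rng = np.random.default_rng(0)
N_s = rng.integers(15, 35, size=(3, 12)).astype(float)
N_a = np.round(N_s * rng.uniform(0.3, 0.5, size=(3, 12)))
N_r = np.round(N_s * rng.uniform(0.2, 0.4, size=(3, 12)))

# Eq. (6.1): sum over years of the yearly ratio differences.
pa_minus_pr = ((N_a - N_r) / N_s).sum(axis=0)

# Eq. (6.2): difference of ratios of the cumulated (monthly summed) counts.
qa_minus_qr = (N_a.sum(axis=0) - N_r.sum(axis=0)) / N_s.sum(axis=0)

# Months ranked in decreasing order of q_a - q_r (1 = January).
ranking = np.argsort(qa_minus_qr)[::-1] + 1
print(np.round(pa_minus_pr, 2))
print(np.round(qa_minus_qr, 2))
print("month ranking:", ranking)
```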

JSCS case

The relevant percentage differences between the numbers of accepted and rejected papers for JSCS in 2013 and 2014 are given in Fig. 5.

From this difference-in-probability perspective, it does not seem recommendable for authors to submit their paper to JSCS in March or December. They should rather submit their papers in January, with some non-negligible statistical chance of acceptance for submissions in February or October.

Fig. 5

Monthly aggregated percentage difference between the numbers of accepted (\(p_a\)) and rejected (\(p_r\)) papers, normalized to the number of submitted papers, in a given month: (left) to JSCS over 2012–2014; (right) to Entropy over 2014–2016

Entropy case

For Entropy, an equivalent calculation of \(p_a-p_r\) can be made—from aggregating the data in Fig. 2 over a 12 month interval, leading to Fig. 5. Even though the best percentage of accepted papers occurs if the papers are submitted from January till May (with a steady increase, in fact) and in October and November, the percentage of papers submitted in December is the largest of the year, while the probability of acceptance is the lowest for such papers.

Thus, a marked dip in acceptance probability occurs if the papers are submitted during the summer months (June–Sept.), as seen in Fig. 5, whence suggesting that one avoid such months for submission to Entropy.

Warnings and discussion

For fully testing seasonal effects, one might argue that one should correlate the acceptance/rejection data to the hemisphere and/or to the nationality of authors, and consider the influence of co-authors.Footnote 4

We apologize for not having searched for the affiliations (in either the southern or the northern hemisphere—since the seasons differ) of authors submitting to Entropy; we expect that such a “hemisphere effect”, if it exists, is hidden in the statistical error bar of the sample, \( \simeq 1/\sqrt{N_s} \sim 4\% \) (for \(N_s \sim 600\) submissions per year). Concerning the nationalities of the authors (and reviewers) of JSCS in the period Nov. 2009–Oct. 2014, those have been discussed by Nedič and Dekanski (2015); see Figs. 3 and 2, respectively, in Nedič and Dekanski (2015). For completeness, let us mention that the distribution of data on papers submitted, accepted, rejected, or withdrawn, to JSCS from mainly Serbian authors and from authors “outside Serbia”, in given years, can be found in Table 11.

Table 11 Data on papers submitted, accepted, rejected, withdrawn, to JSCS from mainly Serbian authors and from authors “outside Serbia”, on given years

From such a reading, it appears that the JSCS editors are fair, being biased neither in favor of nor against papers whose corresponding author is from Serbia.

At this level, more importantly, a comparison resulting from the observation of Fig. 5 points to a marked difference between a specialized journal and a multidisciplinary one—at least from the editorial aims and the peer reviewers’ points of view. The difference between the probability of acceptance and that of rejection, on a monthly basis, i.e. \(p_a-p_r\), has an astoundingly different behavior: the \(p_a-p_r\) value is positive over only 3 months for JSCS, but is always positive for Entropy. This can be interpreted in terms of peer review control. Recall that the percentage of desk rejections is approximately the same for JSCS and Entropy, but the peer review rejection rate is much higher (\(\sim 55\%\)) for JSCS, in contrast with the \(\sim 20\%\) reviewer rejection rate for Entropy. In terms of a seasonal effect, one has a positive value in January (and February) for JSCS, but a positive effect in the spring and fall months for Entropy. We consider that such a spread is likely due to the multidisciplinary nature of the latter journal, reducing the strong monthly and seasonal bias on the fate of a paper.

Conclusion

Two remarks seem to be of interest for attempting some understanding of these different findings. On one hand, statistical procedures (either \(\chi ^2\) or confidence interval bounds \(\mu \pm 2 \sigma \)) do not have to lead to identical conclusions: both can point to deviations, but the former indicates the presence (or absence) of peaks and dips with respect to the uniform distribution, while the latter points to statistical deviations when the distribution of residuals is expected to be Gaussian-like. In the latter case, an extension of the discussion including skewness and kurtosis is mandatory (Doane and Seward 2011). We have pointed out such departures from Gaussianity. The second remark: the monthly and/or seasonal bias, in view of the contradistinctions found here between the chemistry and the multidisciplinary journal, might not be mainly due to desk rejection effects, as proposed by Shalvi et al. (2010), but might rather pertain to the peer reviewers having different statuses within the journals’ spreads of aims.

In so doing, by considering two somewhat “modest, but reliable” journals,Footnote 5 we have demonstrated seasonal effects in paper submission and also in subsequent acceptance. The seasonal bias effect is stronger in the specialized journal. Recall that one can usually read when an accepted paper was submitted, but the missing set, the submission dates of rejected papers, is usually unknown. Due to our editorial status, we have been able to provide a statistical analysis of such information. Our findings and the underlying behavioral hypotheses markedly take into account the scientific work environment, and point, in the present cases, to seasonal bias effects mainly due to authors in the submission/acceptance stage of the peer review process.

Let us develop one point on the limitations of this study: all the authors of this manuscript have spent much time trying to involve other journals in this investigation. Allow us not to list the journals or the editors. There were, however, very serious obstacles to our intention. A fair number of journal editors expressed the wish to cooperate and share their data; when they asked their publishers for permission to open the data, permission was not given. Additionally, as collaborators in the same COST Action TD 1306 (New Frontiers of Peer Review), we had the opportunity to directly discuss this issue with representatives of many (the main) scientific journal publishers, but the data on their journals remained closed. Publishers themselves perform certain investigations and seem to rarely share their data with outsiders, whence we were forced to limit ourselves to journals not published by the well-known publishers. During our literature search, we have seen that there are papers dealing with specific aspects of publishing in just one journal, or 2–3 journals, confirming that there are objective difficulties in collecting information on a greater number of journals.

We hope, however, that even separate data on specific journals, if publicly reported, will finally lead to a cumulative collection of information which, once published, will enable a fairer comparison between journals and disciplines than is presently possible. This will contribute to overcoming the limitations of each individual study, and will generate more general knowledge on this subject.

Thus, in order to go beyond our observations, we are aware that more data must be made available by editors and/or publishers. Avoiding debatable hypotheses on the quality of papers, the ranking of journals, the fame of editors, open access publicity, submission fees, publication charges, and so on, we may suggest more work on time lag effects, beyond Mrowinski et al. (2016, 2017), in order to better pinpoint the role of both the editors’ and the reviewers’ quality and concern. In so doing, it might be wise to consider some ARCH-like modeling of seasonal effects, as has been done for observing the day of the week effect in paper submission/acceptance/rejection in peer review journals, as in Ausloos et al. (2017). This suggestion of ARCH econometric-like modeling is further supported by arguments in related bibliometric studies. Indeed, one could develop a Black–Scholes–Schrödinger–Zipf–Mandelbrot model framework for studying seasonal effects—instead of the coauthor core score as in Rotundo (2014).
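
As a closing illustration of the direction suggested, the following minimal sketch—assuming the third-party Python `arch` package (not used in the present study) and purely synthetic monthly counts—removes a deterministic seasonal mean and fits a GARCH(1,1) to the residuals; significant ARCH/GARCH terms would signal volatility clustering, i.e. bursts of submission activity:

```python
import numpy as np
from arch import arch_model  # third-party package: pip install arch

# Hypothetical monthly submission counts over 10 years (illustration only):
# a seasonal (12 month) cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(120)
y = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)

# Remove the deterministic seasonal mean, then fit a GARCH(1, 1) to the
# residuals; significant alpha/beta terms would indicate volatility
# clustering, i.e. bursts of submission activity.
seasonal_mean = np.array([y[m::12].mean() for m in range(12)])
residuals = y - seasonal_mean[t % 12]
res = arch_model(residuals, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(res.summary())
```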