1 Introduction

A nice consequence of the Prime Number Theorem is the asymptotic formula

$$\begin{aligned} \log {\text {lcm}}(1, 2, \ldots , n) \sim n , \quad \text { as } n \rightarrow +\infty , \end{aligned}$$
(1)

where \({\text {lcm}}\) denotes the least common multiple. Indeed, precise estimates for \(\log {\text {lcm}}(1, \ldots , n)\) are equivalent to the Prime Number Theorem with an error term. Thus, a natural generalization is to study estimates for \(L_f(n) := \log {\text {lcm}}(f(1), \ldots , f(n))\), where f is a well-behaved function, for instance, a polynomial with integer coefficients. (We ignore terms equal to 0 in the \({\text {lcm}}\) and we set \({\text {lcm}}\varnothing := 1\).) When \(f \in {\mathbb {Z}}[x]\) is a linear polynomial, a product of linear polynomials, or an irreducible quadratic polynomial, asymptotic formulas for \(L_f(n)\) were proved by Bateman et al. [3], Hong et al. [10], and Cilleruelo [6], respectively. In particular, for \(f(x) = x^2 + 1\), Rué et al. [15] determined a precise error term for the asymptotic formula. When f is an irreducible polynomial of degree \(d \ge 3\), Cilleruelo [6] conjectured that \(L_f(n) \sim (d - 1)\, n \log n\), as \(n \rightarrow +\infty \), but this is still an open problem. However, bounds for \(L_f(n)\) were proved by Maynard and Rudnick [13], and Sah [16]. Moreover, Rudnick and Zehavi [14] studied the growth of \(L_f(n)\) along a shifted family of polynomials.

Another direction of research consists in considering the least common multiple of random sets of positive integers. For every positive integer n and every \(\alpha \in [0, 1]\), let \({\mathcal {B}}(n, \alpha )\) denote the probabilistic model in which a random set \({\mathcal {A}} \subseteq \{1, \ldots , n\}\) is constructed by picking independently each element of \(\{1, \ldots , n\}\) with probability \(\alpha \). Cilleruelo et al. [9] studied the least common multiple of the elements of \({\mathcal {A}}\) and proved the following result (see [1] for a more precise version, and [4, 5, 7, 8, 12, 17,18,19] for other results of a similar flavor).

Theorem 1.1

Let \({\mathcal {A}}\) be a random set in \({\mathcal {B}}(n, \alpha )\). Then, as \(\alpha n \rightarrow +\infty \), we have

$$\begin{aligned} \log {\text {lcm}}({\mathcal {A}}) \sim \frac{\alpha \log (1/\alpha )}{1 - \alpha } \cdot n , \end{aligned}$$

with probability \(1 - o(1)\), where the factor involving \(\alpha \) is meant to be equal to 1 for \(\alpha = 1\).

Remark 1.1

In the deterministic case \(\alpha = 1\), we have \({\mathcal {A}} = \{1, \ldots , n\}\) (surely) and Theorem 1.1 corresponds to (1).
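To illustrate the model \({\mathcal {B}}(n, \alpha )\) and Theorem 1.1 numerically, one can run a small Monte Carlo experiment. The following Python sketch is only an illustration (parameters and sample sizes are ad hoc, not taken from the references):

```python
import math
import random

def sample_set(n, alpha, rng):
    """Each k in {1,...,n} is kept independently with probability alpha."""
    return {k for k in range(1, n + 1) if rng.random() < alpha}

def log_lcm(A):
    """log lcm of a finite set of positive integers (lcm of the empty set is 1)."""
    L = 1
    for k in A:
        L = L * k // math.gcd(L, k)
    return math.log(L)

rng = random.Random(0)
n, alpha = 2000, 0.5
samples = [log_lcm(sample_set(n, alpha, rng)) for _ in range(10)]
prediction = alpha * math.log(1 / alpha) / (1 - alpha) * n
print(sum(samples) / len(samples) / prediction)  # close to 1
```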

Let q be an indeterminate. The q-analog of a positive integer k is defined by

$$\begin{aligned}{}[k]_q := 1 + q + q^2 + \cdots + q^{k - 1} \in {\mathbb {Z}}[q] . \end{aligned}$$

The q-analogs of many other mathematical objects (factorial, binomial coefficients, hypergeometric series, derivative, integral...) have been extensively studied, especially in Analysis and Combinatorics [2, 11]. For every set \({\mathcal {S}}\) of positive integers, let \([{\mathcal {S}}]_q := \big \{[k]_q : k \in {\mathcal {S}}\big \}\).
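For concreteness, a q-analog can be represented by its coefficient list. The short Python sketch below (our illustration, not part of the paper) checks that \([k]_q\) has degree \(k - 1\) and specializes to k at \(q = 1\):

```python
def q_analog(k):
    """Coefficient list of [k]_q = 1 + q + ... + q^{k-1}, lowest degree first."""
    return [1] * k

def eval_poly(coeffs, x):
    """Evaluate a polynomial given by its coefficient list at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))

# [5]_q has degree 4, [5]_1 = 5, and [5]_2 = 1 + 2 + 4 + 8 + 16 = 31
print(len(q_analog(5)) - 1, eval_poly(q_analog(5), 1), eval_poly(q_analog(5), 2))
```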

The aim of this paper is to study the least common multiple of the elements of \([{\mathcal {A}}]_q\) for a random set \({\mathcal {A}}\) in \({\mathcal {B}}(n, \alpha )\). Our main results are the following:

Theorem 1.2

Let \({\mathcal {A}}\) be a random set in \({\mathcal {B}}(n, \alpha )\) and put \(X := \deg {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big )\). Then, for every integer \(n \ge 2\) and every \(\alpha \in [0,1]\), we have

$$\begin{aligned} {\mathbb {E}}[X] = \frac{3}{\pi ^2} \cdot \frac{\alpha {\text {Li}}_2(1 - \alpha )}{1 - \alpha } \cdot n^2 + O\!\left( \alpha n (\log n)^2 \right) , \end{aligned}$$
(2)

where \({\text {Li}}_2(z) := \sum _{k=1}^\infty z^k / k^2\) is the dilogarithm and the factor involving \(\alpha \) is meant to be equal to 1 when \(\alpha = 1\). In particular,

$$\begin{aligned} {\mathbb {E}}[X] \sim \frac{3}{\pi ^2} \cdot \frac{\alpha {\text {Li}}_2(1 - \alpha )}{1 - \alpha } \cdot n^2 , \end{aligned}$$

as \(n \rightarrow +\infty \), uniformly for \(\alpha \in [0,1]\).
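The main term of Theorem 1.2 is easy to evaluate numerically from the series defining \({\text {Li}}_2\). The following Python sketch (an illustration, with an ad hoc truncation of the series) also checks the classical closed form \({\text {Li}}_2(1/2) = \pi ^2/12 - (\log 2)^2/2\) and the boundary value 1 at \(\alpha = 1\):

```python
import math

def dilog(z, terms=5000):
    """Truncated series Li_2(z) = sum_{k>=1} z^k / k^2 (adequate away from z = 1)."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

def main_factor(alpha):
    """alpha * Li_2(1 - alpha) / (1 - alpha), extended by the value 1 at alpha = 1."""
    if alpha == 1:
        return 1.0
    return alpha * dilog(1 - alpha) / (1 - alpha)

# Closed form Li_2(1/2) = pi^2/12 - (log 2)^2/2 gives main_factor(1/2) exactly
print(main_factor(0.5), math.pi**2 / 12 - math.log(2)**2 / 2)
print(main_factor(0.999))  # close to the boundary value 1
```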

Theorem 1.3

Let \({\mathcal {A}}\) be a random set in \({\mathcal {B}}(n, \alpha )\) and put \(X := \deg {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big )\). Then there exists a function \(\mathrm {v} : {(0,1)} \rightarrow {\mathbb {R}}^+\) such that, as \(\alpha n / \big ((\log n)^3 (\log \log n)^2\big ) \rightarrow +\infty \), we have

$$\begin{aligned} {\mathbb {V}}[X] = (\mathrm {v}(\alpha ) + o(1)) \, n^3 . \end{aligned}$$
(3)

Moreover, the upper bound

$$\begin{aligned} {\mathbb {V}}[X] \ll \alpha n^3 , \end{aligned}$$
(4)

holds for every positive integer n and every \(\alpha \in [0, 1]\).

As a consequence of Theorems 1.2 and 1.3, we obtain the following q-analog of Theorem 1.1.

Theorem 1.4

Let \({\mathcal {A}}\) be a random set in \({\mathcal {B}}(n, \alpha )\). Then, as \(\alpha n \rightarrow +\infty \), we have

$$\begin{aligned} \deg {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big ) \sim \frac{3}{\pi ^2} \cdot \frac{\alpha {\text {Li}}_2(1 - \alpha )}{1 - \alpha } \cdot n^2 , \end{aligned}$$

with probability \(1 - o(1)\), where the factor involving \(\alpha \) is meant to be equal to 1 for \(\alpha = 1\).

Remark 1.2

In the deterministic case \(\alpha = 1\), we have (see Lemma 4.1 below)

$$\begin{aligned} \deg {\text {lcm}}\!\big ([\{1, 2, \ldots , n\}]_q\big ) = \sum _{1 \,<\, d \,\le \, n} \varphi (d) , \end{aligned}$$

and Theorem 1.4 corresponds to the well-known asymptotic formula \(\sum _{d \le n} \varphi (d) \sim \tfrac{3}{\pi ^2} n^2\) (Lemma 3.3 below) for the sum of the first values of the Euler function \(\varphi \).

Remark 1.3

In Theorem 1.4 the condition \(\alpha n \rightarrow +\infty \) is necessary. Indeed, if \(\alpha n \le C\), for some constant \(C > 0\), then

$$\begin{aligned} {\mathbb {P}}[{\mathcal {A}} = \varnothing ] = (1 - \alpha )^n \ge \left( 1 - \frac{C}{n}\right) ^n \rightarrow \mathrm {e}^{-C} \end{aligned}$$

as \(n \rightarrow +\infty \), and so no (nontrivial) asymptotic formula for \(\deg {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big )\) can hold with probability \(1 - o(1)\).

We conclude this section with some possible questions for further research on this topic. Alsmeyer, Kabluchko, and Marynych [1, Corollary 1.5] proved that, for fixed \(\alpha \in [0, 1]\) and for a random set \({\mathcal {A}}\) in \({\mathcal {B}}(n, \alpha )\), an appropriate normalization of the random variable \(\log {\text {lcm}}({\mathcal {A}})\) converges in distribution to a standard normal random variable, as \(n \rightarrow +\infty \). In light of Theorems 1.2 and 1.3, it is then natural to ask whether the random variable

$$\begin{aligned} \frac{\deg {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big ) - \frac{3}{\pi ^2}\cdot \frac{\alpha {\text {Li}}_2(1 - \alpha )}{1 - \alpha }\cdot n^2 }{\sqrt{\mathrm {v}(\alpha )n^3}} \end{aligned}$$

converges in distribution to a normal random variable, or to some other random variable.

Another problem could be to consider polynomial values, in analogy with the aforementioned results for integers, by studying \({\text {lcm}}\!\big ([f(1)]_q, \ldots , [f(n)]_q\big )\) for \(f \in {\mathbb {Z}}[x]\) or, more generally, \({\text {lcm}}\!\big ([f(k)]_q : k \in {\mathcal {A}}\big )\) with \({\mathcal {A}}\) a random set in \({\mathcal {B}}(n, \alpha )\).

2 Notation

We employ the Landau–Bachmann “Big Oh” and “little oh” notations O and o, as well as the associated Vinogradov symbol \(\ll \), with their usual meanings. Any dependence of the implied constants is explicitly stated or indicated with subscripts. For real random variables X and Y, depending on some parameters, we say that “\(X \sim Y\) with probability \(1 - o(1)\)”, as the parameters tend to some limit, if for every \(\varepsilon > 0\) we have \({\mathbb {P}}\big [\,|X - Y| > \varepsilon |Y|\,\big ] = o_\varepsilon (1)\), as the parameters tend to the limit. We let (ab) and [ab] denote the greatest common divisor and the least common multiple, respectively, of two integers a and b. As usual, we write \(\varphi (n)\), \(\mu (n)\), \(\tau (n)\), and \(\sigma (n)\) for the Euler totient function, the Möbius function, the number of divisors, and the sum of divisors of a positive integer n, respectively.

3 Preliminaries

In this section we collect some preliminary results needed in later arguments.

Lemma 3.1

We have

$$\begin{aligned} \sum _{m \,\le \, x} \tau (m) \ll x \log x , \end{aligned}$$

for every \(x \ge 2\).

Proof

See, e.g., [20, Part I, Theorem 3.2]. \(\square \)

Lemma 3.2

We have

$$\begin{aligned} \sum _{[e_1\!,\, e_2] \,>\, x} \frac{1}{e_1 e_2 [e_1, e_2]} \ll \frac{\log x}{x} , \end{aligned}$$

for every \(x \ge 2\).

Proof

From Lemma 3.1 and partial summation, it follows that

$$\begin{aligned} \sum _{m \,>\, x} \frac{\tau (m)}{m^2}&= \left[ \frac{\sum _{m \le t} \tau (m)}{t^2}\right] _{t\,=\, x}^{+\infty } + 2\int _x^{+\infty } \frac{\sum _{m \le t} \tau (m)}{t^3}\,\mathrm {d} t \\&\ll \int _x^{+\infty } \frac{\log t}{t^2}\,\mathrm {d} t = \left[ -\frac{\log t + 1}{t}\right] _{t \,=\, x}^{+\infty } \ll \frac{\log x}{x} . \end{aligned}$$

Let \(e := (e_1, e_2)\) and \(e_i^\prime := e_i / e\) for \(i=1,2\). Then we have

$$\begin{aligned} \sum _{[e_1\!,\, e_2] \,>\, x} \frac{1}{e_1 e_2 [e_1, e_2]}&\le \sum _{e \,\ge \, 1} \frac{1}{e^3} \sum _{e_1^\prime e_2^\prime \,>\, x / e} \frac{1}{(e_1^\prime e_2^\prime )^2} = \sum _{e \,\ge \, 1} \frac{1}{e^3} \sum _{m \,>\, x / e} \frac{\tau (m)}{m^2} \\&\ll \sum _{e \,\le \, x / 2} \frac{1}{e^3} \frac{\log (x/e)}{x/e} + \sum _{e \,>\, x / 2} \frac{1}{e^3} \ll \frac{\log x}{x} + \frac{1}{x^2} \ll \frac{\log x}{x} , \end{aligned}$$

as desired. \(\square \)

Let us define

$$\begin{aligned} \Phi (x) := \sum _{n \,\le \, x} \varphi (n) \quad \text { and }\quad \Phi (a_1, a_2; x) := \sum _{n \,\le \, x} \varphi (a_1 n)\, \varphi (a_2 n) , \end{aligned}$$

for every \(x \ge 1\) and for all positive integers \(a_1, a_2\).

Lemma 3.3

We have

$$\begin{aligned} \Phi (x) = \frac{3}{\pi ^2} \, x^2 + O(x \log x) , \end{aligned}$$

for every \(x \ge 2\).

Proof

See, e.g., [20, Part I, Theorem 3.4]. \(\square \)
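Lemma 3.3 can be checked numerically with a totient sieve. The following Python sketch (ours, with an arbitrary cutoff) compares \(\Phi (x)\) with \(\tfrac{3}{\pi ^2} x^2\):

```python
import math

def totients_up_to(n):
    """Euler's phi for 1..n via a sieve (index 0 is unused)."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:  # p is prime
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

x = 10**5
tot = totients_up_to(x)
Phi = sum(tot[1:])
main = 3 / math.pi**2 * x**2
print(abs(Phi - main) / (x * math.log(x)))  # stays bounded, as in Lemma 3.3
```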

Lemma 3.4

We have

$$\begin{aligned} \Phi (a_1, a_2; x) = C_1(a_1, a_2) \, x^3 + O\big (\sigma (a_1)\,\sigma (a_2) \,x^2 (\log x)^2\big ) , \end{aligned}$$
(5)

for every \(x \ge 2\), where

$$\begin{aligned} C_1(a_1, a_2) := \frac{a_1 a_2}{3}\sum _{d_1\!,\, d_2 \,\ge \, 1} \frac{\mu (d_1)\mu (d_2)}{d_1 d_2 \big [d_1 / (a_1, d_1), d_2 / (a_2, d_2)\big ]} \end{aligned}$$
(6)

and the series is absolutely convergent.

Proof

From the identity \(\varphi (n) / n = \sum _{d \,\mid \, n} \mu (d) / d\), it follows that

$$\begin{aligned} \sum _{n \,\le \, x} \frac{\varphi (a_1 n)}{a_1 n} \,\frac{\varphi (a_2 n)}{a_2 n}&= \sum _{n \,\le \, x} \left( \sum _{d_1 \,\mid \, a_1 n} \frac{\mu (d_1)}{d_1} \sum _{d_2 \,\mid \, a_2 n} \frac{\mu (d_2)}{d_2} \right) \\&= \sum _{\begin{array}{c} d_1 \,\le \, a_1 x \\ d_2 \,\le \, a_2 x \end{array}} \frac{\mu (d_1)}{d_1} \, \frac{\mu (d_2)}{d_2}\, \#\big \{n \le x : d_1 \mid a_1 n \text { and } d_2 \mid a_2 n \big \} \\&= \sum _{\left[ \frac{d_1}{(a_1\!,\, d_1)},\, \frac{d_2}{(a_2,\, d_2)}\right] \,\le \, x} \frac{\mu (d_1)}{d_1} \, \frac{\mu (d_2)}{d_2} \left( \frac{x}{\big [d_1 / (a_1, d_1), d_2 / (a_2, d_2)\big ]} + O(1)\right) . \end{aligned}$$

Let \(c_i := (a_i, d_i)\) and \(e_i := d_i / c_i\), for \(i=1,2\). On the one hand, we have

$$\begin{aligned} E_1 := \sum _{\left[ \frac{d_1}{(a_1\!,\, d_1)},\, \frac{d_2}{(a_2,\, d_2)}\right] \,\le \, x} \frac{1}{d_1 d_2} \le \sum _{c_1 \,\mid \, a_1} \frac{1}{c_1} \sum _{c_2 \,\mid \, a_2} \frac{1}{c_2} \sum _{e_1 \,\le \, x} \frac{1}{e_1} \sum _{e_2 \,\le \, x} \frac{1}{e_2}\ll \frac{\sigma (a_1)\,\sigma (a_2)}{a_1 a_2} \, (\log x)^2 . \end{aligned}$$

On the other hand, thanks to Lemma 3.2, we have

$$\begin{aligned} E_2&:= \sum _{\left[ \frac{d_1}{(a_1\!,\, d_1)},\, \frac{d_2}{(a_2,\, d_2)}\right] \,>\, x} \frac{1}{d_1 d_2 \big [d_1 / (a_1, d_1), d_2 / (a_2, d_2)\big ]} \\&\le \sum _{c_1 \,\mid \, a_1} \frac{1}{c_1} \sum _{c_2 \,\mid \, a_2} \frac{1}{c_2} \sum _{[e_1\!,\, e_2] \,>\, x} \frac{1}{e_1 e_2 [e_1, e_2]} \ll \frac{\sigma (a_1)\,\sigma (a_2)}{a_1 a_2} \,\frac{\log x}{x} , \end{aligned}$$

which, in particular, implies that the series

$$\begin{aligned} C_0(a_1, a_2) := \sum _{d_1\!,\, d_2 \,\ge \, 1} \frac{\mu (d_1)\mu (d_2)}{d_1 d_2 [d_1 / (a_1, d_1), d_2 / (a_2, d_2)]} \end{aligned}$$

is absolutely convergent. Therefore, we obtain

$$\begin{aligned} \sum _{n \,\le \, x} \frac{\varphi (a_1 n)}{a_1 n} \,\frac{\varphi (a_2 n)}{a_2 n}&= \big (C_0(a_1, a_2) + O(E_2)\big )\, x + O(E_1) \nonumber \\&= C_0(a_1, a_2) \, x + O\!\left( \frac{\sigma (a_1)\,\sigma (a_2)}{a_1 a_2} \,(\log x)^2\right) . \end{aligned}$$
(7)

Now (5) follows from (7) by partial summation and since \(C_1(a_1, a_2) = \dfrac{a_1 a_2}{3}\,C_0(a_1, a_2)\). \(\square \)

Remark 3.1

The obvious bound \(\varphi (m) \le m\) yields \(C_1(a_1, a_2) \le \dfrac{a_1 a_2}{3}\) (which is not so obvious from (6)).
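The constant \(C_1(a_1, a_2)\) can be approximated by truncating the series (6) and compared with the empirical ratio \(\Phi (a_1, a_2; x) / x^3\). The Python sketch below (truncation and cutoff chosen ad hoc, for illustration only) does this for \(a_1 = a_2 = 1\):

```python
import math

def totients_up_to(n):
    """Euler's phi for 1..n via a sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

def mobius_up_to(n):
    """Moebius function for 1..n via a linear sieve."""
    mu = [1] * (n + 1)
    primes, is_comp = [], [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

def C1_truncated(a1, a2, D):
    """Partial sum of the series (6) over d1, d2 <= D."""
    mu = mobius_up_to(D)
    total = 0.0
    for d1 in range(1, D + 1):
        if mu[d1] == 0:
            continue
        e1 = d1 // math.gcd(a1, d1)
        for d2 in range(1, D + 1):
            if mu[d2] == 0:
                continue
            e2 = d2 // math.gcd(a2, d2)
            l = e1 * e2 // math.gcd(e1, e2)
            total += mu[d1] * mu[d2] / (d1 * d2 * l)
    return a1 * a2 / 3 * total

trunc = C1_truncated(1, 1, 600)
x = 50000
phi = totients_up_to(x)
empirical = sum(phi[m] * phi[m] for m in range(1, x + 1)) / x**3
print(trunc, empirical)  # two estimates of C_1(1, 1), agreeing to a few percent
```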

We end this section with an easy observation that will be useful later.

Remark 3.2

We have \(1 - (1 - x)^k \le k x\) for all \(x \in [0, 1]\) and for all integers \(k \ge 0\).

4 Proofs

Henceforth, let \({\mathcal {A}}\) be a random set in \({\mathcal {B}}(n, \alpha )\), let \([{\mathcal {A}}]_q\) be its q-analog, and put \(L := {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big )\) and \(X := \deg L\). For every positive integer d, let us define

$$\begin{aligned} I_{{\mathcal {A}}}(d) := {\left\{ \begin{array}{ll} 1 &{} \text { if } d \mid k \text { for some } k \in {\mathcal {A}}; \\ 0 &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$

The following lemma gives a formula for X in terms of \(I_{{\mathcal {A}}}\) and the Euler function.

Lemma 4.1

We have

$$\begin{aligned} X = \sum _{1 \,<\, d \,\le \, n} \varphi (d)\, I_{{\mathcal {A}}}(d) . \end{aligned}$$
(8)

Proof

For every positive integer k, it holds

$$\begin{aligned}{}[k]_q = \frac{q^k - 1}{q - 1} = \prod _{\begin{array}{c} d \,\mid \!\; k \\ d \,>\, 1 \end{array}} \Phi _d(q) , \end{aligned}$$

where \(\Phi _d(q)\) is the dth cyclotomic polynomial. Since, as is well known, every cyclotomic polynomial is irreducible over \({\mathbb {Q}}\), it follows that L is the product of the polynomials \(\Phi _d(q)\) such that \(d > 1\) and \(d \mid k\) for some \(k \in {\mathcal {A}}\). Finally, the equality \(\deg \!\big (\Phi _d(q)\big ) = \varphi (d)\) and the definition of \(I_{{\mathcal {A}}}\) yield (8). \(\square \)
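Lemma 4.1 can be verified on small sets by computing the polynomial least common multiple over \({\mathbb {Q}}\) with exact rational arithmetic. The following self-contained Python sketch (a naive Euclidean algorithm, for illustration only) compares \(\deg {\text {lcm}}\!\big ([{\mathcal {A}}]_q\big )\) with the right-hand side of (8):

```python
from fractions import Fraction

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return r

def pdivmod(a, b):
    """Quotient and remainder of polynomial division by a nonzero polynomial b."""
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b):
        shift = len(a) - len(b)
        c = a[-1] / b[-1]
        q[shift] = c
        for i, y in enumerate(b):
            a[i + shift] -= c * y
        a.pop()  # the leading coefficient is now zero
        while a and a[-1] == 0:
            a.pop()
    return q, a

def pgcd(a, b):
    """Monic gcd via the Euclidean algorithm."""
    while b:
        a, b = b, pdivmod(a, b)[1]
    return [c / a[-1] for c in a]

def deg_lcm_q(A):
    """Degree of lcm([k]_q : k in A), with lcm of the empty family equal to 1."""
    L = [Fraction(1)]
    for k in A:
        f = [Fraction(1)] * k                   # [k]_q
        L = pdivmod(pmul(L, f), pgcd(L, f))[0]  # L <- L * f / gcd(L, f)
    return len(L) - 1

def phi(d):
    """Euler's totient by trial division."""
    r, m, p = d, d, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            r -= r // p
        p += 1
    if m > 1:
        r -= r // m
    return r

A = {3, 4, 6, 10}
rhs = sum(phi(d) for d in range(2, max(A) + 1) if any(k % d == 0 for k in A))
print(deg_lcm_q(A), rhs)  # both equal 15
```

The update `L <- L * f / gcd(L, f)` uses the identity \(\deg [f, g] = \deg f + \deg g - \deg (f, g)\) implicitly, so the final degree matches the sum of \(\varphi (d)\) over the divisors \(d > 1\) of the elements of \({\mathcal {A}}\).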

Let \(\beta := 1 - \alpha \). The next lemma provides two expected values involving \(I_{{\mathcal {A}}}\).

Lemma 4.2

For all positive integers \(d, d_1, d_2\), we have

$$\begin{aligned} {\mathbb {E}}\big [I_{\mathcal {A}}(d)\big ] = 1 - \beta ^{\lfloor n / d\rfloor } \end{aligned}$$
(9)

and

$$\begin{aligned} {\mathbb {E}}\big [I_{\mathcal {A}}(d_1)I_{\mathcal {A}}(d_2)\big ] = 1 - \beta ^{\lfloor n / d_1 \rfloor } - \beta ^{\lfloor n / d_2 \rfloor } + \beta ^{\lfloor n / d_1 \rfloor + \lfloor n / d_2 \rfloor - \lfloor n / [d_1\!,\, d_2] \rfloor } . \end{aligned}$$

Proof

On the one hand, by the definition of \(I_{{\mathcal {A}}}\), we have

$$\begin{aligned} {\mathbb {E}}\big [I_{{\mathcal {A}}}(d)\big ] = {\mathbb {P}}\big [\exists k \in {\mathcal {A}} : d \mid k\big ] = 1 - {\mathbb {P}}\left[ \bigwedge _{m \,\le \, \lfloor n / d\rfloor } (dm \notin {\mathcal {A}})\right] = 1 - \beta ^{\lfloor n / d \rfloor } , \end{aligned}$$

which is (9). On the other hand, by linearity of the expectation and by (9), we have

$$\begin{aligned} {\mathbb {E}}\big [I_{\mathcal {A}}(d_1)I_{\mathcal {A}}(d_2)\big ]&= {\mathbb {E}}\big [I_{\mathcal {A}}(d_1) + I_{\mathcal {A}}(d_2) - 1 + \big (1 - I_{\mathcal {A}}(d_1)\big )\big (1 - I_{\mathcal {A}}(d_2)\big )\big ] \\&= {\mathbb {E}}\big [I_{\mathcal {A}}(d_1)\big ] + {\mathbb {E}}\big [I_{\mathcal {A}}(d_2)\big ] - 1 + {\mathbb {E}}\big [\big (1 - I_{\mathcal {A}}(d_1)\big )\big (1 - I_{\mathcal {A}}(d_2)\big )\big ] \\&= 1 - \beta ^{\lfloor n / d_1 \rfloor } - \beta ^{\lfloor n / d_2 \rfloor } + {\mathbb {E}}\big [\big (1 - I_{\mathcal {A}}(d_1)\big )\big (1 - I_{\mathcal {A}}(d_2)\big )\big ] , \end{aligned}$$

where the last expected value can be computed as

$$\begin{aligned} {\mathbb {E}}\big [\big (1 - I_{\mathcal {A}}(d_1)\big ) \big (1 - I_{\mathcal {A}}(d_2)\big )\big ]&= {\mathbb {P}}\big [\forall k \in {\mathcal {A}} : d_1 \not \mid k \text { and } d_2 \not \mid k\big ] \\&= {\mathbb {P}}\left[ \bigwedge _{\begin{array}{c} k \,\le \, n \\ d_1 \,\mid \, k \text { or } d_2 \,\mid \, k \end{array}}(k \notin {\mathcal {A}})\right] = \beta ^{\lfloor n / d_1 \rfloor + \lfloor n / d_2 \rfloor - \lfloor n / [d_1\!,\, d_2] \rfloor } , \end{aligned}$$

and the second claim follows. \(\square \)
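Formula (9) is straightforward to test by simulation. The Python sketch below (illustrative parameters, not part of the proofs) compares the empirical frequency of the event \(I_{\mathcal {A}}(d) = 1\) with \(1 - \beta ^{\lfloor n / d \rfloor }\):

```python
import random

rng = random.Random(1)
n, alpha, d, trials = 30, 0.3, 4, 20000
beta = 1 - alpha

hits = 0
for _ in range(trials):
    A = {k for k in range(1, n + 1) if rng.random() < alpha}
    if any(k % d == 0 for k in A):  # the indicator I_A(d)
        hits += 1

empirical = hits / trials
exact = 1 - beta ** (n // d)  # formula (9)
print(round(empirical, 3), round(exact, 3))  # close
```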

We are ready to compute the expected value of X.

Proof of Theorem 1.2

From Lemmas 4.1 and 4.2, it follows that

$$\begin{aligned} {\mathbb {E}}[X] = \sum _{1 \,<\, d \,\le \, n} \varphi (d)\, {\mathbb {E}}\big [I_{{\mathcal {A}}}(d)\big ] = \sum _{1 \,<\, d \,\le \, n} \varphi (d) \big (1 - \beta ^{\lfloor n / d \rfloor }\big ) . \end{aligned}$$
(10)

Moreover, since \(\lfloor n / d \rfloor = j\) if and only if \(n / (j + 1) < d \le n / j\), we get that

$$\begin{aligned} \sum _{d \,\le \, n} \varphi (d) \big (1 - \beta ^{\lfloor n / d \rfloor }\big )&= \sum _{j \,\le \, n} (1 - \beta ^j) \sum _{n / (j + 1) \,<\, d \,\le \, n / j} \varphi (d) \nonumber \\&= \sum _{j \,\le \, n} (1 - \beta ^j) \!\left( \Phi \!\left( \frac{n}{j}\right) - \Phi \!\left( \frac{n}{j + 1}\right) \right) \nonumber \\&= \alpha \sum _{j \,\le \, n} \beta ^{j - 1} \Phi \!\left( \frac{n}{j}\right) \nonumber \\&= \frac{3}{\pi ^2} \cdot \alpha \sum _{j \,\le \, n} \frac{\beta ^{j-1}}{j^2} \cdot n^2 + O\!\left( \alpha \sum _{j \,\le \, n} \frac{n}{j}\log \!\left( \frac{n}{j}\right) \right) \nonumber \\&= \frac{3}{\pi ^2} \cdot \frac{\alpha {\text {Li}}_2(1 - \alpha )}{1 - \alpha } \cdot n^2 + O \big (\alpha n (\log n)^2\big ) , \end{aligned}$$
(11)

where we used Lemma 3.3. Putting together (10) and (11), and noting that, by Remark 3.2, the term of the sum in (11) corresponding to \(d = 1\) is \(1 - \beta ^n = O(\alpha n)\), we get (2). The proof is complete. \(\square \)
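As a numerical check of the proof above (an illustration with ad hoc parameters), one can evaluate the exact expectation (10) with a totient sieve and compare it with the main term of (2):

```python
import math

def totients_up_to(n):
    """Euler's phi for 1..n via a sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

def expected_deg(n, alpha):
    """Exact E[X] from (10)."""
    beta = 1 - alpha
    phi = totients_up_to(n)
    return sum(phi[d] * (1 - beta ** (n // d)) for d in range(2, n + 1))

def dilog(z, terms=2000):
    """Truncated series for Li_2(z)."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

n, alpha = 20000, 0.4
main = 3 / math.pi**2 * alpha * dilog(1 - alpha) / (1 - alpha) * n**2
ratio = expected_deg(n, alpha) / main
print(ratio)  # approaches 1 as n grows
```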

Now we consider the variance of X.

Proof of Theorem 1.3

From Lemmas 4.1 and 4.2, it follows that

$$\begin{aligned} {\mathbb {V}}[X]&= {\mathbb {E}}\big [X^2\big ] - {\mathbb {E}}[X]^2 \nonumber \\&= \sum _{1 \,<\, d_1\!,\, d_2 \,\le \, n} \varphi (d_1)\,\varphi (d_2) \Big ({\mathbb {E}}\big [I_{{\mathcal {A}}}(d_1)\, I_{{\mathcal {A}}}(d_2)\big ] - {\mathbb {E}}\big [I_{{\mathcal {A}}}(d_1)\big ]\,{\mathbb {E}}\big [I_{{\mathcal {A}}}(d_2)\big ]\Big ) \nonumber \\&= \sum _{1 \,<\, d_1\!,\, d_2 \,\le \, n} \varphi (d_1)\,\varphi (d_2) \, \beta ^{\lfloor n / d_1 \rfloor + \lfloor n / d_2 \rfloor - \lfloor n / [d_1, d_2] \rfloor } \big (1 - \beta ^{\lfloor n / [d_1, d_2] \rfloor } \big ) . \end{aligned}$$
(12)

Let us define

$$\begin{aligned} V_n(\alpha ) := \frac{1}{n^3}\sum _{d_1\!,\, d_2 \,\le \, n} \varphi (d_1)\,\varphi (d_2) \, \beta ^{\lfloor n / d_1 \rfloor + \lfloor n / d_2 \rfloor - \lfloor n / [d_1, d_2] \rfloor } \big (1 - \beta ^{\lfloor n / [d_1, d_2] \rfloor } \big ) . \end{aligned}$$

Clearly, we have

$$\begin{aligned} V_n(\alpha ) - \frac{{\mathbb {V}}[X]}{n^3} \ll \frac{1}{n^3}\sum _{d \,\le \, n} \varphi (d) \, \beta ^{n} \big (1 - \beta ^{\lfloor n / d \rfloor } \big ) \le \frac{1}{n^3}\sum _{d \,\le \, n} d \ll \frac{1}{n}. \end{aligned}$$

Hence, in order to prove (3), it suffices to show that \(V_n(\alpha ) = \mathrm {v}(\alpha ) + o(1)\).

For all vectors \(\varvec{a} := (a_1, a_2)\) and \(\varvec{j} := (j_1, j_2, j_3)\) with components in the set of positive integers, define the quantities

$$\begin{aligned} \rho _1(\varvec{a}, \varvec{j}) := \max \!\left( \frac{1}{a_1(j_1 + 1)}, \frac{1}{a_2(j_2 + 1)}, \frac{1}{a_1 a_2 (j_3 + 1)}\right) \end{aligned}$$

and

$$\begin{aligned} \rho _2(\varvec{a}, \varvec{j}) := \min \!\left( \frac{1}{a_1 j_1}, \frac{1}{a_2 j_2}, \frac{1}{a_1 a_2 j_3}\right) . \end{aligned}$$

Let \(d := (d_1, d_2)\) and \(a_i := d_i / d\) for \(i=1,2\). Then the equalities

$$\begin{aligned} j_1 = \left\lfloor \frac{n}{d_1}\right\rfloor , \quad j_2 = \left\lfloor \frac{n}{d_2}\right\rfloor , \quad j_3 = \left\lfloor \frac{n}{[d_1, d_2]}\right\rfloor , \end{aligned}$$

are equivalent to

$$\begin{aligned} j_1 \le \frac{n}{a_1 d}< j_1 + 1 , \quad j_2 \le \frac{n}{a_2 d}< j_2 + 1 , \quad j_3 \le \frac{n}{a_1 a_2 d} < j_3 + 1 , \end{aligned}$$

which in turn are equivalent to

$$\begin{aligned} \frac{n}{a_1 (j_1 + 1)}< d \le \frac{n}{a_1 j_1} , \quad \frac{n}{a_2 (j_2 + 1)}< d \le \frac{n}{a_2 j_2} , \quad \frac{n}{a_1 a_2 (j_3 + 1)} < d \le \frac{n}{a_1 a_2 j_3} , \end{aligned}$$

that is,

$$\begin{aligned} \rho _1(\varvec{a}, \varvec{j})\, n < d \le \rho _2(\varvec{a}, \varvec{j})\, n . \end{aligned}$$

Therefore, letting

$$\begin{aligned} {\mathcal {S}}_n := \big \{(\varvec{a}, \varvec{j}) \in {\mathbb {N}}^5 : (a_1, a_2) = 1,\; \exists d \in {\mathbb {N}} \text { s.t. }\! \rho _1(\varvec{a}, \varvec{j})\, n < d \le \rho _2(\varvec{a}, \varvec{j})\, n \big \} \end{aligned}$$

and

$$\begin{aligned} S(\varvec{a}, \varvec{j}; n) := \frac{1}{n^3} \sum _{\rho _1(\varvec{a},\, \varvec{j})\, n \,<\, d \,\le \, \rho _2(\varvec{a},\, \varvec{j})\, n} \varphi (a_1 d) \, \varphi (a_2 d) , \end{aligned}$$

we have

$$\begin{aligned} V_n(\alpha ) = \sum _{(\varvec{a},\, \varvec{j}) \,\in \, {\mathcal {S}}_n} \beta ^{j_1 + j_2 - j_3} (1 - \beta ^{j_3}) \,S(\varvec{a}, \varvec{j}; n) . \end{aligned}$$

Now let us define

$$\begin{aligned} \mathrm {v}(\alpha ) := \sum _{(\varvec{a},\, \varvec{j}) \,\in \, {\mathcal {S}}_\infty } \beta ^{j_1 + j_2 - j_3} (1 - \beta ^{j_3}) \, D(\varvec{a}, \varvec{j}) , \end{aligned}$$
(13)

where

$$\begin{aligned} {\mathcal {S}}_\infty := \bigcup _{m \,\ge \, 1} {\mathcal {S}}_m = \big \{(\varvec{a}, \varvec{j}) \in {\mathbb {N}}^5 : (a_1, a_2) = 1,\, \rho _1(\varvec{a}, \varvec{j}) < \rho _2(\varvec{a}, \varvec{j}) \big \} \end{aligned}$$

and

$$\begin{aligned} D(\varvec{a}, \varvec{j}) := C_1(a_1, a_2) \big (\rho _2(\varvec{a},\, \varvec{j})^3 - \rho _1(\varvec{a},\, \varvec{j})^3\big ) , \end{aligned}$$

recalling that \(C_1(a_1, a_2)\) is defined by (6). The convergence of series (13) follows easily from Remark 3.1, \(\rho _2(\varvec{a}, \varvec{j}) \le 1 / (a_1 a_2 j_3)\), and the fact that \(\min (j_1, j_2) \ge j_3\) for all \((\varvec{a}, \varvec{j}) \in {\mathcal {S}}_\infty \).

Thanks to Lemma 3.4, for each \((\varvec{a}, \varvec{j}) \in {\mathcal {S}}_n\) we have

$$\begin{aligned} S(\varvec{a}, \varvec{j}; n) = D(\varvec{a}, \varvec{j}) + O\!\left( \sigma (a_1)\,\sigma (a_2) \,\rho _2(\varvec{a}, \varvec{j})^2 \cdot \frac{(\log n)^2}{n}\right) . \end{aligned}$$

Consequently, we get that

$$\begin{aligned} V_n(\alpha ) = \mathrm {v}(\alpha ) - \Sigma _1 + O\!\left( \Sigma _2 \cdot \frac{(\log n)^2}{n}\right) , \end{aligned}$$
(14)

where

$$\begin{aligned} \Sigma _1 := \sum _{(\varvec{a},\, \varvec{j}) \,\in \, {\mathcal {S}}_\infty \!\setminus {\mathcal {S}}_n} \beta ^{j_1 + j_2 - j_3} (1 - \beta ^{j_3}) \, D(\varvec{a}, \varvec{j}) \end{aligned}$$

and

$$\begin{aligned} \Sigma _2 := \sum _{(\varvec{a},\, \varvec{j}) \,\in \, {\mathcal {S}}_n} \beta ^{j_1 + j_2 - j_3} (1 - \beta ^{j_3}) \, \sigma (a_1)\,\sigma (a_2) \,\rho _2(\varvec{a}, \varvec{j})^2 . \end{aligned}$$

Now we have to bound both \(\Sigma _1\) and \(\Sigma _2\).

If \((\varvec{a}, \varvec{j}) \in {\mathcal {S}}_\infty \setminus {\mathcal {S}}_n\) then \(\big (\rho _2(\varvec{a}, \varvec{j}) - \rho _1(\varvec{a}, \varvec{j})\big ) n < 1\) and consequently, also by Remark 3.1,

$$\begin{aligned} D(\varvec{a}, \varvec{j}) \ll a_1 a_2 \big (\rho _2^3 - \rho _1^3\big ) = a_1 a_2 \big (\rho _1^2 + \rho _1 \rho _2 + \rho _2^2\big )(\rho _2 - \rho _1) \ll \frac{a_1 a_2 \rho _2^2}{n} \le \frac{1}{a_1 a_2 j_3^2 n} , \end{aligned}$$
(15)

where, for brevity, we wrote \(\rho _i := \rho _i(\varvec{a}, \varvec{j})\) for \(i=1,2\).

If \((\varvec{a}, \varvec{j}) \in {\mathcal {S}}_\infty \) then, as we already noticed, \(\min (j_1, j_2) \ge j_3\) and, moreover,

$$\begin{aligned} \frac{j_2}{j_3 + 1}< a_1< \frac{j_2 + 1}{j_3} \quad \text { and }\quad \frac{j_1}{j_3 + 1}< a_2 < \frac{j_1 + 1}{j_3} . \end{aligned}$$

Hence, we have

$$\begin{aligned} \sum _{(\varvec{a},\, \varvec{j}) \,\in \, {\mathcal {S}}_\infty } \frac{\beta ^{j_1 + j_2 - j_3} (1 - \beta ^{j_3})}{a_1 a_2 j_3^2}&\le \sum _{j_3 \,\ge \, 1} \frac{1 - \beta ^{j_3}}{j_3^2} \sum _{j_1,\, j_2 \,\ge \, j_3} \beta ^{j_1 + j_2 - j_3} \sum _{\begin{array}{c} j_2 / (j_3 + 1) \,<\, a_1 \,<\, (j_2 + 1) / j_3 \\ j_1 / (j_3 + 1) \,<\, a_2 \,<\, (j_1 + 1) / j_3 \end{array}} \frac{1}{a_1 a_2} \nonumber \\&\ll \sum _{j_3 \,\ge \, 1} \frac{1 - \beta ^{j_3}}{j_3^2} \sum _{j_1,\, j_2 \,\ge \, j_3} \beta ^{j_1 + j_2 - j_3} = \frac{1}{\alpha ^2}\sum _{j \,\ge \, 1} \frac{(1 - \beta ^j)\beta ^j}{j^2} \nonumber \\&\le \frac{1}{\alpha } \sum _{j \,\le \, 1 / \alpha } \frac{1}{j} + \frac{1}{\alpha ^2}\sum _{j \,>\, 1 / \alpha } \frac{1}{j^2} \ll \frac{\log (1 / \alpha ) + 1}{\alpha } , \end{aligned}$$
(16)

where we used the inequality \(1 - \beta ^j \le \alpha j\), which follows from Remark 3.2.

On the one hand, from (15) and (16) it follows that

$$\begin{aligned} \Sigma _1 \ll \frac{\log (1 / \alpha ) + 1}{\alpha n} = o(1) , \end{aligned}$$
(17)

as \(\alpha n / \!\big ((\log n)^3 (\log \log n)^2\big ) \rightarrow +\infty \) (actually, \(\alpha n / \!\log n \rightarrow +\infty \) is sufficient).

On the other hand, from \(\rho _2(\varvec{a}, \varvec{j}) \le 1 / (a_1 a_2 j_3)\), (16), and the bound \(\sigma (m) \ll m \log \log m\) (see, e.g., [20, Part I, Theorem 5.7]) it follows that

$$\begin{aligned} \Sigma _2 \le \sum _{(\varvec{a},\, \varvec{j}) \,\in \, {\mathcal {S}}_n} \frac{\beta ^{j_1 + j_2 - j_3} (1 - \beta ^{j_3})}{a_1 a_2 j_3^2} \cdot \frac{\sigma (a_1)\,\sigma (a_2)}{a_1 a_2} \ll \frac{(\log (1/\alpha ) + 1) (\log \log n)^2}{\alpha } = o\!\left( \frac{n}{(\log n)^2}\right) , \end{aligned}$$
(18)

as \(\alpha n / \big ((\log n)^3 (\log \log n)^2\big ) \rightarrow +\infty \).

At this point, putting together (14), (17), and (18), we obtain \(V_n(\alpha ) = \mathrm {v}(\alpha ) + o(1)\), as desired. The proof of (3) is complete.

It remains only to prove the upper bound (4). From (12) it follows that

$$\begin{aligned} {\mathbb {V}}[X]&\le \sum _{[d_1\!,\, d_2] \,\le \, n} \varphi (d_1)\,\varphi (d_2) \, \beta ^{\lfloor n / d_1 \rfloor + \lfloor n / d_2 \rfloor - \lfloor n / [d_1, d_2] \rfloor } \big (1 - \beta ^{\lfloor n / [d_1, d_2] \rfloor } \big ) \\&\le \sum _{[d_1\!,\, d_2] \,\le \, n} d_1 d_2 \cdot \frac{\alpha n}{[d_1, d_2]} = \alpha n \sum _{[d_1\!,\, d_2] \,\le \, n} (d_1, d_2) \le \alpha n \sum _{d \,\le \, n} d \sum _{a_1 a_2 \,\le \, n / d} 1 \\&= \alpha n \sum _{d \,\le \, n} d \sum _{m \,\le \, n / d} \tau (m) \ll \alpha n^2 \sum _{d \,\le \, n} \log \!\left( \frac{n}{d}\right) = \alpha n^2 \big (n \log n - \log (n!)\big ) < \alpha n^3 , \end{aligned}$$

where we used Remark 3.2, Lemma 3.1, and the bound \(n! > (n / \mathrm {e})^n\). Thus (4) is proved. \(\square \)
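The exact expression (12) also allows a direct numerical check of the bound (4). The Python sketch below (small parameters, for illustration only) evaluates \({\mathbb {V}}[X]\) and compares it with \(\alpha n^3\):

```python
import math

def totients_up_to(n):
    """Euler's phi for 1..n via a sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

def variance_deg(n, alpha):
    """Exact V[X] from (12)."""
    beta = 1 - alpha
    phi = totients_up_to(n)
    V = 0.0
    for d1 in range(2, n + 1):
        f1 = n // d1
        for d2 in range(2, n + 1):
            l = d1 * d2 // math.gcd(d1, d2)
            if l > n:
                continue  # then floor(n / [d1, d2]) = 0 and the term vanishes
            f3 = n // l
            V += phi[d1] * phi[d2] * beta ** (f1 + n // d2 - f3) * (1 - beta ** f3)
    return V

n, alpha = 300, 0.5
V = variance_deg(n, alpha)
print(V, alpha * n**3)  # V[X] stays below alpha * n^3, as in (4)
```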

Proof of Theorem 1.4

By Chebyshev’s inequality and Theorems 1.2 and 1.3, we have

$$\begin{aligned} {\mathbb {P}}\big [\,|X - {\mathbb {E}}[X]| > \varepsilon \, {\mathbb {E}}[X] \,\big ] \le \frac{{\mathbb {V}}[X]}{\big (\varepsilon {\mathbb {E}}[X]\big )^2} \ll \frac{\alpha n^3}{(\varepsilon \alpha n^2)^2} = \frac{1}{\varepsilon ^2 \alpha n} = o_\varepsilon (1) , \end{aligned}$$

as \(\alpha n \rightarrow +\infty \). Hence, using again Theorem 1.2, we get

$$\begin{aligned} X \sim \frac{3}{\pi ^2} \cdot \frac{\alpha {\text {Li}}_2(1 - \alpha )}{1 - \alpha } \cdot n^2 , \end{aligned}$$

with probability \(1 - o(1)\), as \(\alpha n \rightarrow +\infty \). \(\square \)