1 Introduction and Main Results

We consider the Navier–Stokes equation for an incompressible fluid on the d-dimensional torus \(\mathbb T^d\), \(d \ge 2\), \(\mathbb T:= \mathbb R/ 2 \pi \mathbb Z\),

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u - \Delta u + u \cdot \nabla u + \nabla p = \varepsilon f(\omega t, x) \\ {{\,\mathrm{div}\,}}u = 0 \end{array}\right. } \end{aligned}$$
(1.1)

where \(\varepsilon \in (0, 1)\) is a small parameter, the frequency \(\omega = (\omega _1, \ldots , \omega _\nu ) \in \mathbb R^\nu \) is a \(\nu \)-dimensional vector and \(f : \mathbb T^\nu \times \mathbb T^d \rightarrow \mathbb R^d\) is a smooth quasi-periodic external force. The unknowns of the problem are the velocity field \(u = (u_1, \ldots , u_d) : \mathbb R\times \mathbb T^d \rightarrow \mathbb R^d\), and the pressure \(p : \mathbb R\times \mathbb T^d \rightarrow \mathbb R\). For convenience, we set the viscosity parameter in front of the laplacian equal to one. We assume that f has zero space-time average, namely

$$\begin{aligned} \int _{ \mathbb T^\nu \times \mathbb T^d} f(\varphi , x)\, d \varphi \, d x = 0\,. \end{aligned}$$
(1.2)
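A simple example of an admissible forcing term (given here only for illustration; the coefficients \(a_{\ell , j} \in \mathbb R\), the fixed vectors \(\mathrm{e}_{\ell , j} \in \mathbb R^d\) and the truncation \(K \in \mathbb N\) are arbitrary) is a trigonometric polynomial with no \((\ell , j) = (0, 0)\) harmonic,

$$\begin{aligned} f(\varphi , x) = \sum _{0 < |\ell | + |j| \le K} a_{\ell , j} \cos (\ell \cdot \varphi + j \cdot x)\, \mathrm{e}_{\ell , j}\,, \end{aligned}$$

for which (1.2) holds, since every harmonic with \((\ell , j) \ne (0, 0)\) has zero average over \(\mathbb T^\nu \times \mathbb T^d\); if in addition all the harmonics with \(j = 0\) are absent, then also (1.4) below holds.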

The purpose of the present paper is to show the existence and the stability of smooth quasi-periodic solutions of the Eq. (1.1). More precisely, we show that if f is a sufficiently regular vector field satisfying (1.2), for \(\varepsilon \) sufficiently small and for \(\omega \in \mathbb R^\nu \) Diophantine, i.e.

$$\begin{aligned} |\omega \cdot \ell | \ge \frac{\gamma }{|\ell |^\nu }, \quad \forall \ell \in \mathbb Z^\nu {\setminus } \{ 0 \}\,, \quad \text {for some } \gamma \in (0, 1), \end{aligned}$$
(1.3)

then the Eq. (1.1) admits smooth quasi-periodic solutions (which are also referred to as invariant tori) \(u_\omega (t, x) = U(\omega t, x)\), \(p_\omega (t, x) = P(\omega t, x)\), \(U : \mathbb T^\nu \times \mathbb T^d \rightarrow \mathbb R^d\), \(P : \mathbb T^\nu \times \mathbb T^d \rightarrow \mathbb R\) of size \(O(\varepsilon )\), oscillating with the same frequency \(\omega \in \mathbb R^\nu \) as the forcing term. If the forcing term has zero average in x, i.e.

$$\begin{aligned} \int _{\mathbb T^d} f(\varphi , x)\, d x= 0, \quad \forall \varphi \in \mathbb T^\nu \end{aligned}$$
(1.4)

then the result holds for any frequency vector \(\omega \in \mathbb R^\nu \), without requiring any non-resonance condition. Furthermore, we also show the orbital and the asymptotic stability of these quasi-periodic solutions in high Sobolev norms. More precisely, for any sufficiently regular initial datum which is \(\delta \)-close to the invariant torus (with respect to the \(H^s\) topology), the corresponding solution of (1.1) is global in time and satisfies the following properties.

  • Orbital stability: for all times \(t \ge 0\), the distance in \(H^s\) between the solution and the invariant torus is of order \(O(\delta )\).

  • Asymptotic stability: the solution converges asymptotically to the invariant torus in high Sobolev norm \(\Vert \cdot \Vert _{H^s_x}\) as \(t \rightarrow + \infty \), with a rate of convergence which is exponential, i.e. \(O(e^{- \alpha t})\), for any arbitrary \(\alpha \in (0, 1)\).

In order to state precisely our main results, we introduce some notations. For any vector \(a = (a_1, \ldots , a_p) \in \mathbb R^p\), we denote by |a| its Euclidean norm, namely \(|a| := \sqrt{a_1^2 + \cdots + a_p^2}\). Let \(d, n \in \mathbb N\) and let \(u \in L^2(\mathbb T^d, \mathbb R^n)\). Then u(x) can be expanded in Fourier series

$$\begin{aligned} u(x) = \sum _{\xi \in \mathbb Z^d} \widehat{u}(\xi ) e^{\mathrm{i} x \cdot \xi } \end{aligned}$$

where its Fourier coefficients \(\widehat{u}(\xi )\) are defined by

$$\begin{aligned} \widehat{u}(\xi ) := \frac{1}{(2 \pi )^d} \int _{\mathbb T^d} u(x) e^{- \mathrm{i} x \cdot \xi }\,d x, \quad \forall \xi \in \mathbb Z^d\,. \end{aligned}$$

For any \(s \ge 0\), we denote by \(H^s(\mathbb T^d, \mathbb R^n)\) the standard Sobolev space of functions \(u : \mathbb T^d \rightarrow \mathbb R^n\) equipped with the norm

$$\begin{aligned} \Vert u \Vert _{H^s_x} := \Big (\sum _{\xi \in \mathbb Z^d} \langle \xi \rangle ^{2 s} |\widehat{u}(\xi )|^2 \Big )^{\frac{1}{2}}, \quad \langle \xi \rangle := \mathrm{max}\{ 1, |\xi | \}\,. \end{aligned}$$
(1.5)

We also define the Sobolev space of functions with zero average

$$\begin{aligned} H^s_0(\mathbb T^d, \mathbb R^n) := \Big \{ u \in H^s(\mathbb T^d, \mathbb R^n) : \int _{\mathbb T^d} u(x)\, d x = 0 \Big \}\,. \end{aligned}$$
(1.6)

Moreover, given a Banach space \((X, \Vert \cdot \Vert _X)\) and an interval \(\mathcal{I} \subseteq \mathbb R\), we denote by \(\mathcal{C}^0_b(\mathcal{I}, X)\) the space of bounded, continuous functions \(u : \mathcal{I} \rightarrow X\), equipped with the sup-norm

$$\begin{aligned} \Vert u \Vert _{\mathcal{C}^0_t X} := \sup _{t \in \mathcal{I}} \Vert u(t) \Vert _X\,. \end{aligned}$$

For any integer \(k \ge 1\), \(\mathcal{C}^k_b(\mathcal{I}, X)\) is the space of k-times differentiable functions \(u : \mathcal{I} \rightarrow X\) with continuous and bounded derivatives equipped with the norm

$$\begin{aligned} \Vert u \Vert _{\mathcal{C}^k_t X} := \mathrm{max}_{n \le k} \Vert \partial _t^n u \Vert _{\mathcal{C}^0_t X}\,. \end{aligned}$$

In a similar way we define the spaces \(\mathcal{C}^0(\mathbb T^\nu , X)\), \(\mathcal{C}^k(\mathbb T^\nu , X)\), \(k \ge 1\) and the corresponding norms \(\Vert \cdot \Vert _{\mathcal{C}^0_\varphi X}\), \(\Vert \cdot \Vert _{\mathcal{C}^k_\varphi X}\) (where \(\mathbb T^\nu \) is the \(\nu \)-dimensional torus). We also denote by \(\mathcal{C}^N(\mathbb T^\nu \times \mathbb T^d, \mathbb R^d)\) the space of N-times continuously differentiable functions \(\mathbb T^\nu \times \mathbb T^d \rightarrow \mathbb R^d\) equipped with the standard \(\mathcal{C}^N\) norm \(\Vert \cdot \Vert _{\mathcal{C}^N}\).

Notation. Throughout the whole paper, the notation \(A \lesssim B\) means that there exists a constant C, which can depend on the number of frequencies \(\nu \), the dimension of the torus d, the constant \(\gamma \) appearing in the Diophantine condition (1.3) and on the \(\mathcal{C}^N\) norm of the forcing term \(\Vert f \Vert _{\mathcal{C}^N}\), such that \(A \le C B\). Given n positive real numbers \(s_1, \ldots , s_n > 0\), we write \(A \lesssim _{s_1, \ldots , s_n} B\) if there exists a constant \(C = C(s_1, \ldots , s_n) > 0\) (possibly depending also on \(d, \nu , \gamma , \Vert f \Vert _{\mathcal{C}^N}\)) such that \(A \le C B\).

We are now ready to state the main results of our paper.

Theorem 1.1

(Existence of quasi-periodic solutions) Let \(s > d/2 + 1\), \(N > \frac{3 \nu }{2} + s + 2\), let \(\omega \in \mathbb R^\nu \) be Diophantine (see (1.3)) and assume that the forcing term f is in \( \mathcal{C}^N(\mathbb T^\nu \times \mathbb T^d, \mathbb R^d)\) and satisfies (1.2). Then there exists \(\varepsilon _0 = \varepsilon _0(f, s, d, \nu ) \in (0, 1)\) small enough and a constant \(C = C(f, s, d, \nu ) > 0\) large enough such that for any \(\varepsilon \in (0, \varepsilon _0)\) there exist \(U \in \mathcal{C}^1(\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^d))\), \(P \in \mathcal{C}^0(\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R))\) satisfying

$$\begin{aligned} \int _{\mathbb T^\nu \times \mathbb T^d} U(\varphi , x)\, d \varphi \, d x = 0, \quad \int _{\mathbb T^d} P(\varphi , x)\, d x= 0, \quad \forall \varphi \in \mathbb T^\nu \end{aligned}$$

such that \((u_\omega (t, x), p_\omega ( t, x)) : = (U(\omega t, x), P(\omega t, x))\) solves the Navier–Stokes equation (1.1) and

$$\begin{aligned} \Vert U \Vert _{\mathcal{C}^1_\varphi H^s_x}, \Vert P \Vert _{\mathcal{C}^0_\varphi H^s_x} \le C \varepsilon \,. \end{aligned}$$

If the forcing term f has zero space average, i.e. it satisfies (1.4), then the same statement holds for any frequency vector \(\omega \in \mathbb R^\nu \) and \(U(\varphi , x)\) satisfies

$$\begin{aligned} \int _{\mathbb T^d} U(\varphi , x)\, d x = 0, \quad \forall \varphi \in \mathbb T^\nu \,. \end{aligned}$$

Theorem 1.2

(Stability) Let \(\alpha \in (0, 1)\), \(s > d/2 + 1\), \(N > \frac{3 \nu }{2} + s + 2\), \(u_\omega \), \(p_\omega \) be given in Theorem 1.1. Then there exists \(\delta = \delta (f , s, \alpha , d, \nu ) \in (0, 1)\) small enough and a constant \(C = C(f, s, \alpha , d, \nu ) > 0\) large enough such that for \(\varepsilon \le \delta \) and for any initial datum \(u_0 \in H^s(\mathbb T^d, \mathbb R^d)\) satisfying

$$\begin{aligned} \Vert u_0 - u_\omega (0, \cdot ) \Vert _{H^s_x} \le \delta , \quad \int _{\mathbb T^d} \Big ( u_0(x) - u_\omega (0, x) \Big )\, d x = 0 \end{aligned}$$

there exists a unique global solution \((u, p)\) of the Navier–Stokes equation (1.1) with initial datum \(u(0, x) = u_0(x)\) which satisfies

$$\begin{aligned} \begin{aligned}&u \in \mathcal{C}^0_b \Big ([0, + \infty ), H^s(\mathbb T^d, \mathbb R^d) \Big ) \cap \mathcal{C}^1_b \Big ([0, + \infty ), H^{s - 2}(\mathbb T^d, \mathbb R^d) \Big )\,, \quad \\&\quad p \in \mathcal{C}^0_b \Big ([0, + \infty ), H^s_0(\mathbb T^d, \mathbb R) \Big ), \\&\int _{\mathbb T^d}\Big ( u(t, x) - u_\omega (t, x) \Big )\, d x = 0\,, \quad \forall t \ge 0\,, \\&\Vert u(t, \cdot ) - u_\omega (t, \cdot ) \Vert _{H^s_x}\,,\, \Vert \partial _t u(t, \cdot ) - \partial _t u_\omega (t, \cdot ) \Vert _{H^{s - 2}_x}\,,\, \Vert p(t, \cdot ) - p_\omega (t, \cdot ) \Vert _{H^s_x} \le C \delta e^{- \alpha t} \end{aligned} \end{aligned}$$

for any \(t \ge 0\).

The investigation of the Navier–Stokes equation with a time periodic external force dates back to Serrin [40], Yudovich [41], Lions [30], Prodi [36] and Prouse [37]. In these papers the authors proved the existence of weak periodic solutions on bounded domains, oscillating with the same frequency as the external force. The existence of weak quasi-periodic solutions in dimension two has been proved by Prouse [38]. More recently these results have been extended to unbounded domains by Maremonti [27], Maremonti-Padula [28], Salvi [39] and then by Galdi [19, 20], Galdi-Silvestre [21], Galdi-Kyed [22] and Kyed [32]. We point out that in some of the aforementioned results no smallness assumptions on the forcing term are needed and therefore the periodic solutions obtained are not small in size, see for instance [28, 36,37,38,39,40,41]. The asymptotic stability of periodic solutions (also referred to as the attainability property) has also been investigated in [27, 28], but it is proved only with respect to the \(L^2\)-norm and the rate of convergence provided is \(O(t^{- \eta })\) for some constant \(\eta > 0\). More recently Galdi and Hishida [23] proved the asymptotic stability for the Navier–Stokes equation with a translation velocity term, by using Lorentz spaces, and they provided a rate of convergence which is essentially \(O(t^{- \frac{1}{2} + \varepsilon })\). In the present paper we consider the Navier–Stokes equation on the d-dimensional torus with a small, quasi-periodic in time external force. We show the existence of smooth quasi-periodic solutions (also referred to as invariant tori) of small amplitude and we prove their orbital and asymptotic stability in \(H^s\) for s large enough (at least larger than \(d/2 + 1\)). Furthermore, the rate of convergence to the invariant torus in \(H^s\), as \(t \rightarrow + \infty \), is of order \(O(e^{- \alpha t})\) for any arbitrary \(\alpha \in (0, 1)\). To the best of our knowledge, this is the first result of this kind.

It is also worth mentioning that the existence of quasi-periodic solutions, which is the subject of KAM (Kolmogorov–Arnold–Moser) theory, is a more difficult matter for dispersive and hyperbolic-type PDEs, due to the presence of the so-called small divisors problem. The study of time-periodic and quasi-periodic solutions of PDEs started in the late 1980s with the pioneering papers of Kuksin [33], Wayne [43] and Craig-Wayne [13], see also [31, 34] for generalizations to PDEs with unbounded nonlinearities. We refer to the recent review [7] for a complete list of references.

Many PDEs arising from fluid dynamics, like the water waves equations or the Euler equation, are fully nonlinear or quasi-linear equations (the nonlinear part contains as many derivatives as the linear part). The breakthrough idea, based on pseudo-differential calculus and micro-local analysis, for dealing with this kind of PDE was introduced by Iooss, Plotnikov and Toland [25] in the problem of finding periodic solutions of the water waves equation. The methods developed in [25], combined with a KAM-normal form procedure, have been used to develop a general method for PDEs in one space dimension, which allows one to construct quasi-periodic solutions of quasilinear and fully nonlinear PDEs, see [1, 2, 11, 17] and references therein. The extension of KAM theory to higher space dimension \(d > 1\) is a difficult matter due to the presence of very strong resonance phenomena, often related to the high multiplicity of the eigenvalues. The first breakthrough results in this direction (for equations with perturbations which do not contain derivatives) have been obtained by Eliasson and Kuksin [16] and by Bourgain [12] (see also Berti-Bolle [8, 9], Geng-Xu-You [24], Procesi-Procesi [35], Berti-Corsi-Procesi [10]).

Extending KAM theory to PDEs with unbounded perturbations in higher space dimension is one of the main open problems in the field. Up to now, this has been achieved only in a few examples, see [4,5,6, 15, 18, 29] and, recently, the 3D Euler equation [3], which is the most meaningful physical example.

For the Navier–Stokes equation, unlike in the aforementioned papers on KAM for PDEs, the existence of quasi-periodic solutions is not a small divisors problem and it can be obtained by a classical fixed point argument. This is due to the fact that the Navier–Stokes equation is a parabolic PDE and the presence of dissipation avoids the small divisors. In the same spirit, it is also worth mentioning [14, 42], in which the authors investigate quasi-periodic solutions of some PDEs with singular damping, where the small divisors problem is avoided thanks to the damping term. We also point out that the present paper is the first example in which the stability of invariant tori, in high Sobolev norms, is proved for all times (and it is in fact an asymptotic stability). This is possible since the presence of the dissipation allows one to prove strong time-decay estimates from which one deduces orbital and asymptotic stability. In the framework of dispersive and hyperbolic PDEs, the orbital stability of invariant tori is usually proved only for large, but finite, times by using normal form techniques. The first result in this direction has been proved in [26]. In the remaining part of the introduction, we sketch the main points of our proof.

As we already explained above, the absence of small divisors is due to the fact that the Navier–Stokes equation is a parabolic PDE. More precisely, this fact is related to the invertibility properties of the linear operator \(L_\omega := \omega \cdot \partial _\varphi - \Delta \) (where \(\omega \cdot \partial _\varphi := \sum _{i = 1}^\nu \omega _i \partial _{\varphi _i}\)) acting on Sobolev spaces of functions \(u(\varphi , x)\), \((\varphi , x) \in \mathbb T^\nu \times \mathbb T^d\) with zero average with respect to x. Since the eigenvalues of \(L_\omega \) are \(\mathrm{i} \omega \cdot \ell + |j|^2\), \(\ell \in \mathbb Z^\nu \), \(j \in \mathbb Z^d {\setminus } \{ 0 \}\), the inverse of \(L_\omega \) gains two space derivatives, see Lemma 3.2. This is sufficient to perform a fixed point argument on the map \(\Phi \) defined in (3.13), from which one deduces the existence of smooth quasi-periodic solutions of small amplitude. The asymptotic and orbital stability of the quasi-periodic solutions (which are constructed in Sect. 3) are proved in Sect. 4. More precisely, we show that for any initial datum \(u_0\) which is \(\delta \)-close to the quasi-periodic solution \(u_\omega (0, x)\) in \(H^s\) norm (and such that \(u_0 - u_\omega (0, \cdot )\) has zero average), there exists a unique solution \((u, p)\) such that

$$\begin{aligned} \Vert u (t, \cdot ) - u_\omega (t, \cdot ) \Vert _{H^s_x} = O(\delta e^{- \alpha t}), \quad \Vert p (t, \cdot ) - p_\omega (t, \cdot ) \Vert _{H^s_x} = O(\delta e^{- \alpha t})\,, \quad \alpha \in (0, 1) \end{aligned}$$

for any \(t \ge 0\). This is exactly the content of Theorem 1.2, which easily follows from Proposition 4.1. This Proposition is proved also by a fixed point argument on the nonlinear map \(\Phi \) defined in (4.31) in weighted Sobolev spaces \(\mathcal{E}_s\) (see (4.14)), defined by the norm

$$\begin{aligned} \Vert u \Vert _{\mathcal{E}_s} := \sup _{t \ge 0} e^{\alpha t} \Vert u(t, \cdot ) \Vert _{H^s_x} \end{aligned}$$

where \(\alpha \in (0, 1)\) is a fixed constant. The fixed point argument relies on some dispersive-type estimates for the heat propagator \(e^{t \Delta }\), which are proved in Sect. 4.1. The key estimates are the following.

  1.

    For any \(u_0 \in H^{s - 1}(\mathbb T^d, \mathbb R^d)\) with zero average and for any \(n \in \mathbb N\), \(\alpha \in (0, 1)\), \(t > 0\), one has

    $$\begin{aligned} \Vert e^{t \Delta } u_0 \Vert _{H^s_x} \le C(n, \alpha ) t^{- \frac{n}{2}} e^{- \alpha t} \Vert u_0 \Vert _{H^{s - 1}_x} \end{aligned}$$
    (1.7)

    for some constant \(C(n, \alpha ) > 0\) (see Lemma 4.2). This estimate states that the heat propagator gains one space derivative and decays in time like \(e^{- \alpha t} t^{- \frac{n}{2}}\). Note that, without gain of derivatives on \(u_0\), the exponential decay is stronger, namely \(e^{- t}\), see Lemma 4.2-(i).

  2.

    For any \(f \in \mathcal{E}_{s - 1}\)

    $$\begin{aligned} \Big \Vert \int _0^t e^{(t - \tau )\Delta } f(\tau , \cdot )\, d\tau \Big \Vert _{H^s_x} \le C(\alpha ) e^{- \alpha t} \Vert f \Vert _{\mathcal{E}_{s - 1}} \end{aligned}$$
    (1.8)

    for some constant \(C(\alpha ) > 0\) (see Proposition 4.5). This estimate states that the integral term which usually appears in the Duhamel formula (see (4.31)) gains one space derivative with respect to f(t, x) and keeps the same exponential decay in time as f(t, x).

We also remark that the constants \(C(n, \alpha )\), \(C(\alpha )\) appearing in the estimates (1.7), (1.8) tend to \(\infty \) when \(\alpha \rightarrow 1\). This is the reason why it is not possible to get a decay \(O(e^{- t})\) in the asymptotic stability estimate provided in Theorem 1.2.

The latter two estimates allow us to show in Proposition 4.9 that the map \(\Phi \) defined in (4.31) is a contraction. The proof of Theorem 1.2 is then easily concluded in Sect. 4.3.

It is also worth mentioning that our method does not cover the zero viscosity limit \(\mu \rightarrow 0\), where \(\mu \) is the usual viscosity parameter in front of the laplacian (which we set for convenience equal to one). Indeed, some constants in our estimates blow up as \(\mu \rightarrow 0\). Actually, it would be very interesting to study the singular perturbation problem for \(\mu \rightarrow 0\) and to see if one is able to recover the quasi-periodic solutions of the Euler equation constructed in [3].

As a concluding remark, we mention that the methods used in this paper also apply to other parabolic-type equations with some technical modifications. For instance, one could prove the existence of quasi-periodic solutions (Theorem 1.1) for a general fully nonlinear parabolic type equation of the form

$$\begin{aligned} \partial _t u - \Delta u + m u + N(x, u, \nabla u, \nabla ^2 u) = \varepsilon f(\omega t, x), \quad m > 0 \end{aligned}$$

where N is a smooth nonlinearity depending also on the second derivatives of u and which is at least quadratic with respect to \((u, \nabla u, \nabla ^2 u)\). Indeed, as we explained above, this can be done by a fixed point argument, by inverting the operator \(L_\omega := \omega \cdot \partial _\varphi - \Delta + m\). The inverse of this operator gains two space derivatives and hence it compensates for the loss of two space derivatives in the nonlinearity.
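In analogy with Lemma 3.2 below, the gain of two derivatives for this operator can be read off from its Fourier symbol: for every \((\ell , j) \in \mathbb Z^\nu \times \mathbb Z^d\) (now including \(j = 0\), thanks to \(m > 0\)) one has

$$\begin{aligned} |\mathrm{i}\, \omega \cdot \ell + |j|^2 + m| = \sqrt{|\omega \cdot \ell |^2 + (|j|^2 + m)^2} \ge |j|^2 + m \ge \min \{ 1, m \}\, \langle j \rangle ^2\,, \end{aligned}$$

so that \(L_\omega ^{- 1}\) maps \(H^\sigma \big (\mathbb T^\nu , H^s\big )\) into \(H^\sigma \big (\mathbb T^\nu , H^{s + 2}\big )\) without any non-resonance condition on \(\omega \).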

We prefer in this paper to focus on the Navier–Stokes equation for clarity of exposition and since it is a very important physical model.

2 Functional Spaces

In this section we collect some standard technical tools which will be used in the proof of our results. For \(u = (u_1, \ldots , u_n) \in H^s(\mathbb T^d, \mathbb R^n)\), one has

$$\begin{aligned} \Vert u \Vert _{H^s_x} \simeq \mathrm{max}_{i = 1, \ldots , n} \Vert u_i \Vert _{H^s_x}\,. \end{aligned}$$
(2.1)

The following standard algebra lemma holds.

Lemma 2.1

Let \(s > d/2\) and \(u, v \in H^s(\mathbb T^d, \mathbb R^n)\). Then \(u \cdot v \in H^s(\mathbb T^d, \mathbb R)\) (where \(\cdot \) denotes the standard scalar product on \(\mathbb R^n\)) and \(\Vert u \cdot v \Vert _{H^s_x} \lesssim _s \Vert u \Vert _{H^s_x} \Vert v \Vert _{H^s_x}\).

We also consider functions

$$\begin{aligned} \mathbb T^\nu \rightarrow L^2(\mathbb T^d, \mathbb R^n), \quad \varphi \mapsto u(\varphi , \cdot ) \end{aligned}$$

which are in \(L^2\Big (\mathbb T^\nu , L^2(\mathbb T^d, \mathbb R^n)\Big )\). We can write the Fourier series of a function \(u \in L^2\Big (\mathbb T^\nu , L^2(\mathbb T^d, \mathbb R^n)\Big )\) as

$$\begin{aligned} u(\varphi , \cdot ) = \sum _{\ell \in \mathbb Z^\nu } \widehat{u}(\ell , \cdot ) e^{\mathrm{i} \ell \cdot \varphi } \end{aligned}$$
(2.2)

where

$$\begin{aligned} \widehat{u}(\ell , \cdot ) := \frac{1}{(2 \pi )^\nu } \int _{\mathbb T^\nu }u(\varphi , \cdot ) e^{- \mathrm{i} \ell \cdot \varphi }\, d \varphi \in L^2(\mathbb T^d, \mathbb R^n), \quad \ell \in \mathbb Z^\nu \,. \end{aligned}$$
(2.3)

By expanding also the function \(\widehat{u}(\ell , \cdot )\) in Fourier series, we get

$$\begin{aligned} \begin{aligned}&\widehat{u}(\ell , x) = \sum _{j \in \mathbb Z^d} \widehat{u}(\ell , j) e^{\mathrm{i} j \cdot x}\,, \\&\widehat{u}(\ell , j) := \frac{1}{(2 \pi )^{\nu + d}} \int _{\mathbb T^{\nu + d}} u(\varphi , x) e^{- \mathrm{i} \ell \cdot \varphi } e^{- \mathrm{i} j \cdot x}\, d \varphi \, d x, \quad (\ell , j) \in \mathbb Z^\nu \times \mathbb Z^d \end{aligned} \end{aligned}$$
(2.4)

and hence we can write

$$\begin{aligned} u(\varphi , x) = \sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d} \widehat{u}(\ell , j) e^{\mathrm{i} \ell \cdot \varphi } e^{\mathrm{i} j \cdot x}\,. \end{aligned}$$
(2.5)

For any \(\sigma , s \ge 0\), we define the Sobolev space \(H^\sigma \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big )\) as the space of functions \(u \in L^2\Big (\mathbb T^\nu , L^2(\mathbb T^d, \mathbb R^n) \Big )\) for which the following norm is finite:

$$\begin{aligned} \Vert u \Vert _{\sigma , s} \equiv \Vert u \Vert _{H^\sigma _\varphi H^s_x} := \Big ( \sum _{\ell \in \mathbb Z^\nu } \langle \ell \rangle ^{2 \sigma } \Vert \widehat{u}(\ell ) \Vert _{H^s_x}^2 \Big )^{\frac{1}{2}} = \Big (\sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d} \langle \ell \rangle ^{2 \sigma } \langle j \rangle ^{2 s} |\widehat{u}(\ell , j)|^2 \Big )^{\frac{1}{2}}\,. \end{aligned}$$
(2.6)

Similarly to (2.1), one has that for \(u = (u_1, \ldots , u_n) \in H^\sigma \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big )\)

$$\begin{aligned} \Vert u \Vert _{\sigma , s} \simeq \mathrm{max}_{i = 1, \ldots , n} \Vert u_i \Vert _{\sigma , s}\,. \end{aligned}$$
(2.7)

If \(\sigma >\nu /2\), then

$$\begin{aligned} \begin{aligned}&H^\sigma \Big ( \mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big ) \quad \text {is compactly embedded in} \quad \mathcal{C}^0 \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big )\,, \\&\text {and} \quad \Vert u \Vert _{\mathcal{C}^0_\varphi H^s_x} \lesssim _\sigma \Vert u \Vert _{H^\sigma _\varphi H^s_x}\,. \end{aligned} \end{aligned}$$
(2.8)
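For completeness, we recall that the quantitative bound in (2.8) is an immediate consequence of the Cauchy–Schwarz inequality: for every \(\varphi \in \mathbb T^\nu \),

$$\begin{aligned} \Vert u(\varphi , \cdot ) \Vert _{H^s_x} \le \sum _{\ell \in \mathbb Z^\nu } \Vert \widehat{u}(\ell , \cdot ) \Vert _{H^s_x} \le \Big ( \sum _{\ell \in \mathbb Z^\nu } \langle \ell \rangle ^{- 2 \sigma } \Big )^{\frac{1}{2}} \Vert u \Vert _{H^\sigma _\varphi H^s_x}\,, \end{aligned}$$

and the first factor on the right-hand side is finite precisely because \(\sigma > \nu /2\).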

Moreover, the following standard algebra property holds.

Lemma 2.2

Let \(\sigma > \frac{\nu }{2}\), \(s > \frac{d}{2}\), \(u, v \in H^\sigma \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big )\). Then \(u \cdot v \in H^\sigma \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R) \Big )\) and \(\Vert u \cdot v \Vert _{\sigma , s} \lesssim _{\sigma , s} \Vert u \Vert _{\sigma , s} \Vert v \Vert _{\sigma , s}\).

For any \(u \in L^2(\mathbb T^d, \mathbb R^n)\) we define the orthogonal projections \(\pi _0\) and \(\pi _0^\bot \) as

$$\begin{aligned} \begin{aligned} \pi _0 u := \frac{1}{(2 \pi )^d} \int _{\mathbb T^d} u(x)\, d x = \widehat{u}(0) \quad \text {and} \quad \pi _0^\bot u := u - \pi _0 u= \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} \widehat{u}(\xi ) e^{\mathrm{i} x \cdot \xi }\,. \end{aligned} \end{aligned}$$
(2.9)

According to (2.9), (2.5), every function \(u \in L^2\Big (\mathbb T^\nu , L^2(\mathbb T^d, \mathbb R^n) \Big )\) can be decomposed as

$$\begin{aligned} \begin{aligned} u(\varphi , x)&= u_0(\varphi ) + u_\bot (\varphi , x)\,, \\ u_0(\varphi )&:= \pi _0 u(\varphi ) = \sum _{\ell \in \mathbb Z^\nu } \widehat{u}(\ell , 0) e^{\mathrm{i} \ell \cdot \varphi }\,, \\ u_\bot (\varphi , x)&:= \pi _0^\bot u(\varphi , x) = \sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d {\setminus } \{ 0 \}} \widehat{u}(\ell , j) e^{\mathrm{i} \ell \cdot \varphi } e^{\mathrm{i} j \cdot x}\,. \end{aligned} \end{aligned}$$
(2.10)

Clearly if \(u \in H^\sigma \Big ( \mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big )\), \(\sigma , s \ge 0\), then

$$\begin{aligned} \begin{aligned}&u_0 \in H^\sigma (\mathbb T^\nu , \mathbb R^n) \quad \text {and} \quad \Vert u_0 \Vert _\sigma \le \Vert u \Vert _{\sigma , 0} \le \Vert u \Vert _{\sigma , s}\,, \\&u_\bot \in H^\sigma \Big ( \mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R^n) \Big ) \quad \text {and} \quad \Vert u_\bot \Vert _{\sigma , s} \le \Vert u \Vert _{\sigma , s}\,, \\&\Vert u \Vert _{\sigma , s}^2 = \Vert u_0 \Vert _\sigma ^2 + \Vert u_\bot \Vert _{\sigma , s}^2\,. \end{aligned} \end{aligned}$$
(2.11)

We also prove the following lemma that we shall apply in Sect. 4.

Lemma 2.3

Let \(\sigma > \nu /2\), \(U \in H^\sigma \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^n) \Big )\) and \(\omega \in \mathbb R^\nu \). Defining \(u_\omega (t, x) := U(\omega t, x)\), \((t, x) \in \mathbb R\times \mathbb T^d\), one has that \(u_\omega \in \mathcal{C}^0_b\Big (\mathbb R, H^s(\mathbb T^d, \mathbb R^n) \Big )\) and \(\Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} \lesssim _\sigma \Vert U \Vert _{\sigma , s}\).

Proof

By the Sobolev embedding (2.8), and using that the map \(\mathbb R\rightarrow \mathbb T^\nu \), \(t \mapsto \omega t\) is continuous, one has that \(u_\omega \in \mathcal{C}^0_b\Big (\mathbb R, H^s(\mathbb T^d, \mathbb R^n) \Big )\) and

$$\begin{aligned} \Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} \le \Vert U \Vert _{\mathcal{C}^0_\varphi H^s_x} \lesssim _\sigma \Vert U \Vert _{H^\sigma _\varphi H^s_x}\,. \end{aligned}$$

2.1 Leray Projector and Some Elementary Properties of the Navier–Stokes Equation

We introduce the space of zero-divergence vector fields

$$\begin{aligned} \mathcal{D}_0(\mathbb T^d) := \Big \{ u \in L^2(\mathbb T^d, \mathbb R^d) : \mathrm{div}(u) = 0 \Big \} \end{aligned}$$
(2.12)

where clearly the divergence has to be interpreted in a distributional sense. The \(L^2\)-orthogonal projector on this subspace of \(L^2(\mathbb T^d, \mathbb R^d)\) is called the Leray projector and its explicit formula is given by

$$\begin{aligned} \begin{aligned}&{\mathfrak {L} } : L^2(\mathbb T^d, \mathbb R^d) \rightarrow \mathcal{D}_0(\mathbb T^d)\,, \\&{\mathfrak {L}}(u) := u + \nabla (- \Delta )^{- 1} \mathrm{div}(u) \end{aligned} \end{aligned}$$
(2.13)

where the inverse of the laplacian (on the space of zero average functions) \((- \Delta )^{- 1}\) is defined by

$$\begin{aligned} (- \Delta )^{- 1} u(x) := \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} \frac{1}{|\xi |^2} \widehat{u}(\xi ) e^{\mathrm{i} x \cdot \xi }\,. \end{aligned}$$
(2.14)

By expanding in Fourier series, the Leray projector \({\mathfrak {L}}\) can be written as

$$\begin{aligned} {\mathfrak {L}}(u)(x) = u(x) - \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} \frac{\xi }{|\xi |^2}\, \xi \cdot \widehat{u}(\xi )\, e^{\mathrm{i} x \cdot \xi }\,. \end{aligned}$$
(2.15)
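Formula (2.15) simply says that, on each Fourier mode \(\xi \ne 0\), the Leray projector acts as the orthogonal projection of \(\widehat{u}(\xi )\) onto the hyperplane \(\xi ^\bot \); in particular

$$\begin{aligned} \mathrm{i}\, \xi \cdot \widehat{{\mathfrak {L}}(u)}(\xi ) = \mathrm{i}\, \xi \cdot \widehat{u}(\xi ) - \mathrm{i}\, \frac{|\xi |^2}{|\xi |^2}\, \xi \cdot \widehat{u}(\xi ) = 0\,, \quad \forall \xi \in \mathbb Z^d {\setminus } \{ 0 \}\,, \end{aligned}$$

so that \(\mathrm{div}({\mathfrak {L}}(u)) = 0\) and \({\mathfrak {L}}(u) = u\) whenever \(\mathrm{div}(u) = 0\).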

From formula (2.15) one also deduces the following elementary properties of the Leray projector \({\mathfrak {L}}\). One has

$$\begin{aligned} \int _{\mathbb T^d} {\mathfrak {L}}(u)(x)\, d x = \int _{\mathbb T^d} u(x)\, d x, \quad \forall u \in L^2(\mathbb T^d, \mathbb R^d) \end{aligned}$$
(2.16)

and for any Fourier multiplier \(\Lambda \), \(\Lambda u(x) = \sum _{\xi \in \mathbb Z^d} \Lambda (\xi ) \widehat{u}(\xi ) e^{\mathrm{i} x \cdot \xi }\), the commutator

$$\begin{aligned}{}[{\mathfrak {L}}, \Lambda ] = {\mathfrak {L}} \Lambda - \Lambda {\mathfrak {L}} = 0\,. \end{aligned}$$
(2.17)

Moreover

$$\begin{aligned} \begin{aligned}&\Vert {\mathfrak {L}} (u) \Vert _{H^s_x} \lesssim \Vert u \Vert _{H^s_x}, \quad \forall u \in H^s(\mathbb T^d, \mathbb R^d)\,, \\&\Vert {\mathfrak {L}}(u) \Vert _{\sigma , s} \lesssim \Vert u \Vert _{\sigma , s}, \quad \forall u \in H^\sigma \Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^d) \Big ) \,. \end{aligned} \end{aligned}$$
(2.18)
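Indeed, as observed after (2.15), on each Fourier mode \({\mathfrak {L}}\) acts as an orthogonal projection, hence

$$\begin{aligned} |\widehat{{\mathfrak {L}}(u)}(\xi )| \le |\widehat{u}(\xi )|\,, \quad \forall \xi \in \mathbb Z^d\,, \end{aligned}$$

and both bounds in (2.18) follow directly from the definitions (1.5), (2.6) of the norms.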

For later purposes, we now prove the following Lemma.

Lemma 2.4

  (i)

    Let \(u, v \in H^1(\mathbb T^d, \mathbb R^d)\) and assume that \(\mathrm{div}(u) = 0\), then \(u \cdot \nabla v\), \({\mathfrak {L}}(u \cdot \nabla v)\) have zero average.

  (ii)

    Let \(\sigma > \nu /2\), \(s > d/2\), \(u \in H^\sigma \Big ( \mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^d) \Big )\), \(v \in H^\sigma \Big ( \mathbb T^\nu , H^{s + 1}(\mathbb T^d, \mathbb R^d) \Big )\). Then \(u \cdot \nabla v \in H^\sigma \Big ( \mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^d) \Big )\) and \(\Vert u \cdot \nabla v\Vert _{\sigma , s} \lesssim _{\sigma , s} \Vert u \Vert _{\sigma , s} \Vert v \Vert _{\sigma , s + 1}\).

Proof of (i)

By integrating by parts,

$$\begin{aligned} \begin{aligned} \int _{\mathbb T^d}{\mathfrak {L}}( u \cdot \nabla v)\, d x {\mathop {=}\limits ^{(2.16)}} \int _{\mathbb T^d} u \cdot \nabla v\, d x = - \int _{\mathbb T^d} \mathrm{div}(u) v\, d x = 0\,. \end{aligned} \end{aligned}$$

Proof of (ii)

For \(u = (u_1, \ldots , u_d)\), \(v = (v_1, \ldots , v_d)\), the vector field \(u \cdot \nabla v\) is given by

$$\begin{aligned} u \cdot \nabla v = \Big (u \cdot \nabla v_1, u \cdot \nabla v_2, \ldots , u \cdot \nabla v_d \Big )\,. \end{aligned}$$

Then the claimed statement follows by (2.7) and the algebra Lemma 2.2.

3 Construction of Quasi-Periodic Solutions

We look for quasi-periodic solutions \(u_\omega (t, x)\), \(p_\omega (t,x)\) of the Eq. (1.1), oscillating with frequency \(\omega = (\omega _1, \ldots , \omega _\nu ) \in \mathbb R^\nu \), namely we look for \(u_\omega (t, x) := U(\omega t, x)\), \(p_\omega (t, x) := P(\omega t, x)\) where \(U : \mathbb T^\nu \times \mathbb T^d \rightarrow \mathbb R^d\) and \(P : \mathbb T^\nu \times \mathbb T^d \rightarrow \mathbb R\) are smooth functions. This leads to the following functional equation for \(U(\varphi , x)\), \(P(\varphi , x)\):

$$\begin{aligned} {\left\{ \begin{array}{ll} \omega \cdot \partial _\varphi U - \Delta U + U \cdot \nabla U + \nabla P = \varepsilon f(\varphi , x) \\ {{\,\mathrm{div}\,}}U = 0\,. \end{array}\right. } \end{aligned}$$
(3.1)

Taking the divergence of the first equation in (3.1) and using that \({{\,\mathrm{div}\,}}U = 0\) (so that \(\mathrm{div}(\omega \cdot \partial _\varphi U) = \omega \cdot \partial _\varphi \, \mathrm{div}(U) = 0\), \(\mathrm{div}(\Delta U) = \Delta \, \mathrm{div}(U) = 0\) and \(\mathrm{div}(\nabla P) = \Delta P\)), one gets

$$\begin{aligned} \Delta P = \mathrm{div}\Big (\varepsilon f - U \cdot \nabla U \Big ) \end{aligned}$$
(3.2)

and by projecting on the space of zero divergence vector fields, one gets a closed equation for U of the form

$$\begin{aligned} \omega \cdot \partial _\varphi U - \Delta U + {\mathfrak {L}}(U \cdot \nabla U) = \varepsilon {\mathfrak {L}}(f), \quad U(\varphi , \cdot ) \in \mathcal{D}_0(\mathbb T^d) \end{aligned}$$
(3.3)

where we recall the definitions (2.12), (2.13). According to the splitting (2.10) and by applying the projectors \(\pi _0, \pi _0^\bot \) to the Eq. (3.3) one gets the decoupled equations

$$\begin{aligned} \omega \cdot \partial _\varphi U_0(\varphi ) = \varepsilon f_0(\varphi ) \end{aligned}$$
(3.4)

and

$$\begin{aligned} \omega \cdot \partial _\varphi U_\bot - \Delta U_\bot + {\mathfrak {L}}(U_\bot \cdot \nabla U_\bot ) = \varepsilon {\mathfrak {L}}(f_\bot )\,. \end{aligned}$$
(3.5)

Then, since \(\omega \) is Diophantine (see (1.3)) and using that

$$\begin{aligned} \int _{\mathbb T^\nu } f_0(\varphi )\, d \varphi = \int _{\mathbb T^\nu \times \mathbb T^d} f(\varphi , x)\, d \varphi \, d x {\mathop {=}\limits ^{(1.2)}} 0, \end{aligned}$$

(i.e. \(\widehat{f}(0, 0) = 0\)), the averaged equation (3.4) can be solved explicitly by setting

$$\begin{aligned} U_0(\varphi ) := \varepsilon \, (\omega \cdot \partial _\varphi )^{- 1} f_0(\varphi ) = \varepsilon \sum _{\ell \in \mathbb Z^\nu {\setminus } \{ 0 \}} \frac{\widehat{f}(\ell , 0)}{\mathrm{i} \omega \cdot \ell } e^{\mathrm{i} \ell \cdot \varphi }\,. \end{aligned}$$
(3.6)

By (2.11) and using (1.3), one gets the estimate

$$\begin{aligned} \Vert U_0 \Vert _\sigma \le \varepsilon \gamma ^{- 1} \Vert f_0 \Vert _{\sigma + \nu }\, \le \varepsilon \gamma ^{- 1} \Vert f \Vert _{\sigma + \nu , 0}. \end{aligned}$$
(3.7)
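Indeed, by (3.6) and (1.3), and since \(|\ell | = \langle \ell \rangle \) for \(\ell \ne 0\),

$$\begin{aligned} \Vert U_0 \Vert _\sigma ^2 = \varepsilon ^2 \sum _{\ell \in \mathbb Z^\nu {\setminus } \{ 0 \}} \langle \ell \rangle ^{2 \sigma } \frac{|\widehat{f}(\ell , 0)|^2}{|\omega \cdot \ell |^2} \le \varepsilon ^2 \gamma ^{- 2} \sum _{\ell \in \mathbb Z^\nu {\setminus } \{ 0 \}} \langle \ell \rangle ^{2 (\sigma + \nu )} |\widehat{f}(\ell , 0)|^2 \le \varepsilon ^2 \gamma ^{- 2} \Vert f_0 \Vert _{\sigma + \nu }^2\,. \end{aligned}$$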

Remark 3.1

(Non-resonance conditions) The Diophantine condition (1.3) on the frequency vector \(\omega \) is used only to solve the averaged equation (3.4). In order to solve the Eq. (3.5) on the space of zero average functions (with respect to x), no non-resonance conditions are required.
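We also recall the standard fact that (1.3) is a mild restriction: for any bounded ball \(B_R \subset \mathbb R^\nu \) and any \(\ell \in \mathbb Z^\nu {\setminus } \{ 0 \}\), the set of \(\omega \in B_R\) with \(|\omega \cdot \ell | < \gamma |\ell |^{- \nu }\) is contained in a slab of width \(2 \gamma |\ell |^{- \nu - 1}\) orthogonal to \(\ell \), hence

$$\begin{aligned} \mathrm{meas} \Big ( \bigcup _{\ell \in \mathbb Z^\nu {\setminus } \{ 0 \}} \Big \{ \omega \in B_R : |\omega \cdot \ell | < \frac{\gamma }{|\ell |^{\nu }} \Big \} \Big ) \lesssim _R \gamma \sum _{\ell \in \mathbb Z^\nu {\setminus } \{ 0 \}} \frac{1}{|\ell |^{\nu + 1}} \lesssim _R \gamma \,, \end{aligned}$$

so that, letting \(\gamma \rightarrow 0\), almost every \(\omega \in \mathbb R^\nu \) satisfies (1.3) for some \(\gamma \in (0, 1)\).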

We now solve the Eq. (3.5) by means of a fixed point argument. To this end, we need to analyze some invertibility properties of the linear operator

$$\begin{aligned} L_\omega := \omega \cdot \partial _\varphi - \Delta . \end{aligned}$$
(3.8)

Lemma 3.2

(Invertibility of \(L_\omega \)) Let \(\sigma , s \ge 0\), \(g \in H^\sigma \Big (\mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R^d) \Big )\) and assume that g has zero divergence. Then there exists a unique \(u := L_\omega ^{- 1} g \in H^\sigma \Big (\mathbb T^\nu , H^{s + 2}_0(\mathbb T^d, \mathbb R^d) \Big )\) with zero divergence which solves the equation \(L_\omega u = g\). Moreover

$$\begin{aligned} \Vert u \Vert _{\sigma , s + 2} \le \Vert g \Vert _{\sigma , s}\,. \end{aligned}$$
(3.9)

Proof

By (2.5), we can write

$$\begin{aligned} L_\omega u(\varphi , x) = \sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d {\setminus } \{ 0 \}} \big (\mathrm{i} \omega \cdot \ell + |j|^2 \big )\widehat{u}(\ell , j) e^{\mathrm{i} \ell \cdot \varphi } e^{\mathrm{i} j \cdot x}\,. \end{aligned}$$

Note that since \(j \ne 0\), one has that

$$\begin{aligned} |\mathrm{i} \omega \cdot \ell + |j|^2| = \sqrt{|\omega \cdot \ell |^2 + |j|^4} \ge |j|^2\,. \end{aligned}$$
(3.10)

Hence, the equation \(L_\omega u = g\) admits the unique solution with zero space average given by

$$\begin{aligned} u (\varphi , x) := L_\omega ^{- 1} g(\varphi , x) = \sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d {\setminus } \{ 0 \}} \dfrac{\widehat{g}(\ell , j)}{\mathrm{i} \omega \cdot \ell + |j|^2} e^{\mathrm{i} \ell \cdot \varphi } e^{\mathrm{i} j \cdot x} \end{aligned}$$
(3.11)

Clearly, if \(\mathrm{div}(g) = 0\), then also \(\mathrm{div}(u) = 0\). We now estimate \(\Vert u \Vert _{\sigma , s + 2}\). According to (2.6), (3.11), one has

$$\begin{aligned} \begin{aligned} \Vert u \Vert _{\sigma , s + 2}^2&= \sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d {\setminus } \{ 0 \}} \langle \ell \rangle ^{2 \sigma } \langle j \rangle ^{2(s + 2)}\dfrac{|\widehat{g}(\ell , j)|^2}{|\mathrm{i} \omega \cdot \ell + |j|^2|^2} \\&{\mathop {\le }\limits ^{(3.10)}} \sum _{\ell \in \mathbb Z^\nu } \sum _{j \in \mathbb Z^d {\setminus } \{ 0 \}} \langle \ell \rangle ^{2 \sigma } | j |^{2(s + 2)} |j|^{- 4}|\widehat{g}(\ell , j)|^2 = \Vert g \Vert _{\sigma , s}^2 \end{aligned} \end{aligned}$$

which proves the claimed statement.

We now implement the fixed point argument for the Eq. (3.5) (to simplify notation we write U instead of \(U_\bot \)). For any \(\sigma , s, R \ge 0\), we define the ball

$$\begin{aligned} \mathcal{B}_{\sigma , s}(R) := \Big \{ U \in H^\sigma \Big (\mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R^d) \Big ) : \mathrm{div}(U) = 0\,, \quad \Vert U \Vert _{\sigma , s} \le R\Big \}\,. \end{aligned}$$
(3.12)

We then define the nonlinear operator

$$\begin{aligned} \Phi (U) := L_\omega ^{- 1} {\mathfrak {L}}\Big (\varepsilon f - U \cdot \nabla U \Big ), \quad U \in \mathcal{B}_{\sigma , s}(R)\,. \end{aligned}$$
(3.13)

The following Proposition holds.

Proposition 3.3

(Contraction for \(\Phi \)) Let \(\sigma > \nu /2\), \(s > d/2 + 1\), \(f \in \mathcal{C}^N(\mathbb T^\nu \times \mathbb T^d, \mathbb R^d)\), \(N > \sigma + s - 2\). Then there exists a constant \(C_* = C_*(f , \sigma , s) > 0\) large enough and \(\varepsilon _0= \varepsilon _0(f , \sigma , s) \in (0, 1)\) small enough, such that for any \(\varepsilon \in (0, \varepsilon _0)\), the map \(\Phi : \mathcal{B}_{\sigma , s}(C_* \varepsilon ) \rightarrow \mathcal{B}_{\sigma , s}(C_* \varepsilon )\) is a contraction.

Proof

Let \(U \in \mathcal{B}_{\sigma , s} (C_* \varepsilon )\). We apply Lemmata 2.4-(i), 3.2 from which one immediately deduces that

$$\begin{aligned} \int _{\mathbb T^d } \Phi (U)\, d x = 0, \quad \mathrm{div}\big (\Phi (U) \big ) = 0\,. \end{aligned}$$
(3.14)

Moreover

$$\begin{aligned} \begin{aligned} \Vert \Phi (U) \Vert _{\sigma , s}&= \Big \Vert L_\omega ^{- 1} {\mathfrak {L}}\Big (\varepsilon f - U \cdot \nabla U \Big ) \Big \Vert _{\sigma , s} {\mathop {\lesssim }\limits ^{(2.18), (3.9)}} \Big \Vert \varepsilon f - U \cdot \nabla U \Big \Vert _{\sigma , s - 2} \\&\lesssim \varepsilon \Vert f \Vert _{\sigma , s - 2} + \Vert U \cdot \nabla U \Vert _{\sigma , s - 1} \,. \end{aligned} \end{aligned}$$

Note that since \(f \in \mathcal{C}^N\) with \(N > \sigma + s - 2\), one has that \(\Vert f \Vert _{\sigma , s - 2} \lesssim \Vert f \Vert _{\mathcal{C}^N}\). In view of Lemma 2.4-(ii), using that \(\sigma > \nu /2\), \(s - 1 > d/2\), one gets that

$$\begin{aligned} \Vert \Phi (U ) \Vert _{\sigma , s} \le C(f, s, \sigma ) \big (\varepsilon + \Vert U \Vert _{\sigma , s - 1}\Vert U \Vert _{\sigma , s} \big ) \le C(f, s, \sigma ) \big (\varepsilon + \Vert U \Vert _{\sigma , s}^2 \big ) \end{aligned}$$

for some constant \(C(f, s, \sigma ) > 0\). Using that \(\Vert U \Vert _{\sigma , s} \le C_* \varepsilon \), one gets that

$$\begin{aligned} \Vert \Phi (U ) \Vert _{\sigma , s} \le C(f, s, \sigma ) \varepsilon + C(f, s, \sigma ) C_*^2 \varepsilon ^2 \le C_* \varepsilon \end{aligned}$$

provided

$$\begin{aligned} C_* \ge 2 C(f, s, \sigma )\quad \text {and} \quad \varepsilon \le \frac{1}{2 C(f , s, \sigma ) C_*}\,. \end{aligned}$$

Hence \(\Phi : \mathcal{B}_{\sigma , s}(C_* \varepsilon ) \rightarrow \mathcal{B}_{\sigma , s}(C_* \varepsilon )\). Now let \(U_1, U_2 \in \mathcal{B}_{\sigma , s}(C_* \varepsilon )\); we estimate

$$\begin{aligned} \Phi (U_1) - \Phi (U_2) = L_\omega ^{- 1} {\mathfrak {L}} \Big ( U_1 \cdot \nabla U_1 - U_2 \cdot \nabla U_2 \Big )\,. \end{aligned}$$

One has that

$$\begin{aligned} \begin{aligned} \Vert \Phi (U_1) - \Phi (U_2) \Vert _{\sigma , s}&\le \Big \Vert L_\omega ^{- 1} {\mathfrak {L}} \Big (( U_1 - U_2)\cdot \nabla U_1 \Big ) \Big \Vert _{\sigma , s} + \Big \Vert L_\omega ^{- 1} {\mathfrak {L}} \Big (U_2\cdot \nabla (U_1 - U_2) \Big ) \Big \Vert _{\sigma , s} \\&{\mathop {\lesssim _{s, \sigma }}\limits ^{(2.18), (3.9)\,,Lemma\, 2.4}} \Vert U_1 - U_2 \Vert _{\sigma , s } \Vert U_1 \Vert _{\sigma , s } + \Vert U_1 - U_2 \Vert _{\sigma , s } \Vert U_2 \Vert _{\sigma , s } \\&\le C(s, \sigma )\big ( \Vert U_1 \Vert _{\sigma , s} + \Vert U_2 \Vert _{\sigma , s} \big ) \Vert U_1 - U_2 \Vert _{\sigma , s} \end{aligned} \end{aligned}$$

for some constant \(C(s, \sigma ) > 0\). Since \(U_1, U_2 \in \mathcal{B}_{\sigma , s}(C_* \varepsilon )\), one then has that

$$\begin{aligned} \Vert \Phi (U_1) - \Phi (U_2) \Vert _{\sigma , s} \le 2 C(s, \sigma ) C_* \varepsilon \Vert U_1 - U_2 \Vert _{\sigma , s} \le \frac{1}{2} \Vert U_1 - U_2 \Vert _{\sigma , s} \end{aligned}$$

provided \(\varepsilon \le \frac{1}{4 C(s, \sigma ) C_*}\,.\) Hence \(\Phi \) is a contraction.

3.1 Proof of Theorem 1.1

Proposition 3.3 implies that for \(\sigma > \nu /2\), \(s > \frac{d}{2} + 1\), there exists a unique \(U_\bot \in H^\sigma (\mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R^d))\), \(\Vert U_\bot \Vert _{\sigma , s} \lesssim _{\sigma , s} \varepsilon \) which is a fixed point of the map \(\Phi \) defined in (3.13). We fix \(\sigma := \nu /2 + 2\) and \(N > \frac{3 \nu }{2} + s + 2 \). By the Sobolev embedding property (2.8), since \(\sigma - 1 > \nu /2\), one gets that

$$\begin{aligned} \begin{aligned} U_\bot \in \mathcal{C}^1_b\Big (\mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R^d) \Big ), \quad \Vert U_\bot \Vert _{\mathcal{C}^1_\varphi H^s_x} \lesssim _s \varepsilon \end{aligned} \end{aligned}$$
(3.15)

and \(U_\bot \) is a solution of the Eq. (3.5). Similarly, by recalling (3.4), (3.6), (3.7), one gets that

$$\begin{aligned} U_0 \in \mathcal{C}^1 (\mathbb T^\nu , \mathbb R^d), \quad \Vert U_0 \Vert _{\mathcal{C}^1_\varphi } \le \varepsilon \gamma ^{- 1} \Vert f \Vert _{\frac{3 \nu }{2} + 2, 0} \le \varepsilon \gamma ^{- 1} \Vert f \Vert _{\mathcal{C}^N}, \quad \int _{\mathbb T^\nu } U_0(\varphi )\, d \varphi = 0 \end{aligned}$$
(3.16)

and \(U_0\) is a solution of the Eq. (3.4). Hence \(U = U_0 + U_\bot \in \mathcal{C}^1\Big ( \mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^d) \Big )\) is a solution of (3.3) and it satisfies \(\int _{\mathbb T^\nu \times \mathbb T^d} U(\varphi , x)\, d \varphi \, d x = 0\). The unique solution with zero average in x of the Eq. (3.2) is given by

$$\begin{aligned} P:= (- \Delta )^{- 1}\mathrm{div}\Big ( U \cdot \nabla U - \varepsilon f \Big )\,. \end{aligned}$$

Hence, \(P \in \mathcal{C}^0_b\Big (\mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R) \Big )\) and

$$\begin{aligned} \begin{aligned} \Vert P \Vert _{\mathcal{C}^0_\varphi H^s_x}&{\mathop {\lesssim }\limits ^{(2.8), \sigma = \nu /2 + 2}}\Vert P \Vert _{\sigma , s} \lesssim _{\sigma , s} \varepsilon \Vert f \Vert _{\sigma , s - 1} + \Vert U \cdot \nabla U \Vert _{\sigma , s - 1} \\&{\mathop {\lesssim _{\sigma , s}}\limits ^{Lemma\, 2.4}} \varepsilon \Vert f \Vert _{\sigma , s - 1} + \Vert U \Vert _{\sigma , s}^2\,. \end{aligned} \end{aligned}$$

The claimed estimate on P then follows since \( \Vert f \Vert _{\sigma , s - 1} \lesssim \Vert f \Vert _{\mathcal{C}^N}\), \(\Vert U \Vert _{\sigma , s} \le C_* \varepsilon \). Note that if f has zero average in x, one has that

$$\begin{aligned} f_0(\varphi ) = \pi _0 f(\varphi ) = \frac{1}{(2 \pi )^d} \int _{\mathbb T^d} f(\varphi , x)\, d x = 0\,, \quad \forall \varphi \in \mathbb T^\nu \,. \end{aligned}$$

The Eq. (3.4) reduces to \(\omega \cdot \partial _\varphi U_0 = 0\). Hence the only solution \(U = U_0 + U_\bot \) of (3.3) with zero average in x is the one where we choose \(U_0 = 0\) and hence \(U = U_\bot \). The claimed statement has then been proved.

4 Orbital and Asymptotic Stability

We now want to study the Cauchy problem for the Eq. (1.1) for initial data which are close to the quasi-periodic solution \((u_\omega , p_\omega )\), where

$$\begin{aligned} u_\omega (t, x) : = U(\omega t, x), \quad p_\omega (t, x) := P(\omega t, x) \end{aligned}$$
(4.1)

and the functions \(U \in \mathcal{C}^1\Big (\mathbb T^\nu , H^s(\mathbb T^d, \mathbb R^d) \Big )\), \(P\in \mathcal{C}^0\Big (\mathbb T^\nu , H^s_0(\mathbb T^d, \mathbb R) \Big )\) are given by Theorem 1.1. We then look for solutions which are perturbations of the quasi-periodic ones \((u_\omega , p_\omega )\), namely we look for solutions of the form

$$\begin{aligned} u(t, x) = u_\omega (t, x) + v( t, x), \quad p(t, x) = p_\omega ( t, x) + q(t, x)\,. \end{aligned}$$
(4.2)

Plugging the latter ansatz into the Eq. (1.1), one obtains an equation for v(t, x), q(t, x) of the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t v - \Delta v + u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v + \nabla q = 0 \\ \mathrm{div}(v) = 0\,. \end{array}\right. } \end{aligned}$$
(4.3)
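Indeed, inserting (4.2) into (1.1) and expanding the transport term as

$$\begin{aligned} u \cdot \nabla u = u_\omega \cdot \nabla u_\omega + u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v\,, \end{aligned}$$

all the terms involving only \((u_\omega , p_\omega )\) and the forcing \(\varepsilon f(\omega t, x)\) cancel, since \((u_\omega , p_\omega )\) solves (1.1), and one is left with (4.3).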

Taking the divergence of (4.3), we get the equation for the pressure q(t, x)

$$\begin{aligned} - \Delta q = \mathrm{div}\Big ( u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v\Big )\,. \end{aligned}$$
(4.4)

By using the Leray projector defined in (2.13), we then get a closed equation for v of the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t v - \Delta v + {\mathfrak {L}}\Big (u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v \Big )= 0 \\ \mathrm{div}(v) = 0\,. \end{array}\right. } \end{aligned}$$
(4.5)

We prove the following

Proposition 4.1

Let \(s > d/2 + 1\), \(\alpha \in (0, 1)\). Then there exists \(\delta = \delta (s, \alpha , d, \nu )\in (0, 1)\) small enough and \(C = C(s, \alpha , d, \nu ) > 0\) large enough, such that for any \(\varepsilon \in (0, \delta )\) and for any initial datum \(v_0 \in H^s_0(\mathbb T^d, \mathbb R^d)\) with \(\Vert v_0 \Vert _{H^s_x} \le \delta \), there exists a unique global solution

$$\begin{aligned} v \in \mathcal{C}^0_b \Big ( [0, + \infty ), H^s_0(\mathbb T^d, \mathbb R^d) \Big ) \cap \mathcal{C}^1_b\Big ( [0, + \infty ), H^{s - 2}_0(\mathbb T^d, \mathbb R^d) \Big ) \end{aligned}$$
(4.6)

of the Eq. (4.5) which satisfies

$$\begin{aligned} \Vert v (t, \cdot ) \Vert _{H^s_x}, \Vert \partial _t v (t, \cdot )\Vert _{H^{s - 2}_x}\le C\delta e^{- \alpha t}, \quad \forall t \ge 0\,. \end{aligned}$$
(4.7)

The proposition above will be proved by a fixed point argument in some weighted Sobolev spaces which encode the decay in time of the solutions we are looking for. In the next section we establish some decay estimates for the linear heat propagator which will be used in the proof of our result.

4.1 Dispersive Estimates for the Heat Propagator

In this section we analyze some properties of the heat propagator. We recall that the heat propagator is defined as follows. Consider the Cauchy problem for the heat equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u - \Delta u = 0 \\ u(0, x) = u_0(x), \end{array}\right. } \quad u_0 \in H^s_0(\mathbb T^d, \mathbb R^d)\,. \end{aligned}$$
(4.8)

It is well known that there exists a unique solution

$$\begin{aligned} u \in \mathcal{C}_b^0\Big ([0, + \infty ), H^s_0(\mathbb T^d, \mathbb R^d) \Big ) \cap \mathcal{C}_b^1\Big ([0, + \infty ), H^{s - 2}_0(\mathbb T^d, \mathbb R^d) \Big ) \end{aligned}$$

which can be written as \(u(t, x) := e^{t \Delta } u_0(x)\), namely

$$\begin{aligned} u(t, x) = e^{t \Delta } u_0(x) = \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} e^{- t |\xi |^2} \widehat{u}_0(\xi ) e^{\mathrm{i} x \cdot \xi }\,. \end{aligned}$$
(4.9)
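We recall that (4.9) is obtained by decoupling (4.8) into its Fourier modes: since \(u_0\) has zero average, writing \(u(t, x) = \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} \widehat{u}(t, \xi ) e^{\mathrm{i} x \cdot \xi }\), each Fourier coefficient solves

$$\begin{aligned} \partial _t \widehat{u}(t, \xi ) = - |\xi |^2\, \widehat{u}(t, \xi )\,, \quad \widehat{u}(0, \xi ) = \widehat{u}_0(\xi )\,, \quad \text {whence} \quad \widehat{u}(t, \xi ) = e^{- t |\xi |^2}\, \widehat{u}_0(\xi )\,. \end{aligned}$$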

Lemma 4.2

  (i)

    Let \(u_0 \in H^s_0(\mathbb T^d, \mathbb R^d)\). Then

    $$\begin{aligned} \Vert e^{t \Delta } u_0 \Vert _{H^s_x} \le e^{- t} \Vert u_0 \Vert _{H^s_x}, \quad \forall t \ge 0\,. \end{aligned}$$
    (4.10)
  (ii)

    Let \(u_0 \in H^{s - 1}_0(\mathbb T^d, \mathbb R^d)\). Then, for any integer \(n \ge 1\) and for any \(\alpha \in (0, 1)\),

    $$\begin{aligned} \Vert e^{t \Delta } u_0 \Vert _{H^{s }_x} \lesssim _n t^{- \frac{n}{2}} (1 - \alpha )^{- \frac{n}{2}} e^{- \alpha t}\Vert u_0 \Vert _{H^{s - n}_x} \lesssim _n t^{- \frac{n}{2}} (1 - \alpha )^{- \frac{n}{2}} e^{- \alpha t} \Vert u_0 \Vert _{H^{s - 1}_x}, \quad \forall t > 0\,. \end{aligned}$$
    (4.11)

Proof

Item (i) follows from (4.9), using that \(e^{- t |\xi |^2} \le e^{- t}\) for any \(t \ge 0\), \(\xi \in \mathbb Z^d {\setminus } \{ 0 \}\), since \(|\xi |^2 \ge 1\). We now prove item (ii). Let \(n \in \mathbb N\), \(\alpha \in (0, 1)\). One has

$$\begin{aligned} \begin{aligned} \Vert e^{t \Delta } u_0 \Vert _{H^{s}_x}^2&= \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} | \xi |^{2 s } e^{- 2 t |\xi |^2} |\widehat{u}_0(\xi )|^2 \\&= \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} e^{- 2 \alpha t |\xi |^2} | \xi |^{2 (s - n)} |\xi |^{2n} e^{- 2(1 - \alpha ) t |\xi |^2} |\widehat{u}_0(\xi )|^2\,. \end{aligned} \end{aligned}$$
(4.12)

Using that for any \(\xi \in \mathbb Z^d {\setminus } \{ 0 \}\), \(t \ge 0\), \(e^{- 2 \alpha t |\xi |^2} \le e^{- 2 \alpha t}\), by (4.12), one gets that

$$\begin{aligned} \Vert e^{t \Delta } u_0 \Vert _{H^{s}_x}^2 \le e^{- 2 \alpha t} \sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} | \xi |^{2 (s - n)} |\xi |^{2n} e^{- 2(1 - \alpha ) t |\xi |^2} |\widehat{u}_0(\xi )|^2\,. \end{aligned}$$
(4.13)

By Lemma A.1 (applied with \(\zeta = 2 (1 - \alpha ) t\)), one has that

$$\begin{aligned} \sup _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} |\xi |^{2n} e^{- 2(1 - \alpha ) t |\xi |^2} \le \sup _{y \ge 0} y^n e^{- 2(1 - \alpha ) t y} \le \frac{C(n)}{(1 - \alpha )^n t^n} \end{aligned}$$

for some constant \(C(n)> 0\). Therefore by (4.13), one gets that

$$\begin{aligned} \begin{aligned} \Vert e^{t \Delta } u_0 \Vert _{H^{s }_x}^2&\lesssim _n t^{- n} e^{- 2 \alpha t} (1 - \alpha )^{- n}\sum _{\xi \in \mathbb Z^d {\setminus } \{ 0 \}} | \xi |^{2 (s - n)} |\widehat{u}_0(\xi )|^2 \\&\lesssim _n t^{- n} e^{- 2 \alpha t} (1 - \alpha )^{- n}\Vert u_0 \Vert _{H^{s - n}_x}^2\,. \end{aligned} \end{aligned}$$

The second inequality in (4.11) clearly follows since \(\Vert \cdot \Vert _{H^{s - n}_x} \le \Vert \cdot \Vert _{H^{s - 1}_x}\) for \(n \ge 1\).
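For the reader's convenience, an explicit computation also shows that, for any \(\zeta > 0\) and any integer \(n \ge 1\), the function \(y \mapsto y^n e^{- \zeta y}\) attains its maximum at \(y = n/\zeta \), so that

$$\begin{aligned} \sup _{y \ge 0} y^n e^{- \zeta y} = \Big ( \frac{n}{e\, \zeta } \Big )^n\,; \end{aligned}$$

applied with \(\zeta = 2 (1 - \alpha ) t\), this gives the factor \(C(n) (1 - \alpha )^{- n} t^{- n}\) used above, with \(C(n) = (n/(2e))^n\).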

We fix \(\alpha \in (0, 1)\) and for any \(s \ge 0\), we define the space

$$\begin{aligned} \mathcal{E}_{s} := \Big \{ u \in \mathcal{C}_b^0\Big ([0, + \infty ), H^s_0(\mathbb T^d, \mathbb R^d) \Big ) : \Vert u \Vert _{\mathcal{E}_s} := \sup _{t \ge 0} e^{\alpha t} \Vert u(t) \Vert _{H^s_x} < + \infty \Big \}\,. \end{aligned}$$
(4.14)

Clearly

$$\begin{aligned} \begin{aligned}&\Vert \cdot \Vert _{\mathcal{E}_s} \le \Vert \cdot \Vert _{\mathcal{E}_{s'}}, \quad \forall s \le s'\,. \end{aligned} \end{aligned}$$
(4.15)

The following elementary lemma holds:

Lemma 4.3

  (i)

    Let \(u \in \mathcal{E}_s\). Then

    $$\begin{aligned} \begin{aligned}&\Vert u \Vert _{\mathcal{C}^0_t H^s_x} \lesssim \Vert u \Vert _{\mathcal{E}_s} \quad \text {and} \quad \Vert {\mathfrak {L}}(u) \Vert _{\mathcal{E}_s} \lesssim \Vert u \Vert _{\mathcal{E}_s}\,, \\&\Vert u (t) \Vert _{H^s_x} \le e^{- \alpha t} \Vert u \Vert _{\mathcal{E}_s}, \quad \forall t \ge 0\,. \end{aligned} \end{aligned}$$
    (4.16)
  (ii)

    Let \(s > d/2\), \(u \in \mathcal{E}_s\), \(v \in \mathcal{C}^0_b\Big ([0, + \infty ), H^{s + 1}(\mathbb T^d, \mathbb R^d) \Big )\), \(\mathrm{div}(u) = 0\). Then the product \(u \cdot \nabla v \in \mathcal{E}_s\) and

    $$\begin{aligned} \Vert u \cdot \nabla v \Vert _{\mathcal{E}_s} \lesssim _s \Vert u \Vert _{\mathcal{E}_s} \Vert v \Vert _{\mathcal{C}^0_t H^{s + 1}_x}\,. \end{aligned}$$
    (4.17)
  (iii)

    Let \(s > d/2\), \(u \in \mathcal{C}^0_b\Big ([0, + \infty ), H^s(\mathbb T^d, \mathbb R^d) \Big )\), \(\mathrm{div}(u) = 0\) and \(v \in \mathcal{E}_{s + 1}\). Then the product \(u \cdot \nabla v \in \mathcal{E}_s\) and

    $$\begin{aligned} \Vert u \cdot \nabla v \Vert _{\mathcal{E}_s} \lesssim _s \Vert u \Vert _{\mathcal{C}^0_t H^s_x} \Vert v \Vert _{\mathcal{E}_{s + 1}}\,. \end{aligned}$$
    (4.18)
  (iv)

    Let \(s> d/2\), \(u \in \mathcal{E}_s\), \(\mathrm{div}(u)= 0\), \(v \in \mathcal{E}_{s + 1}\). Then \(u \cdot \nabla v \in \mathcal{E}_{s}\) and

    $$\begin{aligned} \Vert u \cdot \nabla v \Vert _{\mathcal{E}_s} \lesssim _s \Vert u \Vert _{\mathcal{E}_s} \Vert v \Vert _{\mathcal{E}_{s + 1}}\,. \end{aligned}$$
    (4.19)

Proof

Item (i) is elementary and follows directly from the definition (4.14), recalling the estimate (2.18) on the Leray projector \({\mathfrak {L}}\). We prove item (ii). By Lemma 2.4-(i), one has that \(u(t) \cdot \nabla v(t)\) has zero average in x. Moreover

$$\begin{aligned} u \cdot \nabla v = \Big ( u \cdot \nabla v_1, \ldots , u \cdot \nabla v_d \Big ) \end{aligned}$$

therefore, since \(s > d/2\), by Lemma 2.1 and using (2.1) one has that for any \(t \in [0, + \infty )\) \(\Vert u (t) \cdot \nabla v(t) \Vert _{H^s_x} \lesssim _s \Vert u(t) \Vert _{H^s_x} \Vert v(t) \Vert _{H^{s + 1}_x}\) implying that

$$\begin{aligned} \begin{aligned} e^{\alpha t} \Vert u(t) \cdot \nabla v(t) \Vert _{H^s_x}&\lesssim _s e^{\alpha t} \Vert u(t) \Vert _{H^s_x} \Vert v(t) \Vert _{H^{s + 1}_x} \lesssim _s \Big ( \sup _{t \ge 0} e^{\alpha t} \Vert u(t) \Vert _{H^s_x} \Big ) \Big ( \sup _{t \ge 0} \Vert v(t) \Vert _{H^{s + 1}_x}\Big ) \\&\lesssim _s \Vert u \Vert _{\mathcal{E}_s} \Vert v \Vert _{\mathcal{C}^0_t H^{s + 1}_x}\,. \end{aligned} \end{aligned}$$
(4.20)

Passing to the supremum over \(t \ge 0\) in the left-hand side of (4.20), we get the claimed statement. Item (iii) follows by similar arguments and item (iv) follows by applying items (i) and (ii).

We now prove some estimates for the heat propagator \(e^{t \Delta }\) in the space \(\mathcal{E}_s\).

Lemma 4.4

Let \(s \ge 0\), \(u_0 \in H^s_0(\mathbb T^d, \mathbb R^d)\). Then \(\Vert e^{t \Delta } u_0 \Vert _{\mathcal{E}_s} \lesssim _\alpha \Vert u_0 \Vert _{H^s_x}\).

Proof

We have to estimate, uniformly in \(t \ge 0\), the quantity \(e^{\alpha t} \Vert e^{t \Delta } u_0 \Vert _{H^s_x}\). For \(t \in [0, 1]\), since \(e^{\alpha t} \le e^{\alpha } {\mathop {\le }\limits ^{\alpha < 1}} e\) and by applying Lemma 4.2-(i), one gets that

$$\begin{aligned} e^{\alpha t} \Vert e^{t \Delta } u_0 \Vert _{H^s_x} \lesssim \Vert e^{t \Delta } u_0 \Vert _{H^s_x} \lesssim \Vert u_0 \Vert _{H^s_x}\,, \quad \forall t \in [0, 1]\,. \end{aligned}$$
(4.21)

For \(t > 1\), by applying Lemma 4.2-(ii) (for \(n = 1\)), one gets that

$$\begin{aligned} e^{\alpha t}\Vert e^{t \Delta } u_0 \Vert _{H^s_x} \lesssim _\alpha t^{- \frac{1}{2}} \Vert u_0 \Vert _{H^{s - 1}_x} \lesssim _\alpha \Vert u_0 \Vert _{H^s_x}\,, \quad \forall t > 1\,. \end{aligned}$$
(4.22)

Hence the claimed statement follows by (4.21), (4.22) passing to the supremum over \(t \ge 0\).

The main result of this section is the following Proposition

Proposition 4.5

Let \(s\ge 1\), \(f \in \mathcal{E}_{s - 1}\) and define

$$\begin{aligned} u(t) \equiv u(t, \cdot ) := \int _0^t e^{(t - \tau )\Delta } f(\tau , \cdot )\, d \tau \,. \end{aligned}$$
(4.23)

Then \(u \in \mathcal{E}_s\) and

$$\begin{aligned} \Vert u \Vert _{\mathcal{E}_s} \lesssim _\alpha \Vert f \Vert _{\mathcal{E}_{s - 1}}\,. \end{aligned}$$
(4.24)

The proof is split into several steps. The first step is to estimate the integral in (4.23) for any \(t \in [0, 1]\).

Lemma 4.6

Let \(t \in [0, 1]\), \(f \in \mathcal{E}_{s - 1}\) and u defined by (4.23). Then

$$\begin{aligned} \Vert u (t) \Vert _{H^s_x} \lesssim _\alpha \Vert f \Vert _{\mathcal{C}^0_t H^{s - 1}_x}\,. \end{aligned}$$

Proof

Let \(t \in [0, 1]\). Then

$$\begin{aligned} \begin{aligned} \Vert u (t) \Vert _{H^s_x}&\le \int _0^t \Big \Vert e^{(t - \tau )\Delta } f(\tau , \cdot ) \Big \Vert _{H^s_x} d \tau {\mathop {\lesssim _\alpha }\limits ^{(4.11)}} \int _0^t e^{- \frac{t - \tau }{2}}\frac{\Vert f(\tau , \cdot ) \Vert _{H^{s - 1}_x}}{\sqrt{t - \tau }}\, d \tau \\&{\mathop {\lesssim _\alpha }\limits ^{e^{- \frac{t - \tau }{2}} \le 1}} \int _0^t \frac{1}{\sqrt{t - \tau }}\, d \tau \Vert f \Vert _{\mathcal{C}^0_t H^{s - 1}_x}\,. \end{aligned} \end{aligned}$$
(4.25)

By making the change of variables \(z = t - \tau \), one gets that

$$\begin{aligned} \int _0^t \frac{1}{\sqrt{t - \tau }}\, d \tau = \int _0^t \frac{1}{\sqrt{z}}\,d z {\mathop {\le }\limits ^{t \le 1}} \int _0^1 \frac{1}{\sqrt{z}}\, d z = 2 \end{aligned}$$

and hence, in view of (4.25), one gets \(\Vert u (t) \Vert _{H^s_x} \lesssim _\alpha \Vert f \Vert _{\mathcal{C}^0_t H^{s - 1}_x}\) for any \(t \in [0, 1]\), which is the claimed statement.

For \(t > 1\), we split the integral term in (4.23) as

$$\begin{aligned} \int _0^t e^{(t - \tau )\Delta } f(\tau , \cdot )\, d \tau = \int _0^{t - \frac{1}{2}} e^{(t - \tau )\Delta } f(\tau , \cdot )\, d \tau + \int _{t - \frac{1}{2}}^t e^{(t - \tau )\Delta } f(\tau , \cdot )\, d \tau \end{aligned}$$

and we estimate separately the two terms in the latter formula. More precisely the first term is estimated in Lemma 4.7 and the second one in Lemma 4.8.

Lemma 4.7

Let \(t > 1\). Then

$$\begin{aligned} \Big \Vert \int _0^{t - \frac{1}{2}} e^{(t- \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x} \lesssim _\alpha e^{- \alpha t} \Vert f \Vert _{\mathcal{E}_{s - 1}} \end{aligned}$$

Proof

Let \(t > 1\). One then has

$$\begin{aligned} \begin{aligned} \Big \Vert \int _0^{t - \frac{1}{2}} e^{(t- \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x}&\le \int _0^{t - \frac{1}{2}} \Big \Vert e^{(t- \tau ) \Delta } f(\tau , \cdot ) \Big \Vert _{H^s_x}\, d \tau \\&{\mathop {\lesssim _{n, \alpha }}\limits ^{(4.11)}} \int _0^{t - \frac{1}{2}} e^{- \alpha (t - \tau )} \frac{1}{(t - \tau )^{\frac{n}{2}}} \Vert f(\tau , \cdot ) \Vert _{H^{s - 1}_x}\, d \tau \\&\lesssim _{n, \alpha } e^{- \alpha t} \int _0^{t - \frac{1}{2}} \frac{1}{(t - \tau )^{\frac{n}{2}}} e^{\alpha \tau } \Vert f(\tau , \cdot ) \Vert _{H^{s - 1}_x}\, d \tau \\&\lesssim _{n, \alpha } e^{- \alpha t} \Big ( \int _0^{t - \frac{1}{2}} \frac{d \tau }{(t - \tau )^{\frac{n}{2}}} \Big )\,\,\sup _{\tau \ge 0} e^{\alpha \tau } \Vert f(\tau , \cdot ) \Vert _{H^{s - 1}_x}\,. \end{aligned} \end{aligned}$$
(4.26)

By choosing \(n = 4\) and by making the change of variables \(z = t - \tau \), one gets that

$$\begin{aligned} \int _0^{t - \frac{1}{2}} \frac{d \tau }{(t - \tau )^{\frac{n}{2}}} = \int _0^{t - \frac{1}{2}} \frac{d \tau }{(t - \tau )^{2}} = \int _{\frac{1}{2}}^{t} \frac{d z}{z^2} \le \int _{\frac{1}{2}}^{+ \infty } \frac{d z}{z^2} < \infty \,. \end{aligned}$$

The latter estimate, together with (4.26), implies the claimed statement.
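For completeness, we note that the choice \(n = 4\) is made only for convenience: any exponent \(n > 2\) in the smoothing estimate (4.11) would serve equally well, since only the convergence at infinity of the last integral is needed, and its value can in fact be computed explicitly, namely

$$\begin{aligned} \int _{\frac{1}{2}}^{+ \infty } \frac{d z}{z^{2}} = \Big [ - \frac{1}{z} \Big ]_{\frac{1}{2}}^{+ \infty } = 2\,. \end{aligned}$$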

Lemma 4.8

Let \(t > 1\). Then

$$\begin{aligned} \Big \Vert \int _{t - \frac{1}{2}}^{t} e^{(t- \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x} \lesssim _\alpha e^{- \alpha t} \Vert f \Vert _{\mathcal{E}_{s - 1}}\,. \end{aligned}$$

Proof

One has

$$\begin{aligned} \begin{aligned} \Big \Vert \int _{t - \frac{1}{2}}^{t} e^{(t- \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x}&\le \int _{t - \frac{1}{2}}^{t}\Big \Vert e^{(t- \tau ) \Delta } f(\tau , \cdot ) \Big \Vert _{H^s_x}\, d \tau \\&{\mathop {\lesssim _\alpha }\limits ^{(4.11)}} e^{- \alpha t}\int _{t - \frac{1}{2}}^t \frac{1}{\sqrt{t - \tau }} e^{\alpha \tau }\Vert f(\tau , \cdot ) \Vert _{H^{s - 1}_x}\, d \tau \\&\lesssim _\alpha e^{- \alpha t} \int _{t - \frac{1}{2}}^t \frac{d \tau }{\sqrt{t - \tau }} \,\, \Big ( \sup _{\tau \ge 0} e^{\alpha \tau } \Vert f(\tau , \cdot ) \Vert _{H^{s - 1}_x} \Big )\,. \end{aligned} \end{aligned}$$
(4.27)

By making the change of variables \(z = t - \tau \), one gets

$$\begin{aligned} \int _{t - \frac{1}{2}}^t \frac{d \tau }{\sqrt{t - \tau }} = \int _0^{\frac{1}{2}} \frac{d z}{\sqrt{z}} = \sqrt{2}\,. \end{aligned}$$

The latter estimate, together with (4.27), implies the claimed statement.

Proof of Proposition 4.5

For any \(t \in [0, 1]\), since \(e^{\alpha t} \le e^{\alpha }{\mathop {\le }\limits ^{\alpha < 1}} e\), by Lemma 4.6, one has that

$$\begin{aligned} \begin{aligned} e^{\alpha t} \Big \Vert \int _0^t e^{(t - \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x}&\lesssim \Big \Vert \int _0^t e^{(t - \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x} \\&\lesssim _\alpha \Vert f \Vert _{\mathcal{C}^0_t H^{s - 1}_x} {\mathop {\lesssim _\alpha }\limits ^{(4.16)}} \Vert f \Vert _{\mathcal{E}_{s - 1}}\,. \end{aligned} \end{aligned}$$
(4.28)

For any \(t > 1\), by applying Lemmata 4.7 and 4.8, one gets

$$\begin{aligned} \begin{aligned} e^{\alpha t} \Big \Vert \int _0^t e^{(t - \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x}&\le e^{\alpha t} \Big \Vert \int _0^{t - \frac{1}{2}} e^{(t - \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x} \\&\quad + e^{\alpha t} \Big \Vert \int _{t - \frac{1}{2}}^t e^{(t - \tau ) \Delta } f(\tau , \cdot )\, d \tau \Big \Vert _{H^s_x} \\&\lesssim _\alpha \Vert f \Vert _{\mathcal{E}_{s - 1}} \,. \end{aligned} \end{aligned}$$
(4.29)

The claimed statement then follows from (4.28), (4.29) by passing to the supremum over \(t \in [0, + \infty )\).

4.2 Proof of Proposition 4.1

Proposition 4.1 is proved by a fixed point argument. For any \(\delta > 0\) and \(s\ge 0\), we define the ball

$$\begin{aligned} \mathcal{B}_s(\delta ) := \Big \{ v \in \mathcal{E}_s : \mathrm{div}(v) = 0, \quad \Vert v \Vert _{\mathcal{E}_s} \le \delta \Big \} \end{aligned}$$
(4.30)

and for any \(v \in \mathcal{B}_s(\delta )\), we define the map

$$\begin{aligned} \begin{aligned} \Phi (v)&:= e^{t \Delta } v_0 + \int _0^t e^{(t - \tau ) \Delta } \mathcal{N}( v)(\tau , \cdot )\, d \tau \,, \\ \mathcal{N}( v)&:= - {\mathfrak {L}}\Big (u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v \Big )\,. \end{aligned} \end{aligned}$$
(4.31)
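For orientation, we note that, at least formally, a fixed point \(v = \Phi (v)\) is precisely a mild (Duhamel) solution of the Cauchy problem

$$\begin{aligned} \partial _t v - \Delta v = \mathcal{N}(v)\,, \qquad v(0, \cdot ) = v_0\,, \end{aligned}$$

that is, of Eq. (4.5) with initial datum \(v_0\); the rigorous verification that the fixed point constructed below indeed solves (4.5) is given after Proposition 4.9.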

Proposition 4.9

Let \(s > d/2 + 1\), \(\alpha \in (0, 1)\). Then there exists \(\delta = \delta (s, \alpha , \nu , d) \in (0, 1)\) small enough such that, for any \(\varepsilon \in (0, \delta )\) and any initial datum \(v_0\) with \(\Vert v_0 \Vert _{H^s_x}\) sufficiently small with respect to \(\delta \), the map \(\Phi : \mathcal{B}_s(\delta ) \rightarrow \mathcal{B}_s(\delta )\) is a contraction.

Proof

Since \(v_0\) has zero divergence and zero average, then clearly by (4.9), \(\mathrm{div}\big ( e^{t \Delta } v_0\big ) = 0\) and \(\int _{\mathbb T^d} e^{t \Delta } v_0(x)\, d x= 0\). Now let v(t, x) be a function with zero average and zero divergence. Clearly \(\mathrm{div}\big ( \mathcal{N}(v)\big ) = 0\), since the Leray projector \({\mathfrak {L}}\) appears in the definition of \(\mathcal{N}( v)\) in (4.31). Moreover, using that \(\mathrm{div}(v ) = \mathrm{div}(u_\omega ) = 0\), by Lemma 2.4-(i) one gets that \(\mathcal{N}(v)\) has zero average, and then by (4.9) also \(\int _0^t e^{(t - \tau ) \Delta } \mathcal{N}(v)(\tau , \cdot )\, d \tau \) has zero average. Hence, we have shown that \(\mathrm{div}(\Phi (v)) = 0\) and \(\int _{\mathbb T^d} \Phi (v)\, d x= 0\). Let now \(\Vert v \Vert _{\mathcal{E}_s} \le \delta \). We estimate \(\Vert \Phi (v) \Vert _{\mathcal{E}_s}\). By recalling (4.31), Lemma 4.4, Proposition 4.5 and Lemma 4.3, one has

$$\begin{aligned} \begin{aligned} \Vert \Phi (v) \Vert _{\mathcal{E}_s}&{\mathop {\le }\limits ^{(4.31)}} \Vert e^{t \Delta } v_0 \Vert _{\mathcal{E}_s} + \Big \Vert \int _0^t e^{(t - \tau )\Delta } \mathcal{N}(v)(\tau , \cdot )\, d \tau \Big \Vert _{\mathcal{E}_s} \\&\lesssim _\alpha \Vert v_0 \Vert _{H^s_x} + \Vert \mathcal{N}( v) \Vert _{\mathcal{E}_{s - 1}} \\&\lesssim _{s, \alpha } \Vert v_0 \Vert _{H^s_x} + \Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} \Vert v \Vert _{\mathcal{E}_s} + \Vert v \Vert _{\mathcal{E}_s}^2\,. \end{aligned} \end{aligned}$$
(4.32)

By the estimates of Theorem 1.1, by the definition (4.1) and by applying Lemma 2.3, one has

$$\begin{aligned} u_\omega \in \mathcal{C}^0_b \Big ( \mathbb R, H^s(\mathbb T^d, \mathbb R^d) \Big ) \quad \text {and} \quad \Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} \lesssim _s \varepsilon {\mathop {\lesssim _s}\limits ^{\varepsilon \le \delta }} \delta \,. \end{aligned}$$
(4.33)

Since \(v \in \mathcal{B}_s(\delta )\), the estimates (4.32), (4.33) imply that

$$\begin{aligned} \Vert \Phi (v) \Vert _{\mathcal{E}_s} \le C(s, \alpha )\Big ( \Vert v_0 \Vert _{H^s_x} + \delta ^2\Big )\,. \end{aligned}$$
(4.34)

Hence \(\Vert \Phi (v) \Vert _{\mathcal{E}_s} \le \delta \) provided

$$\begin{aligned} C(s, \alpha ) \Vert v_0 \Vert _{H^s_x} \le \frac{\delta }{2}, \quad C(s, \alpha )\delta \le \frac{1}{2}\,. \end{aligned}$$

These conditions are fulfilled by taking \(\delta \) small enough and \(\Vert v_0 \Vert _{H^s_x}\ll \delta \), for instance \(\delta \le \frac{1}{2 C(s, \alpha )}\) and \(\Vert v_0 \Vert _{H^s_x} \le \frac{\delta }{2 C(s, \alpha )}\). Hence \(\Phi : \mathcal{B}_s(\delta ) \rightarrow \mathcal{B}_s(\delta )\). Now let \(v_1, v_2 \in \mathcal{B}_s(\delta )\). We need to estimate

$$\begin{aligned} \Phi (v_1) - \Phi (v_2) = \int _0^t e^{(t - \tau )\Delta } \Big (\mathcal{N}(v_1) - \mathcal{N}(v_2)\Big )(\tau , \cdot )\, d \tau \,. \end{aligned}$$
(4.35)

By (4.31)

$$\begin{aligned} \begin{aligned} \Vert \mathcal{N}(v_1) - \mathcal{N}(v_2) \Vert _{\mathcal{E}_{s - 1}}&\le \Big \Vert {\mathfrak {L}}\Big ( u_\omega \cdot \nabla (v_1 - v_2) \Big ) \Big \Vert _{\mathcal{E}_{s - 1}} + \Big \Vert {\mathfrak {L}}\Big ( (v_1 - v_2) \cdot \nabla u_\omega \Big ) \Big \Vert _{\mathcal{E}_{s - 1}} \\&\quad + \Big \Vert {\mathfrak {L}}\Big ( (v_1 - v_2) \cdot \nabla v_1 \Big ) \Big \Vert _{\mathcal{E}_{s - 1}} + \Big \Vert {\mathfrak {L}}\Big ( v_2 \cdot \nabla (v_1 - v_2) \Big ) \Big \Vert _{\mathcal{E}_{s - 1}} \\&{\mathop {\lesssim }\limits ^{(4.16)}} \Big \Vert u_\omega \cdot \nabla (v_1 - v_2) \Big \Vert _{\mathcal{E}_{s - 1}} + \Big \Vert (v_1 - v_2) \cdot \nabla u_\omega \Big \Vert _{\mathcal{E}_{s - 1}} \\&\quad + \Big \Vert (v_1 - v_2) \cdot \nabla v_1 \Big \Vert _{\mathcal{E}_{s - 1}} + \Big \Vert v_2 \cdot \nabla (v_1 - v_2) \Big \Vert _{\mathcal{E}_{s - 1}} \\&{\mathop {\lesssim _s}\limits ^{(4.17), (4.18), (4.19)}} \Big ( \Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} + \Vert v_1 \Vert _{\mathcal{E}_s} + \Vert v_2 \Vert _{\mathcal{E}_s} \Big ) \Vert v_1 - v_2 \Vert _{\mathcal{E}_s} \,. \end{aligned} \end{aligned}$$
(4.36)

Hence (4.33), the fact that \(v_1, v_2 \in \mathcal{B}_s(\delta )\) (so that \(\Vert v_1 \Vert _{\mathcal{E}_s}, \Vert v_2 \Vert _{\mathcal{E}_s} \le \delta \)) and the estimate (4.36) imply that

$$\begin{aligned} \Vert \mathcal{N}(v_1) - \mathcal{N}(v_2) \Vert _{\mathcal{E}_{s - 1}} \lesssim _{s} \delta \Vert v_1 - v_2 \Vert _{\mathcal{E}_s}\,. \end{aligned}$$
(4.37)

By (4.35), one gets that

$$\begin{aligned} \begin{aligned} \Vert \Phi (v_1 ) - \Phi (v_2) \Vert _{\mathcal{E}_s}&{\mathop {\lesssim _{s, \alpha }}\limits ^{Proposition \, 4.5}} \Vert \mathcal{N}(v_1) - \mathcal{N}(v_2) \Vert _{\mathcal{E}_{s - 1}} \\&{\mathop {\le }\limits ^{(4.37)}} C(s, \alpha )\delta \Vert v_1 - v_2 \Vert _{\mathcal{E}_s} \end{aligned} \end{aligned}$$
(4.38)

for some constant \(C(s, \alpha ) > 0\). Therefore

$$\begin{aligned} \Vert \Phi (v_1 ) - \Phi (v_2) \Vert _{\mathcal{E}_s} \le \frac{1}{2} \Vert v_1 - v_2 \Vert _{\mathcal{E}_s} \end{aligned}$$

provided \(\delta \le \frac{1}{2 C(s, \alpha )}\). This proves the claimed statement.

Proof of Proposition 4.1 concluded. By Proposition 4.9 and the contraction mapping theorem, there exists a unique \(v \in \mathcal{B}_s(\delta )\) which is a fixed point of the map \(\Phi \) in (4.31). By the functional equation \(v = \Phi (v)\), one deduces in a standard way (see the formal computation at the end of this proof) that

$$\begin{aligned} v \in \mathcal{C}^1_b \Big ( [0, + \infty ), H^{s - 2}_0(\mathbb T^d, \mathbb R^d) \Big ) \end{aligned}$$

and hence v is a solution of the Eq. (4.5). By (4.16) and using the trivial fact that \(\Vert \Delta v \Vert _{\mathcal{E}_{s - 2}} \le \Vert v \Vert _{\mathcal{E}_s}\), one gets

$$\begin{aligned} \begin{aligned} \Vert \partial _t v \Vert _{\mathcal{E}_{s - 2}}&\lesssim \Vert v \Vert _{\mathcal{E}_s} + \Vert u_\omega \cdot \nabla v \Vert _{\mathcal{E}_{s - 2}} + \Vert v \cdot \nabla u_\omega \Vert _{\mathcal{E}_{s - 2}} + \Vert v \cdot \nabla v \Vert _{\mathcal{E}_{s - 2}} \\&{\mathop {\lesssim _s}\limits ^{(4.17)-(4.19)}} \Vert v \Vert _{\mathcal{E}_s}\Big (1 + \Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} + \Vert v \Vert _{\mathcal{E}_s} \Big )\,. \end{aligned} \end{aligned}$$
(4.39)

Therefore, using that \(v \in \mathcal{B}_s(\delta )\) (so that \(\Vert v \Vert _{\mathcal{E}_s} \le \delta \)) and that, by (4.33), \(\Vert u_\omega \Vert _{\mathcal{C}^0_t H^s_x} \lesssim _s \delta \), one gets, for \(\delta \) small enough, the estimate \(\Vert \partial _t v \Vert _{\mathcal{E}_{s - 2}} \lesssim _s \delta \), and the claimed statement follows by recalling (4.16).
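For the reader's convenience, the "standard way" invoked above amounts, at least formally, to differentiating the functional equation \(v = \Phi (v)\) in time, which gives back the Cauchy problem recorded after (4.31):

$$\begin{aligned} \partial _t v = \Delta e^{t \Delta } v_0 + \mathcal{N}(v)(t, \cdot ) + \Delta \int _0^t e^{(t - \tau )\Delta } \mathcal{N}(v)(\tau , \cdot )\, d \tau = \Delta v + \mathcal{N}( v)\,; \end{aligned}$$

the membership \(\mathcal{N}(v) \in \mathcal{E}_{s - 1}\) (cf. (4.32)) is what allows one to justify this computation in \(H^{s - 2}_x\).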

4.3 Proof of Theorem 1.2

In view of Proposition 4.1, it remains only to solve the Eq. (4.4) for the pressure q(t, x). The unique zero-average solution of the latter equation is given by

$$\begin{aligned} q := (- \Delta )^{- 1}\Big (\mathrm{div}\Big ( u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v\Big ) \Big )\,. \end{aligned}$$
(4.40)
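To justify this formula, assume, as is standard for the pressure, that taking the divergence in (4.4) and using \(\mathrm{div}(v) = \mathrm{div}(u_\omega ) = 0\) produces the Poisson equation

$$\begin{aligned} - \Delta q = \mathrm{div}\Big ( u_\omega \cdot \nabla v + v \cdot \nabla u_\omega + v \cdot \nabla v\Big )\,. \end{aligned}$$

Since \(- \Delta \) is invertible on the space of zero-average functions on \(\mathbb T^d\), this equation admits exactly one solution with zero average, which is precisely (4.40).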

Using that \(\Vert (- \Delta )^{- 1} \mathrm{div}\, a \Vert _{\mathcal{E}_s} \lesssim \Vert a \Vert _{\mathcal{E}_{s - 1}}\) for any \(a \in \mathcal{E}_{s - 1}\), one gets the inequality

$$\begin{aligned} \begin{aligned} \Vert q \Vert _{\mathcal{E}_s}&\lesssim \Vert u_\omega \cdot \nabla v \Vert _{\mathcal{E}_{s - 1}} + \Vert v \cdot \nabla u_\omega \Vert _{\mathcal{E}_{s - 1}} + \Vert v \cdot \nabla v \Vert _{\mathcal{E}_{s - 1}} \,. \end{aligned} \end{aligned}$$
(4.41)
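The gain of one spatial derivative used in the latter inequality can be checked directly on the Fourier side: on zero-average functions, \((- \Delta )^{- 1} \partial _{x_j}\) acts as the Fourier multiplier \(\mathrm{i}\, \xi _j |\xi |^{- 2}\), whose modulus is bounded by \(|\xi |^{- 1}\), so that

$$\begin{aligned} \Vert (- \Delta )^{- 1} \mathrm{div}\, a(t, \cdot ) \Vert _{H^s_x} \lesssim \Vert a(t, \cdot ) \Vert _{H^{s - 1}_x}\,, \quad \forall t \ge 0\,. \end{aligned}$$

Since the multiplier does not depend on time, the same bound survives multiplication by the weight \(e^{\alpha t}\) and the supremum in time, which yields the \(\mathcal{E}\)-norm inequality used above.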

Hence, arguing as in (4.39), one deduces the estimate \(\Vert q \Vert _{\mathcal{E}_s} \lesssim _{s} \delta \). The claimed estimate on q then follows by recalling (4.16), and the proof is concluded (recall that, by (4.2), \(v = u - u_\omega \) and \(q = p - p_\omega \)).