1 Introduction

Malliavin calculus has proved to be a powerful tool for the study of questions concerning the probability laws of random vectors, ranging from their very existence to the study of their properties and applications. Malliavin’s probabilistic proof of Hörmander’s hypoellipticity theorem for differential operators in quadratic form provided the existence of an infinitely differentiable density with respect to the Lebesgue measure on \({\mathbb {R}}^m\) for the law at a fixed time \(t>0\) of the solution to a stochastic differential equation (SDE) on \({\mathbb {R}}^m\) driven by a multi-dimensional Brownian motion. The classical Malliavin criterion for existence and regularity of densities (see, e.g. [13]) requires strong regularity of the random vector X under consideration. In fact, X should be in the space \(\mathbb {D}^\infty \), meaning that it belongs to Sobolev-type spaces of every degree. As a consequence, many interesting examples are out of the range of the theory, for example, SDEs with Hölder continuous coefficients, and others that will be mentioned throughout this introduction.

Recently, there have been several attempts to develop techniques for proving the existence of a density under weaker regularity conditions than in Malliavin’s theory, at the price of obtaining much less information on the properties of the density. The idea is to avoid integration by parts and to use instead some approximation procedure. A pioneering work in this direction is [12], where the random vector X is compared with a good approximation \(X^\varepsilon \) whose law is known. The choice of the random vector \(X^\varepsilon \) is inspired by Euler numerical approximations, and the comparison is done through the respective Fourier transforms. The method is illustrated with several one-dimensional examples of stochastic equations, all of them having in common that the diffusion coefficient is Hölder continuous and the drift term is a measurable function: SDEs, including cases with random coefficients, a stochastic heat equation with Neumann boundary conditions, and an SDE driven by a Lévy process.

With a similar motivation, and relying also on the idea of approximation, A. Debussche and M. Romito prove a useful criterion for the existence of densities of random vectors, see [11]. In comparison with [12], the result is formulated in an abstract form, applies to multidimensional random vectors and additionally provides information on the space where the density lives. The precise statement is given in Lemma 1 below. As an illustration of the method, [11] considers finite-dimensional functionals of the solutions of the stochastic Navier-Stokes equations in dimension 3, and [10] considers SDEs driven by stable-like Lévy processes with Hölder continuous coefficients. A similar methodology has been applied in [1, 2, 4]. The more recent work [3] applies interpolation arguments on Orlicz spaces to obtain absolute continuity results for finite measures. Variants of the criteria provide different types of properties of the density. The results are illustrated by diffusion processes with \(\log \)-Hölder coefficients and piecewise deterministic Markov processes.

Some of the methods developed in the references mentioned so far are also well-suited to the analysis of stochastic partial differential equations (SPDEs) defined by non-smooth differential operators. Indeed, consider a class of SPDEs defined by

$$\begin{aligned} Lu(t,x) = b(u(t,x)) + \sigma (u(t,x))\dot{F}(t,x), \ (t,x)\in (0,T]\times {{\mathbb {R}}^d}, \end{aligned}$$
(1)

with constant initial conditions, where L denotes a linear differential operator, \(\sigma , b{:} {\mathbb {R}}\rightarrow {\mathbb {R}}\), and F is a Gaussian noise, white in time with some spatial correlation (see Sect. 2 for the description of F). Under a suitable set of assumptions, [20, Theorem 2.1] establishes the existence of a density for the random field solution of (1) at any point \((t,x)\in (0,T]\times {{\mathbb {R}}^d}\), and also that the density belongs to some Besov space. The theorem applies, for example, to the stochastic wave equation in any spatial dimension \(d\ge 1\).

The purpose of this paper is to further illustrate the range of applications of Lemma 1 with two more examples. The first one is presented in Sect. 2 and complements the results of [20]. In comparison with this reference, here we are able to remove the strong ellipticity assumption on the function \(\sigma \), which is crucial in most applications of Malliavin calculus to SPDEs (see [19]), although the class of differential operators L is more restrictive. Nevertheless, Theorem 1 below applies, for example, to the stochastic heat equation in any spatial dimension and to the stochastic wave equation with \(d\le 3\). For the latter example, if \(\sigma \), b are smooth functions and \(\sigma \) is bounded away from zero, existence and regularity of the density of \(u(t,x)\) have been established in [16, 17].

The second example, developed in Sect. 3, refers to ambit fields driven by a class of Lévy bases (see (14)). Originally introduced in [5] in the context of modeling turbulence, ambit fields are stochastic processes indexed by time and space that are becoming popular and useful for applications in mathematical finance, among others. The expression (14) has some similarities with the mild formulation (3) of (1) and can also be seen as an infinite-dimensional extension of SDEs driven by Lévy processes. We are not aware of previous results on densities of ambit fields.

We end this introduction by quoting the definition of the Besov spaces relevant for this article, as well as the existence-of-density criterion of [11].

The spaces \(B_{1,\infty }^s\), \(s>0\), can be defined as follows. Let \(f{:}{{\mathbb {R}}^d}\rightarrow {\mathbb {R}}\). For \(x,h\in {{\mathbb {R}}^d}\) set \((\varDelta ^1_hf)(x)=f(x+h)-f(x)\). Then, for any \(n\in {\mathbb {N}}\), \(n\ge 2\), let

$$\begin{aligned} (\varDelta _h^nf)(x) = \big (\varDelta ^1_h(\varDelta ^{n-1}_hf)\big )(x) = \sum _{j=0}^n (-1)^{n-j}\left( {\begin{array}{c}n\\ j\end{array}}\right) f(x+jh). \end{aligned}$$

For any \(0<s<n\), we define the norm

$$\begin{aligned} \Vert f\Vert _{B^s_{1,\infty }} = \Vert f\Vert _{L^1} + \sup _{|h|\le 1} |h|^{-s}\Vert \varDelta _h^nf\Vert _{L^1}. \end{aligned}$$

It can be proved that for two distinct \(n,n'>s\) the norms obtained using n or \(n'\) are equivalent. Then we define \(B^s_{1,\infty }\) to be the set of \(L^1\)-functions with \(\Vert f\Vert _{B^s_{1,\infty }}<\infty \). We refer the reader to [22] for more details.
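These definitions are easy to probe numerically. The following sketch (plain Python with NumPy; the test function \(f\) and the step sizes are arbitrary choices of ours, not taken from the text) checks the binomial expansion of \(\varDelta _h^n\) against the iterated first difference, and illustrates that for a smooth integrable f the quotient \(|h|^{-s}\Vert \varDelta _h^nf\Vert _{L^1}\) stays bounded when \(s<n\):

```python
import numpy as np
from math import comb

def delta_n(f, x, h, n):
    """n-th difference via the binomial formula:
    sum_{j=0}^{n} (-1)^(n-j) C(n,j) f(x + j*h)."""
    return sum((-1) ** (n - j) * comb(n, j) * f(x + j * h) for j in range(n + 1))

def delta_iter(f, x, h, n):
    """Same operator obtained by iterating (Delta^1_h f)(x) = f(x+h) - f(x)."""
    g = f
    for _ in range(n):
        g = (lambda gg: (lambda y: gg(y + h) - gg(y)))(g)
    return g(x)

f = lambda y: np.exp(-y ** 2)            # a smooth, integrable test function
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

# the binomial formula agrees with the iterated definition
assert np.allclose(delta_n(f, x, 0.3, 3), delta_iter(f, x, 0.3, 3))

# for smooth f and s < n, |h|^{-s} * ||Delta_h^n f||_{L^1} stays bounded as h -> 0
n, s = 2, 1.5
quotients = []
for h in (0.4, 0.2, 0.1, 0.05):
    l1 = np.sum(np.abs(delta_n(f, x, h, n))) * dx
    quotients.append(abs(h) ** (-s) * l1)
assert max(quotients) < 10.0
```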

In the following, we denote by \(\mathscr {C}^\alpha _b\) the set of bounded Hölder continuous functions of degree \(\alpha \). The next lemma establishes the criterion for the existence of densities that we will apply in our examples.

Lemma 1

Let \(\kappa \) be a finite nonnegative measure. Assume that there exist \(0<\alpha \le a<1\), \(n\in {\mathbb {N}}\) and a constant \(C_n\) such that for all \(\phi \in \mathscr {C}^\alpha _b\), and all \(h\in {\mathbb {R}}\) with \(|h|\le 1\),

$$\begin{aligned} \bigg |\int _{\mathbb {R}}\varDelta _h^n\phi (y)\kappa (d y)\bigg |\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}|h|^a. \end{aligned}$$
(2)

Then \(\kappa \) has a density with respect to the Lebesgue measure, and this density belongs to the Besov space \(B^{a-\alpha }_{1,\infty }({\mathbb {R}})\).
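To see how the criterion operates in the simplest situation, suppose (this toy case is ours, not part of [11]) that \(\kappa \) already has a density p with n integrable derivatives. A change of variables gives

$$\begin{aligned} \int _{\mathbb {R}}\varDelta _h^n\phi (y)\kappa (dy) = \int _{\mathbb {R}}\phi (y)(\varDelta _{-h}^n p)(y)dy, \end{aligned}$$

so that \(\big |\int _{\mathbb {R}}\varDelta _h^n\phi (y)\kappa (dy)\big |\le \Vert \phi \Vert _{\mathscr {C}^\alpha _b}\Vert \varDelta _{-h}^np\Vert _{L^1}\le \Vert \phi \Vert _{\mathscr {C}^\alpha _b}|h|^n\Vert p^{(n)}\Vert _{L^1}\), and (2) holds for every \(a<1\), since \(|h|\le 1\) and \(n\ge 1\). The proofs below run this argument in reverse: a suitable approximation of the random vector, with a conditionally Gaussian part, plays the role of the smooth density p.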

2 Nonelliptic Diffusion Coefficients

In this section we deal with SPDEs without the classical ellipticity assumption on the coefficient \(\sigma \), i.e. \(\inf _{z\in {\mathbb {R}}} |\sigma (z)|\ge c>0\). In the different context of SDEs driven by a Lévy process, this situation was considered in [10, Theorem 1.1], assuming in addition that \(\sigma \) is bounded. Here, we will deal with SPDEs in the setting of [9] with coefficients \(\sigma \) that are not necessarily bounded. Therefore, the results apply in particular to Anderson-type SPDEs (\(\sigma (x)= \lambda x\), \(\lambda \ne 0\)).

We consider the class of SPDEs defined by (1), with constant initial conditions, where L denotes a linear differential operator, and \(\sigma , b{:} {\mathbb {R}}\rightarrow {\mathbb {R}}\). In the definition above, F is a Gaussian noise, white in time with some spatial correlation.

Consider the space of Schwartz functions on \({{\mathbb {R}}^d}\), denoted by \(\mathscr {S}({{\mathbb {R}}^d})\), endowed with the following inner product

$$\begin{aligned} \langle \phi ,\psi \rangle _{\mathscr {H}} {:}= \int _{{\mathbb {R}}^d}dy \int _{{\mathbb {R}}^d}\varGamma (dx) \phi (y)\psi (y-x), \end{aligned}$$

where \(\varGamma \) is a nonnegative and nonnegative definite tempered measure. Using the Fourier transform we can rewrite this inner product as

$$\begin{aligned} \langle \phi ,\psi \rangle _{\mathscr {H}} = \int _{{\mathbb {R}}^d}\mu (d\xi ) \mathscr {F}\phi (\xi )\overline{\mathscr {F}\psi (\xi )}, \end{aligned}$$

where \(\mu \) is a nonnegative tempered measure with \(\mathscr {F}\mu = \varGamma \). Let \(\mathscr {H}\) denote the completion of \((\mathscr {S}({{\mathbb {R}}^d}),\langle \cdot ,\cdot \rangle _\mathscr {H})\), and \(\mathscr {H}_T {:}= L^2([0,T];\mathscr {H})\). It can be proved that F defines an isonormal Gaussian process on \(\mathscr {H}_T\).
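The passage between the two expressions for the inner product is a Parseval-type relation; sketching the computation for real-valued \(\phi ,\psi \in \mathscr {S}({{\mathbb {R}}^d})\), with \(\tilde{\psi }(x){:}=\psi (-x)\),

$$\begin{aligned} \langle \phi ,\psi \rangle _{\mathscr {H}} = \int _{{\mathbb {R}}^d}\varGamma (dx)(\phi *\tilde{\psi })(x) = \int _{{\mathbb {R}}^d}\mu (d\xi )\mathscr {F}(\phi *\tilde{\psi })(\xi ) = \int _{{\mathbb {R}}^d}\mu (d\xi )\mathscr {F}\phi (\xi )\overline{\mathscr {F}\psi (\xi )}, \end{aligned}$$

where the middle equality uses \(\mathscr {F}\mu =\varGamma \) together with the duality between \(\mathscr {F}\) on \(\mathscr {S}({{\mathbb {R}}^d})\) and on tempered measures, and the last equality uses \(\mathscr {F}\tilde{\psi }=\overline{\mathscr {F}\psi }\) for real \(\psi \).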

Let \(\varLambda \) denote the fundamental solution to \(Lu=0\) and assume that \(\varLambda \) is either a function or a non-negative measure of the form \(\varLambda (t,dy)dt\) such that

$$\begin{aligned} \sup _{t\in [0,T]}\varLambda (t,{{\mathbb {R}}^d})\le C_T<\infty . \end{aligned}$$

We consider

$$\begin{aligned} u(t,x)&= \int _0^t\int _{{\mathbb {R}}^d}\varLambda (t-s,x-y)\sigma (u(s,y))M(ds,dy) \nonumber \\&\quad +\,\int _0^t\int _{{\mathbb {R}}^d}\varLambda (t-s,x-y)b(u(s,y))dyds, \end{aligned}$$
(3)

as the integral formulation of (1), where M is the martingale measure generated by F. In order for the stochastic integral in the previous equation to be well-defined, we need to assume that

$$\begin{aligned} \int _0^T ds \int _{{\mathbb {R}}^d}\mu (d\xi ) |\mathscr {F}\varLambda (s)(\xi )|^2 < +\infty . \end{aligned}$$
(4)

According to [9, Theorem 13] (see also [23]), equation (3) has a unique random field solution \(\{u(t,x); (t,x)\in [0,T]\times {{\mathbb {R}}^d}\}\) which has a spatially stationary law (this is a consequence of the S-property in [9]), and for all \(p\ge 2\)

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times {{\mathbb {R}}^d}} {\mathbb {E}}\big [|u(t,x)|^p\big ] < \infty . \end{aligned}$$

We will prove the following result on the existence of a density.

Theorem 1

Fix \(T>0\). Assume that for all \(t\in [0,T]\), \(\varLambda (t)\) is a function or a nonnegative distribution such that (4) holds and \(\sup _{t\in [0,T]} \varLambda (t,{{\mathbb {R}}^d})<\infty \). Assume furthermore that \(\sigma \) and b are Lipschitz continuous functions. Moreover, we assume that

$$\begin{aligned} ct^\gamma \le&\int _0^t ds \int _{{\mathbb {R}}^d}\mu (d\xi ) |\mathscr {F}\varLambda (s)(\xi )|^2 \le Ct^{\gamma _1},\\&\int _0^t ds |\mathscr {F}\varLambda (s)(0)|^2 \le Ct^{\gamma _2}, \end{aligned}$$

for some \(\gamma , \gamma _1, \gamma _2>0\) and positive constants c and C. Suppose also that there exists \(\delta > 0\) such that

$$\begin{aligned} {\mathbb {E}}\big [|u(t,0)-u(s,0)|^2\big ] \le C|t-s|^\delta , \end{aligned}$$
(5)

for any \(s,t\in [0,T]\) and some constant \(C>0\), and that

$$\begin{aligned} \bar{\gamma }{:}= \frac{\min \{\gamma _1,\gamma _2\} + \delta }{\gamma } > 1. \end{aligned}$$

Fix \((t,x)\in (0,T]\times {\mathbb {R}}^d\). Then, the probability law of \(u(t,x)\) has a density f on the set \(\{y\in {\mathbb {R}};\sigma (y)\ne 0\}\). In addition, there exists \(n\ge 1\) such that the function \(y\mapsto \vert \sigma (y)\vert ^n f(y)\) belongs to the Besov space \(B_{1,\infty }^\beta \), with \(\beta \in (0,\bar{\gamma }-1)\).

Proof

The existence, uniqueness and stationarity of the solution u are guaranteed by [9, Theorem 13]. We will apply Lemma 1 to the law of \(u(t,x)\) at \(x=0\). Since the solution u is stationary in space, this is enough for our purposes. Consider the measure

$$\begin{aligned} \kappa (dy) = |\sigma (y)|^n\left( P\circ u(t,0)^{-1}\right) (dy). \end{aligned}$$

We define the following approximation of u(t, 0). Let for \(0<\varepsilon <t\)

$$\begin{aligned} u^\varepsilon (t,0) = U^\varepsilon (t,0) + \sigma (u(t-\varepsilon ,0))\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\varLambda (t-s,-y)M(d s,d y), \end{aligned}$$
(6)

where

$$\begin{aligned} U^\varepsilon (t,0) =&\int _0^{t-\varepsilon }\int _{{\mathbb {R}}^d}\varLambda (t-s,-y)\sigma (u(s,y))M(ds,dy)\\&+ \int _0^{t-\varepsilon }\int _{{\mathbb {R}}^d}\varLambda (t-s,-y)b(u(s,y))dyds \\&+\,b(u(t-\varepsilon ,0))\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\varLambda (t-s,-y)dyds. \end{aligned}$$

Applying the triangular inequality, we have

$$\begin{aligned} \bigg |\int _{\mathbb {R}}\varDelta _h^n\phi (y)\kappa (dy)\bigg |&= \big |{\mathbb {E}}\big [|\sigma (u(t,0))|^n\varDelta _h^n\phi (u(t,0))\big ]\big | \nonumber \\&\le \big |{\mathbb {E}}\big [(|\sigma (u(t,0))|^n-|\sigma (u(t-\varepsilon ,0))|^n)\varDelta _h^n\phi (u(t,0))\big ]\big | \nonumber \\&\quad +\,\big |{\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^n(\varDelta _h^n\phi (u(t,0)) - \varDelta _h^n\phi (u^\varepsilon (t,0)))\big ]\big | \nonumber \\&\quad +\,\big |{\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^n\varDelta _h^n\phi (u^\varepsilon (t,0))\big ]\big |. \end{aligned}$$
(7)

Recall that \(\Vert \varDelta _h^n\phi \Vert _{\mathscr {C}^\alpha _b}\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\). Consequently,

$$\begin{aligned} |\varDelta _h^n\phi (x)| = |\varDelta _h^{n-1}\phi (x+h) - \varDelta _h^{n-1}\phi (x)| \le C_{n-1}\Vert \phi \Vert _{\mathscr {C}^\alpha _b}|h|^\alpha . \end{aligned}$$

Using this fact, the first term on the right-hand side of the inequality in (7) can be bounded as follows:

$$\begin{aligned} \big |{\mathbb {E}}\big [&(|\sigma (u(t,0))|^n - |\sigma (u(t-\varepsilon ,0))|^n)\varDelta _h^n\phi (u(t,0))\big ]\big | \nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}|h|^\alpha {\mathbb {E}}\big [\big ||\sigma (u(t,0))|^n-|\sigma (u(t-\varepsilon ,0))|^n\big |\big ]. \end{aligned}$$
(8)

Apply the equality \(x^n-y^n = (x-y)(x^{n-1}+x^{n-2}y + \cdots + xy^{n-2}+y^{n-1})\) along with the Lipschitz continuity of \(\sigma \) and Hölder’s inequality, to obtain

$$\begin{aligned}&{\mathbb {E}}\big [\big ||\sigma (u(t,0))|^n-|\sigma (u(t-\varepsilon ,0))|^n\big |\big ] \nonumber \\&\le {\mathbb {E}}\bigg [ \big |\sigma (u(t,0))-\sigma (u(t-\varepsilon ,0))\big |\sum _{j=0}^{n-1} |\sigma (u(t,0))|^j|\sigma (u(t-\varepsilon ,0))|^{n-1-j}\bigg ] \nonumber \\&\le C\Big ({\mathbb {E}}\big [\big |u(t,0) - u(t-\varepsilon ,0)\big |^2\big ]\Big )^{1/2} \bigg ({\mathbb {E}}\bigg [\bigg (\sum _{j=0}^{n-1} |\sigma (u(t,0))|^j|\sigma (u(t-\varepsilon ,0))|^{n-1-j}\bigg )^2\bigg ]\bigg )^{\frac{1}{2}} \nonumber \\&\le C_n\Big ({\mathbb {E}}\big [\big |u(t,0) - u(t-\varepsilon ,0)\big |^2\big ]\Big )^{1/2} \nonumber \\&\le C_n\varepsilon ^{\delta /2}, \end{aligned}$$
(9)

where we have used that \(\sigma \) has linear growth, that u(t, 0) has finite moments of any order, and (5). Thus,

$$\begin{aligned} \big |{\mathbb {E}}\big [(|\sigma (u(t,0))|^n - |\sigma (u(t-\varepsilon ,0))|^n)\varDelta _h^n\phi (u(t,0))\big ]\big | \le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}|h|^\alpha \varepsilon ^{\delta /2}. \end{aligned}$$
(10)

With similar arguments,

$$\begin{aligned}&\big |{\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^n(\varDelta _h^n\phi (u(t,0)) - \varDelta _h^n\phi (u^\varepsilon (t,0)))\big ]\big |\nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}{\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^n|u(t,0)-u^\varepsilon (t,0)|^\alpha \big ] \nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\big ({\mathbb {E}}\big [|u(t,0)-u^\varepsilon (t,0)|^2\big ]\big )^{\alpha /2}\big ({\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^{2n/(2-\alpha )}\big ]\big )^{1-\alpha /2} \nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\varepsilon ^{\delta \alpha /2}\big (g_1(\varepsilon ) + g_2(\varepsilon )\big )^{\alpha /2}, \end{aligned}$$
(11)

where in the last inequality we have used the upper bound stated in [20, Lemma 2.5]. It is very easy to adapt the proof of this lemma to the context of this section. Note that the constant \(C_n\) in the previous equation does not depend on \(\alpha \) because

$$\begin{aligned} \big ({\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^{2n/(2-\alpha )}\big ]\big )^{1-\alpha /2}&\le \big ({\mathbb {E}}\big [(|\sigma (u(t-\varepsilon ,0))|\vee 1)^{2n}\big ]\big )^{1-\alpha /2} \\&\le {\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^{2n}\vee 1\big ]. \end{aligned}$$

Now we focus on the third term on the right-hand side of the inequality in (7). Let \(p_\varepsilon \) denote the density of the zero mean Gaussian random variable \(\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\varLambda (t-s,-y)M(ds,dy)\), which is independent of the \(\sigma \)-field \(\mathscr {F}_{t-\varepsilon }\) and has variance

$$\begin{aligned} g(\varepsilon ){:}=\int _0^\varepsilon ds \int _{{{\mathbb {R}}^d}} \mu (d\xi ) |\mathscr {F}\varLambda (s)(\xi )|^2 \ge c\varepsilon ^{\gamma }. \end{aligned}$$

In the decomposition (6), the random variable \(U^{\varepsilon }(t,0)\) is \(\mathscr {F}_{t-\varepsilon }\)-measurable. Then, by conditioning with respect to \(\mathscr {F}_{t-\varepsilon }\) and using a change of variables, we obtain

$$\begin{aligned}&\left| {\mathbb {E}}\left[ |\sigma (u(t-\varepsilon ,0))|^n\varDelta _h^n\phi (u^\varepsilon (t,0))\right] \right| \\&=\big |{\mathbb {E}}\big [{\mathbb {E}}\big [1_{\{\sigma (u(t-\varepsilon ,0))\ne 0\}}|\sigma (u(t-\varepsilon ,0))|^n\varDelta _h^n\phi (u^\varepsilon (t,0))\big |\mathscr {F}_{t-\varepsilon }\big ]\big ]\big |\\&= \bigg |{\mathbb {E}}\bigg [1_{\{\sigma (u(t-\varepsilon ,0))\ne 0\}}\int _{\mathbb {R}}|\sigma (u(t-\varepsilon ,0))|^n\varDelta _h^n\phi (U^\varepsilon (t,0) + \sigma (u(t-\varepsilon ,0))y)p_{\varepsilon }(y)dy\bigg ]\bigg | \\&= \bigg |{\mathbb {E}}\bigg [1_{\{\sigma (u(t-\varepsilon ,0))\ne 0\}}\int _{\mathbb {R}}|\sigma (u(t-\varepsilon ,0))|^n\\&\qquad \times \,\phi (U^\varepsilon (t,0) + \sigma (u(t-\varepsilon ,0))y)\varDelta _{-\sigma (u(t-\varepsilon ,0))^{-1}h}^n p_{\varepsilon }(y)dy\bigg ]\bigg | \\&\le \Vert \phi \Vert _\infty {\mathbb {E}}\bigg [1_{\{\sigma (u(t-\varepsilon ,0))\ne 0\}}|\sigma (u(t-\varepsilon ,0))|^n\int _{\mathbb {R}}\big |\varDelta _{-\sigma (u(t-\varepsilon ,0))^{-1}h}^np_{\varepsilon }(y)\big |dy\bigg ]. \end{aligned}$$

On the set \(\{\sigma (u(t-\varepsilon ,0))\ne 0\}\), the integral in the last term can be bounded as follows,

$$\begin{aligned} \int _{\mathbb {R}}\big |\varDelta _{-\sigma (u(t-\varepsilon ,0))^{-1}h}^np_{\varepsilon }(y)\big |dy&\le C_n |\sigma (u(t-\varepsilon ,0))|^{-n}|h|^n \Vert p^{(n)}_{\varepsilon }\Vert _{L^1({\mathbb {R}})} \\&\le C_n |\sigma (u(t-\varepsilon ,0))|^{-n}|h|^n g(\varepsilon )^{-n/2}, \end{aligned}$$

where we have used the property \(\Vert \varDelta _h^nf\Vert _{L^1({\mathbb {R}})}\le C_n |h|^n \Vert f^{(n)}\Vert _{L^1({\mathbb {R}})}\), and also that \(\Vert p^{(n)}_{\varepsilon }\Vert _{L^1({\mathbb {R}})} = c_n(g(\varepsilon ))^{-n/2}\le C_n\varepsilon ^{-n\gamma /2}\), since \(g(\varepsilon )\ge c\varepsilon ^\gamma \) (see e.g. [20, Lemma 2.3]).
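Both inequalities used here are easy to check numerically for a centered Gaussian density; in the sketch below (our illustration; the variance v, the step h and the order n are arbitrary choices), the constant in the first inequality can even be taken equal to 1:

```python
import numpy as np
from math import comb

def gauss_pdf(y, v):
    """Centered Gaussian density with variance v."""
    return np.exp(-y ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def delta_n(f, y, h, n):
    """n-th difference, binomial formula."""
    return sum((-1) ** (n - j) * comb(n, j) * f(y + j * h) for j in range(n + 1))

v, h, n = 0.25, 0.1, 2
y = np.linspace(-12.0, 12.0, 48001)
dy = y[1] - y[0]

p = lambda t: gauss_pdf(t, v)
p2 = lambda t: ((t ** 2 - v) / v ** 2) * gauss_pdf(t, v)   # p'' for the Gaussian

# ||Delta_h^n p||_{L^1} <= |h|^n * ||p^{(n)}||_{L^1}
lhs = np.sum(np.abs(delta_n(p, y, h, n))) * dy
deriv_l1 = np.sum(np.abs(p2(y))) * dy
assert lhs <= h ** n * deriv_l1 * (1 + 1e-6)

# scaling ||p''||_{L^1} proportional to v^{-1}, i.e. (variance)^{-n/2} with n = 2
l1 = lambda vv: np.sum(np.abs(((y ** 2 - vv) / vv ** 2) * gauss_pdf(y, vv))) * dy
assert abs(l1(0.25) / l1(1.0) - 4.0) < 1e-3
```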

Substituting this into the previous inequality yields

$$\begin{aligned} \big |{\mathbb {E}}\big [|\sigma (u(t-\varepsilon ,0))|^n\varDelta _h^n\phi (u^\varepsilon (t,0))\big ]\big | \le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}|h|^n \varepsilon ^{-n\gamma /2}, \end{aligned}$$
(12)

because \(\Vert \phi \Vert _{\infty }\le \Vert \phi \Vert _{\mathscr {C}^\alpha _b}\).

With (7), (10)–(12), we have

$$\begin{aligned}&\bigg |\int _{\mathbb {R}}\varDelta _h^n\phi (y)\kappa (dy)\bigg |\nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\Big (|h|^\alpha \varepsilon ^{\delta /2} + \varepsilon ^{\delta \alpha /2}\big (g_1(\varepsilon ) + g_2(\varepsilon )\big )^{\alpha /2} + |h|^n \varepsilon ^{-n\gamma /2}\Big )\nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\Big (|h|^\alpha \varepsilon ^{\delta /2} + \varepsilon ^{(\delta +\gamma _1)\alpha /2} + \varepsilon ^{(\delta +\gamma _2)\alpha /2} + |h|^n \varepsilon ^{-n\gamma /2}\Big )\nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\Big (|h|^\alpha \varepsilon ^{\delta /2} + \varepsilon ^{\gamma \bar{\gamma }\alpha /2} + |h|^n \varepsilon ^{-n\gamma /2}\Big ) \end{aligned}$$
(13)

Let \(\varepsilon = \tfrac{1}{2}t|h|^\rho \), with \(\rho = 2n/(\gamma n+\gamma \bar{\gamma }\alpha )\); note that \(\varepsilon <t\) because \(|h|\le 1\). With this choice, the right-hand side of (13) is bounded by

$$\begin{aligned} C_n\Vert \phi \Vert _{\mathscr {C}^\alpha _b}\Big (|h|^{\alpha + \frac{n\delta }{\gamma (n+\bar{\gamma }\alpha )}} + |h|^{\frac{n\bar{\gamma }\alpha }{n+\bar{\gamma }\alpha }}\Big ). \end{aligned}$$

Since \(\gamma _1\le \gamma \), by the definition of \(\bar{\gamma }\), we obtain

$$\begin{aligned} \bar{\gamma }-1 = \frac{\min \{\gamma _1,\gamma _2\}}{\gamma } + \frac{\delta }{\gamma } - 1 \le \frac{\delta }{\gamma }. \end{aligned}$$

Fix \(\zeta \in (0,\bar{\gamma }-1)\). We can choose \(n\in {\mathbb {N}}\) sufficiently large and \(\alpha \) sufficiently close to 1, such that

$$\begin{aligned} \alpha + \frac{n\delta }{\gamma (n+\bar{\gamma }\alpha )} > \zeta + \alpha \quad \text {and}\quad \frac{n\bar{\gamma }\alpha }{n+\bar{\gamma }\alpha }>\zeta +\alpha . \end{aligned}$$

Then (2) holds with \(a=\zeta +\alpha \), and Lemma 1 implies that \(\kappa \) has a density belonging to \(B^{\zeta }_{1,\infty }({\mathbb {R}})\). This finishes the proof of the theorem. \(\square \)
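The final choice of n and \(\alpha \) is pure bookkeeping and can be checked mechanically. In the sketch below, the exponents \(\gamma ,\gamma _1,\gamma _2,\delta \) are hypothetical sample values (ours, not taken from any example in the paper):

```python
# hypothetical sample exponents, chosen only so that bar_gamma > 1
gamma, gamma1, gamma2, delta = 1.0, 1.0, 1.0, 0.5
bar_gamma = (min(gamma1, gamma2) + delta) / gamma       # here 1.5
zeta = 0.4                                              # any value in (0, bar_gamma - 1)
assert 0.0 < zeta < bar_gamma - 1.0

def exponents(n, alpha):
    """The two |h|-exponents obtained after substituting eps = t|h|^rho / 2."""
    e1 = alpha + n * delta / (gamma * (n + bar_gamma * alpha))
    e2 = n * bar_gamma * alpha / (n + bar_gamma * alpha)
    return e1, e2

# search for n large and alpha close to 1 with both exponents above zeta + alpha
found = None
for n in range(1, 500):
    for alpha in (0.90, 0.95, 0.99):
        e1, e2 = exponents(n, alpha)
        if e1 > zeta + alpha and e2 > zeta + alpha:
            found = (n, alpha)
            break
    if found:
        break

assert found is not None   # Lemma 1 then applies with a = zeta + alpha
```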

Remark 1

  1. (i)

    Assume that \(\sigma \) is bounded from above but not necessarily bounded away from zero. Following the lines of the proof of Theorem 1, we can also show the existence of a density without assuming the existence of moments of \(u(t,x)\) of order higher than 2. This applies in particular to SPDEs whose fundamental solutions are general distributions, as treated in [8], extending the result on absolute continuity given in [20, Theorem 2.1].

  2. (ii)

    Unlike [20, Theorem 2.1], the conclusion on the space to which the density belongs is less precise. We do not know whether the order \(\bar{\gamma }-1\) is optimal.

3 Ambit Random Fields

In this section we prove the absolute continuity of the law of a random variable generated by an ambit field at a fixed point \((t,x)\in [0,T]\times {{\mathbb {R}}^d}\). The methodology we use is very much inspired by [10]. Ambit fields were introduced in [5] with the aim of studying turbulent flows; see also the survey papers [6, 15]. They are stochastic processes indexed by \((t,x)\in [0,T]\times {{\mathbb {R}}^d}\) of the form

$$\begin{aligned} X(t,x) = x_0 + \iint _{A_t(x)} g(t,s;x,y)\sigma (s,y)L(ds,dy) + \iint _{B_t(x)} h(t,s;x,y)b(s,y)dyds, \end{aligned}$$
(14)

where \(x_0\in {\mathbb {R}}\), g, h are deterministic functions subject to some integrability and regularity conditions, \(\sigma ,b\) are stochastic processes, and \(A_t(x),B_t(x)\subseteq [0,t]\times {{\mathbb {R}}^d}\) are measurable sets, called ambit sets. The stochastic process L is a Lévy basis on the Borel sets \(\mathscr {B}([0,T]\times {{\mathbb {R}}^d})\). More precisely, for any \(B\in \mathscr {B}([0,T]\times {{\mathbb {R}}^d})\) the random variable L(B) has an infinitely divisible distribution; given disjoint sets \(B_1,\ldots ,B_k\in \mathscr {B}([0,T]\times {{\mathbb {R}}^d})\), the random variables \(L(B_1),\ldots ,L(B_k)\) are independent; and for any sequence of disjoint sets \((A_j)_{j\in {\mathbb {N}}}\subset \mathscr {B}([0,T]\times {{\mathbb {R}}^d})\),

$$\begin{aligned} L(\cup _{j=1}^\infty A_j) = \sum _{j=1}^\infty L(A_j), \quad {\mathbb {P}}\text {-almost surely}. \end{aligned}$$

Throughout the section, we will consider the natural filtration generated by L, i.e. for all \(t\in [0,T]\),

$$\begin{aligned} \mathscr {F}_t {:}= \sigma (L(A); A\in \mathscr {B}([0,t]\times {{\mathbb {R}}^d}), \lambda (A)<\infty ). \end{aligned}$$

For deterministic integrands, the stochastic integral in (14) is defined as in [18]. In the more general setting of (14), one can use the theory developed in [7]. We refer the reader to these references for the specific required hypotheses on g and \(\sigma \).

The class of Lévy bases considered in this section is described by infinitely divisible distributions of pure-jump, stable-like type. More explicitly, as in [18, Proposition 2.4], we assume that for any \(B\in \mathscr {B}([0,T]\times {{\mathbb {R}}^d})\),

$$\begin{aligned} \log {\mathbb {E}}\big [\exp ({\mathrm {i}}\xi L(B))\big ] = \int _{B} \lambda (ds,dy)\int _{\mathbb {R}}\rho _{s,y}(dz) \big (\exp ({\mathrm {i}}\xi z) - 1 - {\mathrm {i}}\xi z1_{ [-1,1] }(z)\big ), \end{aligned}$$

where \(\lambda \) is termed the control measure on the state space and \((\rho _{s,y})_{(s,y)\in [0,T]\times {{\mathbb {R}}^d}}\) is a family of Lévy measures satisfying

$$\begin{aligned} \int _{\mathbb {R}}\min \{1,z^2\}\rho _{s,y}(dz) = 1,\ \lambda -{\text {a.s.}} \end{aligned}$$

Throughout this section, we will consider the following set of assumptions on \((\rho _{s,y})_{(s,y)\in [0,T]\times {{\mathbb {R}}^d}}\) and on \(\lambda \).

Assumptions 1

Fix \((t,x)\in [0,T]\times {{\mathbb {R}}^d}\) and \(\alpha \in (0,2)\), and for any \(a>0\) let \(\mathscr {O}_a{:}=(-a,a)\). Then,

  1. (i)

    for all \(\beta \in [0,\alpha )\) there exists a nonnegative function \(C_\beta \in L^1(\lambda )\) such that for all \(a>0\),

    $$\int _{(\mathscr {O}_a)^c} |z|^\beta \rho _{s,y}(dz)\le C_\beta (s,y)a^{\beta -\alpha }, \ \lambda -{\text {a.s.}};$$
  2. (ii)

    there exists a non-negative function \(\bar{C}\in L^1(\lambda )\) such that for all \(a>0\),

    $$\int _{\mathscr {O}_a}|z|^2\rho _{s,y}(dz)\le \bar{C}(s,y)a^{2-\alpha }, \ \lambda -{\text {a.s.}};$$
  3. (iii)

    there exists a nonnegative function \(c\in L^1(\lambda )\) and \(r>0\) such that for all \(\xi \in {\mathbb {R}}\) with \(|\xi |>r\),

    $$\begin{aligned} \int _{\mathbb {R}}\big (1-\cos (\xi z)\big )\rho _{s,y}(dz) \ge c(s,y)|\xi |^\alpha , \ \lambda -{\text {a.s.}} \end{aligned}$$

Example 1

Let

$$\begin{aligned} \rho _{s,y}(dz) = c_1(s,y)1_{ \{z>0\} }z^{-\alpha -1}dz + c_{-1}(s,y)1_{ \{z<0\} }|z|^{-\alpha -1}dz, \end{aligned}$$

with \((s,y)\in [0,T]\times {{\mathbb {R}}^d}\), and assume that \(c_1, c_{-1}\in L^1(\lambda )\). This corresponds to stable distributions (see [18, Lemma 3.7]). One can check that Assumptions 1 are satisfied with \(C_\beta \) and \(\bar{C}\) proportional to \(c_1 \vee c_{-1}\), and c proportional to \(c_1\wedge c_{-1}\).
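As a quick sanity check of Assumptions 1(i)–(ii) in this example (our computation; the constants differ from \(c_1\vee c_{-1}\) only by factors depending on \(\alpha \) and \(\beta \)): for \(\beta \in [0,\alpha )\) and \(a>0\),

$$\begin{aligned} \int _{(\mathscr {O}_a)^c}|z|^\beta \rho _{s,y}(dz) = \big (c_1(s,y)+c_{-1}(s,y)\big )\int _a^\infty z^{\beta -\alpha -1}dz = \frac{c_1(s,y)+c_{-1}(s,y)}{\alpha -\beta }\,a^{\beta -\alpha }, \end{aligned}$$

and in the same way \(\int _{\mathscr {O}_a}|z|^2\rho _{s,y}(dz) = \frac{c_1(s,y)+c_{-1}(s,y)}{2-\alpha }\,a^{2-\alpha }\), so (i) and (ii) hold with the stated choices up to multiplicative constants.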

Assumptions 2

(H1) We assume that the deterministic functions \(g,h{:}\{0\le s<t\le T\}\times {{\mathbb {R}}^d}\times {{\mathbb {R}}^d}\rightarrow {\mathbb {R}}\) and the stochastic processes \((\sigma (s,y);(s,y)\in [0,T]\times {{\mathbb {R}}^d})\), \((b(s,y);(s,y)\in [0,T]\times {{\mathbb {R}}^d})\) are such that the integrals on the right-hand side of (14) are well-defined (see the conditions in [18, Theorem 2.7] and [7, Theorem 4.1]). We also suppose that for any \(y\in {{\mathbb {R}}^d}\), \(p\in [2,\infty )\) we have \(\sup _{s\in [0,T]}{\mathbb {E}}[|\sigma (s,y)|^p]<\infty \).

(H2) Let \(\alpha \) be as in Assumptions 1. There exist \(\delta _1,\delta _2>0\) such that, for some \(\gamma \in (\alpha ,2]\), and for all \(\beta \in [1,\alpha )\) if \(\alpha \ge 1\) (respectively for \(\beta =1\) if \(\alpha <1\)),

$$\begin{aligned} {\mathbb {E}}\big [|\sigma (t,x)-\sigma (s,y)|^\gamma \big ]&\le C_\gamma (|t-s|^{\delta _1\gamma } + |x-y|^{\delta _2\gamma }), \end{aligned}$$
(15)
$$\begin{aligned} {\mathbb {E}}\big [|b(t,x)-b(s,y)|^\beta \big ]&\le C_\beta (|t-s|^{\delta _1\beta } + |x-y|^{\delta _2\beta }), \end{aligned}$$
(16)

for every \((t,x), (s,y)\in [0,T]\times {{\mathbb {R}}^d}\), and some \(C_\gamma \), \(C_\beta >0\).

(H3) \(|\sigma (t,x)|>0\), \({\mathbb {P}}\)-a.s.

(H4) Let \(\alpha \), \(\bar{C}\), \(C_\beta \) and c be as in Assumptions 1 and \(0<\varepsilon <t\). We suppose that

$$\begin{aligned}&\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y)\bar{c}(s,y) |g(t,s,x,y)|^\alpha \lambda (ds,dy)<\infty , \end{aligned}$$
(17)
$$\begin{aligned} c\varepsilon ^{\gamma _0} \le&\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) c(s,y) |g(t,s,x,y)|^\alpha \lambda (ds,dy)<\infty , \end{aligned}$$
(18)

where in (17), \(\bar{c}(s,y)=\bar{C}(s,y)\vee C_0(s,y)\), and (18) holds for some \(\gamma _0>0\).

Moreover, there exist constants \(C,\gamma _1,\gamma _2>0\) and \(\gamma >\alpha \) such that

$$\begin{aligned} \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) \tilde{C}_\beta (s,y) |g(t,s,x,y)|^\gamma |t-\varepsilon -s|^{\delta _1\gamma }\lambda (ds,dy)&\le C \varepsilon ^{\gamma \gamma _1}, \end{aligned}$$
(19)
$$\begin{aligned} \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) \tilde{C}_\beta (s,y) |g(t,s,x,y)|^\gamma |x-y|^{\delta _2\gamma }\lambda (ds,dy)&\le C\varepsilon ^{\gamma \gamma _2}. \end{aligned}$$
(20)

We also assume that there exist constants \(C,\gamma _3,\gamma _4>0\) such that for all \(\beta \in [1,\alpha )\), if \(\alpha \ge 1\), or for \(\beta =1\), if \(\alpha <1\),

$$\begin{aligned} \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s,x,y)|^\beta |t-\varepsilon -s|^{\delta _1\beta } dyds&\le C\varepsilon ^{\beta \gamma _3}, \end{aligned}$$
(21)
$$\begin{aligned} \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s,x,y)|^\beta |x-y|^{\delta _2\beta } dyds&\le C\varepsilon ^{\beta \gamma _4}, \end{aligned}$$
(22)

where \(\tilde{C}_\beta \) is defined as in Lemma 3.

(H5) The set \(A_t(x)\) “reaches t”, i.e. there is no \(\varepsilon >0\) satisfying \(A_t(x)\subseteq [0,t-\varepsilon ]\times {{\mathbb {R}}^d}\).

Remark 2

  1. (i)

    By the conditions in (H4), the stochastic integral in (14) with respect to the Lévy basis is well-defined as a random variable in \(L^\beta (\varOmega )\) for any \(\beta \in (0,\alpha )\) (see Lemma 3).

  2. (ii)

    One can easily derive some sufficient conditions for the assumptions in (H4). Indeed, suppose that

    $$\begin{aligned} \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) \tilde{C}_\beta (s,y) |g(t,s,x,y)|^\gamma \lambda (ds,dy) \le C\varepsilon ^{\gamma \bar{\gamma }_1}, \end{aligned}$$

    then (19) holds with \(\gamma _1 = \bar{\gamma }_1+\delta _1\). If in addition, \(A_t(x)\) consists of points \((s,y)\in [0,t]\times {{\mathbb {R}}^d}\) such that \(|x-y| \le |t-s|^\zeta \), for any \(s\in [t-\varepsilon ,t]\), and for some \(\zeta >0\), then (20) holds with \(\gamma _2 = \bar{\gamma }_1+\delta _2\zeta \). Similarly, one can derive sufficient conditions for (21), (22).

  3. (iii)

    The assumption (H5) is used in the proof of Theorem 2, where the law of X(tx) is compared with that of an approximation \(X^\varepsilon (t,x)\), which is infinitely divisible. This distribution is well-defined only if \(A_t(x)\) is non-empty in the region \([t-\varepsilon ,t]\times {{\mathbb {R}}^d}\).

  4. (iv)

    Possibly, for particular examples of ambit sets \(A_t(x)\), functions gh, and stochastic processes \(\sigma , b\), the Assumptions 2 can be relaxed. However, we prefer to keep this formulation.

We can now state the main theorem of this section.

Theorem 2

We suppose that Assumptions 1 and 2 are satisfied and that

$$\begin{aligned} \frac{\min \{\gamma _1,\gamma _2,\gamma _3,\gamma _4\}}{\gamma _0} > \frac{1}{\alpha }. \end{aligned}$$
(23)

Fix \((t,x)\in (0,T]\times {{\mathbb {R}}^d}\). Then the law of the random variable \(X(t,x)\) defined by (14) is absolutely continuous with respect to the Lebesgue measure.

3.1 Two Auxiliary Results

In this subsection we derive two auxiliary lemmas. They play a role similar to that of [10, Sects. 5.1 and 5.2], but our formulation is more general.

Lemma 2

Let \(\rho =(\rho _{s,y})_{(s,y)\in [0,T]\times {{\mathbb {R}}^d}}\) be a family of Lévy measures and let \(\lambda \) be a control measure. Suppose that Assumption 1(ii) holds. Then for all \(\gamma \in (\alpha ,2)\) and all \(a\in (0,\infty )\)

$$\begin{aligned} \int _{|z|\le a} |z|^\gamma \rho _{s,y}(dz) \le C_{\gamma ,\alpha }\bar{C}(s,y)a^{\gamma -\alpha }, \ \lambda - a.s., \end{aligned}$$

where \(C_{\gamma ,\alpha } = 2^{-\gamma +2}\frac{2^{2-\alpha }}{2^{\gamma -\alpha }-1}\). Hence

$$\begin{aligned} \int _0^T \int _{{{\mathbb {R}}^d}}\int _{|z|\le a} |z|^\gamma \rho _{s,y}(dz)\lambda (ds,dy) \le C a^{\gamma -\alpha }. \end{aligned}$$

Proof

The result is obtained by the following computations:

$$\begin{aligned} \int _{|z|\le a} |z|^\gamma \rho _{s,y}(dz)&= \sum _{n=0}^\infty \int _{\{a2^{-n-1}<|z|\le a2^{-n}\}} |z|^\gamma \rho _{s,y}(dz) \\&\le \sum _{n=0}^\infty (a2^{-n-1})^{\gamma -2} \int _{\{|z|\le a2^{-n}\}} |z|^2 \rho _{s,y}(dz) \\&\le \bar{C}(s,y)\sum _{n=0}^\infty (a2^{-n-1})^{\gamma -2} (a2^{-n})^{2-\alpha } \\&\le C_{\gamma ,\alpha }\bar{C}(s,y)a^{\gamma -\alpha }. \end{aligned}$$

\(\square \)
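As a consistency check (not needed for the proof), in the model case of a symmetric \(\alpha \)-stable Lévy measure \(\rho _{s,y}(dz)=|z|^{-1-\alpha }\,dz\) — for which the bound in Assumption 1(ii) holds with \(\bar{C}(s,y)\) constant — both integrals in the lemma can be computed in closed form:

```latex
% Symmetric \alpha-stable model case, \alpha<\gamma\le 2:
\int_{|z|\le a}|z|^{\gamma}\,|z|^{-1-\alpha}\,dz
  = 2\int_0^a r^{\gamma-\alpha-1}\,dr
  = \frac{2}{\gamma-\alpha}\,a^{\gamma-\alpha},
\qquad
\int_{|z|\le a}|z|^{2}\,|z|^{-1-\alpha}\,dz
  = \frac{2}{2-\alpha}\,a^{2-\alpha}.
```

In particular, the rate \(a^{\gamma -\alpha }\) appearing in Lemma 2 is sharp in this case.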

The next lemma provides important bounds on the moments of the stochastic integrals. It plays the role of [10, Lemma 5.2] in the setting of this article.

Lemma 3

Assume that L is a Lévy basis with characteristic exponent satisfying Assumptions 1 for some \(\alpha \in (0,2)\). Let \(H=(H(t,x))_{(t,x)\in [0,T]\times {{\mathbb {R}}^d}}\) be a predictable process. Then for all \(0<\beta <\alpha <\gamma \le 2\) and for all \(0\le s< t\le s+1\),

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) g(t,r,x,y)H(r,y)L(dr,dy)\bigg |^\beta \bigg ] \nonumber \\&\le C_{\alpha ,\beta ,\gamma } |t-s|^{\beta /\alpha -1}\nonumber \\&\qquad \times \,\bigg (\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) \tilde{C}_\beta (r,y) |g(t,r,x,y)|^\gamma {\mathbb {E}}\big [|H(r,y)|^\gamma \big ]\lambda (dr,dy)\bigg )^{\beta /\gamma }, \end{aligned}$$
(24)

where \(\tilde{C}_{\beta }(r,y)\) is the maximum of \(\bar{C}(r,y)\) and \((C_\beta +C_1)(r,y)\) (see Assumptions 1 for the definitions).

Proof

There exists a Poisson random measure N such that for all \(A\in \mathscr {B}({{\mathbb {R}}^d})\),

$$\begin{aligned} L([s,t]\times A) = \int _s^t\int _A\int _{|z|\le 1} z \tilde{N}(dr,dy,dz) + \int _s^t\int _A\int _{|z|> 1}zN(dr,dy,dz) \end{aligned}$$

(see e.g. [14, Theorem 4.6]), where \(\tilde{N}\) stands for the compensated Poisson random measure \(\tilde{N}(ds,dy,dz) = N(ds,dy,dz) - \rho _{s,y}(dz)\lambda (ds,dy)\). Then we can write

$$\begin{aligned} {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) g(t,r,x,y)H(r,y)L(dr,dy)\bigg |^\beta \bigg ] \le C_\beta \big (I^1_{s,t} + I^2_{s,t} + I^3_{s,t}\big ), \end{aligned}$$
(25)

with

$$\begin{aligned} I^1_{s,t}&{:}= {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) \int _{|z|\le (t-s)^{1/\alpha }} zg(t,r,x,y)H(r,y)\tilde{N}(dr,dy,dz)\bigg |^\beta \bigg ] \\ I^2_{s,t}&{:}= {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) \int _{(t-s)^{1/\alpha } < |z|\le 1} zg(t,r,x,y)H(r,y)\tilde{N}(dr,dy,dz)\bigg |^\beta \bigg ] \\ I^3_{s,t}&{:}= {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) \int _{|z|> 1} zg(t,r,x,y)H(r,y)N(dr,dy,dz)\bigg |^\beta \bigg ] \end{aligned}$$

To give an upper bound for the first term, we apply first Burkholder’s inequality, then the subadditivity of the function \(x\mapsto x^{\gamma /2}\) (since the integral is actually a sum), Jensen’s inequality, the isometry of Poisson random measures, and Lemma 2. We obtain

$$\begin{aligned} I^1_{s,t}&\le C_\beta {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|\le (t-s)^{1/\alpha }}\\&\qquad \times \,|z|^2 |g(t,r,x,y)|^2 |H(r,y)|^2 N(dr,dy,dz)\bigg |^{\beta /2}\bigg ] \\&\le C_\beta {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|\le (t-s)^{1/\alpha }}\\&\qquad \times \,|z|^\gamma |g(t,r,x,y)|^\gamma |H(r,y)|^\gamma N(dr,dy,dz)\bigg |^{\beta /\gamma }\bigg ] \\&\le C_\beta \bigg ({\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|\le (t-s)^{1/\alpha }}\\&\qquad \times \,|z|^\gamma |g(t,r,x,y)|^\gamma |H(r,y)|^\gamma N(dr,dy,dz)\bigg ]\bigg )^{\beta /\gamma } \\&= C_\beta \bigg ({\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\bigg (\int _{|z|\le (t-s)^{1/\alpha }} |z|^\gamma \rho _{r,y}(dz)\bigg )\\&\qquad \times \,|g(t,r,x,y)|^\gamma |H(r,y)|^\gamma \lambda (dr,dy)\bigg ]\bigg )^{\beta /\gamma } \\&\le C_{\beta }\big (C_{\gamma ,\alpha } (t-s)^{(\gamma -\alpha )/\alpha }\big )^{\beta /\gamma }\\&\qquad \times \,\bigg (\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) \bar{C}(r,y) |g(t,r,x,y)|^\gamma {\mathbb {E}}\big [|H(r,y)|^\gamma \big ]\lambda (dr,dy)\bigg )^{\beta /\gamma }. \end{aligned}$$

Notice that the exponent \((\gamma -\alpha )/\alpha \) is positive.

With similar arguments but applying now Assumption 1(i), the second term in (25) is bounded by

$$\begin{aligned} I^2_{s,t}&\le C_\beta {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) \int _{(t-s)^{1/\alpha } < |z|\le 1} |z|^2\\&\qquad \times \,|g(t,r,x,y)|^2 |H(r,y)|^2 N(dr,dy,dz)\bigg |^{\beta /2}\bigg ] \\&\le C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{(t-s)^{1/\alpha } < |z|\le 1} |z|^\beta \\&\qquad \times \,|g(t,r,x,y)|^\beta |H(r,y)|^\beta N(dr,dy,dz)\bigg ] \\&= C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\bigg (\int _{(t-s)^{1/\alpha } < |z|\le 1} |z|^\beta \rho _{r,y}(dz)\bigg )\\&\qquad \times \,|g(t,r,x,y)|^\beta |H(r,y)|^\beta \lambda (dr,dy)\bigg ] \\&\le C_{\beta }(t-s)^{(\beta -\alpha )/\alpha } {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) C_\beta (r,y)\\&\qquad \times \,|g(t,r,x,y)|^\beta |H(r,y)|^\beta \lambda (dr,dy)\bigg ] \\&\le C_{\beta ,\gamma }(t-s)^{(\beta -\alpha )/\alpha } \bigg (\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) C_\beta (r,y)\\&\qquad \times \,|g(t,r,x,y)|^\gamma {\mathbb {E}}\big [|H(r,y)|^\gamma \big ] \lambda (dr,dy)\bigg )^{\beta /\gamma }, \end{aligned}$$

where in the last step we have used Hölder’s inequality with respect to the finite measure \(C_\beta (r,y)\lambda (dr,dy)\).

Finally, we bound the third term in (25). Suppose first that \(\beta \le 1\). Using the subadditivity of \(x\mapsto x^\beta \) and Assumption 1(i) yields

$$\begin{aligned} I^3_{s,t}&\le C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|> 1} |z|^\beta |g(t,r,x,y)|^\beta |H(r,y)|^\beta N(dr,dy,dz)\bigg ] \\&\le C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\bigg (\int _{|z|> 1} |z|^\beta \rho _{r,y}(dz)\bigg )\\&\qquad \times \,|g(t,r,x,y)|^\beta |H(r,y)|^\beta \lambda (dr,dy)\bigg ] \\&\le C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) C_\beta (r,y) |g(t,r,x,y)|^\beta |H(r,y)|^\beta \lambda (dr,dy)\bigg ] \\&\le C_\beta \bigg (\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) C_\beta (r,y) |g(t,r,x,y)|^\gamma {\mathbb {E}}\big [|H(r,y)|^\gamma \big ] \lambda (dr,dy)\bigg )^{\beta /\gamma }, \end{aligned}$$

where in the last step we have used Hölder’s inequality with respect to the finite measure \(C_\beta (r,y)\lambda (dr,dy)\).

Suppose now that \(\beta >1\) (which implies that \(\alpha >1\)). We apply Hölder’s inequality with respect to the finite measure \(C_1(r,y)\lambda (dr,dy)\) and Assumption 1(i):

$$\begin{aligned} I^3_{s,t}&\le 2^{\beta -1}{\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|> 1} zg(t,r,x,y)H(r,y)\tilde{N}(dr,dy,dz)\bigg |^\beta \bigg ] \\&\quad +\,2^{\beta -1}{\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\left( \int _{|z|> 1} |z| \rho _{r,y}(dz)\right) \\&\qquad \times \,g(t,r,x,y)H(r,y)\lambda (dr,dy)\bigg |^\beta \bigg ] \\&\le C_\beta {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|> 1} |z|^2 |g(t,r,x,y)|^2\\&\qquad \times \,|H(r,y)|^2 N(dr,dy,dz)\bigg |^{\beta /2}\bigg ] \\&\quad +\,C_\beta {\mathbb {E}}\bigg [\bigg |\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) C_1(r,y) |g(t,r,x,y)||H(r,y)|\lambda (dr,dy)\bigg |^\beta \bigg ] \\&\le C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y)\int _{|z|> 1} |z|^\beta |g(t,r,x,y)|^\beta |H(r,y)|^\beta N(dr,dy,dz)\bigg ] \\&\quad +\,C_\beta \bigg (\int _s^t\int _{{\mathbb {R}}^d}C_1(r,y)\lambda (dr,dy)\bigg )^{\beta -1}\\&\qquad \times \,\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) C_1(r,y) |g(t,r,x,y)|^\beta {\mathbb {E}}\big [|H(r,y)|^\beta \big ]\lambda (dr,dy) \\&\le C_\beta {\mathbb {E}}\bigg [\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) (C_1(r,y)+C_\beta (r,y))\\&\qquad \times \,|g(t,r,x,y)|^\beta |H(r,y)|^\beta \lambda (dr,dy)\bigg ] \\&\le C_\beta \bigg (\int _s^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(r,y) (C_1(r,y)+C_\beta (r,y))\\&\qquad \times \,|g(t,r,x,y)|^\gamma {\mathbb {E}}\big [|H(r,y)|^\gamma \big ] \lambda (dr,dy)\bigg )^{\beta /\gamma }, \end{aligned}$$

where in the last step we have used Hölder’s inequality with respect to the finite measure \((C_1(r,y) + C_\beta (r,y))\lambda (dr,dy)\). Since \(0<t-s\le 1\) and \(0<\beta <\alpha \), the estimates on the terms \(I^i_{s,t}\), \(i=1,2,3\), imply (24). \(\square \)
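To make the concluding step explicit (a short check, using only the standing conventions above): since \(0<t-s\le 1\), a power \((t-s)^e\) decreases as \(e\) grows, and the exponents of \(t-s\) obtained in the three estimates compare as

```latex
% Exponents of (t-s) in I^1, I^3 and I^2 respectively,
% for 0<\beta<\alpha<\gamma\le 2:
\frac{(\gamma-\alpha)\beta}{\alpha\gamma}\;\ge\;0
  \;>\;\frac{\beta}{\alpha}-1=\frac{\beta-\alpha}{\alpha},
\qquad\text{hence}\qquad
(t-s)^{\frac{(\gamma-\alpha)\beta}{\alpha\gamma}}
  \;\le\;(t-s)^{\frac{\beta}{\alpha}-1}
\quad\text{and}\quad
(t-s)^{0}\;\le\;(t-s)^{\frac{\beta}{\alpha}-1}.
```

Since, moreover, \(\tilde{C}_\beta (r,y)\) dominates both \(\bar{C}(r,y)\) and \((C_\beta +C_1)(r,y)\), each term \(I^i_{s,t}\) is indeed bounded by the right-hand side of (24).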

3.2 Existence of Density

With the help of the two lemmas in the previous subsection, we can now give the proof of Theorem 2. Fix \((t,x)\in (0,T]\times {{\mathbb {R}}^d}\) and let \(0<\varepsilon <t\) be a constant to be determined later. We define an approximation of the ambit field \(X(t,x)\) by

$$\begin{aligned} X^\varepsilon (t,x) = U^\varepsilon (t,x) + \sigma (t-\varepsilon ,x)\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y)g(t,s;x,y)L(ds,dy), \end{aligned}$$
(26)

where

$$\begin{aligned} U^\varepsilon (t,x) = x_0&+ \int _0^{t-\varepsilon }\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) g(t,s;x,y)\sigma (s,y)L(ds,dy) \\&+ \int _0^{t-\varepsilon }\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) h(t,s;x,y)b(s,y)dyds \\&+\,b(t-\varepsilon ,x)\int _0^{t-\varepsilon }\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) h(t,s;x,y)dyds \end{aligned}$$

Note that \(U^\varepsilon (t,x)\) is \(\mathscr {F}_{t-\varepsilon }\)-measurable.

The stochastic integral in (26) is well defined in the sense of [18] and is a random variable having an infinitely divisible distribution. Moreover, the real part of its characteristic exponent is given by

$$\begin{aligned} \mathfrak {R}\big (\log {\mathbb {E}}\big [\exp (i\xi X)\big ]\big ) = \int _{\mathbb {R}}\big (1-\cos (\xi z)\big )\rho _f(dz), \end{aligned}$$

where

$$\begin{aligned} \rho _f(B) = \int _{[0,T]\times {{\mathbb {R}}^d}}\int _{\mathbb {R}}1_{ \{zf(s,y)\in B\backslash \{0\}\} }\rho _{s,y}(dz)\lambda (ds,dy). \end{aligned}$$

In the setting of this section, the next lemma plays a similar role as [20, Lemma 2.3]. It generalizes [10, Lemma 3.3] to the case of Lévy bases as integrators.

Lemma 4

Suppose that Assumptions 1, along with (17) and (18), hold. Then the random variable

$$\begin{aligned} X{:}= \int _{t-\varepsilon }^t \int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) g(t,s,x,y)L(ds,dy) \end{aligned}$$

has a \(\mathscr {C}^\infty \)-density \(p_{t,x,\varepsilon }\), and for all \(n\in {\mathbb {N}}\) there exists a finite constant \(C_{n,t,x}>0\) such that \(\Vert p_{t,x,\varepsilon }^{(n)}\Vert _{L^1({\mathbb {R}})}\le C_{n,t,x} (\varepsilon ^{\gamma _0}\wedge 1)^{-n/\alpha }\).

Proof

We follow the proof of [10, Lemma 3.3], which builds on the methods of [21]. First we show that for \(|\xi |\) sufficiently large, and every \(t\in (0,T]\),

$$\begin{aligned} c_{t,x,\varepsilon }|\xi |^\alpha \le \mathfrak {R}\varPsi _{X}(\xi ) \le C|\xi |^\alpha . \end{aligned}$$
(27)

Indeed, let r be as in Assumption 1(iii). Then, for \(|\xi | > r\), we have

$$\begin{aligned} \mathfrak {R}\varPsi _{X}(\xi )&= \int _{\mathbb {R}}\big (1-\cos (\xi z)\big )\rho _f(dz)\nonumber \\&= \int _{t-\varepsilon }^t \int _{{{\mathbb {R}}^d}}\lambda (ds,dy) \int _{\mathbb {R}}\big (1-\cos (\xi z 1_{ A_t(x) }(s,y) g(t,s,x,y))\big )\rho _{s,y}(dz)\nonumber \\&\ge |\xi |^\alpha \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) |g(t,s,x,y)|^\alpha c(s,y)\lambda (ds,dy)\nonumber \\&\ge c_{t,\varepsilon ,x} \varepsilon ^{\gamma _0}|\xi |^\alpha . \end{aligned}$$
(28)

This proves the lower bound in (27) for \(|\xi | > r\).

In order to prove the upper bound in (27), we set

$$\begin{aligned} a_{\xi ,t,s,x,y} {:}= |\xi |1_{ A_t(x) }(s,y)|g(t,s,x,y)| \end{aligned}$$

and use the inequality \((1-\cos (x))\le 2(x^2\wedge 1)\) to obtain

$$\begin{aligned} \mathfrak {R}\varPsi _X(\xi )&= \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\lambda (ds,dy) \int _{\mathbb {R}}\big (1-\cos (z\xi 1_{ A_t(x) }(s,y)g(t,s,x,y))\big )\rho _{s,y}(dz)\nonumber \\&\le 2 \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\lambda (ds,dy) \int _{\mathbb {R}}\big (|z|^2|\xi |^21_{ A_t(x) }(s,y)|g(t,s,x,y)|^2\wedge 1\big )\rho _{s,y}(dz)\nonumber \\&= 2\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\lambda (ds,dy)\int _{|z|\le a_{\xi ,t,s,x,y}^{-1}} |z|^2|\xi |^21_{ A_t(x) }(s,y)|g(t,s,x,y)|^2\rho _{s,y}(dz)\nonumber \\&\quad +\,2\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\lambda (ds,dy)\int _{|z|\ge a_{\xi ,t,s,x,y}^{-1}} \rho _{s,y}(dz). \end{aligned}$$
(29)

Then, using Assumption 1(ii), the first integral in the right-hand side of the last equality in (29) can be bounded as follows:

$$\begin{aligned}&\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\lambda (ds,dy) |\xi |^21_{ A_t(x) }(s,y)|g(t,s,x,y)|^2 \left( \int _{|z|\le a_{\xi ,t,s,x,y}^{-1}} |z|^2\rho _{s,y}(dz)\right) \\&\le |\xi |^\alpha \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y)|g(t,s,x,y)|^\alpha \bar{C}(s,y)\lambda (ds,dy)\\&\le C |\xi |^\alpha , \end{aligned}$$

where in the last inequality, we have used (17).

Consider now the last integral in (29). By applying Assumption 1(i) with \(\beta =0\) and (17), we obtain

$$\begin{aligned}&\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}\lambda (ds,dy) \left( \int _{|z|\ge a_{\xi ,t,s,x,y}^{-1}} \rho _{s,y}(dz)\right) \\&\qquad \le |\xi |^\alpha \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) C_0(s,y)|g(t,s,x,y)|^\alpha \lambda (ds,dy)\\&\qquad \le C |\xi |^\alpha . \end{aligned}$$

Hence, we have established that

$$\begin{aligned} \mathfrak {R}\varPsi _X(\xi ) \le C |\xi |^\alpha , \end{aligned}$$

for \(|\xi |\) sufficiently large.

To complete the proof, we can follow the same arguments as in [10, Lemma 3.3], which rely on the result in [21, Proposition 2.3]. Note that the exponent \(\gamma _0\) on the right-hand side of the gradient estimate accounts for the lower bound on the growth of the term in (18), which in the case of SDEs is equal to 1. \(\square \)
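For orientation, the mechanism behind the gradient estimate can be sketched as follows (a heuristic only; the complete argument is the one in [10, Lemma 3.3] and [21, Proposition 2.3]). By Fourier inversion and the lower bound (28), for \(|\xi |\) large,

```latex
% Inversion formula for the n-th derivative, and decay of the
% characteristic function coming from (28):
p^{(n)}_{t,x,\varepsilon}(y)
  = \frac{1}{2\pi}\int_{\mathbb{R}}(-i\xi)^n e^{-i\xi y}\,
      {\mathbb{E}}\big[e^{i\xi X}\big]\,d\xi,
\qquad
\big|{\mathbb{E}}\big[e^{i\xi X}\big]\big|
  = e^{-\mathfrak{R}\varPsi_X(\xi)}
  \le e^{-c_{t,x}\,\varepsilon^{\gamma_0}|\xi|^{\alpha}}.
```

The substitution \(\xi =\varepsilon ^{-\gamma _0/\alpha }u\) gives \(\int _{\mathbb {R}}|\xi |^n e^{-c\varepsilon ^{\gamma _0}|\xi |^\alpha }d\xi = C_n\,\varepsilon ^{-\gamma _0(n+1)/\alpha }\); combined with the decay in \(y\) established in [21], this is the source of the factor \((\varepsilon ^{\gamma _0}\wedge 1)^{-n/\alpha }\) for the \(L^1\)-norm of \(p^{(n)}_{t,x,\varepsilon }\) in Lemma 4.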

The next lemma shows that the error between the approximation \(X^\varepsilon (t,x)\) in (26) and the ambit field \(X(t,x)\) is bounded by a power of \(\varepsilon \).

Lemma 5

Assume that Assumptions 1 hold for some \(\alpha \in (0,2)\) and that \(\sigma ,b\) are Lipschitz continuous functions. Then, for any \(\beta \in (0,\alpha )\), and \(\varepsilon \in (0, t\wedge 1)\),

$$\begin{aligned} {\mathbb {E}}\big [|X(t,x)-X^\varepsilon (t,x)|^\beta \big ] \le C_{\beta }\varepsilon ^{\beta \left( \frac{1}{\alpha }+\bar{\gamma }\right) -1}, \end{aligned}$$

where \(\bar{\gamma }{:}= \min \{\gamma _1,\gamma _2,\gamma _3,\gamma _4\}\).

Proof

Clearly,

$$\begin{aligned}&{\mathbb {E}}\big [|X(t,x)-X^\varepsilon (t,x)|^\beta \big ] \\&\le C_\beta {\mathbb {E}}\bigg [\bigg |\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) g(t,s;x,y)(\sigma (s,y)-\sigma (t-\varepsilon ,x))L(ds,dy)\bigg |^\beta \bigg ] \\&+\,C_\beta {\mathbb {E}}\bigg [\bigg |\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y)h(t,s;x,y)(b(s,y)-b(t-\varepsilon ,x))dyds\bigg |^\beta \bigg ]. \end{aligned}$$

Fix \(\gamma \in (\alpha ,2]\) and apply Lemma 3 to the stochastic process \(H(s,y){:}=\sigma (s,y)-\sigma (t-\varepsilon ,x)\), where the arguments \(t,\varepsilon , x\) are fixed. We obtain

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg |\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) g(t,s;x,y)(\sigma (s,y)-\sigma (t-\varepsilon ,x))L(ds,dy)\bigg |^\beta \bigg ] \\&\le C_{\alpha ,\beta ,\gamma }\varepsilon ^{\beta /\alpha -1} \bigg (\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y)\tilde{C}_\beta (s,y) |g(t,s,x,y)|^\gamma \\&\qquad \times \, {\mathbb {E}}\big [|\sigma (s,y)-\sigma (t-\varepsilon ,x)|^\gamma \big ]\lambda (ds,dy)\bigg )^{\beta /\gamma } \end{aligned}$$

Owing to hypothesis (H2), this last expression is bounded (up to the factor \(C_{\alpha ,\beta ,\gamma }\varepsilon ^{\beta /\alpha -1}\)) by

$$\begin{aligned} \bigg (\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y)&\tilde{C}_\beta (s,y) |g(t,s,x,y)|^\gamma \\&\times \,\big (|t-\varepsilon -s|^{\delta _1\gamma } + |x-y|^{\delta _2\gamma }\big )\lambda (ds,dy)\bigg )^{\beta /\gamma } \end{aligned}$$

The inequality (19) implies

$$\begin{aligned}&\varepsilon ^{\beta /\alpha -1}\bigg (\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) \tilde{C}_\beta (s,y)|g(t,s,x,y)|^\gamma |t-\varepsilon -s|^{\delta _1\gamma }\lambda (ds,dy)\bigg )^{\beta /\gamma }\\&\qquad \le C \varepsilon ^{\beta (\frac{1}{\alpha }+\gamma _1)-1}, \end{aligned}$$

and (20) yields

$$\begin{aligned}&\varepsilon ^{\beta /\alpha -1}\bigg (\int _{t-\varepsilon }^t\int _{{{\mathbb {R}}^d}} 1_{ A_t(x) }(s,y) \tilde{C}_\beta (s,y)|g(t,s,x,y)|^\gamma |x-y|^{\delta _2\gamma }\lambda (ds,dy)\bigg )^{\beta /\gamma }\\&\qquad \le C \varepsilon ^{\beta (\frac{1}{\alpha }+\gamma _2)-1}. \end{aligned}$$

Thus,

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg |\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y) g(t,s;x,y)(\sigma (s,y)-\sigma (t-\varepsilon ,x))L(ds,dy)\bigg |^\beta \bigg ]\nonumber \\&\qquad \le C \varepsilon ^{\beta (\frac{1}{\alpha }+[\gamma _1\wedge \gamma _2])-1}. \end{aligned}$$
(30)

Assume that \(\beta \ge 1\) (and therefore \(\alpha >1\)). Hölder’s inequality with respect to the finite measure \(h(t,s;x,y)dyds\), (H2), (21) and (22) imply

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg |\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) h(t,s;x,y)(b(s,y)-b(t-\varepsilon ,x))dyds\bigg |^\beta \bigg ] \\&\le C_{\beta }\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s,x,y)|^\beta {\mathbb {E}}\big [|b(s,y)-b(t-\varepsilon ,x)|^\beta \big ]dyds \\&\le C_{\beta }\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s,x,y)|^\beta |t-\varepsilon -s|^{\delta _1\beta } dyds \\&+\,C_{\beta }\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s,x,y)|^\beta |x-y|^{\delta _2\beta } dyds \\&\le C \varepsilon ^{\beta (\gamma _3\wedge \gamma _4)}. \end{aligned}$$

Suppose now that \(\beta <1\). We use Jensen’s inequality and, once more, (H2), (21) and (22) to obtain

$$\begin{aligned}&{\mathbb {E}}\bigg [\bigg |\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) h(t,s;x,y)(b(s,y)-b(t-\varepsilon ,x))dyds\bigg |^\beta \bigg ] \\&\le \bigg ({\mathbb {E}}\bigg [\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s;x,y)||b(s,y)-b(t-\varepsilon ,x)|dyds\bigg ]\bigg )^\beta \\&= \bigg (\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s;x,y)|{\mathbb {E}}\big [|b(s,y)-b(t-\varepsilon ,x)|\big ]dyds\bigg )^\beta \\&\le C\bigg (\int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ B_t(x) }(s,y) |h(t,s;x,y)|\left[ |t-\varepsilon -s|^{\delta _1}+|x-y|^{\delta _2}\right] dyds\bigg )^\beta \\&\le C\varepsilon ^{\beta (\gamma _3\wedge \gamma _4)}. \end{aligned}$$

This finishes the proof. \(\square \)

We are now in a position to prove Theorem 2.

Proof

(Proof of Theorem 2) We consider the inequality

$$\begin{aligned} |{\mathbb {E}}[|\sigma (t,x)|^n\varDelta _h^n\phi (X(t,x))]| \le&\,\big |{\mathbb {E}}\big [\big (|\sigma (t,x)|^n - |\sigma (t-\varepsilon ,x)|^n\big )\varDelta _h^n\phi (X(t,x))\big ]\big | \nonumber \\&+\,\big |{\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^n\big (\varDelta _h^n\phi (X(t,x)) - \varDelta _h^n\phi (X^\varepsilon (t,x))\big )\big ]\big | \nonumber \\&+\,\big |{\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^n\varDelta _h^n\phi (X^\varepsilon (t,x))\big ]\big |. \end{aligned}$$
(31)

Fix \(\eta \in (0,\alpha \wedge 1)\). As in (8) we have

$$\begin{aligned}&\big |{\mathbb {E}}\big [\big (|\sigma (t,x)|^n - |\sigma (t-\varepsilon ,x)|^n\big )\varDelta _h^n\phi (X(t,x))\big ]\big | \\&\le C_n\Vert \phi \Vert _{\mathscr {C}_b^\eta }|h|^\eta {\mathbb {E}}\big [\big ||\sigma (t,x)|^n - |\sigma (t-\varepsilon ,x)|^n\big |\big ]. \end{aligned}$$

Now we proceed as in (9) using the finiteness of the moments of \(\sigma (t,x)\) stated in Hypothesis (H1), and (H2). Then for all \(\gamma \in (\alpha ,2]\) we have

$$\begin{aligned}&{\mathbb {E}}\big [\big ||\sigma (t,x)|^n-|\sigma (t-\varepsilon ,x)|^n\big |\big ] \nonumber \\&\le {\mathbb {E}}\bigg [ \big |\sigma (t,x)-\sigma (t-\varepsilon ,x)\big |\sum _{j=0}^{n-1} |\sigma (t,x)|^j|\sigma (t-\varepsilon ,x)|^{n-1-j}\bigg ] \nonumber \\&\le C\Big ({\mathbb {E}}\big [\big |\sigma (t,x) - \sigma (t-\varepsilon ,x)\big |^\gamma \big ]\Big )^{1/\gamma }\\&\qquad \times \,\bigg ({\mathbb {E}}\bigg [\bigg (\sum _{j=0}^{n-1} |\sigma (t,x)|^j|\sigma (t-\varepsilon ,x)|^{n-1-j}\bigg )^{\gamma /(\gamma -1)}\bigg ]\bigg )^{1-1/\gamma } \nonumber \\&\le C_n\varepsilon ^{\delta _1}. \end{aligned}$$

Therefore

$$\begin{aligned} \big |{\mathbb {E}}\big [\big (|\sigma (t,x)|^n - |\sigma (t-\varepsilon ,x)|^n\big )\varDelta _h^n\phi (X(t,x))\big ]\big | \le C_n\Vert \phi \Vert _{\mathscr {C}_b^\eta }|h|^\eta \varepsilon ^{\delta _1}. \end{aligned}$$
(32)

Consider the inequality \(\Vert \varDelta _h^n\phi \Vert _{\mathscr {C}^\eta _b}\le C_n\Vert \phi \Vert _{\mathscr {C}^\eta _b}\), and apply Hölder’s inequality with some \(\beta \in (\eta ,\alpha )\) to obtain

$$\begin{aligned}&\big |{\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^n(\varDelta _h^n\phi (X(t,x)) - \varDelta _h^n\phi (X^\varepsilon (t,x)))\big ]\big | \nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\eta _b}{\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^n|X(t,x)-X^\varepsilon (t,x)|^\eta \big ] \nonumber \\&\le C_n\Vert \phi \Vert _{\mathscr {C}^\eta _b}\big ({\mathbb {E}}\big [|X(t,x)-X^\varepsilon (t,x)|^{\beta }\big ]\big )^{\eta /\beta }\big ({\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^{n\beta /(\beta -\eta )}\big ]\big )^{1-\eta /\beta } \nonumber \\&\le C_{n,\beta }\Vert \phi \Vert _{\mathscr {C}^\eta _b}\varepsilon ^{\eta \left( \frac{1}{\alpha }+\bar{\gamma }\right) -\frac{\eta }{\beta }}, \end{aligned}$$
(33)

where \(\bar{\gamma }{:}= \min \{\gamma _1,\gamma _2,\gamma _3,\gamma _4\}\), and we have applied Lemma 5.

Conditionally on \(\mathscr {F}_{t-\varepsilon }\), the random variable

$$\begin{aligned} \int _{t-\varepsilon }^t\int _{{\mathbb {R}}^d}1_{ A_t(x) }(s,y)g(t,s;x,y)L(ds,dy) \end{aligned}$$

has an infinitely divisible law and a \(\mathscr {C}^\infty \)-density \(p_{t,x,\varepsilon }\) for which a gradient estimate holds (see Lemma 4). Then, by a discrete integration by parts, and owing to (H3),

$$\begin{aligned}&\big |{\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^n\varDelta _h^n\phi (X^\varepsilon (t,x))\big ]\big | \\&= \bigg |{\mathbb {E}}\bigg [\int _{\mathbb {R}}|\sigma (t-\varepsilon ,x)|^n\varDelta _h^n\phi (U_t^\varepsilon + \sigma (t-\varepsilon ,x)y)p_{t,x,\varepsilon }(y)dy\bigg ]\bigg | \\&= \bigg |{\mathbb {E}}\bigg [\int _{\mathbb {R}}|\sigma (t-\varepsilon ,x)|^n\phi (U_t^\varepsilon + \sigma (t-\varepsilon ,x)y)\varDelta _{-\sigma (t-\varepsilon ,x)^{-1}h}^n p_{t,x,\varepsilon }(y)dy\bigg ]\bigg | \\&\le \Vert \phi \Vert _\infty {\mathbb {E}}\bigg [|\sigma (t-\varepsilon ,x)|^n\int _{\mathbb {R}}\big |\varDelta _{-\sigma (t-\varepsilon ,x)^{-1}h}^np_{t,x,\varepsilon }(y)\big |dy\bigg ]. \end{aligned}$$

From Lemma 4 it follows that

$$\begin{aligned} \int _{\mathbb {R}}\big |\varDelta _{-\sigma (t-\varepsilon ,x)^{-1}h}^np_{t,x,\varepsilon }(y)\big |dy&\le C_n |\sigma (t-\varepsilon ,x)|^{-n}|h|^n \Vert p^{(n)}_{t,x,\varepsilon }\Vert _{L^1({\mathbb {R}})} \\&\le C_n |\sigma (t-\varepsilon ,x)|^{-n}|h|^n \varepsilon ^{-n\gamma _0/\alpha }, \end{aligned}$$

which yields

$$\begin{aligned} \big |{\mathbb {E}}\big [|\sigma (t-\varepsilon ,x)|^n\varDelta _h^n\phi (X^\varepsilon (t,x))\big ]\big | \le C_n\Vert \phi \Vert _{\mathscr {C}^\eta _b}|h|^n \varepsilon ^{-n\gamma _0/\alpha }, \end{aligned}$$
(34)

because \(\Vert \phi \Vert _{\infty }\le \Vert \phi \Vert _{\mathscr {C}^\eta _b}\).

The estimates (31)–(34) imply

$$\begin{aligned} |{\mathbb {E}}[|\sigma (t,x)|^n\varDelta _h^n\phi (X(t,x))]| \le C_{n,\beta }\Vert \phi \Vert _{\mathscr {C}_b^\eta } \big (|h|^\eta \varepsilon ^{\delta _1} + \varepsilon ^{\eta \left( \frac{1}{\alpha }+\bar{\gamma }\right) -\frac{\eta }{\beta }} + |h|^n \varepsilon ^{-n\gamma _0/\alpha }\big ). \end{aligned}$$

Set \(\varepsilon =\frac{t}{2}|h|^\rho \), with \(|h|\le 1\) and

$$\begin{aligned} \rho \in \left( \frac{\alpha \beta }{\beta +\alpha \beta \bar{\gamma }-\alpha }, \frac{\alpha (n-\eta )}{n\gamma _0}\right) . \end{aligned}$$

Notice that, since \(\lim _{n\rightarrow \infty }\frac{\alpha (n-\eta )}{n\gamma _0}= \frac{\alpha }{\gamma _0}\), for \(\beta \) close to \(\alpha \) and \(\gamma _0\) as in hypothesis (23), this interval is nonempty. Then, easy computations show that, with these choices of \(\varepsilon \) and \(\rho \), one has

$$\begin{aligned} |h|^\eta \varepsilon ^{\delta _1} + \varepsilon ^{\eta \left( \frac{1}{\alpha }+\bar{\gamma }\right) -\frac{\eta }{\beta }} + |h|^n \varepsilon ^{-n\gamma _0/\alpha }\le 3|h|^\zeta , \end{aligned}$$

with \(\zeta >\eta \). Hence, with Lemma 1 we finish the proof of the theorem.
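For completeness, the computations behind the last display can be made explicit (a sketch, with the constants absorbed into \(C_{n,\beta }\)): using \(\varepsilon =\frac{t}{2}|h|^\rho \le C|h|^\rho \) and \(|h|\le 1\),

```latex
% Each of the three terms, rewritten as a power of |h|:
|h|^{\eta}\varepsilon^{\delta_1}\le C\,|h|^{\eta+\rho\delta_1},
\qquad
\varepsilon^{\eta\left(\frac{1}{\alpha}+\bar{\gamma}\right)-\frac{\eta}{\beta}}
  \le C\,|h|^{\rho\eta\left(\frac{1}{\alpha}+\bar{\gamma}-\frac{1}{\beta}\right)},
\qquad
|h|^{n}\varepsilon^{-n\gamma_0/\alpha}\le C\,|h|^{\,n-\rho n\gamma_0/\alpha}.
```

The first exponent exceeds \(\eta \) because \(\rho \delta _1>0\); the second does because \(\rho >\frac{\alpha \beta }{\beta +\alpha \beta \bar{\gamma }-\alpha }\) is equivalent to \(\rho \big (\frac{1}{\alpha }+\bar{\gamma }-\frac{1}{\beta }\big )>1\) (and, for \(\beta \) close to \(\alpha \), the factor \(\frac{1}{\alpha }+\bar{\gamma }-\frac{1}{\beta }\) is positive); the third does because \(\rho <\frac{\alpha (n-\eta )}{n\gamma _0}\) is equivalent to \(n-\rho n\gamma _0/\alpha >\eta \).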

Remark 3

  1. (i)

    If \(\sigma \) is bounded away from zero, then one does not need to assume the existence of moments of sufficiently high order. In this case one can follow the strategy in [20].

  2. (ii)

    The methodology used in this section is not restricted to pure-jump stable-like noises. One can also adapt it to the case of Gaussian space-time white noises.