Maximal Inequalities for Martingales and Their Differential Subordinates
Abstract
Keywords
Martingale · Maximal inequality · Differential subordination

Mathematics Subject Classification (2000): Primary 60G44 · Secondary 60G42

1 Introduction
Since the works of Kolmogorov, Hardy and Littlewood, Wiener, Doob and many other mathematicians, maximal inequalities have played an important role in analysis and probability. One of the main goals of this paper is to present a method of proving such estimates for continuous-time Hilbert-space-valued local martingales satisfying differential subordination.
We start by introducing the necessary background and notation. Let \((\Omega ,\mathcal{F },\mathbb P )\) be a complete probability space, filtered by a nondecreasing right-continuous family \((\mathcal{F }_t)_{t\ge 0}\) of sub-\(\sigma \)-fields of \(\mathcal{F }\). In addition, we assume that \(\mathcal{F }_0\) contains all the events of probability \(0\). Let \(X,\, Y\) be two adapted local martingales, taking values in a certain separable Hilbert space \(\mathcal H \) with norm \(|\cdot |\) and scalar product \(\langle \cdot ,\cdot \rangle \). With no loss of generality, we may take \(\mathcal H =\ell ^2\). As usual, we assume that the trajectories of the processes are right-continuous and have limits from the left. The symbol \([X,X]\) will stand for the quadratic variation process of \(X\): this object is given by \([X,X]=\sum _{n=1}^\infty [X^n,X^n]\), where \(X^n\) denotes the \(n\)th coordinate of \(X\) and \([X^n,X^n]\) is the usual square bracket of the real-valued martingale \(X^n\) (see e.g. Dellacherie and Meyer [15] for details). In what follows, \(X^*=\sup _{t\ge 0}|X_t|\) will denote the maximal function of \(X\); we also use the notation \(X^*_t=\sup _{0\le s\le t}|X_s|\). Furthermore, for \(1\le p\le \infty \), we shall write \(\Vert X\Vert _p=\sup _{t\ge 0}\Vert X_t\Vert _p=\sup _\tau \Vert X_\tau \Vert _p\), where the second supremum is taken over all adapted bounded stopping times \(\tau \).
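Recall the notion which is central to the paper: following Bañuelos and Wang [3] and Wang [23], \(Y\) is said to be differentially subordinate to \(X\) if the process \(([X,X]_t-[Y,Y]_t)_{t\ge 0}\) is nonnegative and nondecreasing, that is,
$$\begin{aligned} [X,X]_t-[Y,Y]_t\ge 0\quad \text{ and}\quad [X,X]_s-[Y,Y]_s\le [X,X]_t-[Y,Y]_t\quad \text{ for} \ 0\le s\le t. \end{aligned}$$
In the discrete-time setting of a martingale pair \((f,g)\), this amounts to the condition \(|dg_n|\le |df_n|\) for all \(n\), the setting of Burkholder's original papers [8, 13].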
Theorem 1.1
For \(p=1\), the above moment inequality does not hold with any finite constant, but we have the corresponding weak-type \((1,1)\) estimate. In fact, we have the following result for a wider range of parameters \(p\), proved by Burkholder [8] for \(1\le p\le 2\) and by Suh [22] for \(p>2\). See also Wang [23].
Theorem 1.2
There are many other related results; see e.g. the papers [3] and [4] by Bañuelos and Wang and [11] and [13] by Burkholder, and consult the references therein. For more recent works, we refer the interested reader to the papers [18, 19, 20] by the author and to [6, 7] by Borichev et al. The estimates have found numerous applications in many areas of mathematics, in particular in the study of the boundedness of various classes of Fourier multipliers (consult, for instance, [1, 2, 3, 12, 16, 17]).
There is a general method, invented by Burkholder, which not only enables one to establish various estimates for differentially subordinated martingales, but is also very efficient in determining the optimal constants in such inequalities. The idea is to construct an appropriate special function, an upper solution to a nonlinear problem corresponding to the inequality under investigation, and then to exploit its properties. See the survey [13] for a detailed description of the technique in the discrete-time setting and consult Wang [23] for the necessary changes which have to be implemented so that the method works in the continuous-time setting.
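In its simplest discrete-time form, the method rests on the following observation. Suppose that we want to show that \(\mathbb{E }V(f_n,g_n)\le 0\) for all \(n\ge 0\) and a given Borel function \(V\). If there is a function \(U\) such that \(U\ge V\) pointwise, \(\mathbb{E }U(f_0,g_0)\le 0\) and the sequence \((\mathbb{E }U(f_n,g_n))_{n\ge 0}\) is nonincreasing, then
$$\begin{aligned} \mathbb{E }V(f_n,g_n)\le \mathbb{E }U(f_n,g_n)\le \mathbb{E }U(f_0,g_0)\le 0. \end{aligned}$$
The whole difficulty lies in the construction of the special function \(U\); the monotonicity of the sequence \((\mathbb{E }U(f_n,g_n))_{n\ge 0}\) is typically guaranteed by appropriate concavity-type properties of \(U\).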
The above results can be extended in another, very interesting direction. Namely, in the present paper we will be interested in inequalities involving the maximal functions of \(X\) and/or \(Y\). Burkholder [14] modified his technique so that it could be used to study such inequalities for stochastic integrals, and applied it to obtain the following result, which can be regarded as another version of (1.1) for \(p=1\).
Theorem 1.3
As we have already observed above, if \(X\) and \(Y\) satisfy the assumptions of this theorem, then \(Y\) is differentially subordinate to \(X\). An appropriate modification of the proof in [14] shows that the assertion is still valid if we impose this less restrictive condition on the processes. However, the assertion no longer holds if we pass from the real to the vector-valued case. Here is one of the main results of this paper.
Theorem 1.4
This is a very surprising result. In most cases, the inequalities for stochastic integrals of real-valued martingales carry over, with unchanged constants, to the corresponding bounds for vector-valued local martingales satisfying differential subordination. In other words, given a sharp inequality for \(\mathcal H \)-valued differentially subordinated martingales, the extremal processes, i.e. those for which equality is (almost) attained, can usually be realized as stochastic integrals in which the integrator takes values in a one-dimensional subspace of \(\mathcal H \). See e.g. the statements of Theorems 1.1 and 1.2. Here the situation is different: the optimal constant does depend on the dimension of the range of \(X\) and \(Y\).
Finally, let us mention here another related result. In general, the best constants in nonmaximal inequalities for differentially subordinated local martingales do not change when we restrict ourselves to continuous-path processes; see e.g. Section 15 in [8] for the justification of this phenomenon. However, if we study the maximal estimates, the best constants may be different: for example, the passage to continuous-path local martingales reduces the constant \(\gamma \) in (1.2) to \(\sqrt{2}\). Specifically, we have the following theorem, which is one of the principal results of [21].
Theorem 1.5
We have organized the paper as follows. The next section is devoted to an extension of Burkholder’s method. In Sect. 3 we apply the technique to establish (1.3). In Sect. 4 we prove that the constant \(\beta \) cannot be replaced in (1.3) by a smaller one. The final part of the paper contains the proofs of technical facts needed in the earlier considerations.
2 On the Method of Proof
Burkholder’s method from [14] is a powerful tool for proving maximal inequalities for transforms of discrete-time real-valued martingales. The results in the wider setting of stochastic integrals are then obtained by the use of approximation theorems of Bichteler [5]. This approach has the advantage that it avoids practically all the technicalities which arise naturally in the study of continuous-time processes. On the other hand, it does not allow one to study estimates for (local) martingales under differential subordination; the purpose of this section is to present a refinement of the method which can be used to handle such problems.
Lemma 2.1
If \(X\) and \(Y\) are semimartingales, then \(Y\) is differentially subordinate to \(X\) if and only if \(Y^c\) is differentially subordinate to \(X^c\), \(|\Delta Y_t|\le |\Delta X_t|\) for all \(t>0\) and \(|Y_0|\le |X_0|\).
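As a classical illustration, suppose that \(Y\) is the stochastic integral, with respect to \(X\), of a predictable process \(H\) with values in \([-1,1]\), i.e. \(Y_t=H_0X_0+\int _{0+}^t H_s\,\mathrm{d}X_s\). Then
$$\begin{aligned} [X,X]_t-[Y,Y]_t=(1-H_0^2)[X,X]_0+\int _{0+}^t (1-H_s^2)\,\mathrm{d}[X,X]_s \end{aligned}$$
is nonnegative and nondecreasing as a function of \(t\), so \(Y\) is differentially subordinate to \(X\).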
We are ready to study the interplay between the class \(\mathcal U (V)\) and the bound (2.1).
Theorem 2.2
Proof
Remark 2.3
A careful inspection of the proof of the above theorem shows that the function \(U\) need not be given on the whole \(D=\mathcal H \times \mathcal H \times (0,\infty )\times (0,\infty )\). Indeed, it suffices to define it on a certain neighborhood of the set \(\{(x,y,z,w)\in D: |x|\le z,\,|y|\le w\}\) in which the process \(Z\) takes its values. This can be further relaxed: if we are allowed to work with those \(X,\, Y\) which are bounded away from \(0\), then all we need is a \(C^2\) function \(U\) given on some neighborhood of \(\{(x,y,z,w)\in D: 0<|x|\le z,\,0<|y|\le w\}\), satisfying (2.2)–(2.5) on this set.
3 The Special Function Corresponding to (1.3)
Lemma 3.1
 (i)
We have \(\Phi (t)\le \Phi (1)\le 0\) for \(t\le 1\).
 (ii)
We have \(\Phi (t)\ge \sqrt{t}-\beta \) for \(t\ge 0\).
 (iii) For any \(c\ge 0\) the function
$$\begin{aligned} f(s)=-\sqrt{s}\log \left(1+\frac{c}{\sqrt{s}}\right)-(2-\log 2)\sqrt{s} \end{aligned}$$
is convex and nonincreasing.
 (iv) For any \(c>0\), the function
$$\begin{aligned} f(s)=\sqrt{s}-c\log \left(1+\frac{\sqrt{s}}{c}\right) \end{aligned}$$
is concave.
Lemma 3.2
The function \(y\mapsto \Phi (|y|^2)\) is convex on \(\mathcal H \).
Lemma 3.3
 (i) For any \(y,\,k\in \mathcal H \), we have
$$\begin{aligned} (2-\log 2)\left(1-\sqrt{1+|k|^2}\right)&+\left(1-\sqrt{1+|k|^2}\right)\log \left(\sqrt{1+|k|^2}+|y+k|\right)\nonumber \\&+\sqrt{1+|k|^2}\,\log \left(\sqrt{1+|k|^2}\right)\le 0. \end{aligned}$$(3.2)
 (ii) For any \(y,\,k\in \mathcal H \) with \(|y|+1\le \sqrt{1+|k|^2}+|k||y|\), we have
$$\begin{aligned} (2-\log 2)\left(1-\sqrt{1+|k|^2}\right)&+\sqrt{1+|k|^2}\left[2-\frac{|k||y|}{\sqrt{1+|k|^2}}-\log \left(1+\frac{|k||y|}{\sqrt{1+|k|^2}}\right)\right]\nonumber \\&\quad \le \frac{|k|}{1+|y|}\log (1+|y|). \end{aligned}$$(3.3)
Lemma 3.4
Equipped with these four lemmas, we turn to the following statement.
Theorem 3.5
The function \(U\) belongs to the class \(\mathcal U (V)\).
Proof

The estimate (2.2): this follows immediately from the first part of Lemma 3.1.
 The property (2.4): we derive that the left-hand side of the estimate equals
$$\begin{aligned} \frac{|k|^2-|h|^2}{z+\sqrt{S}}-\frac{(\langle y,k\rangle -\langle x,h\rangle )^2}{2(z+\sqrt{S})^2\sqrt{S}}\le \frac{|k|^2-|h|^2}{z+\sqrt{S}}, \end{aligned}$$
with \(S=|y|^2-|x|^2+z^2\). The property follows.

The majorization (2.3): in particular, (2.4) implies that for any \(h\) the function \(t\mapsto U(x+th,y,z,w)\) is concave on \([t_-,t_+]\), where \(t_-=\inf \{t: |x+th|\le z\}\) and \(t_+=\sup \{ t:|x+th|\le z \}\). Consequently, it suffices to verify (2.3) only for \((x,y,z,w)\) satisfying \(|x|= z\). But this reduces to the second part of Lemma 3.1.
 The condition (2.5): by homogeneity and continuity of both sides, we may assume that \(z=1\) and \(|x|<1\). Define
$$\begin{aligned} H(t)=U(x+th,\,y+tk,\,|x+th|\vee 1,\,|y+tk|\vee w) \end{aligned}$$
for \( t\in \mathbb{R }\) and let \(t_-\), \(t_+\) be as above; note that \(t_-<0\) and \(t_+>0\). By (2.4), \(H\) is concave on \([t_-,t_+]\) and hence (2.5) holds if \(|x+h|\le 1\). Suppose then that \(|x+h|>1\) or, in other words, that \(t_+<1\). The vector \(x^{\prime }=x+t_+h\) satisfies \(\langle x^{\prime },h\rangle \ge 0\): this is equivalent to \(\frac{d}{dt}|x+th|^2\big |_{t=t_+}\ge 0\). Hence, by (3.4), if we put \(y^{\prime }=y+t_+k\), then
$$\begin{aligned}&U(x+h,\;y+k,\,|x+h|\vee 1,\,|y+k|\vee w)\\&\qquad =U\left(x^{\prime }+(1-t_+)h,\,y^{\prime }+(1-t_+)k,\,|x+h|\vee 1,\,|y+k|\vee w\right)\\&\qquad \le U(x^{\prime },y^{\prime },1,w)+\left(1+\frac{1}{\beta }\right)\frac{\langle y^{\prime },(1-t_+)k\rangle -\langle x^{\prime },(1-t_+)h\rangle }{1+|y^{\prime }|}\\&\qquad =H(t_+)+H^{\prime }_-(t_+)(1-t_+)\\&\qquad \le H(0)+H^{\prime }(0)t_++H^{\prime }(0)(1-t_+)=H(0)+H^{\prime }(0). \end{aligned}$$
This is precisely the claim. \(\square \)
Proof of (1.3)
It suffices to establish the estimate for \(X^*\in L^1\), because otherwise there is nothing to prove. Furthermore, we may assume that \(X\) and \(Y\) are bounded away from \(0\). To see this, consider a new Hilbert space \( \mathbb{R }\,\times \, \mathcal H \) and the martingales \((\delta ,X)\) and \((\delta ,Y)\), with \(\delta >0\). These martingales are bounded away from \(0\) and \((\delta ,Y)\) is differentially subordinate to \((\delta ,X)\). Having proved (1.3) for these processes, we let \(\delta \rightarrow 0\) and get the bound for \(X\) and \(Y\), by Lebesgue’s dominated convergence theorem.
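The differential subordination of \((\delta ,Y)\) to \((\delta ,X)\) can be checked directly: the first coordinates are constant martingales, so
$$\begin{aligned} [(\delta ,X),(\delta ,X)]_t-[(\delta ,Y),(\delta ,Y)]_t=\left(\delta ^2+[X,X]_t\right)-\left(\delta ^2+[Y,Y]_t\right)=[X,X]_t-[Y,Y]_t, \end{aligned}$$
and the right-hand side is nonnegative and nondecreasing by the assumption on \(X\) and \(Y\).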
4 Sharpness
 (i)
\((f_0,g_0)\equiv (x,y)\),
 (ii)
for any \(n\ge 1\) we have \(|dg_n|\le |df_n|\).
Lemma 4.1
 (i)
\(W\) is finite.
 (ii) \(W\) is homogeneous of order \(1\): for any \((x,y,z)\in \mathbb{R }^2\times \mathbb{R }^2\times (0,\infty )\) and \(\lambda \ne 0\),
$$\begin{aligned} W(\lambda x,\pm \lambda y,|\lambda | z)=|\lambda | W(x,y,z). \end{aligned}$$
 (iii)
We have \(W(x,y,z)=W(x,y,|x|\vee z)\) for all \((x,y,z)\in \mathbb{R }^2\times \mathbb{R }^2\times (0,\infty )\).
 (iv)
We have \(W(x,y,z)\ge |y|-\beta _0(|x|\vee z)\) for \((x,y,z)\in \mathbb{R }^2\times \mathbb{R }^2\times (0,\infty )\).
 (v)
For fixed \(x\in \mathbb{R }^2\) and \(z>0\), the function \(y\mapsto W(x,y,z)\) is convex on \(\mathbb{R }^2\).
 (vi) For any \((x,y,z)\in \mathbb{R }^2\times \mathbb{R }^2\times (0,\infty )\) with \(|x|\le z\), any \(h,\,k\in \mathbb{R }^2\) with \(|k|\le |h|\) and any \(s,t> 0\),
$$\begin{aligned} \frac{s}{s+t}W(x+th,y+tk,z)+\frac{t}{s+t}W(x-sh,y-sk,z)\le W(x,y,z).\qquad \end{aligned}$$(4.2)
Proof
 (i) This follows from (4.1): for any \((f,g)\in \mathcal M (x,y)\) the martingale \(g-y=(g_n-y)_{n\ge 0}\) is differentially subordinate to \(f\), so for any \(z>0\),
$$\begin{aligned} \mathbb{E }|g_\infty |-\beta _0\mathbb{E }(f^*\vee z)\le |y|+\mathbb{E }|g_\infty -y|-\beta _0\mathbb{E }f^*\le |y|. \end{aligned}$$
Taking the supremum over \((f,g)\in \mathcal M (x,y)\) yields \(W(x,y,z)\le |y|<\infty \).
 (ii)
Use the fact that \((f,g)\in \mathcal M (x,y)\) if and only if \((\lambda f,\pm \lambda g)\in \mathcal M (\lambda x,\pm \lambda y)\).
 (iii)
This follows immediately from the very definition of \(W\).
 (iv)
The constant pair \((x,y)\) belongs to \(\mathcal M (x,y)\).
 (v) Take any \(x,\,y_1,\,y_2\in \mathbb{R }^2\), \(\alpha \in (0,1)\) and let \(y=\alpha y_1+(1-\alpha )y_2\). Pick \((f,g)\in \mathcal M (x,y)\) and observe that \((f,g+y_i-y)\in \mathcal M (x,y_i)\), \(i=1,\,2\). Thus,
$$\begin{aligned} \mathbb{E }|g_\infty |-\beta _0\mathbb{E }(f^*\vee z)&\le \alpha \left[\mathbb{E }|g_\infty +y_1-y|-\beta _0\mathbb{E }(f^*\vee z)\right]\\&\quad +\,(1-\alpha )\left[\mathbb{E }|g_\infty +y_2-y|-\beta _0\mathbb{E }(f^*\vee z)\right]\\&\le \alpha W(x,y_1,z)+(1-\alpha ) W(x,y_2,z). \end{aligned}$$
Taking the supremum over \((f,g)\in \mathcal M (x,y)\) gives the desired convexity.
 (vi) This is a consequence of the so-called “splicing argument” of Burkholder (see e.g. [9, p. 77]). For the convenience of the reader, let us provide the easy proof. Pick \((f^+,g^+)\in \mathcal M (x+th,y+tk)\), \((f^-,g^-)\in \mathcal M (x-sh,y-sk)\). These two pairs are spliced together into one pair \((f,g)\) as follows: set \((f_0,g_0)\equiv (x,y)\) and (recall that \(\Omega =[0,1]\))
$$\begin{aligned} (f_n,g_n)(\omega )&=\left(f^+_{n-1},g^+_{n-1}\right)\left(\frac{\omega (s+t)}{s}\right) \quad \text{ if} \ \omega \le \frac{s}{s+t},\\ (f_n,g_n)(\omega )&=\left(f^-_{n-1},g^-_{n-1}\right)\left(\left(\omega -\frac{s}{s+t}\right)\frac{t+s}{t}\right) \quad \text{ if} \ \omega >\frac{s}{s+t}, \end{aligned}$$
for \(n=1,\,2,\,\ldots \). It is not difficult to see that \((f,g)\) is a martingale pair with respect to its natural filtration. Furthermore, it is clear that this pair belongs to \(\mathcal M (x,y)\). Finally, since \(|x|\le z\), we have \(f_n^*\vee z=\sup _{1\le k\le n}|f_k|\vee z\) for \(n=1,\,2,\,\ldots \) and therefore
$$\begin{aligned} W(x,y,z)&\ge \mathbb{E }|g_\infty |-\beta _0 \mathbb{E }(f^*\vee z)\\&= \frac{s}{t+s}\left[\mathbb{E }|g_{\infty }^+|-\beta _0\mathbb{E }(f^{+*}\vee z)\right]+\frac{t}{t+s}\left[\mathbb{E }|g_{\infty }^-|-\beta _0\mathbb{E }(f^{-*}\vee z)\right]. \end{aligned}$$
Lemma 4.2
Proof
5 Proofs of Technical Lemmas
Proof of Lemma 3.1
 (i)
We have \(\Phi ^{\prime }(t)=(1+\beta ^{-1})/(2(1+\sqrt{t}))>0\) and \(\Phi (1)=-(1+\beta ^{-1})<0\).
 (ii)
The claim is equivalent to \(\Psi (t):=\Phi (t^2)-t+\beta \ge 0\) for all \(t\ge 0\). We easily check that \(\Psi \) is convex on \([0,\infty )\) and, by virtue of (1.4), satisfies \(\Psi (\beta )=\Psi ^{\prime }(\beta )=0\).
 (iii) Since \(\lim _{s\rightarrow \infty }f^{\prime }(s)=0\), it suffices to prove the convexity of \(f\). We have
$$\begin{aligned} f^{\prime \prime }(s)=\frac{1}{4s^{3/2}}\left[\log \left(1+\frac{c}{\sqrt{s}}\right)-\frac{\sqrt{s}}{c+\sqrt{s}}+\frac{s}{(c+\sqrt{s})^2}\right]+\frac{2-\log 2}{4s^{3/2}}, \end{aligned}$$
and the expression in the square brackets is nonnegative: indeed, the function
$$\begin{aligned} x\mapsto \log (1+x)-(1+x)^{-1}+(1+x)^{-2},\quad x\ge 0, \end{aligned}$$
vanishes at \(0\) and is nondecreasing.
 (iv)
We compute that \(f^{\prime \prime }(s)=-\left[\,4(c+\sqrt{s})^2\sqrt{s}\,\right]^{-1}\le 0\). \(\square \)
Proof of Lemma 3.2
Proof of Lemma 3.3
 (i) This follows easily from the obvious estimates
$$\begin{aligned} \log \left(\sqrt{1+|k|^2}+|y+k|\right)\ge \log \left(\sqrt{1+|k|^2}\right) \end{aligned}$$
and
$$\begin{aligned} 1-\sqrt{1+|k|^2}\le -\log \left(\sqrt{1+|k|^2}\right). \end{aligned}$$
 (ii)
For simplicity, we shall write \(k,\, y\) instead of \(|k|,\, |y|\), respectively. We consider two major cases.
 (1) If \(y\le 1\), then the function
$$\begin{aligned} k\mapsto (2-\log 2)\left(1+k-\sqrt{1+k^2}\right)-\frac{k}{1+y},\quad k\ge k_0, \end{aligned}$$
is nonincreasing: its derivative at \(k\) is
$$\begin{aligned} (2-\log 2)\left(1-\frac{k}{\sqrt{1+k^2}}\right)-\frac{1}{1+y}&\le (2-\log 2)\left(1-\frac{k_0}{\sqrt{1+k_0^2}}\right)-\frac{1}{2}\\&= -0.03\ldots <0. \end{aligned}$$
Thus, for \(y\le 1\) all we need is to check (5.2) for \(k\) satisfying the equation \(\sqrt{1+k^2}=(2-\log 2)(1+y)\). But then the estimate is equivalent to
$$\begin{aligned} \left(2-\log 2-\frac{1}{1+y}\right)\left(k-\sqrt{1+k^2}\right)\le \log (1+y)+(2-\log 2)y, \end{aligned}$$
and the left-hand side is negative, while the right-hand side is nonnegative.
 (2)
 (3)
Suppose finally that \(y\ge 2\). As previously, the left-hand side of (5.2) is bounded from above by \(2-\log 2\). On the other hand, the right-hand side is larger than \(\log 3+2(2-\log 2)>2-\log 2\).
Proof of Lemma 3.4
Of course, we may assume that \(h\ne 0\). Furthermore, by homogeneity, it suffices to verify the estimate for \(z=1\). It is convenient to split the reasoning into three parts.
Step 1
Step 2
Step 3
For fixed \(x,\, y,\, h\), and \(k\), the left-hand side, as a function of \(\langle x,h\rangle \), is convex (see Lemma 3.1 (iii)) and hence it suffices to verify the estimate in the case when \(\langle x,h\rangle \in \{-|x||h|,\,0\}\). These cases have been considered in Steps 1 and 2. \(\square \)
Acknowledgments
This work was partially supported by the Polish Ministry of Science and Higher Education (MNiSW) Grant N N201 397437. The author would like to thank the anonymous Referee for the careful reading of the first version of the paper and for the comments and remarks, which greatly improved the presentation.
Open Access
This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
References
 1. Bañuelos, R., Bogdan, K.: Lévy processes and Fourier multipliers. J. Funct. Anal. 250(1), 197–213 (2007)
 2. Bañuelos, R., Méndez-Hernández, P.J.: Space–time Brownian motion and the Beurling–Ahlfors transform. Indiana Univ. Math. J. 52(4), 981–990 (2003)
 3. Bañuelos, R., Wang, G.: Sharp inequalities for martingales with applications to the Beurling–Ahlfors and Riesz transforms. Duke Math. J. 80(3), 575–600 (1995)
 4. Bañuelos, R., Wang, G.: Davis’s inequality for orthogonal martingales under differential subordination. Michigan Math. J. 47, 109–124 (2000)
 5. Bichteler, K.: Stochastic integration and \(L^p\)-theory of semimartingales. Ann. Probab. 9, 49–89 (1980)
 6. Borichev, A., Janakiraman, P., Volberg, A.: Subordination by orthogonal martingales in \(L^{p}\) and zeros of Laguerre polynomials. arXiv:1012.0943 (2010)
 7. Borichev, A., Janakiraman, P., Volberg, A.: On Burkholder function for orthogonal martingales and zeros of Legendre polynomials. Am. J. Math. (to appear)
 8. Burkholder, D.L.: Boundary value problems and sharp inequalities for martingale transforms. Ann. Probab. 12, 647–702 (1984)
 9. Burkholder, D.L.: Martingales and Fourier analysis in Banach spaces. In: Probability and Analysis (Varenna, 1985). Lecture Notes in Mathematics, vol. 1206, pp. 61–108. Springer, Berlin (1986)
 10. Burkholder, D.L.: A sharp and strict \(L^p\)-inequality for stochastic integrals. Ann. Probab. 15, 268–273 (1987)
 11. Burkholder, D.L.: Sharp inequalities for martingales and stochastic integrals. Colloque Paul Lévy (Palaiseau, 1987). Astérisque 157–158, 75–94 (1988)
 12. Burkholder, D.L.: A proof of Pełczyński’s conjecture for the Haar system. Studia Math. 91, 79–83 (1988)
 13. Burkholder, D.L.: Explorations in martingale theory and its applications. In: École d’Été de Probabilités de Saint-Flour XIX–1989. Lecture Notes in Mathematics, vol. 1464, pp. 1–66. Springer, Berlin (1991)
 14. Burkholder, D.L.: Sharp norm comparison of martingale maximal functions and stochastic integrals. In: Proceedings of the Norbert Wiener Centenary Congress (East Lansing, MI, 1994). Proceedings of Symposia in Applied Mathematics, vol. 52, pp. 343–358. American Mathematical Society, Providence, RI (1997)
 15. Dellacherie, C., Meyer, P.A.: Probabilities and Potential B. North-Holland, Amsterdam (1982)
 16. Geiss, S., Montgomery-Smith, S., Saksman, E.: On singular integral and martingale transforms. Trans. Am. Math. Soc. 362(2), 553–575 (2010)
 17. Nazarov, F.L., Volberg, A.: Heat extension of the Beurling operator and estimates for its norm. Algebra i Analiz 15(4), 142–158 (2003)
 18. Osękowski, A.: Sharp \(L\log L\) inequality for differentially subordinated martingales. Illinois J. Math. 52(3), 745–756 (2008)
 19. Osękowski, A.: Sharp weak type inequalities for differentially subordinated martingales. Bernoulli 15(3), 871–897 (2009)
 20. Osękowski, A.: Sharp moment inequalities for differentially subordinated martingales. Studia Math. 201, 103–131 (2010)
 21. Osękowski, A.: Maximal inequalities for continuous martingales and their differential subordinates. Proc. Am. Math. Soc. 139, 721–734 (2011)
 22. Suh, Y.: A sharp weak type \((p, p)\) inequality \((p>2)\) for martingale transforms and other subordinate martingales. Trans. Am. Math. Soc. 357(4), 1545–1564 (2005)
 23. Wang, G.: Differential subordination and strong differential subordination for continuous-time martingales and related sharp inequalities. Ann. Probab. 23, 522–551 (1995)