Abstract
In this chapter, we build on the previous one by considering processes whose jumps may be random but mutually independent. We also give some further definitions from the general theory of stochastic processes and stochastic analysis, which may be easier to understand here, where they are used in a concrete application.
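As a concrete illustration of the object studied in this chapter (not taken from the text itself), the following minimal Python sketch simulates a compound Poisson process \(Z_t=\sum_{i=1}^{N_t} Y_i\): jump times arrive according to a Poisson process of rate \(\lambda\), and the jump sizes are i.i.d. and independent of the arrival times. The intensity, Gaussian jump law, and horizon below are arbitrary choices for the example.

```python
import random

def simulate_compound_poisson(lam, jump_sampler, T, rng):
    """Simulate a compound Poisson process Z_t = sum_{i<=N_t} Y_i on [0, T].

    lam          : jump intensity of the Poisson process N
    jump_sampler : callable rng -> one i.i.d. jump Y_i
    T            : time horizon
    Returns (times, values): Z is piecewise constant, jumping at `times`.
    """
    t, z = 0.0, 0.0
    times, values = [0.0], [0.0]
    while True:
        t += rng.expovariate(lam)      # i.i.d. Exp(lam) inter-arrival times
        if t > T:
            break
        z += jump_sampler(rng)         # i.i.d. jump, independent of N
        times.append(t)
        values.append(z)
    return times, values

rng = random.Random(42)
times, values = simulate_compound_poisson(
    lam=2.0, jump_sampler=lambda r: r.gauss(0.0, 1.0), T=10.0, rng=rng)
print(len(times) - 1, "jumps; Z_T =", round(values[-1], 4))
```

The path is càdlàg by construction: constant between jump times and updated by the sampled jump at each arrival.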
Notes
- 1.
State and prove the equivalent statement of Exercise 2.1.15.
- 2.
The new distribution of Y should be the mixture of the distributions of \(Y^{(1)}\) and \(Y^{(2)}\) with weights \(\frac{\lambda _1}{\lambda }\) and \(1- \frac{\lambda _1}{\lambda }\).
- 3.
Actually, this is the content of the Kolmogorov extension theorem. Measurability has to be defined on the space of càdlàg functions with an appropriate topology; to keep things simple, we may use for the moment the topology given by the supremum norm on compact time intervals.
- 4.
This is an extension in the sense that we can measure the error, at the level of characteristic functions, of approximating the compound Poisson process by Gaussian laws.
- 5.
If you have some experience with Fourier transforms you may even realize that this implies an expansion of the distribution of \( Z_t \) using the Gaussian distribution. One has to be careful, however, since \( Z_t \) in general does not have a density, owing to the atom at zero on the event \( \{N_t=0\} \).
- 6.
For a much more formal and general definition see Definition 3.4.13. Note, however, that a filtration and a filtered space are not the same thing: a filtration, in general, is just an increasing sequence of \(\sigma \)-algebras.
- 7.
This property holds for Lévy processes.
- 8.
See Definition 3.4.17.
- 9.
We say that a stochastic process X is a version of the stochastic process Y if \( \mathbb {P}(X_t=Y_t)=1 \) for all \( t\ge 0\).
- 10.
As we will see, this becomes useful when considering stochastic integrals.
- 11.
Note that \( \tilde{Z} \) can be rewritten using Z.
- 12.
Hint: Recall the argument to prove that a continuous function on a bounded interval is uniformly continuous.
- 13.
This is usually called the norm of the partition.
- 14.
This is done in order to practice using the Poisson random point measure.
- 15.
Recall that a negligible set is any subset of a measurable set with probability zero.
- 16.
Note the link between this definition and Exercise 2.1.38.
- 17.
In the sense that \(\mathbb {E}[\int _{\mathbb {R} \times [0,t]} |g(z,s-)|{\mathscr {N}}(dz, ds)]<\infty \) and \(\mathbb {E}[\int _{\mathbb {R} \times [0,t]} |g(z,s-)|\widehat{\mathscr {N}}(dz, ds)]<\infty \).
- 18.
This is left as an exercise; it also proves the integrability needed to obtain the martingale property.
- 19.
For any continuous semimartingale X in \(\mathbb {R}^d\) and any function \(h\in C^2(\mathbb {R}^d)\), we have
$$h(X)=h(X_0)+\sum _{i=1}^dh_i'(X)\cdot X^i +\frac{1}{2}\sum _{i, j=1}^{d}h_{ij}'' (X)\cdot \langle X^i , X^j\rangle ,\ \mathrm {a.s.},$$where \(h_i'\) and \(h_{ij}''\) denote the partial derivatives of h, \(X\cdot Y\) means the stochastic integral of X with respect to Y, and \(\langle X, Y\rangle \) means the cross variation between X and Y.
- 20.
Usually it is called the telescopic sum formula.
- 21.
The reason for this name should be clear. The random process \(g_2\) is left-continuous, which means that its value at any time can be “predicted” by the values at earlier times. In contrast, the example \(g_1\) is called anticipating because it uses information that cannot be predicted as in \(g_2\). The terms involving \(\mathbb {E}[Y^2]\) are called trace terms for obvious reasons.
- 22.
This also explains why in the previous section one always takes \(g(z, s-)\) as the integrand: it should make you aware that when integrating a process like \(Z_s\) one really needs to use \(Z_{s-}\) rather than \(Z_{s}\). Another way of looking at this notation is that it reminds you of the need for predictability in the integrand.
- 23.
Note that this problem does not appear at all with Brownian motion. To see this, consider the uniform partition \(t_i=\frac{Ti}{n}\) and the Riemann sums \(\sum _{i=0}^{n-1}B_{t_i}(B_{t_{i+1}}-B_{t_i})\) and \(\sum _{i=0}^{n-1}B_{t_i-\varepsilon _n}(B_{t_{i+1}}-B_{t_i})\) with \(\varepsilon _n\rightarrow 0\) as \(n\rightarrow \infty \). Prove that their difference converges to zero.
- 24.
As before, this has to be some norm on g.
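The superposition property recalled in note 2 can be checked empirically. The hedged sketch below (all names are ours) merges two independent Poisson jump streams with rates \(\lambda_1\) and \(\lambda_2\): the merged stream has rate \(\lambda=\lambda_1+\lambda_2\), and each merged jump comes from the first stream with probability \(\lambda_1/\lambda\), which is exactly the mixture weight for the jump distribution of Y.

```python
import random

def superpose(lam1, lam2, T, rng):
    """Merge the jump times of two independent Poisson processes on [0, T],
    tagging each jump with the stream it came from."""
    jumps = []
    for lam, tag in ((lam1, 1), (lam2, 2)):
        t = 0.0
        while True:
            t += rng.expovariate(lam)  # Exp(lam) inter-arrival times
            if t > T:
                break
            jumps.append((t, tag))
    jumps.sort()  # chronological order of the merged stream
    return jumps

rng = random.Random(0)
lam1, lam2, T = 1.0, 3.0, 2000.0
jumps = superpose(lam1, lam2, T, rng)

frac1 = sum(1 for _, tag in jumps if tag == 1) / len(jumps)
print("merged rate ~", round(len(jumps) / T, 3), "(expect", lam1 + lam2, ")")
print("fraction from stream 1 ~", round(frac1, 3),
      "(expect", lam1 / (lam1 + lam2), ")")
```

The empirical merged rate should be close to \(\lambda_1+\lambda_2=4\) and the empirical fraction of stream-1 jumps close to \(\lambda_1/\lambda=1/4\), matching the mixture weights of note 2.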
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Kohatsu-Higa, A., Takeuchi, A. (2019). Compound Poisson Process and Its Associated Stochastic Calculus. In: Jump SDEs and the Study of Their Densities. Universitext. Springer, Singapore. https://doi.org/10.1007/978-981-32-9741-8_3
Print ISBN: 978-981-32-9740-1
Online ISBN: 978-981-32-9741-8