Abstract
In this chapter we provide mathematical tools for studying stochastic processes from a physical point of view.
Notes
- 1.
To define a stochastic process, let us first provide the definition of a probability space. A probability space associated with a random experiment is a triple (\(\varOmega \),\(\mathcal {F}\),P), where: (i) \(\varOmega \) is a nonempty set, whose elements are known as outcomes or states, and which is called the sample space; (ii) \(\mathcal {F}\) is a family of subsets of \(\varOmega \) that has the structure of a Borel \(\sigma \)-field, which means that:
(a) \(\emptyset \in \mathcal {F}\)
(b) If A \(\in \mathcal {F}\), then its complement \(A^c\) also belongs to \(\mathcal {F}\)
(c) If \(A_1, A_2, \ldots \in \mathcal {F}\) then \( \bigcup _{i=1}^{\infty } A_{i} \in \mathcal {F}\),
(iii) P is a function that associates a number P(A) to each set \(A \in \mathcal {F}\), with the following properties:
(a) \( 0 \le P(A) \le 1 \)
(b) \( P(\varOmega ) = 1 \)
(c) If \(A_1, A_2, \ldots \) are pairwise disjoint sets in \(\mathcal {F}\) (that is, \( A_i \cap A_j = \emptyset \) whenever \(i \ne j\)), then \( P( \bigcup _{i=1}^{\infty } A_{i}) = \sum _{i=1}^{\infty } P(A_i) \).
The elements of the \(\sigma \)-field \(\mathcal {F}\) are called events and the mapping P is called a probability measure.
For one flip of a fair coin, \(\varOmega = \{Head=H,Tail=T\}\). The \(\sigma \)-field \(\mathcal {F}= \Pi (\varOmega )\) contains all subsets of \(\varOmega \), i.e. \(\mathcal {F} = \{ \emptyset , \{H\} , \{T\} , \{H,T\} \} \), and \(P(\{H\}) = P(\{T\}) = \frac{1}{2}\). The event \(\emptyset \) corresponds to obtaining neither heads nor tails, with probability 0, and \(\{H,T\}\) to obtaining either heads or tails, with probability 1.
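As a quick sanity check on this example, the probability \(P(\{H\})\) can be estimated by simulation; the following Python sketch (not part of the chapter; the coin is modeled as a uniform draw) shows the empirical frequency approaching \(\frac{1}{2}\):

```python
import random

# Monte Carlo estimate of P({H}) for a fair coin.
# Sample space Omega = {H, T} with P({H}) = P({T}) = 1/2;
# a draw below 0.5 is counted as "heads".
random.seed(0)
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
p_head = heads / n
print(p_head)  # close to 0.5 for large n
```

By the countable-additivity property (iii)(c), the frequencies of \(\{H\}\) and \(\{T\}\) sum to \(P(\{H,T\}) = 1\).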
Definition: Let (\(\varOmega \),\(\mathcal {F}\),P) be a probability space and let T be an arbitrary set (called the index set). Any collection of random variables \(x = \{x_t : t \in T\}\) defined on (\(\varOmega \),\(\mathcal {F}\),P) is called a stochastic process with index set T.
If \(x_{t_1},x_{t_2},\ldots , x_{t_n}\) are random variables defined on some common probability space, then \(\mathbf{x}_t = (x_{t_1},x_{t_2},\ldots , x_{t_n})\) defines an \(\mathbb {R}^n\) valued random variable, also called a random vector. Stochastic processes are also often called random processes.
- 2.
This is true for an ergodic process. A stochastic process is said to be ergodic if its statistical properties can be deduced from a single, sufficiently long, random sample of the process.
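The ergodic property can be illustrated numerically. For a stationary AR(1) process (a simple discrete-time ergodic example chosen here for illustration; it is not taken from the chapter), the time average along a single long trajectory converges to the ensemble mean:

```python
import numpy as np

# A stationary AR(1) process x_{t+1} = a*x_t + xi_t with i.i.d.
# standard-normal noise is ergodic: the time average over one long
# sample path approaches the ensemble (stationary) mean, here 0.
rng = np.random.default_rng(42)
a = 0.5
n = 200_000
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = a * x[t] + noise[t]

time_avg = x.mean()   # time average over a single realization
print(abs(time_avg) < 0.05)  # consistent with the ensemble mean 0
```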
- 3.
All stochastic processes satisfy the relation \(p(\mathbf{x}_3,t_3) = \int d\mathbf{x}_2\, p(\mathbf{x}_3,t_3;\mathbf{x}_2,t_2) = \int d\mathbf{x}_2\, p(\mathbf{x}_3,t_3|\mathbf{x}_2,t_2)\, p(\mathbf{x}_2,t_2)\). Moreover, the conditional PDF can be written as \(p(\mathbf{x}_3,t_3|\mathbf{x}_1,t_1)=\int d\mathbf{x}_2\, p(\mathbf{x}_3,t_3;\mathbf{x}_2,t_2|\mathbf{x}_1,t_1)= \int d\mathbf{x}_2\, p(\mathbf{x}_3,t_3|\mathbf{x}_2,t_2;\mathbf{x}_1,t_1)\, p(\mathbf{x}_2,t_2|\mathbf{x}_1,t_1)\). Taking into account the Markov assumption, for \(t_3> t_2 > t_1\) we can drop the \(\mathbf{x}_1\) dependence in the first factor. We therefore find \(p(\mathbf{x}_3,t_3|\mathbf{x}_1,t_1)= \int d\mathbf{x}_2\, p(\mathbf{x}_3,t_3|\mathbf{x}_2,t_2)\, p(\mathbf{x}_2,t_2|\mathbf{x}_1,t_1)\), which is the Chapman–Kolmogorov equation.
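The Chapman–Kolmogorov equation can be verified numerically for the Wiener (Brownian-motion) process, whose Gaussian transition density is a standard example; the grid, times, and points below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of the Chapman-Kolmogorov equation
#   p(x3,t3|x1,t1) = Int dx2 p(x3,t3|x2,t2) p(x2,t2|x1,t1)
# for the Wiener-process transition density, a Gaussian with
# variance t - t'. We take x1 = 0 for simplicity.
def p(x, xp, dt):
    return np.exp(-(x - xp) ** 2 / (2 * dt)) / np.sqrt(2 * np.pi * dt)

x2 = np.linspace(-20.0, 20.0, 4001)   # integration grid for x2
dx = x2[1] - x2[0]
t1, t2, t3 = 0.0, 1.0, 3.0
x3 = 1.5

lhs = p(x3, 0.0, t3 - t1)                               # direct density
rhs = np.sum(p(x3, x2, t3 - t2) * p(x2, 0.0, t2 - t1)) * dx  # CK integral
print(abs(lhs - rhs) < 1e-8)
```

The integral of the product of the two Gaussian kernels reproduces a Gaussian whose variance is the sum \((t_3-t_2)+(t_2-t_1)\), as the Chapman–Kolmogorov equation requires.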
Problems
2.1
Statistical moment-generating function
(a) Let \(\mathbf{x} = (x_1,\ldots , x_n)^{T}\) be a random vector, and \(\mathbf{u} = (u_1, \ldots , u_n)^T \in \mathbb {R}^n\), where \((\cdots )^T\) denotes the transpose of the vector \((\cdots )\). The statistical moment-generating function is defined by
\(Z_\mathbf{x} (\mathbf {u}) = \langle e^{\mathbf {u}^T \mathbf {x}} \rangle \)
for all \(\mathbf {u}\) for which the average exists (is finite). Show that the statistical moments of order k can be determined using the following relation:
\( \langle x_1^{k_1} \cdots x_n^{k_n} \rangle = \left. \frac{\partial ^{k} Z_\mathbf {x}(\mathbf {u})}{\partial u_1^{k_1} \cdots \partial u_n^{k_n}} \right| _{\mathbf {u}=0} \)
where \(k=k_1+\cdots +k_n\).
(b) The density function of the univariate normal distribution is given by
\( p(x) = \frac{1}{\sqrt{2\pi \sigma ^2}} \exp \left( -\frac{(x-\mu )^2}{2\sigma ^2} \right) \)
for \(-\infty<x<\infty \), where \(\mu \) is the mean and \(\sigma ^2>0\) is the variance. Show that
\( Z_x(u) = \langle e^{ux} \rangle = \exp \left( \mu u + \tfrac{1}{2} \sigma ^2 u^2 \right) \)
and prove that \( \langle (x-\mu )^{2n} \rangle = \frac{(2n)!}{2^n n!} \langle (x-\mu )^{2} \rangle ^n \) and \(\langle (x-\mu )^{2n+1} \rangle = 0\), where \(n=1,2,\ldots \).
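The even-moment formula \(\langle (x-\mu )^{2n} \rangle = \frac{(2n)!}{2^n n!}\, \sigma ^{2n}\) can be checked by numerical quadrature of the normal density; the parameter values below are arbitrary:

```python
import math
import numpy as np

# Quadrature check of <(x-mu)^{2n}> = (2n)!/(2^n n!) * sigma^{2n}
# for the univariate normal density. mu and sigma are illustrative.
mu, sigma = 0.7, 1.3
x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 200_001)
dx = x[1] - x[0]
pdf = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

for n in (1, 2, 3):
    numeric = np.sum((x - mu) ** (2 * n) * pdf) * dx
    exact = math.factorial(2 * n) / (2 ** n * math.factorial(n)) * sigma ** (2 * n)
    assert abs(numeric - exact) < 1e-6 * exact
print("even moments match")
```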
2.2
Bivariate normal distribution
The density function of the bivariate normal distribution is given by
\( p(x,y) = \frac{1}{2\pi \sigma _x \sigma _y \sqrt{1-\rho ^2}} \exp \left\{ -\frac{1}{2(1-\rho ^2)} \left[ \frac{(x-\mu _x)^2}{\sigma _x^2} - \frac{2\rho (x-\mu _x)(y-\mu _y)}{\sigma _x \sigma _y} + \frac{(y-\mu _y)^2}{\sigma _y^2} \right] \right\} \)
where \((\mu _x,\mu _y)\) is the mean vector and the variance-covariance matrix is
\( \mathbf {g} = \begin{pmatrix} \sigma _x^2 & \rho \sigma _x \sigma _y \\ \rho \sigma _x \sigma _y & \sigma _y^2 \end{pmatrix} \)
The constraints are \(\sigma _x^2>0,\sigma _y^2>0\) and \(-1<\rho <1\), where \(\rho \) is the correlation coefficient, \(\rho =Cov(x,y)/ \sigma _x \sigma _y\) and \(Cov(x,y) = \langle (x - \mu _x) (y- \mu _y) \rangle \).
Derive the conditional density \(p(y|x)\) and show that:
(a) \(\langle y| x \rangle =\mu _y + \rho ~ \frac{\sigma _y}{\sigma _x} ~ (x- \mu _x) \)
(b) \(\sigma ^2_{y| x} = \sigma _y^2 ~ (1-\rho ^2)\).
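Both conditional statistics can be checked by sampling a bivariate normal and conditioning on a narrow slice around a fixed \(x_0\); the parameter values in this sketch are arbitrary:

```python
import numpy as np

# Monte Carlo check of the bivariate-normal conditional statistics
#   <y|x> = mu_y + rho*(sigma_y/sigma_x)*(x - mu_x)
#   var(y|x) = sigma_y^2 * (1 - rho^2)
# Parameters are illustrative, not from the text.
rng = np.random.default_rng(1)
mu_x, mu_y, sx, sy, rho = 1.0, -2.0, 2.0, 0.5, 0.6
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=500_000).T

x0 = 2.0
mask = np.abs(x - x0) < 0.05        # narrow slice around x0
cond_mean = y[mask].mean()
cond_var = y[mask].var()

theory_mean = mu_y + rho * (sy / sx) * (x0 - mu_x)
theory_var = sy**2 * (1 - rho**2)
print(abs(cond_mean - theory_mean) < 0.05,
      abs(cond_var - theory_var) < 0.05)
```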
2.3
p-variate normal distribution
The density function of the p-variate normal distribution is given by
where \(\mathbf {x}^T=(x_1,\ldots ,x_p)\), \({\varvec{\mu }}^T=( \mu _1,\ldots ,\mu _p)\) and \(\mathbf {g}\) is a full rank variance-covariance matrix, i.e.,
where \(\mathbf {g}^{-1}\) and \(\left| \mathbf {g}\right| \) are the inverse and the determinant of \(\mathbf {g}\).
(a) Derive the statistical moment-generating function for p-variate normal distribution.
(b) Prove the following relation for the fourth-order correlation function (Wick's theorem), assuming \({\varvec{\mu }} = 0\):
\( \langle x_i x_j x_k x_l \rangle = \langle x_i x_j \rangle \langle x_k x_l \rangle + \langle x_i x_k \rangle \langle x_j x_l \rangle + \langle x_i x_l \rangle \langle x_j x_k \rangle \)
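Wick's theorem for the fourth-order correlation can be checked by Monte Carlo sampling of a zero-mean Gaussian vector; the covariance matrix below is an arbitrary positive-definite example:

```python
import numpy as np

# Monte Carlo check of Wick's theorem for a zero-mean Gaussian vector:
#   <x_i x_j x_k x_l> = g_ij g_kl + g_ik g_jl + g_il g_jk
# with an arbitrary (diagonally dominant, hence positive-definite)
# covariance matrix g chosen for illustration.
rng = np.random.default_rng(7)
g = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(4), g, size=2_000_000)

mc = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])      # <x1 x2 x3 x4>
wick = g[0, 1] * g[2, 3] + g[0, 2] * g[1, 3] + g[0, 3] * g[1, 2]
print(abs(mc - wick) < 0.01)
```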
2.4
Chapman–Kolmogorov equation
Show that the following conditional density functions, for (a) Brownian motion,
\( p(x_2,t_2|x_1,t_1) = \frac{1}{\sqrt{2\pi (t_2-t_1)}} \exp \left( -\frac{(x_2-x_1)^2}{2(t_2-t_1)} \right) \)
and (b) the Cauchy process,
\( p(x_2,t_2|x_1,t_1) = \frac{1}{\pi } \, \frac{t_2-t_1}{(x_2-x_1)^2+(t_2-t_1)^2} \)
where \(t_2>t_1\), satisfy the Chapman–Kolmogorov equation.
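For the Cauchy case, the Chapman–Kolmogorov integral can be checked numerically; because the Cauchy density has heavy tails, the grid below is deliberately wide, and the times and points are illustrative choices:

```python
import numpy as np

# Numerical check that the Cauchy transition density
#   p(x2,t2|x1,t1) = (1/pi) * (t2-t1) / ((x2-x1)^2 + (t2-t1)^2)
# satisfies the Chapman-Kolmogorov equation. Wide grid because the
# integrand decays only like 1/x^4 at infinity.
def cauchy(x, xp, dt):
    return dt / (np.pi * ((x - xp) ** 2 + dt ** 2))

x2 = np.linspace(-4000.0, 4000.0, 2_000_001)
dx = x2[1] - x2[0]
t1, t2, t3 = 0.0, 1.0, 2.5
x1, x3 = 0.0, 0.8

lhs = cauchy(x3, x1, t3 - t1)                                  # direct density
rhs = np.sum(cauchy(x3, x2, t3 - t2) * cauchy(x2, x1, t2 - t1)) * dx
print(abs(lhs - rhs) < 1e-4)
```

The Cauchy scale parameters add under convolution, \((t_3-t_2)+(t_2-t_1)=t_3-t_1\), which is exactly the Chapman–Kolmogorov requirement.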
© 2019 Springer Nature Switzerland AG
Cite this chapter
Tabar, M.R.R. (2019). Introduction to Stochastic Processes. In: Analysis and Data-Based Reconstruction of Complex Nonlinear Dynamical Systems. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-030-18472-8_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-18471-1
Online ISBN: 978-3-030-18472-8
eBook Packages: Physics and Astronomy