Part of the book series: Understanding Complex Systems (UCS)

Abstract

In this chapter we provide the mathematical tools needed to study stochastic processes from a physical point of view.

Notes

  1.

    To define a stochastic process, let us first provide the definition of a probability space. A probability space associated with a random experiment is a triple (\(\varOmega \), \(\mathcal {F}\), P), where: (i) \(\varOmega \) is a nonempty set, whose elements are known as outcomes or states, and which is called the sample space; (ii) \(\mathcal {F}\) is a family of subsets of \(\varOmega \) that has the structure of a \(\sigma \)-field, meaning that:

    (a) \(\emptyset \in \mathcal {F}\)

    (b) If A \(\in \mathcal {F}\), then its complement \(A^c\) also belongs to \(\mathcal {F}\)

    (c) If \(A_1, A_2, \ldots \in \mathcal {F}\) then \( \bigcup _{i=1}^{\infty } A_{i} \in \mathcal {F}\),

    (iii) P is a function that assigns a number P(A) to each set \(A \in \mathcal {F}\), with the following properties:

    (a) \( 0 \le P(A) \le 1 \)

    (b) \( P(\varOmega ) = 1 \)

    (c) If \(A_1, A_2, \ldots \) are pairwise disjoint sets in \(\mathcal {F}\) (that is, \( A_i \cap A_j = \emptyset \) whenever \(i \ne j\)), then \( P( \bigcup _{i=1}^{\infty } A_{i}) = \sum _{i=1}^{\infty } P(A_i) \).

    The elements of the \(\sigma \)-field \(\mathcal {F}\) are called events and the mapping P is called a probability measure.

    For one flip of a coin, \(\varOmega = \{Head=H,Tail=T\}\). The \(\sigma \)-field \(\mathcal {F}= \Pi (\varOmega )\) contains all subsets of \(\varOmega \), i.e. \(\mathcal {F} = \{ \emptyset , \{H\} , \{T\} , \{H,T\} \} \), with \(P(\{H\}) = P(\{T\}) = \frac{1}{2}\). The event \(\emptyset \) corresponds to obtaining neither heads nor tails and has probability 0, while \(\{H,T\}\) corresponds to obtaining either heads or tails and has probability 1.

    Definition: Let (\(\varOmega \),\(\mathcal {F}\),P) be a probability space and let T be an arbitrary set (called the index set). Any collection of random variables \(x = \{x_t : t \in T\}\) defined on (\(\varOmega \),\(\mathcal {F}\),P) is called a stochastic process with index set T.

    If \(x_{t_1},x_{t_2},\ldots , x_{t_n}\) are random variables defined on some common probability space, then \(\mathbf{x}_t = (x_{t_1},x_{t_2},\ldots , x_{t_n})\) defines an \(\mathbb {R}^n\) valued random variable, also called a random vector. Stochastic processes are also often called random processes.
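
    As a minimal illustration of these definitions (our sketch, not part of the text), the following Python snippet constructs the coin-flip probability space explicitly and draws one sample path of a stochastic process indexed by \(t \in \{0,\ldots ,9\}\):

```python
import itertools
import random

# Sample space for one flip of a fair coin.
omega = ("H", "T")

# The sigma-field F = power set of omega: every subset is an event.
events = [frozenset(s) for r in range(len(omega) + 1)
          for s in itertools.combinations(omega, r)]

# Probability measure: P(A) = |A| / |omega| for the fair coin.
P = {A: len(A) / len(omega) for A in events}

print(P[frozenset()])             # empty event: 0.0
print(P[frozenset({"H"})])        # {H}: 0.5
print(P[frozenset({"H", "T"})])   # certain event {H, T}: 1.0

# A stochastic process {x_t : t in T} with T = {0,...,9}:
# each x_t is an independent fair coin flip (one realization shown).
path = {t: random.choice(omega) for t in range(10)}
print(path)
```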

  2.

    This is true for an ergodic process. A stochastic process is said to be ergodic if its statistical properties can be deduced from a single, sufficiently long, random sample of the process.
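
    To make this concrete, here is a small Python sketch (an illustration we add, using a stationary AR(1) process as the ergodic example) comparing the time average of \(x^2\) along one long trajectory with the ensemble average over many independent realizations; for an ergodic stationary process the two agree:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma = 0.9, 1.0                 # AR(1): x_{k+1} = phi * x_k + xi_k
std0 = sigma / np.sqrt(1 - phi**2)    # stationary standard deviation

def ar1_path(n):
    """One sample path started from the stationary distribution."""
    x = np.empty(n)
    x[0] = rng.normal(0, std0)
    for k in range(n - 1):
        x[k + 1] = phi * x[k] + rng.normal(0, sigma)
    return x

# Time average of x^2 along a single long realization ...
time_avg = np.mean(ar1_path(200_000) ** 2)
# ... versus the ensemble average of x^2 over many short realizations.
ensemble_avg = np.mean([ar1_path(50)[-1] ** 2 for _ in range(10_000)])

print(time_avg, ensemble_avg)  # both approach sigma^2 / (1 - phi^2) ~ 5.26
```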

  3.

    All stochastic processes satisfy the relation
$$ p(\mathbf{x}_3,t_3) = \int d\mathbf{x}_2 \, p(\mathbf{x}_3,t_3;\mathbf{x}_2,t_2) = \int d\mathbf{x}_2 \, p(\mathbf{x}_3,t_3|\mathbf{x}_2,t_2) \, p(\mathbf{x}_2,t_2) . $$
    Moreover, the conditional PDF can be written as
$$ p(\mathbf{x}_3,t_3|\mathbf{x}_1,t_1) = \int d\mathbf{x}_2 \, p(\mathbf{x}_3,t_3;\mathbf{x}_2,t_2|\mathbf{x}_1,t_1) = \int d\mathbf{x}_2 \, p(\mathbf{x}_3,t_3|\mathbf{x}_2,t_2;\mathbf{x}_1,t_1) \, p(\mathbf{x}_2,t_2|\mathbf{x}_1,t_1) . $$
    Taking into account the Markov assumption, for \(t_3> t_2 > t_1\) the dependence on \(\mathbf{x}_1\) in the first factor can be dropped. We therefore find
$$ p(\mathbf{x}_3,t_3|\mathbf{x}_1,t_1) = \int d\mathbf{x}_2 \, p(\mathbf{x}_3,t_3|\mathbf{x}_2,t_2) \, p(\mathbf{x}_2,t_2|\mathbf{x}_1,t_1) , $$
    which is the Chapman–Kolmogorov equation.
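
    For a finite-state Markov chain, the integral over \(\mathbf{x}_2\) becomes a sum and the Chapman–Kolmogorov equation reduces to multiplication of transition matrices. A minimal Python check (our illustration, with an arbitrary two-state chain):

```python
import numpy as np

# Transition matrices of a two-state Markov chain; entry [i, j] is
# p(x_next = j | x_prev = i), so each row sums to one.
P12 = np.array([[0.9, 0.1],      # transitions from t1 to t2
                [0.3, 0.7]])
P23 = np.array([[0.8, 0.2],      # transitions from t2 to t3
                [0.5, 0.5]])

# Chapman-Kolmogorov: p(x3|x1) = sum_{x2} p(x3|x2) p(x2|x1),
# i.e. the matrix product P13 = P12 @ P23.
P13 = P12 @ P23
print(P13)
print(P13.sum(axis=1))  # rows of P13 still sum to one
```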

Problems

2.1

Statistical moment-generating function

(a) Let \(\mathbf{x} = (x_1,\ldots , x_n)^{T}\) be a random vector and \(\mathbf{u} = (u_1, \ldots , u_n)^T \in \mathbb {R}^n\), where \((\cdots )^T\) denotes the transpose of the vector \((\cdots )\). The statistical moment-generating function is defined by

\(Z_\mathbf{x} (\mathbf {u}) = \langle e^{\mathbf {u}^T \mathbf {x}} \rangle \)

for all \(\mathbf {u}\) for which the average exists (is finite). Show that the statistical moments of order k can be determined using the following relation:

$$ \left. \frac{\partial ^k}{\partial u_1^{k_1} \cdots \partial u_n^{k_n}} Z_\mathbf{x} (\mathbf{u}) \right| _{\mathbf{u}=0} = \langle x_1 ^{k_1} \cdots x_n ^{k_n} \rangle $$

where \(k=k_1+\cdots +k_n\).

(b) The density function of the univariate normal distribution is given by

$$ f(x)=\frac{1}{\sqrt{2\pi }\sigma }\exp \left\{ -\frac{1}{2}\left( \frac{x-\mu }{\sigma }\right) ^2\right\} $$

for \(-\infty<x<\infty \), where \(\mu \) is the mean and \(\sigma ^2>0\) is the variance. Show that

$$ Z_x(u) = \exp \{ \mu u + \frac{\sigma ^2 u^2}{2} \} $$

and prove that \( \langle (x-\mu )^{2n} \rangle = \frac{(2n)!}{2^n n!} \langle (x-\mu )^{2} \rangle ^n \) and \(\langle (x-\mu )^{2n+1} \rangle = 0\), where \(n=1,2,\ldots \).
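
A quick Monte Carlo sanity check of part (b) (our sketch, not part of the problem statement) compares sample central moments of a normal distribution with \((2n)!/(2^n n!)\,\sigma ^{2n}\):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=2_000_000)

for n in (1, 2, 3):
    mc = np.mean((x - mu) ** (2 * n))   # Monte Carlo estimate of <(x-mu)^{2n}>
    exact = math.factorial(2 * n) / (2**n * math.factorial(n)) * sigma**(2 * n)
    print(n, mc, exact)                 # e.g. n=2: <(x-mu)^4> = 3 sigma^4
```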

2.2

Bivariate normal distribution

The density function of the bivariate normal distribution is given by

$$ p(x,y)=\frac{\exp \left\{ -\frac{1}{2(1-\rho ^2)}\left[ \left( \frac{x-\mu _x}{\sigma _x}\right) ^2-2\rho \left( \frac{x-\mu _x}{\sigma _x}\right) \left( \frac{y-\mu _y}{\sigma _y}\right) +\left( \frac{y-\mu _y}{\sigma _y}\right) ^2\right] \right\} }{2\pi \sigma _x\sigma _y\sqrt{1-\rho ^2}} $$

where \((\mu _x,\mu _y)\) is the mean vector and the variance-covariance matrix is

$$ \left( \begin{array}{cc} Var(x) & Cov(x,y) \\ Cov(x,y) & Var(y) \end{array} \right) =\left( \begin{array}{cc} \sigma _x^2 & \rho \sigma _x\sigma _y \\ \rho \sigma _x\sigma _y & \sigma _y^2 \end{array} \right) . $$

The constraints are \(\sigma _x^2>0,\sigma _y^2>0\) and \(-1<\rho <1\), where \(\rho \) is the correlation coefficient, \(\rho =Cov(x,y)/ \sigma _x \sigma _y\) and \(Cov(x,y) = \langle (x - \mu _x) (y- \mu _y) \rangle \).

Derive the conditional average and variance of y given x, and show that

(a) \(\langle y| x \rangle =\mu _y + \rho ~ \frac{\sigma _y}{\sigma _x} ~ (x- \mu _x) \)

(b) \(\sigma ^2_{y| x} = \sigma _y^2 ~ (1-\rho ^2)\).
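
Both results can be checked numerically by sampling the bivariate normal and conditioning on a thin slice around a chosen \(x_0\); the following Python sketch (ours, with arbitrary parameter values) does this:

```python
import numpy as np

rng = np.random.default_rng(2)
mu_x, mu_y, sx, sy, rho = 1.0, -1.0, 2.0, 1.0, 0.6

cov = [[sx**2, rho * sx * sy],
       [rho * sx * sy, sy**2]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=2_000_000).T

# Condition on x near x0 by keeping samples in a thin slice around it.
x0 = 2.0
ys = y[np.abs(x - x0) < 0.02]

print(ys.mean(), mu_y + rho * (sy / sx) * (x0 - mu_x))  # (a) conditional mean: -0.7
print(ys.var(), sy**2 * (1 - rho**2))                   # (b) conditional variance: 0.64
```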

2.3

p-variate normal distribution

The density function of the p-variate normal distribution is given by

$$ f(\mathbf {x})=\frac{1}{\left( 2\pi \right) ^{p/2}\left| \mathbf {g}\right| ^{1/2}}\exp \left\{ -\frac{1}{2}\left( {\mathbf x}-{\varvec{\mu }}\right) ^T\mathbf {g}^{-1}\left( {\mathbf x}-{\varvec{\mu }}\right) \right\} $$

where \(\mathbf {x}^T=(x_1,\ldots ,x_p)\), \({\varvec{\mu }}^T=( \mu _1,\ldots ,\mu _p)\) and \(\mathbf {g}\) is a full rank variance-covariance matrix, i.e.,

$$ \mathbf {g}_{ij}=Cov(x_i,x_j) = \langle (x_i - \mu _i) (x_j - \mu _j) \rangle $$

where \(\mathbf {g}^{-1}\) and \(\left| \mathbf {g}\right| \) are the inverse and the determinant of \(\mathbf {g}\).

(a) Derive the statistical moment-generating function for p-variate normal distribution.

(b) Prove the following relation for the fourth-order correlation function (Wick’s Theorem):

$$ \langle (x_i - \mu _i) (x_j - \mu _j) (x_k - \mu _k) (x_l -\mu _l) \rangle = \mathbf {g}_{ij} \mathbf {g}_{kl} + \mathbf {g}_{ik} \mathbf {g}_{jl} + \mathbf {g}_{il} \mathbf {g}_{jk} . $$
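
As a numerical illustration of Wick's theorem (our addition, with an arbitrary full-rank covariance matrix), one can compare a Monte Carlo estimate of the fourth-order correlator with the sum over the three pairings:

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary full-rank 3x3 covariance matrix g for a centered normal vector.
g = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.0, 0.2],
              [0.3, 0.2, 1.5]])
x = rng.multivariate_normal(np.zeros(3), g, size=2_000_000)

i, j, k, l = 0, 1, 2, 1
mc = np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l])     # <x_i x_j x_k x_l>
wick = g[i, j] * g[k, l] + g[i, k] * g[j, l] + g[i, l] * g[j, k]
print(mc, wick)   # both ~ 0.5 for this choice of indices
```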

2.4

Chapman–Kolmogorov equation

Show that the following conditional density functions, for (a) Brownian motion and (b) the Cauchy process, satisfy the Chapman–Kolmogorov equation:

$$\begin{aligned} (a) \quad p(x_2,t_2|x_1,t_1)&= \frac{1}{\sqrt{ 2 \pi (t_2-t_1)}} \exp \left\{ - \frac{(x_2-x_1)^2}{2(t_2-t_1)} \right\} , \\ (b) \quad p(x_2,t_2|x_1,t_1)&= \frac{1}{\pi } \, \frac{t_2-t_1}{(t_2-t_1)^2 + (x_2-x_1)^2} \end{aligned}$$

where \(t_2>t_1\).
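
Before proving the statements analytically, it can help to verify them numerically; the following Python sketch (our addition, using simple grid quadrature with \(x_1 = 0\)) checks the Chapman–Kolmogorov equation for both kernels:

```python
import numpy as np

def gauss(x, t):    # (a) Brownian-motion kernel, x measured from x1 = 0
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def cauchy(x, t):   # (b) Cauchy kernel, x measured from x1 = 0
    return t / (np.pi * (t**2 + x**2))

x2 = np.linspace(-60, 60, 240_001)   # quadrature grid for the x2 integral
dx = x2[1] - x2[0]
t21, t32 = 1.0, 2.0                  # time increments t2 - t1 and t3 - t2
x3 = 1.3                             # test point

for p in (gauss, cauchy):
    lhs = p(x3, t21 + t32)                           # p(x3, t3 | x1, t1)
    rhs = np.sum(p(x3 - x2, t32) * p(x2, t21)) * dx  # integral over x2
    print(p.__name__, lhs, rhs)                      # agree to quadrature accuracy
```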

Copyright information

© 2019 Springer Nature Switzerland AG

Cite this chapter

Tabar, M.R.R. (2019). Introduction to Stochastic Processes. In: Analysis and Data-Based Reconstruction of Complex Nonlinear Dynamical Systems. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-030-18472-8_2