# The Fractional Quantum Derivative and the Fractional Linear Scale Invariant Systems

Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 84)

## Abstract

The normal way of introducing the notion of derivative is by means of the limit of an incremental ratio that can assume three forms, depending on the translations used, as we saw in Chaps. 1 and 4. On the other hand, in those derivatives the limit operation is done over a set of uniformly spaced points: a linear scale was used. Here we present an alternative derivative that is valid only for t > 0 or t < 0 and uses an exponential scale.

## Keywords

Impulse Response · Fractional Derivative · Fractional Order System · Integer Order · Incremental Ratio

## 6.1 Introduction

The normal way of introducing the notion of derivative is by means of the limit of an incremental ratio that can assume three forms, depending on the translations used, as we saw in Chaps. 1 and 4. On the other hand, in those derivatives the limit operation is done over a set of uniformly spaced points: a linear scale was used. Here we present an alternative derivative that is valid only for t > 0 or t < 0 and uses an exponential scale. We are going to introduce the so-called Quantum Derivative [1, 2]. We proceed as before. Let $$\Updelta_{q}$$ be the following incremental ratio:
$$\Updelta_{q} f\left( t \right) = {\frac{{f\left( t \right) - f\left( {qt} \right)}}{{\left( {1 - q} \right)t}}}$$
(6.1)
where q is a positive real number less than 1 and f(t) is assumed to be a causal type signal. The corresponding derivative is obtained by computing the limit as q goes to 1 (to be more precise, we should state $$q \to 1^{-}$$)
$$D_{q} f\left( t \right) = \mathop {\lim }\limits_{q \to 1} {\frac{{f\left( t \right) - f\left( {qt} \right)}}{{\left( {1 - q} \right)t}}}$$
(6.2)
This derivative uses values of the variable below t. We can introduce another one that uses values above t. It is defined by
$$D_{{q^{ - 1} }} f\left( t \right) = \mathop {\lim }\limits_{q \to 1} {\frac{{f\left( {q^{ - 1} t} \right) - f\left( t \right)}}{{\left( {q^{ - 1} - 1} \right)t}}}$$
(6.3)
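As a quick numerical sketch (not in the original text, and with an arbitrary test signal), both incremental ratios (6.1) and (6.3) can be checked to approach the classical derivative as q tends to 1 from below; for f(t) = t³ both should tend to 3t²:

```python
def delta_q(f, t, q):
    # "below t" incremental ratio (6.1)
    return (f(t) - f(q * t)) / ((1 - q) * t)

def delta_q_inv(f, t, q):
    # "above t" incremental ratio (6.3)
    return (f(t / q) - f(t)) / ((1 / q - 1) * t)

f = lambda t: t ** 3        # arbitrary causal test signal, t > 0
t, q = 2.0, 0.999           # q close to 1 from below
below = delta_q(f, t, q)    # both tend to f'(t) = 3 t^2 = 12 as q -> 1-
above = delta_q_inv(f, t, q)
```

For this f, delta_q gives exactly t²(1 + q + q²), which makes the convergence to 3t² explicit.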

We will generalize these derivatives, first for integer orders and afterwards for real ones, as we did before. We will present the two formulations that come naturally from (6.2) and (6.3), using values below and above the independent variable. We could also define “two-sided” derivatives as we did before, but we will not do it here.

From the Mellin transform of both derivatives we will obtain two integral formulae similar to the Liouville derivatives presented earlier. Although we will not study the properties of these derivatives here, it may be advanced that they can be used in scale variation problems and to deal with systems defined by Euler–Cauchy type differential equations, as we will see later. For now, we will present the steps leading to the fractional quantum derivative and its relation to the Mellin transform (MT).

## 6.2 The Summation Formulations

### 6.2.1 The “Below t” Case

We begin by generalizing formula (6.1) for any positive integer order. The formula can be obtained by its repeated application, but we prefer to work in the context of the Mellin Transform due to its simplicity. Let us introduce the Mellin transform by
$$H\left( s \right) = \int\limits_{0}^{\infty } {h\left( u \right)u^{ - s - 1} {\text{d}}u}$$
(6.4)
with $$s \in C$$. This transform is slightly different from the one presented by Bertrand et al. [3] and in the current literature, but it is more convenient since it leads to results similar to those obtained with the Laplace transform in the study of shift invariant systems.
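The convention (6.4) can be made concrete with a numerical sketch (not from the original text): with the arbitrary choice h(u) = e^{−u}, the transform is H(s) = Γ(−s) for Re(s) < 0, which a simple midpoint rule reproduces:

```python
import math

def mellin(h, s, upper=60.0, n=200_000):
    # midpoint-rule approximation of H(s) = integral_0^inf h(u) u^(-s-1) du  (6.4)
    du = upper / n
    return sum(h((k + 0.5) * du) * ((k + 0.5) * du) ** (-s - 1)
               for k in range(n)) * du

s = -2.5
H = mellin(lambda u: math.exp(-u), s)   # should approach Gamma(-s) = Gamma(2.5)
```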
Consider that our domain is R +. We introduce the multiplicative convolution defined by
$$f(t)\,\nu\, g(t) = \int\limits_{0}^{\infty } {f(t/v)} g(v){\frac{{{\text{d}}v}}{v}}$$
(6.5)
It is easy to see that the neutral element of this convolution is $$g\left( t \right) \, = \delta \left( {t - 1} \right).$$ With this, we can show that
$$\Updelta_{q} f(t) = \left[ {{\frac{{\delta (t - 1) - \delta (t - q^{ - 1} )}}{(1 - q)}}} \right]\nu [t^{ - 1} f(t)]$$
(6.6)
As it is known, the Mellin Transform of the multiplicative convolution is equal to the product of the transforms of both functions. So we obtain:
$${\mathbf{M}}[\Updelta_{q} f(t)] = {\frac{{1 - q^{s + 1} }}{1 - q}}F(s + 1)$$
(6.7)
The repeated application of the operator (6.7) leads to:
$${\mathbf{M}}[\Updelta_{q}^{N} f(t)] = \mathop \Uppi \limits_{i = 1}^{N} {\frac{{1 - q^{s + i} }}{1 - q}}F(s + N)$$
(6.8)
We are going to manipulate the first factor and use the q-binomial formula [1]
$$[1 - q^{s + 1} ]_{q}^{N} = \mathop \Uppi \limits_{i = 0}^{N - 1} (1 - q^{1 + s} q^{i} )$$
We have first
$$\mathop \Uppi \limits_{i = 1}^{N} {\frac{{1 - q^{s + i} }}{1 - q}} = {\frac{{\mathop \Uppi \nolimits_{i = 1}^{N} (1 - q^{s} q^{i} )}}{{(1 - q)^{N} }}}={\frac{{\mathop \Uppi \nolimits_{i = 0}^{N - 1} (1 - q^{1 + s} q^{i} )}}{{(1 - q)^{N} }}} = {\frac{{\left[ {1 - q^{s + 1} } \right]_{q}^{N} }}{{(1 - q)^{N} }}}$$
The Gauss binomial formula
$$[a + b]_{q}^{N} = \sum\limits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]}_{q} ( - 1)^{j} q^{{j\left( {j - 1} \right)/2}} b^{j} a^{N - j}$$
allows us to obtain a different way of expressing the formula on the right. Introducing the q-binomial coefficients
$$\left[ {_{j}^{\alpha } } \right]_{q} = {\frac{{[\alpha ]_{q} !}}{{[j]_{q} !\,[\alpha - j]_{q} !}}}$$
(6.9)
with $$[\alpha ]_{q}$$ given by
$$[\alpha ]_{q} = {\frac{{1 - q^{\alpha } }}{1 - q}}$$
(6.10)
the expression on the right can be written as [1, 2]
$${\frac{{[1 - q^{s + 1} ]_{q}^{N} }}{{(1 - q)^{N} }}} = {\frac{{\sum\nolimits_{j = 0}^{N} {[_{j}^{N} ]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{js} }}{{(1 - q)^{N} }}}$$
(6.11)
that inserted into (6.8) gives:
$${\mathbf{M}}\left[ {\Updelta_{q}^{N} f(t)} \right] = {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{js} }}{{(1 - q)^{N} }}}F(s + N)$$
(6.12)
From the properties of the Mellin transform [4]
$${\mathbf{M}}^{ - 1} [q^{js} F(s + N)] = q^{ - jN} t^{ - N} f(q^{j} t)$$
(6.13)
We conclude that:
$$\Updelta_{q}^{N} f(t) = t^{ - N} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } \left( { - 1} \right)^{j} q^{j(j + 1)/2} q^{ - jN} f(q^{j} t)}}{{(1 - q)^{N} }}}$$
(6.14)
To obtain the corresponding derivatives we only have to perform the limit computation [1, 2, 5]
$$D_{q}^{N} f(t) = t^{ - N} \mathop {\lim }\nolimits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{ - jN} f(q^{j} t)}}{{(1 - q)^{N} }}}$$
(6.15)
To test the behaviour of the above formula, let us compute the second derivative of the function $$f\left( t \right) \, = \, t^{3} u\left( t \right)$$, where u(t) is the Heaviside unit step. We have:
$$D_{q}^{2} f(t) = t\mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{2} {\left[ {_{j}^{2} } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{j} }}{{(1 - q)^{2} }}}$$
and, from (6.11),
$$\begin{aligned} D_{q}^{2} f(t) &= t\mathop {\lim }\limits_{q \to 1} {\frac{{\mathop \Uppi \nolimits_{i = 0}^{1} (1 - q^{2} q^{i} )}}{{(1 - q)^{2} }}} = t\mathop {\lim }\limits_{q \to 1} {\frac{{(1 - q^{2} )(1 - q^{3} )}}{{(1 - q)^{2} }}} \\ &= t\mathop {\lim }\limits_{q \to 1} (1 + q)(1 + q + q^{2} ) = 6t \\ \end{aligned}$$
as expected.
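The same result can be reproduced numerically from (6.14) with q close to 1. In the sketch below (illustrative, not part of the original text) the helper q_binom implements the Gauss q-binomial coefficient:

```python
import math

def q_binom(n, j, q):
    # Gauss q-binomial coefficient [n choose j]_q
    num = math.prod(1 - q ** (n - i) for i in range(j))
    den = math.prod(1 - q ** (i + 1) for i in range(j))
    return num / den

def d_qN(f, t, N, q):
    # finite-q approximation of the Nth order quantum derivative (6.14)/(6.15)
    s = sum(q_binom(N, j, q) * (-1) ** j * q ** (j * (j + 1) / 2)
            * q ** (-j * N) * f(q ** j * t) for j in range(N + 1))
    return t ** (-N) * s / (1 - q) ** N

val = d_qN(lambda t: t ** 3, 1.5, 2, 0.999)   # tends to 6 t = 9 as q -> 1-
```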
From (6.8) and (6.12), we conclude that:
$${\mathbf{M}}\left[ {t^{ - N} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{ - jN} f(q^{j} t)}}{{(1 - q)^{N} }}}} \right] = \mathop \Uppi \limits_{i = 1}^{N} {\frac{{1 - q^{s + i} }}{1 - q}}F(s + N)$$
(6.16)
and, performing the limit computation in the right hand side,
$${\mathbf{M}}\left[ {t^{ - N} \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{ - jN} f(q^{j} t)}}{{(1 - q)^{N} }}}} \right] = (1 + s)_{N} F(s + N)$$
(6.17)
where we represented by $$\left( a \right)_{N} = \, a\left( {a + 1} \right) \, \cdots \left( {a + N - 1} \right)$$ the Pochhammer symbol. From well known properties of the Gamma function, we can write
$${\mathbf{M}}\left[ {t^{ - N} \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{ - jN} f(q^{j} t)}}{{(1 - q)^{N} }}}} \right] = {\frac{\Upgamma (1 + s + N)}{\Upgamma (1 + s)}}F(s + N)$$
(6.18)
$$= ( - 1)^{N} {\frac{\Upgamma ( - s)}{\Upgamma ( - s - N)}}F(s + N)$$
(6.19)

The right hand side in (6.18) or (6.19) is the well known Mellin transform of the Nth order derivative. The left side is a new way of expressing such a derivative. This expression suggests that we may work with the “derivative” $$t^{N} D_{q}^{N} f(t)$$, also called scale derivative.
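The agreement between the multipliers in (6.18) and (6.19) and the Pochhammer symbol $$(1 + s)_{N}$$ can be verified with a small numerical check (an illustrative sketch; the values of s and N are arbitrary):

```python
import math

def poch(a, n):
    # rising factorial (a)_n = a (a+1) ... (a+n-1)
    return math.prod(a + k for k in range(n))

s, N = 0.3, 4
r1 = math.gamma(1 + s + N) / math.gamma(1 + s)        # multiplier in (6.18)
r2 = (-1) ** N * math.gamma(-s) / math.gamma(-s - N)  # multiplier in (6.19)
```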

We are going to generalise the previous results for the case of a real order, $$\alpha$$. So, let us return to (6.11) and substitute $$\alpha$$ for N in the left hand expression. In the numerator we obtain the fractional q-binomial $$[1 - q^{s + 1} ]_{q}^{\alpha }$$. The generalised Gauss binomial formula [1]
$$\left[ {1 + a} \right]_{q}^{\alpha } = \sum\limits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} a^{j}$$
allows us to write:
$$\left[ {1 - q^{s + 1} } \right]_{q}^{\alpha } = \sum\limits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{js}$$
(6.20)
From the properties of the Mellin transform
$${\mathbf{M}}^{ - 1} [q^{js} F(s + \alpha )] = q^{ - j\alpha } t^{ - \alpha } f(q^{j} t)$$
(6.21)
This leads us to a Grunwald–Letnikov like fractional quantum derivative:
$$D_{q}^{\alpha } f(t) = t^{ - \alpha } \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j + 1)/2} q^{ - j\alpha } f(q^{j} t)}}{{(1 - q)^{\alpha } }}}$$
(6.22)
that is similar to the formulation proposed by Al-Salam [6]. In (6.22) the q-binomial coefficients are given by
$$\left[ {_{j}^{\alpha } } \right]_{q} = {\frac{{\left[ {1 - q^{\alpha } } \right]_{q}^{j} }}{{[j]_{q} !}}}$$
(6.23)
Let us introduce the q-gamma function by
$$\Upgamma_{q} (t) = {\frac{{\left[ {1 - q} \right]_{q}^{\infty } }}{{\left[ {1 - q^{t} } \right]_{q}^{\infty } (1 - q)^{t - 1} }}}$$
(6.24)
where Re(t) > 0. With this function,
$$\Upgamma_{q} (n + 1) = {\frac{{[1 - q]_{q}^{n} }}{{\left( {1 - q} \right)^{n} }}} = \mathop \Uppi \limits_{i = 1}^{n} {\frac{{1 - q^{i} }}{1 - q}} = [n]_{q} !$$
(6.25)
the binomial coefficients can be written as:
$$\left[ {_{j}^{\alpha } } \right]_{q} = {\frac{{\Upgamma_{q} (\alpha + 1)}}{{\Upgamma_{q} (\alpha - j + 1)\Upgamma_{q} (j + 1)}}}$$
(6.26)
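As q tends to 1 from below, (6.25) tends to the ordinary factorial and the coefficients (6.26) tend to the ordinary binomial coefficients; a small numerical sketch (not from the original text, with an arbitrary q close to 1):

```python
import math

def q_factorial(n, q):
    # [n]_q! = product_{i=1}^{n} (1 - q^i)/(1 - q)   (6.25)
    return math.prod((1 - q ** i) / (1 - q) for i in range(1, n + 1))

def q_binom(n, j, q):
    # q-binomial coefficient via q-factorials, cf. (6.26)
    return q_factorial(n, q) / (q_factorial(j, q) * q_factorial(n - j, q))

q = 0.9999
f6 = q_factorial(6, q)    # tends to 6! = 720 as q -> 1-
b52 = q_binom(5, 2, q)    # tends to C(5, 2) = 10
```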
On the other hand, the fractional q-binomial in (6.20) is given by
$$\left[ {1 - q^{s + 1} } \right]_{q}^{\alpha } = {\frac{{\left[ {1 - q^{s + 1} } \right]_{q}^{\infty } }}{{\left[ {1 - q^{s + \alpha + 1} } \right]_{q}^{\infty } }}}$$
(6.27)
With (6.24), we can write
$$\left[ {1 - q^{s + 1} } \right]_{q}^{\alpha } = {\frac{{\Upgamma_{q} (1 + s + \alpha )}}{{\Upgamma_{q} (1 + s)}}}\cdot(1 - q)^{\alpha }$$
(6.28)
valid for $$\text{Re} \left( s \right) \, > \, - \min (0,\alpha ) \, - \, 1.$$ As the limit of $$\Upgamma_{q} \left( t \right)$$ when $$q \to 1$$ is $$\Upgamma \left( t \right)$$, it is a simple task to obtain:
$${\mathbf{M}}\left[ {D_{q}^{\alpha } f(t)} \right] = {\frac{\Upgamma (1 + s + \alpha )}{\Upgamma (1 + s)}}F(s + \alpha )$$
(6.29)
for $$\text{Re} \left( s \right) \, > \, - \min (0,\alpha ) \, - \, 1$$. This relation is the fractional generalisation of the integer order property [3] and allows us to obtain an integral representation of the fractional quantum derivative. We will return to this subject later.

To maintain coherence, we will consider (6.18) as the correct solution in the integer order case, for the “below t” situation.

### 6.2.2 The “Above t” Case

We are going to study the derivative using a grid of values above t. We proceed as in the last section. Let $$\Updelta_{{q^{ - 1} }}$$ be the following incremental ratio:
$$\Updelta_{{q^{ - 1} }} f(t) = {\frac{{f(q^{ - 1} t) - f(t)}}{{(q^{ - 1} - 1)t}}}$$
(6.30)
With the convolution (6.5), we can show that
$$\Updelta_{{q^{ - 1} }} f(t) = \left[ {{\frac{\delta (t - q) - \delta (t - 1)}{{(q^{ - 1} - 1)}}}} \right]\nu [t^{ - 1} f(t)]$$
(6.31)
Using the Mellin Transform we obtain:
$${\mathbf{M}}\left[ {\Updelta_{{q^{ - 1} }} f(t)} \right] = {\frac{{q^{ - (s + 1)} - 1}}{{q^{ - 1} - 1}}}F(s + 1)$$
(6.32)
The repeated application of the operator (6.32) leads to:
$${\mathbf{M}}\left[ {\mathop \Updelta \nolimits_{{q^{ - 1} }}^{N} f(t)} \right] = \mathop \Uppi \limits_{i = 1}^{N} {\frac{{q^{ - (s + i)} - 1}}{{q^{ - 1} - 1}}}F(s + N)$$
(6.33)
We are going to transform the first factor
$$\mathop \Uppi \limits_{i = 1}^{N} {\frac{{q^{ - (s + i)} - 1}}{{q^{ - 1} - 1}}} = {\frac{{\mathop \Uppi \nolimits_{i = 0}^{N - 1} \left( {1 - q^{ - s - N} q^{i} } \right)}}{{(1 - q^{ - 1} )^{N} }}} = {\frac{{\left[ {1 - q^{ - s - N} } \right]_{q}^{N} }}{{(1 - q^{ - 1} )^{N} }}}$$
(6.34)
and use the q-binomial formula leading to
$${\mathbf{M}}\left[ {\Updelta_{{q^{ - 1} }}^{N} f(t)} \right] = {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} q^{ - j(s + N)} }}{{(1 - q^{ - 1} )^{N} }}}F(s + N)$$
(6.35)
and, with (6.13),
$$\Updelta_{{q^{ - 1} }}^{N} f(t) = t^{ - N} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} f(q^{ - j} t)}}{{(1 - q^{ - 1} )^{N} }}}$$
(6.36)
allowing us to obtain the derivative:
$$D_{{q^{ - 1} }}^{N} f(t) = t^{ - N} \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{N} {\left[ {_{j}^{N} } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} f(q^{ - j} t)}}{{(1 - q^{ - 1} )^{N} }}}$$
(6.37)
Computing the limit as $$q \to 1$$ in (6.35), we conclude that:
$${\mathbf{M}}\left[ {D_{{q^{ - 1} }}^{N} f(t)} \right] = (1 + s)_{N} F(s + N)$$
(6.38)
that coincides with (6.17) as expected.
To generalize the above results for any order, we substitute $$\alpha$$ for N in the above expressions. We have from (6.35):
$${\mathbf{M}}\left[ {\Updelta_{{q^{ - 1} }}^{\alpha } f(t)} \right] = {\frac{{\sum\nolimits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} q^{ - j(s + \alpha )} }}{{(1 - q^{ - 1} )^{\alpha } }}}F(s + \alpha )$$
(6.39)
and then
$$D_{{q^{ - 1} }}^{\alpha } f(t) = t^{ - \alpha } \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} f(q^{ - j} t)}}{{(1 - q^{ - 1} )^{\alpha } }}}$$
(6.40)
Using the q-binomial theorem, we have:
$$\sum\limits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} q^{ - j(s + \alpha )} = \left[ {1 - q^{ - s - \alpha } } \right]_{q}^{\alpha }$$
and
$$\left[ {1 - q^{ - s - \alpha } } \right]_{q}^{\alpha } = {\frac{{\left[ {1 - q^{ - s - \alpha } } \right]_{q}^{\infty } }}{{\left[ {1 - q^{ - s} } \right]_{q}^{\infty } }}}$$
(6.41)
So, with (6.24)
$$\left[ {1 - q^{ - s - \alpha } } \right]_{q}^{\alpha } = {\frac{{\Upgamma_{q} ( - s)}}{{\Upgamma_{q} ( - s - \alpha )}}}\cdot(1 - q)^{\alpha }$$
and finally
$${\mathbf{M}}\left[ {D_{{q^{ - 1} }}^{\alpha } f(t)} \right] = ( - 1)^{\alpha } \cdot{\frac{\Upgamma ( - s)}{\Upgamma ( - s - \alpha )}}F(s + \alpha )$$
(6.42)
valid for $$\text{Re} \left( s \right) < - \max \, (0,\alpha )$$ and in agreement with (6.38) and (6.19).

## 6.3 Integral Formulations

The two Mellin transforms in (6.29) and (6.42) lead to different integral representations of fractional derivatives, obtained by computing the corresponding inverse functions. To do it, we will use well known results on the Beta function. To start, we are going to obtain the inverse $$h_{b} \left( t \right)$$ of $${\frac{\Upgamma ( - s)}{\Upgamma ( - s - \alpha )}}$$.

As is well known, the Euler Beta function is defined for $$\text{Re} \left( p \right) > 0$$ and $$\text{Re} \left( q \right) > 0$$ by
$$B(p,q) = \int\limits_{0}^{1} {\tau^{p - 1} (1 - \tau )^{q - 1} } {\hbox{d}}\tau$$
(6.43)
and it can be shown that [7]
$$B(p,q) = {\frac{\Upgamma (p)\Upgamma (q)}{\Upgamma (p + q)}}$$
(6.44)
This allows us to write:
$${\frac{\Upgamma ( - s)\Upgamma ( - \alpha )}{\Upgamma ( - s - \alpha )}} = \int\limits_{0}^{1} {\tau^{ - s - 1} (1 - \tau )^{ - \alpha - 1} } {\hbox{d}}\tau$$
(6.45)
provided that $$\text{Re} \left( s \right) < 0$$ and $$\text{Re} (\alpha ) < 0$$. This gives immediately
$$h_{b} (t) = {\frac{{( - 1)^{\alpha } }}{\Upgamma ( - \alpha )}}(1 - t)^{ - \alpha - 1} u(1 - t)$$
(6.46)
A similar procedure to obtain the inverse $$h_{a} \left( t \right)$$ of $${\frac{\Upgamma (1 + s + \alpha )}{\Upgamma (1 + s)}}$$ gives
$${\frac{\Upgamma (1 + s + \alpha )\Upgamma ( - \alpha )}{\Upgamma (1+s)}} = \int\limits_{0}^{1} {\tau^{s + \alpha } (1 - \tau )^{ -\alpha - 1} } {\hbox{d}}\tau$$
(6.47)
With a variable change inside the integral, we obtain:
$${\frac{\Upgamma (1 + s + \alpha )\Upgamma ( - \alpha )}{\Upgamma (1 + s)}} = \int\limits_{1}^{\infty } {\tau^{ - s - 1} (\tau - 1)^{ - \alpha - 1} } {\hbox{d}}\tau$$
(6.48)
and
$$h_{a} (t) = {\frac{1}{\Upgamma ( - \alpha )}}(t - 1)^{ - \alpha - 1} u(t - 1)$$
(6.49)
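The pair (6.48)–(6.49) can be checked numerically. The sketch below (not from the original text, with the arbitrary choices α = −1/2, s = 1/2) removes the integrable singularity at t = 1 by the substitution t = 1 + v²:

```python
import math

alpha, s = -0.5, 0.5   # need Re(alpha) < 0 and Re(s) > -min(0, alpha) - 1

def mellin_h_a(s, alpha, upper=200.0, n=200_000):
    # M[h_a](s) with h_a(t) = (t - 1)^(-alpha-1) u(t - 1) / Gamma(-alpha)  (6.49);
    # substitute t = 1 + v^2 to tame the singularity at t = 1
    dv = upper / n
    total = 0.0
    for k in range(n):
        v = (k + 0.5) * dv
        t = 1 + v * v
        total += (v * v) ** (-alpha - 1) * t ** (-s - 1) * 2 * v * dv
    return total / math.gamma(-alpha)

lhs = mellin_h_a(s, alpha)
rhs = math.gamma(1 + s + alpha) / math.gamma(1 + s)
```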
To obtain the integral formulations of the derivatives corresponding to (6.29) and (6.42), we remark that the inverse Mellin transform of $$F(s + \alpha )$$ is given by:
$${\mathbf{M}}^{ - 1} [F(s + \alpha )] = t^{ - \alpha } f(t)$$
(6.50)
and use the convolution (6.5). With (6.46) and (6.49) we obtain the following integral formulations, valid for $$\text{Re} (\alpha ) \, < \, 0$$.
$$D_{b}^{\alpha } f(t) = - {\frac{{t^{ - \alpha } }}{\Upgamma ( - \alpha )}}\int\limits_{0}^{1} {f(t/\tau )} (1 - \tau^{ - 1} )^{ - \alpha - 1} {\text{d}}\tau$$
(6.51)
and
$$D_{a}^{\alpha } f(t) = {\frac{{t^{ - \alpha } }}{\Upgamma ( - \alpha )}}\int\limits_{1}^{\infty } {f(t/\tau )} (\tau^{ - 1} - 1)^{ - \alpha - 1} {\text{d}}\tau$$
(6.52)
Since the convolution (6.5) is commutative, we can obtain another set of integral formulations for the derivatives. In fact, from (6.46) and (6.49), we obtain:
$$D_{a}^{\alpha } f(t) = {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{0}^{t} {(t/\tau - 1)^{ - \alpha - 1} } \tau^{ - \alpha } f(\tau ){\text{d}}\tau /\tau$$
and
$$D_{a}^{\alpha } f(t) = {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{0}^{t} {(t - \tau )^{ - \alpha - 1} } f(\tau ){\text{d}}\tau$$
(6.53)
which coincides with the Liouville derivative particularized for causal functions. For the other case, we have:
$$D_{b}^{\alpha }\, f(t) = - {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{t}^{\infty } {(t - \tau )^{ - \alpha - 1} } f(\tau ){\text{d}}\tau$$
(6.54)
which is the backward Liouville derivative for causal signals. Although we obtained these results for $$\alpha < 0$$, they remain valid for other values of $$\alpha$$, since $${\frac{\Upgamma ( - s)}{\Upgamma ( - s - \alpha )}}$$ and $${\frac{\Upgamma (1 + s + \alpha )}{\Upgamma (1 + s)}}$$ are analytic in the regions of convergence and we can fix an integration path independent of $$\alpha$$. This can be confirmed by expanding (6.46) and (6.49) and transforming each term of the series.
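For α < 0, (6.53) is a fractional integral of order −α, and for the Heaviside step the known result is $$t^{-\alpha}/\Upgamma(1-\alpha)$$. The sketch below (an illustration with α = −1/2, not from the original text) checks this, again using a substitution to tame the endpoint singularity:

```python
import math

alpha, t = -0.5, 2.0   # alpha < 0: (6.53) is then a fractional integral

def liouville(f, t, alpha, n=200_000):
    # D_a^alpha f(t) = (1/Gamma(-alpha)) * integral_0^t (t - tau)^(-alpha-1) f(tau) dtau  (6.53);
    # substitute tau = t - v^2 to tame the singularity at tau = t
    vmax = math.sqrt(t)
    dv = vmax / n
    total = 0.0
    for k in range(n):
        v = (k + 0.5) * dv
        total += (v * v) ** (-alpha - 1) * f(t - v * v) * 2 * v * dv
    return total / math.gamma(-alpha)

val = liouville(lambda u: 1.0, t, alpha)
expected = t ** (-alpha) / math.gamma(1 - alpha)   # known result for the step
```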

## 6.4 On the Fractional Linear Scale Invariant Systems

### 6.4.1 Introduction

Braccini and Gambardella [8] introduced the concept of “form-invariant” filters. These are systems such that a scaling of the input gives rise to the same scaling of the output. This is important in the detection and estimation of signals of unknown size requiring some type of pre-processing: for example, edge sharpening in image processing or in radar signals. However, in their attempt to define such systems, they did not give any formulation in terms of a differential equation. The Linear Scale Invariant Systems (LScIS) were really introduced by Yazici and Kashyap [9, 10] for the analysis and modelling of 1/f phenomena and, in general, of self-similar processes, namely the scale stationary processes. Their approach was based on an integer order Euler–Cauchy differential equation. However, they solved only the particular all-pole case. To insert a fractional behaviour, they proposed the concept of pseudo-impulse response. Here we avoid this procedure by presenting a general formulation of the LScIS based on fractional derivatives. These systems are described by fractional Euler–Cauchy equations, and the fractional quantum derivatives are suitable for dealing with them. The use of the Mellin transform allows us to define the multiplicative convolution and, from it, to show that the power function is the eigenfunction of the LScIS and that the eigenvalue is the transfer function.

The computation of the impulse response from the transfer function follows a procedure very similar to the one used for shift-invariant systems. We will follow a two step procedure. In the first step, we solve a particular case with integer differentiation orders; afterwards, we solve the fractional case.

### 6.4.2 The General Formulation

We are going to consider a general formulation for the LScIS. The integer order case was studied by Yazici and Kashyap [9, 10]. To do it, we need the two fractional quantum derivatives that we presented in Sect. 6.2: the “below t” (analogous to anti-causal) and “above t” (analogous to causal) derivatives. If t were a time variable, we would speak of anti-causal and causal derivatives. We saw that, working in the context of the Mellin transform, we obtain two different regions of convergence: to the left and to the right of a vertical straight line. This is not needed when dealing with integer order systems, because we only have one Mellin transform for $$t^{n} f^{(n)} \left( t \right)$$ if n is integer. We rewrite here the two fractional quantum derivatives we are going to use
$$D_{q}^{\alpha } f(t) = \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]}_{q} ( - 1)^{j} q^{j(j + 1)/2} q^{ - j\alpha } f(q^{j} t)}}{{(1 - q)^{\alpha } t^{\alpha } }}}$$
(6.55)
and
$$D_{{q^{ - 1} }}^{\alpha } f(t) = \mathop {\lim }\limits_{q \to 1} {\frac{{\sum\nolimits_{j = 0}^{\infty } {\left[ {_{j}^{\alpha } } \right]_{q} } ( - 1)^{j} q^{j(j - 1)/2} f(q^{ - j} t)}}{{(1 - q^{ - 1} )^{\alpha } t^{\alpha } }}}$$
(6.56)
where $$0 < q < 1$$. When $$\alpha$$ is a positive integer, these derivatives lead to the results obtained by Yazici and Kashyap [9, 10]. We must emphasise an interesting fact: these derivatives are not local (unless $$\alpha$$ is a positive integer), because they use infinitely many values on the left or on the right. So, the whole left or right history of the signal is needed. This is important in systems based on these derivatives: they exhibit long-range memory. With the adopted Mellin transform (6.4) we are led to results similar to those obtained with the Laplace transform in the study of shift invariant systems. The Mellin transforms of the above derivatives are given by (6.29) and (6.42):
$${\mathbf{M}}\left[ {D_{q}^{\alpha } f(t)} \right] = {\frac{\Upgamma (1 + s + \alpha )}{\Upgamma (1 + s)}}F(s + \alpha )$$
valid for $$\text{Re} \left( s \right) > - \min (0,\alpha ) \, - 1$$, in the first case and by
$${\mathbf{M}}\left[ {D_{{q^{ - 1} }}^{\alpha } f(t)} \right] = ( - 1)^{\alpha } \cdot{\frac{{\Upgamma ( - s)}}{{\Upgamma ( - s - \alpha )}}}F(s + \alpha )$$
valid for $$\text{Re} \left( s \right) < - \, \max \, (0,\alpha )$$ in the second case. It is worth remarking that the first corresponds to the causal case when working in the Laplace transform context, while the second corresponds to the anti-causal one.

### 6.4.3 The Eigenfunctions and Frequency Response

We assume that the fractional LScIS is described by the general Euler–Cauchy differential equation
$$\sum\limits_{i = 0}^{N} {a_{i} t^{\alpha_{i}} \cdot y^{(\alpha_{i})} (t)} = \sum\limits_{i = 0}^{M} {b_{i} \cdot t^{\beta_{i}} \cdot x^{(\beta_{i})} (t)}$$
(6.57)
with $$t \in R^{ + }$$. The response of the system is obtained by using the multiplicative convolution defined by (6.5). As said before, the neutral element of this convolution is $$g\left( t \right) = \delta \left( {t - 1} \right)$$. We must call attention to the fact that the point of application of the impulse is t = 1 and not t = 0, as is the case for shift-invariant systems. $$\delta \left( {t - 1} \right)$$ is the inverse of $$\Updelta \left( s \right) = 1$$. On the other hand, using the derivative definitions presented above, it is easy to show that:
$$[t^{\alpha } y^{(\alpha )} (t)]\,\nu\, g(t) = t^{\alpha } [y^{(\alpha )} (t)\, \nu\, g(t)]$$
Let h(t) be the impulse response of the system,
$$\sum\limits_{i = 0}^{N} {a_{i} t^{{\alpha_{i} }} \cdot h^{{(\alpha_{i} )}} (t)} = \sum\limits_{i = 0}^{M} {b_{i} \cdot t^{{\beta}_{i}} \cdot \delta^{({\beta}_{i})} (t - 1)}$$
(6.58)
and convolve both sides of (6.58) with x(t). We conclude immediately that
$$y(t) = \int\limits_{0}^{\infty } {h(t/u)} x(u){\frac{{{\text{d}}u}}{u}}$$
(6.59)
If $$x(t) = t^{\sigma }$$, then
$$y(t) = H(\sigma )\cdot t^{\sigma }$$
(6.60)
meaning that the power function is the eigenfunction of the system described by (6.58) or (6.59) and $$H(\sigma )$$ is the eigenvalue, that we will call Transfer Function as in the shift-invariant systems and that is given by
$$H\left( s \right) = \int\limits_{0}^{\infty } {h\left( u \right)u^{ - s - 1} {\text{d}}u}$$
(6.61)
which is the Mellin transform of the impulse response. If in (6.59) we replace x(t) by x(at), it is a simple task to show that the output is y(at), showing that the system is really scale invariant.
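The eigenfunction property (6.60) can be illustrated numerically. In the sketch below (not from the original text) the impulse response h(v) = e^{−v} is an arbitrary assumption, chosen so that H(σ) = Γ(−σ) for Re(σ) < 0:

```python
import math

def response_to_power(h, sigma, t, lo=1e-6, hi=1e3, n=200_000):
    # y(t) = integral_0^inf h(t/u) u^sigma du/u   (6.59 with x(u) = u^sigma),
    # computed with a log-spaced midpoint rule, since du/u = d(log u)
    step = math.log(hi / lo) / n
    total = 0.0
    for k in range(n):
        u = lo * math.exp((k + 0.5) * step)
        total += h(t / u) * u ** sigma * step
    return total

h = lambda v: math.exp(-v)                   # hypothetical impulse response
sigma, t = -1.5, 2.0
y = response_to_power(h, sigma, t)
expected = math.gamma(-sigma) * t ** sigma   # H(sigma) t^sigma, H(sigma) = Gamma(-sigma)
```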

## 6.5 Impulse Response Computations

### 6.5.1 The Uniform Orders Case

Equation 6.57 is difficult to solve for arbitrary derivative orders. However, when the derivative orders have the form
$$\alpha_{i} = \alpha + i\quad i = 0,1,2, \ldots ,N$$
and
$$\beta_{i} = \beta + i\quad i = 0,1,2, \ldots ,N$$
we obtain a simpler equation
$$\sum\limits_{i = 0}^{N} {a_{i}\; t^{\alpha + i} y^{(\alpha + i)} (t)} = \sum\limits_{i = 0}^{M} {b_{i}\; t^{\beta + i} x^{(\beta + i)} (t)}$$
(6.62)
that we will solve with the help of the Mellin transform and using the fractional quantum derivatives. As we will show, the above equation allows us to obtain two transfer functions. Each of them has two terms that lead to two inverse functions. Before going into the general solution, we will consider the special integer order case with $$\alpha = \beta = 0$$.

### 6.5.2 The Integer Order System with $$\alpha = \beta = \, 0$$

Consider a linear system represented by the differential equation
$$\sum\limits_{i = 0}^{N} {a_{i} \;t^{i} \cdot y^{(i)} (t)} = \sum\limits_{i = 0}^{M} {b_{i} \cdot t^{i} x^{(i)} (t)}$$
(6.63)
where x(t) is the input, y(t) the output, and N and M are positive integers $$(M \le N)$$. Usually $$a_{N}$$ is chosen to be 1. We will assume that this equation is valid for every $$t \in R^{ + }$$. The system defined by (6.62) with M = 0 was already studied [see 9, 10]. However, it is interesting to repeat the computations here to acquire some background for the general case.
Applying the Mellin transform to both sides of (6.63) we obtain
$$\sum\limits_{i = 0}^{N} {a_{i} ( - 1)^{i} } ( - s)_{i}\; Y(s) = \sum\limits_{i = 0}^{M} {b_{i} \cdot } ( - 1)^{i} ( - s)_{i}\; X(s),$$
(6.64)
from where a transfer function is deduced
$$H(s) = {\frac{Y(s)}{X(s)}} = {\frac{{\sum\nolimits_{i = 0}^{M} {b_{i} \cdot } ( - 1)^{i} ( - s)_{i} }}{{\sum\nolimits_{i = 0}^{N} {a_{i} \cdot } ( - 1)^{i} ( - s)_{i} }}}$$
(6.65)
In this expression we need to transform both numerator and denominator into polynomials in the variable s. To do it we use the well known relation [11]
$$(x)_{k} = \sum\limits_{i = 0}^{k} {( - 1)^{k - i} v(k,i)} x^{i}$$
(6.66)
where v(·,·) represents the Stirling numbers of the first kind, which satisfy the recursion
$$v\left( {n + 1,m} \right) = v\left( {n,m - 1} \right)-nv\left( {n,m} \right)$$
(6.67)

for $$1 \le m \le n$$ and with

$$v(n,0) = \delta_{n,0}$$ and $$v(n,1) = ( - 1)^{n - 1} (n - 1)!$$
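The recursion (6.67) and the expansion (6.66) can be checked together. The following sketch (not from the original text) builds the table of signed Stirling numbers of the first kind and tests (6.66) at an arbitrary point:

```python
import math

def stirling1(nmax):
    # signed Stirling numbers of the first kind via the recursion (6.67):
    # v(n+1, m) = v(n, m-1) - n v(n, m), with v(0, 0) = 1 and v(n, 0) = 0 for n > 0
    v = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    v[0][0] = 1
    for n in range(nmax):
        for m in range(1, n + 2):
            v[n + 1][m] = v[n][m - 1] - n * v[n][m]
    return v

# check the expansion (6.66): (x)_k = sum_i (-1)^(k-i) v(k, i) x^i
k, x = 4, 2.5
v = stirling1(k)
rising = math.prod(x + j for j in range(k))   # (x)_k = x (x+1) ... (x+k-1)
expansion = sum((-1) ** (k - i) * v[k][i] * x ** i for i in range(k + 1))
```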

With some manipulation, we obtain:
$$\sum\limits_{i = 0}^{N} {a_{i} ( - 1)^{i} ( - x)_{i} } = \sum\limits_{i = 0}^{N} {\sum\limits_{k = i}^{N} {a_{k} ( - 1)^{k} v(k,i)\;( - x)^{i} } } = \sum\limits_{i = 0}^{N} {A_{i} x^{i} }$$
(6.68)
with the $$A_{i}$$ coefficients given by
$$A_{i} = ( - 1)^{i} \sum\limits_{k = i}^{N} {a_{k} ( - 1)^{k} v(k,i)}$$
(6.69)
or in a matricial format
$${\mathbf{A}} = {\mathbf{V}}\cdot{\mathbf{a}}$$
(6.70)
where
$${\mathbf{A}} = \, \left[ {A_{0}\, A_{1} \ldots \, \ldots \, A_{N} } \right]^{T}$$
(6.71)
$${\mathbf{V}} = \, \left[ { \, v\left( {i,j} \right), \, i,j = 0,1, \, \ldots ,N} \right]$$
(6.72)
and
$${\mathbf{a}} = \, \left[ {a_{0} \;a_{1} \ldots \, \ldots \, a_{N} } \right]^{T}$$
(6.73)
With this formulation, the transfer function is given by:
$$H(s) = {\frac{{\sum\nolimits_{i = 0}^{M} {B_{i} }\; s^{i} }}{{\sum\nolimits_{i = 0}^{N} {A_{i} }\; s^{i} }}},\quad M \le N$$
(6.74)
which is the quotient of two polynomials in s. In the integer order case, it is indifferent which derivative we use, because both lead to the same result (6.17). This is a consequence of two facts:
(a) The derivatives (6.55) and (6.56) coincide with the classic ones when $$\alpha = N \in Z^{ + }$$;

(b) The transforms defined in (6.29) and (6.42) are equal and the region of convergence is the whole complex plane.

In general H(s) has the following partial fraction decomposition
$$H(s) = {\frac{{B_{M} }}{{A_{N} }}} + \sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{{m_{i} }} {{\frac{{a_{ij} }}{{(s - p_{i} )^{j} }}}} }$$
(6.75)
The constant term only exists when M = N and its inversion gives a delta at t = 1:
$${\mathbf{M}}^{ - 1} \left[ {{\frac{{B_{M} }}{{A_{N} }}}} \right] = {\frac{{B_{M} }}{{A_{N} }}}\delta (t - 1)$$
(6.76)
For the inversion of a given partial fraction, we must fix the region of convergence, $$\text{Re} \left( s \right) > \text{Re} \left( {p_{i} } \right)$$ or $$\text{Re} \left( s \right) < \text{Re} \left( {p_{i} } \right)$$, similarly to the identical situation found in the usual shift invariant systems with the Laplace transform. Let us assume that the poles are simple. From the Mellin inversion integral, we obtain [3]
$${\mathbf{M}}^{ - 1} \left[ {{\frac{1}{{\left( {s - p} \right)}}}} \right] \, = \, w\left( t \right)\cdot t^{p}$$
(6.77)
where w(t) is equal to $$u\left( {1 - t} \right)$$ or to $$u\left( {t - 1} \right)$$, in agreement with the adopted region of convergence. By successive differentiation with respect to p we obtain the solution for higher order poles
$${\mathbf{M}}^{ - 1} \left[ {{\frac{1}{{\left( {s - p} \right)^{k} }}}} \right] \, = \, w\left( t \right)\;{\frac{{\left( { - 1} \right)^{k - 1} \left[ {\log \left( t \right)} \right]^{k - 1} }}{{\left( {k - 1} \right)!}}}t^{p}$$
(6.78)
We conclude that the response corresponding to an input $$\delta \left( {t - 1} \right)$$ is given by:
$$h\left( t \right) \, = {\frac{{B_{M} }}{{A_{N} }}}\delta \left( {t - 1} \right) + \sum\limits_{i = 1}^{N} {\sum\limits_{k = 1}^{{m_{i} }} {a_{ik} } } \cdot {\frac{{\left( { - 1} \right)^{k - 1} \left[ {\log \left( t \right)} \right]^{k - 1}}}{{\left( {k - 1} \right)!}}}t^{{p_{i} }} w\left( t \right)$$
(6.79)

To compute the output for any function x(t), we only have to use the multiplicative convolution. As in the shift-invariant systems, we have several ways of choosing the region of convergence: we can have all right signals, all left signals, or mixed right and left signals. In [9, 10] the first term does not appear, since only the all-pole case was discussed.
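The basic Mellin pair (6.77) can be illustrated numerically. With the right-sided choice w(t) = u(t − 1), the transform of $$t^{p} u(t-1)$$ is $$1/(s-p)$$ for Re(s) > Re(p); a sketch (not from the original text) with arbitrary p and s:

```python
import math

def mellin_pole(p, s, upper=1e6, n=200_000):
    # M[t^p u(t-1)](s) = integral_1^inf t^(p-s-1) dt,
    # via a log-spaced midpoint rule: t^(p-s-1) dt = t^(p-s) d(log t)
    step = math.log(upper) / n
    total = 0.0
    for k in range(n):
        t = math.exp((k + 0.5) * step)
        total += t ** (p - s) * step
    return total

p, s = -1.0, 0.5          # region of convergence Re(s) > Re(p)
val = mellin_pole(p, s)   # should approach 1/(s - p) = 2/3
```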

It is interesting to make an important remark here. Note that (6.79) behaves like the usual responses of anti-causal and causal systems. When $$\text{Re} \left( {p_{i} } \right) > 0$$ and $$t > 1$$, it increases without bound as $$t \to \infty$$, while it decreases as $$t \to 0$$. If $$\text{Re} \left( {p_{i} } \right) < 0$$, (6.79) increases without bound as $$t \to 0$$, while it decreases as $$t \to \infty$$. This means that we can use the well known Routh–Hurwitz test to study the stability of LScIS.

### 6.5.3 The Fractional Order System

Consider now a linear system represented by the fractional differential equation
$$\sum\limits_{i = 0}^{N} {a_{i} \;t^{\alpha + i} \cdot y^{(\alpha + i)} (t)} = \sum\limits_{i = 0}^{M} {b_{i} \cdot\; t^{\beta + i} x^{(\beta + i)} (t)}$$
(6.80)
where $$\alpha$$ and $$\beta$$ are positive real numbers. With the Mellin transform we obtain two different transfer functions depending on the derivative we use, (6.55) or (6.56). Using derivative (6.55) and its Mellin transform we have:
$$H\left( s \right) \, = {\frac{{\sum\nolimits_{i = 0}^{M} {b_{i} \left( { - 1} \right)^{i} \left( {s - \beta } \right)_{i} } }}{{\sum\nolimits_{i = 0}^{N} {a_{i} } \left( { - 1} \right)^{i} \left( {s - \alpha } \right)_{i} }}}\cdot {\frac{{\Upgamma \left( {1 + s - \alpha } \right)}}{{\Upgamma \left( {1 + s} \right)}}}\;{\frac{{\Upgamma \left( {1 + s} \right)}}{{\Upgamma \left( {1 + s - \beta } \right)}}}$$
(6.81)
Proceeding as in Sect. 6.5.2 we have
$$H\left( s \right) \, = {\frac{{\sum\limits_{i = 0}^{M} {B_{i} \left( {s - \beta } \right)^{i} } }}{{\sum\limits_{i = 0}^{N} {A_{i} } \left( {s - \alpha } \right)^{i} }}}\cdot{\frac{{\Upgamma \left( {1 + s - \alpha } \right)}}{{\Upgamma \left( {1 + s - \beta } \right)}}}$$
(6.82)
So, the transfer function in (6.82) has two parts; the first is similar to (6.74), aside from translations of the pole and zero positions. Its inverse has the form:
$$h\left( t \right) \, = {\frac{{B_{M} }}{{A_{N} }}}\delta \left( {t - 1} \right) + t^{\alpha } \sum\limits_{i = 1}^{N} {\sum\limits_{k = 1}^{{m_{i} }} {c_{ik} } } \cdot{\frac{{\left( { - 1} \right)^{k - 1} \left[ {\log \left( t \right)} \right]^{k - 1} }}{{\left( {k - 1} \right)!}}}t^{{p_{i} }} w\left( t \right)$$
(6.83)
where $$\alpha + p_{i} ,i = 1,2, \, \ldots , \, N$$ are the poles. We must remark that it does not depend explicitly on $$\beta .$$ The second factor in (6.82) leads to a new convolutional factor needed to compute its complete inversion. So, we have to compute the inverse Mellin transform of
$$H_{a} \left( s \right) \, = {\frac{{\Upgamma \left( {1 + s - \alpha } \right)}}{{\Upgamma \left( {1 + s - \beta } \right)}}}$$
(6.84)
To account for the stability of the system, we can take as region of convergence the half plane defined by $$\text{Re} \left( s \right) > 0.$$ This function has infinitely many poles at $$s = \alpha - 1 - n,$$ with n a non-negative integer. To invert it, we can always choose an integration path on the right of all the poles, similar to the path shown in Fig. 6.1, but with the leftmost segment infinitely far away. The residues are given by
$$R_{n} = {\frac{{\left( { - 1} \right)^{n} t^{\alpha - 1 - n} }}{{\Upgamma \left( {\alpha - \beta - n} \right)n!}}}u\left( {t - 1} \right)$$
according to the properties of the Gamma function [7]. Adding the residues, we obtain
$$h_{a} \left( t \right) \, = {\frac{{t^{\alpha - 1} }}{{\Upgamma \left( {\alpha - \beta } \right)}}}\sum\limits_{n = 0}^{\infty } {{\frac{{\Upgamma \left( {\alpha - \beta } \right)\left( { - 1} \right)^{n} t^{ - n} }}{{\Upgamma \left( {\alpha - \beta - n} \right)n!}}}\;u\left( {t - 1} \right)}$$
where we can identify the binomial series. Summing it, we obtain
$$h_{a} \left( t \right) \, = {\frac{1}{{\Upgamma \left( {\alpha - \beta } \right)}}}t^{\beta } \left( {t - 1} \right)^{\alpha - \beta - 1} u\left( {t - 1} \right)$$
(6.85)
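The closed form (6.85) can be verified by transforming it back: assuming the Mellin convention $$F\left( s \right) = \int_{0}^{\infty } {f\left( t \right)t^{ - s - 1} {\text{d}}t},$$ its transform reduces to a Beta integral equal to (6.84). A numerical sketch, with illustrative parameter values satisfying the convergence conditions:

```python
# Check that (6.85) transforms back into (6.84): under the assumed convention
# F(s) = \int_0^\infty f(t) t^{-s-1} dt, the Mellin transform of
# h_a(t) = t^beta (t-1)^(alpha-beta-1) / Gamma(alpha-beta) * u(t-1)
# should equal Gamma(1+s-alpha)/Gamma(1+s-beta). Values are illustrative;
# convergence needs 1 + s - beta > alpha - beta > 0.
from math import gamma
from scipy.integrate import quad

alpha, beta, s = 1.5, 0.3, 2.0

integral, _ = quad(lambda t: t ** (beta - s - 1) * (t - 1) ** (alpha - beta - 1),
                   1.0, float("inf"))
lhs = integral / gamma(alpha - beta)
rhs = gamma(1 + s - alpha) / gamma(1 + s - beta)
print(lhs, rhs)   # the two values agree
```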
So, the impulse response corresponding to (6.82) is the multiplicative convolution of (6.83) and (6.85). However, we can follow an alternative approach to invert (6.82). It consists in expanding its first term in N partial fractions and inverting N transforms with the form $${\frac{{\Upgamma \left( {1 + s - \alpha } \right)}}{{\left( {s - \alpha - p} \right)\Upgamma \left( {1 + s - \beta } \right)}}}$$. For simplicity, we assume that all the poles are simple. We proceed as above to compute the residues. Collecting them, the impulse response is given by
$$\begin{aligned} h\left( t \right) \, &= {\frac{{B_{M} }}{{A_{N} }}}{\frac{1}{{\Upgamma \left( {\alpha - \beta } \right)}}}t^{\beta } \left( {t - 1} \right)^{\alpha - \beta - 1} u\left( {t - 1} \right) + \, t^{\alpha } \sum\limits_{i = 1}^{N} {C_{i} } \cdot{\frac{{\Upgamma \left( {1 + p_{i} } \right)}}{{\Upgamma \left( {\alpha - \beta + p_{i} + 1} \right)}}}t^{{p_{i} }} u\left( {t - 1} \right) \\ &\quad -{\frac{{\left( { - 1} \right)^{\beta - \alpha + 1} }}{{\Upgamma \left( {\alpha - \beta } \right)}}}t^{\beta } \sum\limits_{n = 0}^{\infty } {\left( {\begin{array}{*{20}c} {\alpha - \beta - 1} \\ n \\ \end{array} } \right)\left( { - 1} \right)^{n} } {\frac{{t^{n} }}{{\beta - \alpha - p_{i} + n}}}u\left( {t - 1} \right) \end{aligned}$$
(6.86)
Choosing the other derivative (6.56) and its Mellin transform (6.45), we have
$$H\left( s \right) \, = {\frac{{\sum\nolimits_{i = 0}^{M} {B_{i} \left( {s - \beta } \right)^{i} } }}{{\sum\nolimits_{i = 0}^{N} {A_{i} \left( {s - \alpha } \right)^{i} } }}}\cdot \left( { - 1} \right)^{\beta - \alpha } {\frac{{\Upgamma \left( { - s + \beta } \right)}}{{\Upgamma \left( { - s + \alpha } \right)}}}$$
(6.87)
The first factor has as inverse the expression given by (6.83) with $$w\left( t \right) = - u\left( {1 - t} \right)$$. For the second we proceed as before. Now the integration path is in the right half complex plane, as in Fig. 6.2, but with the rightmost segment infinitely far away.
We proceed as above to obtain
$$h_{a} \left( t \right) \, = {\frac{1}{{\Upgamma \left( {\alpha - \beta } \right)}}}t^{\beta } \left( {t - 1} \right)^{\alpha - \beta - 1} u\left( {1 - t} \right)$$
(6.88)
To compute the final impulse response we only have to proceed as in the other case. We obtain, for the simple pole case
$$\begin{gathered} h\left( t \right) \, = {\frac{{B_{M} }}{{A_{N} }}}{\frac{1}{{\Upgamma \left( {\alpha - \beta } \right)}}}t^{\beta } \left( {t - 1} \right)^{\alpha - \beta - 1} u\left( {1 - t} \right) + \, \left( { - 1} \right)^{\beta - \alpha + 1} t^{\alpha } \sum\limits_{i = 1}^{N} {C_{i} } \cdot {\frac{{\Upgamma \left( {\beta - \alpha + p_{i} } \right)}}{{\Upgamma \left( {p_{i} } \right)}}}t^{{p_{i} }} \cdot u\left( {1 - t} \right) \hfill \\ \quad \quad \quad + {\frac{{\left( { - 1} \right)^{{\left( {\beta - \alpha + 1} \right)}} }}{{\Upgamma \left( {\alpha - \beta } \right)}}}t^{\beta } \sum\limits_{n = 0}^{\infty } {\left( {\begin{array}{*{20}c} {\alpha - \beta - 1} \\ n \\ \end{array} } \right)} \left( { - 1} \right)^{n} {\frac{{t^{n} }}{{\beta - \alpha - p_{i} + n}}}u\left( {t - 1} \right) \hfill \\ \end{gathered}$$
(6.89)

We must remark that the above results are valid even when $$\alpha$$ and $$\beta$$ are positive integers. Of course, we could obtain other solutions by choosing other integration paths, with poles on both the left and the right of the path. In those cases we would obtain “two-sided” responses. It is interesting to remark that:

If $$\alpha = \beta$$, the second factors in (6.82) and (6.87) are equal to 1, implying that the complete impulse response is given by (6.83).

When $$\alpha = 0$$ and $$\beta \ne 0$$ in (6.86) we obtain a situation very similar to the one treated by Yazici and Kashyap [9, 10].

If $$\alpha \, = \, \beta + 1$$, (6.85) and (6.88) become pure power functions and are therefore self-similar.

### 6.5.4 A Simple Example

We are going to consider a simple system described by the differential equation:
$$t^{\alpha + 1} y^{{\left( {\alpha + 1} \right)}} \left( t \right) + a \, t^{\alpha } y^{\left( \alpha \right)} \left( t \right) = \, x\left( t \right)$$
If $$\alpha \, = \, 0$$, the impulse response comes from (6.83) and it is given by:
$$h_{s} \left( t \right) \, = t^{ - a} w\left( t \right)$$
where w(t) is equal to $$- u\left( {1 - t} \right)$$ or to $$u\left( {t - 1} \right)$$, in agreement with the adopted region of convergence. The corresponding shift invariant system
$$y '\left( t \right) \, + \, a \, y\left( t \right) \, = \, x\left( t \right)$$
has the causal and anti-causal impulse responses:
$$h_{t} (t) = \pm e^{ - at} u( \pm t)$$

As seen, we made a substitution $$t \to e^{t} .$$
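This integer-order correspondence can be made concrete numerically: evaluating the causal shift-invariant response at log t reproduces the LScIS response, since $$e^{ - a\log t} = t^{ - a}$$ and $$\log t > 0 \Leftrightarrow t > 1.$$ A minimal sketch, with an illustrative value of a:

```python
# Integer-order correspondence made explicit: substituting t -> log(t) in the
# causal shift-invariant response e^{-a t} u(t) gives e^{-a log t} = t^{-a},
# supported on t > 1 (log t > 0  <=>  t > 1), i.e. h_s(t) = t^{-a} u(t - 1).
import math

a = 0.7   # illustrative coefficient
for t in (1.5, 3.0, 10.0):
    h_t_of_log = math.exp(-a * math.log(t))   # shift-invariant response at log t
    h_s = t ** (-a)                           # LScIS response
    print(t, h_t_of_log, h_s)                 # the last two columns coincide
```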

Now, let $$\alpha \, \ne \, 0$$. We have, from (6.86)
$$h\left( t \right) = {\frac{{\Upgamma \left( {1 - a} \right)}}{{\Upgamma \left( {\alpha - a + 1} \right)}}}t^{\alpha - a} u\left( {t - 1} \right) + {\frac{1}{\Upgamma \left( \alpha \right)}}t^{\alpha - 1} \sum\limits_{0}^{\infty } {\left( {\begin{array}{*{20}c} {\alpha - 1} \\ n \\ \end{array} } \right)} \left( { - 1} \right)^{n} {\frac{{t^{ - n} }}{ - a + n + 1}}u\left( {t - 1} \right)$$
(6.90)
and, from (6.89)
$$h\left( t \right) \, = \left( { - 1} \right)^{ - \alpha + 1} {\frac{{\Upgamma \left( { - \alpha - a} \right)}}{{\Upgamma \left( { - a} \right)}}}t^{\alpha - a} u\left( {1 - t} \right) + {\frac{{\left( { - 1} \right)^{ - \alpha + 1} }}{\Upgamma \left( \alpha \right)}}\sum\limits_{0}^{\infty } {\left( {\begin{array}{*{20}c} {\alpha - 1} \\ n \\ \end{array} } \right)} \left( { - 1} \right)^{n} {\frac{{t^{n} }}{a - \alpha + n}}u\left( {1 - t} \right)$$
The corresponding shift invariant system
$$y^{{\left( {\alpha + 1} \right)}} \left( t \right) + a \, y^{\left( \alpha \right)} \left( t \right) = x\left( t \right)$$
has the following transfer function
$$H\left( s \right) \, = {\frac{{s^{ - \alpha } }}{s + a}}$$
and its causal impulse response is (see ):
$$h_{s} \left( t \right) = \sum\limits_{0}^{\infty } {\left( { - a} \right)^{n} {\frac{{t^{n + \alpha } }}{{\Upgamma \left( {n + \alpha + 1} \right)}}}u\left( t \right)}$$
(6.91)
As seen, the above-referred substitution seems not to be valid here. The anti-causal response is very similar, but it does not have any special interest. In Fig. 6.3, we present the results obtained for these systems, for several values of $$\alpha \, \left( {0, \, 0.33, \, 0.66, \, 0.99} \right)$$. The upper strip shows the results obtained with (6.90). The results in the middle strip were obtained with (6.91). The third strip shows the result of the transformation $$t \to \log \left( t \right)$$ in (6.91). Although it is not very clear in the picture, we can see the similarity between the curves corresponding to $$\alpha \, = \, 0$$ and $$\alpha \, = \, 0.99$$ and their equivalents in the upper strip.
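The series in (6.91) is of Mittag–Leffler type and converges quickly, so it can be evaluated by straightforward truncation. A sketch (the truncation length of 60 terms is an arbitrary but ample choice for moderate t); for $$\alpha = 0$$ the series collapses to $$e^{ - at} u\left( t \right),$$ which serves as a sanity check:

```python
# Truncated evaluation of the series (6.91):
#   h_s(t) = sum_n (-a)^n t^{n+alpha} / Gamma(n + alpha + 1),
# a Mittag-Leffler-type function. For alpha = 0 it reduces to exp(-a t).
from math import gamma, exp

def h_s(t, a, alpha, terms=60):
    return sum((-a) ** n * t ** (n + alpha) / gamma(n + alpha + 1)
               for n in range(terms))

a = 1.0
print(h_s(2.0, a, alpha=0.0), exp(-a * 2.0))   # both are approximately 0.1353
print(h_s(2.0, a, alpha=0.5))                  # a fractional case
```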

The impulse responses stated in (6.86) and (6.89) depend directly on the differential equation (6.62), not on the way we followed to obtain them. This means that we are not obliged to use the quantum derivative. In fact, we could also use another derivative, like the Grünwald–Letnikov, Riemann–Liouville or Caputo derivatives, but it would be very difficult to arrive at the results we obtained. The quantum derivative allows us to obtain such impulse responses more easily. On the other hand, those derivatives are suitable for dealing with shift-invariant systems defined over R, not $$R^{ + } .$$

In the integer order case, we can switch from the LScIS to the corresponding linear shift-invariant system: we only have to perform a logarithmic transformation. However, this is neither evident nor correct in the fractional case, due to the first terms in (6.86) and (6.89), as the example presented above shows. This fact may come from the difficulty in defining the fractional derivative of a composite function. The lack of emphasis on this fact is due to the desire of presenting a linear system that exists by itself, not merely as the transformation of another one. It is more or less the same situation we find when introducing difference equations: they exist and do not need to be presented as transformations of ordinary differential equations (with the bilinear or another mapping). It is curious to note that we can obtain the corresponding shift invariant system by considering the transfer function in (6.62) as the transfer function of a shift invariant system and using the Laplace transform to go back to a new differential equation.

The LScIS, being scale invariant but not shift invariant, can be useful in detection problems and in image processing. Their combination with the wavelet transform can be interesting [9, 10].

## 6.6 Conclusions

We presented the quantum fractional derivative as an alternative to the common Grünwald–Letnikov and Liouville derivatives. It was described in two formulations: summation and integral. Its Mellin transform was also presented and used to establish the relation between the two formulations. The summation formulations are similar to the Grünwald–Letnikov fractional derivatives; the main difference lies in the use of an exponential scale for the independent variable, whereas the Grünwald–Letnikov derivatives use a linear scale. The integral formulations are similar to the Liouville derivatives. This derivative is useful for solving fractional Euler–Cauchy differential equations and can be useful in dealing with scale problems.

We introduced the general formulation of the linear scale invariant systems through the fractional Euler–Cauchy equation. To solve this equation we used the fractional quantum derivative concept with the help of the Mellin transform. As in the linear time invariant systems, we obtained two solutions corresponding to the use of two different regions of convergence. We presented other interesting features of the LScIS, namely the frequency response, and also made a brief study of the stability.

There is another way of introducing two-sided quantum derivatives. To do it, we can start from the two-sided quantum derivative
$$D_{{q_{0} }} \; f\left( t \right) = {\mathop {\lim }\limits_{q \to 1}\; {\frac{{f\left( {q^{ - 1/2} t} \right) - f\left( {q^{1/2} t} \right)}}{{\left( {q^{ - 1/2} - q^{1/2} } \right)t}}}}$$
(6.92)
and proceed as in . It will not be done here.
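Even without developing it, the limit (6.92) is easy to observe numerically. A minimal sketch with $$f\left( t \right) = t^{2},$$ for which the symmetric incremental ratio equals $$\left( {q^{ - 1/2} + q^{1/2} } \right)t$$ exactly and so tends to $$f'\left( t \right) = 2t$$ as $$q \to 1$$:

```python
# Numerical illustration of the two-sided quantum derivative (6.92): for
# f(t) = t^2 the symmetric incremental ratio equals (q^{-1/2} + q^{1/2}) t
# exactly, so it approaches f'(t) = 2t as q -> 1.
def delta_q0(f, t, q):
    return (f(q ** -0.5 * t) - f(q ** 0.5 * t)) / ((q ** -0.5 - q ** 0.5) * t)

f = lambda t: t ** 2
t = 3.0
for q in (0.5, 0.9, 0.99, 0.999):
    print(q, delta_q0(f, t, q))   # tends to f'(3) = 6 as q -> 1
```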

## References

1. Kac V, Cheung P (2002) Quantum calculus. Springer, New York
2. Ash JM, Catoiu S, Rios-Collantes-de-Terán R (2002) On the nth quantum derivative. J Lond Math Soc 2(66):114–130
3. Bertrand J, Bertrand P, Ovarlez JP (2000) The Mellin transform. In: Poularikas AD (ed) The transforms and applications handbook, 2nd edn. CRC Press, Boca Raton
4. Poularikas AD (ed) (2000) The transforms and applications handbook. CRC Press, Boca Raton
5. Koornwinder TH (1999) Some simple applications and variants of the q-binomial formula. Informal note, Universiteit van Amsterdam
6. Al-Salam W (1966) Some fractional q-integrals and q-derivatives. Proc Edin Math Soc 15:135–140
7. Henrici P (1991) Applied and computational complex analysis, vol 2. Wiley, New York, pp 389–391
8. Braccini C, Gambardella G (1986) Form-invariant linear filtering: theory and applications. IEEE Trans Acoust Speech Signal Process ASSP-34(6):1612–1628
9. Yazici B, Kashyap RL (1997) Affine stationary processes with applications to fractional Brownian motion. In: Proceedings of the 1997 international conference on acoustics, speech, and signal processing, vol 5. Munich, Germany, pp 3669–3672
10. Yazici B, Kashyap RL (1997) A class of second-order stationary self-similar processes for 1/f phenomena. IEEE Trans Signal Process 45(2):396–410
11. Abramowitz M, Stegun I (1972) Stirling numbers of the first kind. Sect. 24.1.3 in Handbook of mathematical functions with formulas, graphs, and mathematical tables, 9th printing. Dover, New York, p 824