Abstract
In this chapter we find the overall growth rate either from the original set of observations or from the growth rates of the individual components. The focus of the chapter is on finding the aggregate growth rate from the individual growth rates. We also discuss how growth rates can be calculated using the regression technique. Moreover, the formula can compute the average growth rate even when some of the individual growth rates are zero or negative. The treatments of cross-section data and time-series data are usually quite different; the present chapter unifies the methods so that the formula can be applied to both cross-section and time-series data. The modified growth rate is an intermediate growth rate, because it lies between the geometric and arithmetic means when all the individual growth rates are positive.
Appendix
AM, GM, and HM are called the Pythagorean means (Wikipedia). They are defined as

$${\text{AM}} = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } ,\quad {\text{GM}} = \left( {\prod\limits_{i = 1}^{n} {x_{i} } } \right)^{1/n} ,\quad {\text{and}}\quad {\text{HM}} = \frac{n}{{\sum\nolimits_{i = 1}^{n} {1/x_{i} } }}.$$

It can be proved that

$${\text{AM}} \ge {\text{GM}} \ge {\text{HM}}.$$
Jensen’s inequality: Suppose f is a real-valued convex function, and let \(x_{1} ,x_{2} , \ldots ,x_{n}\) be points in the domain of f. Then for any positive weights ai, i = 1, 2, …, n,

$$f\left( {\frac{{\sum\nolimits_{i = 1}^{n} {a_{i} x_{i} } }}{{\sum\nolimits_{i = 1}^{n} {a_{i} } }}} \right) \le \frac{{\sum\nolimits_{i = 1}^{n} {a_{i} f\left( {x_{i} } \right)} }}{{\sum\nolimits_{i = 1}^{n} {a_{i} } }}.\quad (6.21)$$

This is the finite form of Jensen’s inequality. If \(a_{1} = a_{2} = \cdots = a_{n}\), then the inequality (6.21) reduces to

$$f\left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } } \right) \le \frac{1}{n}\sum\limits_{i = 1}^{n} {f\left( {x_{i} } \right)} .\quad (6.22)$$
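Inequality (6.21) is easy to check numerically. The following is a minimal sketch (not part of the chapter), using the convex function f(x) = x² with arbitrary sample points and weights:

```python
def jensen_gap(f, xs, ws):
    # Difference between the weighted mean of f(x_i) and f at the weighted
    # mean of the x_i; nonnegative when f is convex (Jensen, finite form)
    total = sum(ws)
    mean_x = sum(w * x for w, x in zip(ws, xs)) / total
    mean_f = sum(w * f(x) for w, x in zip(ws, xs)) / total
    return mean_f - f(mean_x)

xs = [1.0, 2.0, 4.0, 8.0]   # arbitrary sample points
ws = [1.0, 2.0, 3.0, 4.0]   # arbitrary positive weights
assert jensen_gap(lambda x: x * x, xs, ws) >= 0  # convex f: gap is nonnegative
```

For a concave function such as ln(x), the same gap would be nonpositive, which is the reverse inequality used in Example 1 below.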
Example 1
Suppose f(x) = ln(x), which is a concave function. Then the reverse of inequality (6.22) gives

$$\ln \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } } \right) \ge \frac{1}{n}\sum\limits_{i = 1}^{n} {\ln x_{i} } = \ln \left( {\prod\limits_{i = 1}^{n} {x_{i} } } \right)^{1/n} ,$$

or, taking exponentials,

$$\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } \ge \left( {\prod\limits_{i = 1}^{n} {x_{i} } } \right)^{1/n} .$$

Thus, AM ≥ GM.
Example 2
Suppose f(x) = 1/x, which is a convex function for x > 0. Then inequality (6.22) gives

$$\frac{1}{{\frac{1}{n}\sum\nolimits_{i = 1}^{n} {x_{i} } }} \le \frac{1}{n}\sum\limits_{i = 1}^{n} {\frac{1}{{x_{i} }}} ,$$

or, taking reciprocals,

$$\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } \ge \frac{n}{{\sum\nolimits_{i = 1}^{n} {1/x_{i} } }}.$$

Thus, AM ≥ HM.
In fact, there is a stronger inequality, GM ≥ HM. To prove it, we apply AM ≥ GM to the arguments \(\frac{1}{{x_{1} }},\frac{1}{{x_{2} }}, \ldots ,\frac{1}{{x_{n} }}\):

$$\frac{1}{n}\sum\limits_{i = 1}^{n} {\frac{1}{{x_{i} }}} \ge \left( {\prod\limits_{i = 1}^{n} {\frac{1}{{x_{i} }}} } \right)^{1/n} = \frac{1}{{{\text{GM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)}}.$$

Taking reciprocals reverses the inequality, so GM(\(x_{1} ,x_{2} , \ldots ,x_{n}\)) ≥ HM(\(x_{1} ,x_{2} , \ldots ,x_{n}\)). Thus AM ≥ GM ≥ HM. Q.E.D.
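The chain AM ≥ GM ≥ HM can also be verified numerically. A minimal sketch (not part of the chapter; the sample values are arbitrary):

```python
import math

def am(xs):
    # Arithmetic mean: sum divided by count
    return sum(xs) / len(xs)

def gm(xs):
    # Geometric mean: n-th root of the product, via logs for numerical stability
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def hm(xs):
    # Harmonic mean: reciprocal of the mean of reciprocals
    return len(xs) / sum(1 / x for x in xs)

xs = [1.0, 2.0, 4.0, 8.0]
# AM = 3.75, GM = 64**0.25 ≈ 2.828, HM ≈ 2.133
assert am(xs) >= gm(xs) >= hm(xs)
```

Equality throughout holds exactly when all the \(x_{i}\) are equal.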
We can unify all these means by taking the generalized mean (also known as the power mean or Hölder mean). Suppose we have observations \(\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\). The generalized mean may be defined as

$$M_{p} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i}^{p} } } \right)^{1/p} ,$$

where p is assumed to be a nonzero real number and \(x_{1} ,x_{2} , \ldots ,x_{n}\) are positive real numbers.
It is an increasing function of p. \(\mathop {\lim }\limits_{p \to - \infty } M_{p}\) = Minimum of \(x_{1} ,x_{2} , \ldots ,x_{n}\), \(M_{ - 1} = {\text{HM}},\) \(\mathop {\lim }\limits_{p \to 0} M_{p} = {\text{GM}}\), \(M_{1} = {\text{AM}},\) and \(\mathop {\lim }\limits_{p \to \infty } M_{p}\) = Maximum of \(x_{1} ,x_{2} , \ldots ,x_{n}\).
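These special cases and the monotonicity in p can be illustrated in code. A sketch (not part of the chapter) assuming equal weights, with p = 0 handled as the geometric-mean limit:

```python
import math

def power_mean(xs, p):
    # Generalized (power/Hölder) mean with equal weights;
    # p = 0 is taken as its limiting value, the geometric mean
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1 / p)

xs = [1.0, 2.0, 4.0, 8.0]
# M_p increases with p: near min(xs) for large negative p,
# through HM (p = -1), GM (p = 0), AM (p = 1), toward max(xs) for large p
values = [power_mean(xs, p) for p in (-100, -1, 0, 1, 100)]
assert all(a <= b for a, b in zip(values, values[1:]))
```

With these sample values the sequence runs roughly 1.01, 2.13, 2.83, 3.75, 7.89, approaching the minimum and maximum at the two extremes.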
Theorem
\(\mathop {\lim }\limits_{p \to 0} M_{p} = {\text{M}}_{0}\).
Proof
\(M_{p} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) = exp \(\left\{ {\frac{{{ \ln }\left( {\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} } \right)}}{p}} \right\}\), assuming \(\sum {w_{i} } = 1\), so that in the special case, we have \(w_{i} = 1/n\). As \(p \to 0\), both the numerator and the denominator of the exponent tend to 0. Applying L’Hôpital’s rule (differentiating both with respect to p), we get

$$\mathop {\lim }\limits_{p \to 0} \frac{{\ln \left( {\sum\nolimits_{i = 1}^{n} {w_{i} x_{i}^{p} } } \right)}}{p} = \mathop {\lim }\limits_{p \to 0} \frac{{\sum\nolimits_{i = 1}^{n} {w_{i} x_{i}^{p} \ln x_{i} } }}{{\sum\nolimits_{i = 1}^{n} {w_{i} x_{i}^{p} } }} = \sum\limits_{i = 1}^{n} {w_{i} \ln x_{i} } ,$$

so that

$$\mathop {\lim }\limits_{p \to 0} M_{p} = \exp \left( {\sum\limits_{i = 1}^{n} {w_{i} \ln x_{i} } } \right) = \prod\limits_{i = 1}^{n} {x_{i}^{{w_{i} }} } ,$$

which is the GM when \(w_{i} = 1/n\). Q.E.D.
In fact, the generalized mean can be further generalized by taking the quasi-arithmetic mean or generalized f-mean (Gf-M), also known as the Kolmogorov mean. The generalized f-mean of n numbers \(x_{1} ,x_{2} , \ldots ,x_{n}\) is defined as

$$M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = f^{ - 1} \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {f\left( {x_{i} } \right)} } \right),$$

where f is a continuous one-to-one function from an interval I of the real line to the real line, i.e., \({x}_{1} ,{x}_{2} , \ldots ,{x}_{n} \,\epsilon\,{I}\), and \(M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is a value in I. There are many interesting properties of Gf-M. Some of these properties are given below:
1. Continuity and Monotonicity: M(\(x_{1} , \ldots ,x_{n}\)) is continuous and increasing in each variable.

2. Value preservation: M(x, x, …, x) = x.

3. First-order homogeneity: M(\(bx_{1} ,bx_{2} , \ldots ,bx_{n}\)) = bM(\(x_{1} ,x_{2} , \ldots ,x_{n}\)).

4. Symmetry: M(\(x_{1} , \ldots ,x_{n}\)) is a symmetric function, i.e., its value remains unchanged under any permutation of \(x_{1} , \ldots ,x_{n}\):
$$M(x_{1} , \ldots ,x_{n} ) = M(x_{{i_{1} }} ,x_{{i_{2} }} , \ldots ,x_{{i_{n} }} ),$$where \(i_{1} , \ldots ,i_{n}\) is a permutation of (1, 2, …, n). There is an equivalent property known as ‘invariance under exchange’, which may be written symbolically as \(M( \ldots ,x_{i} , \ldots ,x_{j} , \ldots )\) = M(\(\ldots ,x_{j} , \ldots ,x_{i} , \ldots\)). This property guarantees anonymity.

5. Averaging: Min(\(x_{1} ,x_{2} , \ldots ,x_{n}\)) ≤ M(\(x_{1} ,x_{2} , \ldots ,x_{n}\)) ≤ Max(\(x_{1} ,x_{2} , \ldots ,x_{n}\)).

6. Partitioning: the mean of the whole is the mean of the means of equal-sized sub-blocks:
$$\begin{aligned} M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{nk} } \right) & = M_{f} \left( {M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{k} } \right),M_{f} \left( {x_{k + 1} ,x_{k + 2} , \ldots ,x_{2k} } \right),} \right. \\ & \quad \left. { \ldots ,M_{f} \left( {x_{{\left( {n - 1} \right)k + 1}} ,x_{{\left( {n - 1} \right)k + 2}} , \ldots ,x_{nk} } \right)} \right) \\ \end{aligned}$$

7. Mean-preserving subsets: subsets of elements can be averaged a priori, without altering the overall mean, provided the multiplicity of elements is maintained. With \(m = M_{f} \left( {x_{1} , \ldots ,x_{k} } \right)\) repeated k times,
$$M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = M_{f} \left( {m,m, \ldots ,m,x_{k + 1} , \ldots ,x_{n} } \right)$$

8. Invariance under offsets and scaling of f: if g(t) = a + b·f(t) with b ≠ 0, then \(M_{g} = M_{f}\):
$$\forall a\,\forall b \ne 0\,((\forall t\,g(t) = a + b \cdot f(t)) \Rightarrow \forall x\,M_{f} (x) = M_{g} (x)).$$

9. Monotonicity: If f is monotonic, then \(M_{f}\) is monotonic.

10. Mediality: a property of two-variable means: M(M(x, y), M(z, w)) = M(M(x, z), M(y, w)).

11. Self-distributivity: M(x, M(y, z)) = M(M(x, y), M(x, z)).

12. The balancing property: M(M(x, M(x, y)), M(y, M(x, y))) = M(x, y).
The balancing property, together with the value-preservation, symmetry, monotonicity, and continuity properties, implies that M is a Gf-M, provided M is an analytic function (Aumann 1934, 1937).
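Properties 10–12 can be spot-checked for a particular Gf-M. A sketch (not part of the chapter) using the two-variable geometric mean, i.e., the quasi-arithmetic mean with f = ln, on arbitrary test values:

```python
import math

def M(x, y):
    # Two-variable geometric mean: the quasi-arithmetic mean with f = ln
    return math.sqrt(x * y)

x, y, z, w = 2.0, 3.0, 5.0, 7.0
# Mediality: M(M(x, y), M(z, w)) = M(M(x, z), M(y, w))
assert math.isclose(M(M(x, y), M(z, w)), M(M(x, z), M(y, w)))
# Self-distributivity: M(x, M(y, z)) = M(M(x, y), M(x, z))
assert math.isclose(M(x, M(y, z)), M(M(x, y), M(x, z)))
# Balancing: M(M(x, M(x, y)), M(y, M(x, y))) = M(x, y)
m = M(x, y)
assert math.isclose(M(M(x, m), M(y, m)), M(x, y))
```

Passing these checks for one choice of f does not, of course, prove the properties in general; the chapter's algebraic statements do that.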
Kolmogorov (1930) proposed an axiomatic approach to arrive at Gf-M (cited in de Carvalho 2016):
A1. \(M\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is continuous and increasing in each variable.

A2. \(M\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is a symmetric function, i.e., its value remains unchanged under any permutation of \(x_{1} ,x_{2} , \ldots ,x_{n}\).

A3. \(M\left( {x,x, \ldots ,x} \right) = x\).

A4. If a part of the arguments is replaced by its corresponding mean, the mean of the combined arguments remains unchanged. Suppose \(m = M(x_{1} ,x_{2} , \ldots ,x_{r} )\); then \(M(x_{1} , \ldots ,x_{r} ,x_{r + 1} , \ldots ,x_{n} )\) = \(M(m,m, \ldots ,m,x_{r + 1} , \ldots ,x_{n} )\), where m is repeated r times.
Kolmogorov (1930) proved that if conditions (A1) to (A4) hold, then the function M(x) has the form \(M_{g} \left( x \right) = g^{ - 1} \left( {\frac{1}{n}\sum\nolimits_{i = 1}^{n} {g\left( {x_{i} } \right)} } \right)\), where g is a continuous monotonic function and \(g^{ - 1}\) is its inverse function.
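Kolmogorov's form can be sketched directly in code (not part of the chapter). The generator functions below, identity, logarithm, and reciprocal, are the standard choices that recover AM, GM, and HM as special cases:

```python
import math

def kolmogorov_mean(xs, g, g_inv):
    # Quasi-arithmetic (Kolmogorov) mean: g^{-1} of the average of g(x_i),
    # where g is continuous and monotonic and g_inv is its inverse
    return g_inv(sum(g(x) for x in xs) / len(xs))

xs = [1.0, 2.0, 4.0, 8.0]
am = kolmogorov_mean(xs, lambda x: x, lambda y: y)          # g(x) = x    -> AM
gm = kolmogorov_mean(xs, math.log, math.exp)                # g(x) = ln x -> GM
hm = kolmogorov_mean(xs, lambda x: 1 / x, lambda y: 1 / y)  # g(x) = 1/x  -> HM
assert am >= gm >= hm
```

Property 8 above explains why the generator is not unique: any affine rescaling a + b·g with b ≠ 0 yields the same mean.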
Characterization of Gf-M may be done by using combinations of the above properties (Aczél and Dhombres 1989, Chap. 17).
© 2019 Springer Nature Singapore Pte Ltd.
Cite this chapter
Pal, M., Bharati, P. (2019). Finding Aggregate Growth Rate Using Regression Technique. In: Applications of Regression Techniques. Springer, Singapore. https://doi.org/10.1007/978-981-13-9314-3_6
Print ISBN: 978-981-13-9313-6
Online ISBN: 978-981-13-9314-3
eBook Packages: Mathematics and Statistics