Finding Aggregate Growth Rate Using Regression Technique
Abstract

In this chapter we attempt to find the overall growth rate either from the original set of observations or from the growth rates of the individual components. The focus of the chapter is on finding the aggregate growth rate from the individual growth rates. We also discuss how growth rates can be calculated using the regression technique. Moreover, the formula can compute an average growth rate even when some of the individual growth rates are zero or negative. The treatments of cross-section data and time-series data are usually quite different; the present chapter unifies the methods so that the formula can be applied to both cross-section and time-series data. The modified growth rate turns out to be an intermediate growth rate, because it lies between the geometric and arithmetic means when all the individual growth rates are positive.



Appendix

The arithmetic mean (AM), geometric mean (GM), and harmonic mean (HM) are collectively called the Pythagorean means (Wikipedia). They are defined as

$${\text{AM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} x_{i} ,$$
$${\text{GM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \sqrt[n]{{x_{1} x_{2} \ldots x_{n} }}$$

and

$${\text{HM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \frac{n}{{\mathop \sum \nolimits_{i = 1}^{n} 1/x_{i} }}.$$

It can be proved that

$${\text{Min}} \le {\text{HM}} \le {\text{GM}} \le {\text{AM}} \le {\text{Max}}.$$
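This chain of inequalities is easy to verify numerically. The following Python sketch is a minimal illustration of ours (the function names and sample data are arbitrary choices, not from the text): it computes the three Pythagorean means for positive data and asserts the chain.

```python
import math

def am(xs):
    # Arithmetic mean: (1/n) * sum of the x_i.
    return sum(xs) / len(xs)

def gm(xs):
    # Geometric mean via logs for numerical stability; requires x_i > 0.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def hm(xs):
    # Harmonic mean: n divided by the sum of reciprocals.
    return len(xs) / sum(1 / x for x in xs)

xs = [2.0, 3.0, 5.0, 8.0]  # arbitrary positive sample
assert min(xs) <= hm(xs) <= gm(xs) <= am(xs) <= max(xs)
print(hm(xs), gm(xs), am(xs))  # approx. 3.453, 3.936, 4.5
```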

Jensen’s inequality: Suppose f is a real-valued convex function defined on an interval containing the points \(x_{1} ,x_{2} , \ldots ,x_{n}\). Then for any positive weights ai, i = 1, 2, …, n,

$$f\left( {\frac{{\sum {a_{i} x_{i} } }}{{\sum {a_{i} } }}} \right) \le \frac{{\sum {a_{i} f} \left( {x_{i} } \right)}}{{\sum {a_{i} } }}.$$
(6.21)

This is the finite form of Jensen’s inequality. If \(a_{1} = a_{2} = \cdots = a_{n}\), then inequality (6.21) reduces to

$$f\left( {\frac{{\sum {x_{i} } }}{n}} \right) \le \frac{{\sum f \left( {x_{i} } \right)}}{n}.$$
(6.22)
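Both forms are easy to check numerically. The sketch below is a minimal illustration of ours (the convex function, weights, and data are arbitrary choices): it evaluates both sides of (6.21) for f(x) = x².

```python
# Numerical check of the weighted Jensen inequality (6.21)
# for the convex function f(x) = x**2.
f = lambda x: x ** 2

xs = [1.0, 4.0, 9.0, 16.0]   # arbitrary points x_i
ws = [2.0, 1.0, 3.0, 4.0]    # arbitrary positive weights a_i

lhs = f(sum(a * x for a, x in zip(ws, xs)) / sum(ws))
rhs = sum(a * f(x) for a, x in zip(ws, xs)) / sum(ws)
assert lhs <= rhs            # f(weighted mean) <= weighted mean of f
print(lhs, rhs)              # 94.09 <= 128.5
```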

Example 1

Suppose f(x) = ln(x). Since ln(x) is a concave function, the inequality in (6.22) reverses, and we get

$$\ln \left( {\frac{{\sum {x_{i} } }}{n}} \right) \ge \frac{{\sum {\ln \left( {x_{i} } \right)} }}{n} = \ln \left( {\sqrt[n]{{x_{1} x_{2} \ldots x_{n} }}} \right),$$

or

$$\frac{{\sum {x_{i} } }}{n} \ge \sqrt[n]{{x_{1} x_{2} \ldots x_{n} }}.$$

Thus, AM ≥ GM.

Example 2

Suppose f(x) = 1/x. Since 1/x is a convex function for x > 0, inequality (6.22) gives

$$1/\left( {\frac{{\sum {x_{i} } }}{n}} \right) \le \frac{{\sum {1/x_{i} } }}{n},$$

or

$$\frac{{\sum {x_{i} } }}{n} \ge \frac{n}{{\sum {1/x_{i} } }}.$$

Thus, AM ≥ HM.

In fact, a stronger inequality holds: GM ≥ HM.

To prove it, we apply the property AM ≥ GM to the arguments \(\frac{1}{{x_{1} }},\frac{1}{{x_{2} }}, \ldots ,\frac{1}{{x_{n} }}\):

$$\frac{1}{n}\left( {\frac{1}{{x_{1} }} + \frac{1}{{x_{2} }} + \cdots + \frac{1}{{x_{n} }}} \right) \ge \sqrt[n]{{\frac{1}{{x_{1} }}\frac{1}{{x_{2} }} \cdots \frac{1}{{x_{n} }}}}.$$

The left-hand side equals \(1/{\text{HM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) and the right-hand side equals \(1/{\text{GM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\); taking reciprocals reverses the inequality, so \({\text{GM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) \ge {\text{HM}}\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\).

Thus AM ≥ GM ≥ HM. Q.E.D.

We can unify all these means by taking the generalized mean (also known as power mean or Hölder mean). Suppose we have observations \(\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\). The generalized mean may be defined as

$$M_{p} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \left( {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} x_{i}^{p} } \right)^{1/p} ,$$

where p is assumed to be a nonzero real number and \(x_{1} ,x_{2} , \ldots ,x_{n}\) are positive real numbers.

It is a nondecreasing function of p (strictly increasing unless all the \(x_{i}\) are equal). Moreover, \(\mathop {\lim }\limits_{p \to - \infty } M_{p}\) = Minimum of \(x_{1} ,x_{2} , \ldots ,x_{n}\), \(M_{ - 1} = {\text{HM}},\) \(\mathop {\lim }\limits_{p \to 0} M_{p} = {\text{GM}}\), \(M_{1} = {\text{AM}},\) and \(\mathop {\lim }\limits_{p \to \infty } M_{p}\) = Maximum of \(x_{1} ,x_{2} , \ldots ,x_{n}\).
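These limits and the monotonicity in p are easy to observe numerically. The following sketch is a minimal illustration of ours (the sample data are arbitrary): it implements \(M_{p}\), treating p = 0 as the geometric-mean limit, and evaluates it for increasing p.

```python
import math

def power_mean(xs, p):
    # Generalized (power) mean M_p for positive data; the p = 0
    # case is taken as its limit, the geometric mean.
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1 / p)

xs = [2.0, 3.0, 5.0, 8.0]
for p in (-50, -1, -0.001, 0, 0.001, 1, 50):
    print(p, power_mean(xs, p))
# The values increase with p: p = -50 is already close to
# min(xs) = 2, p = -1 gives HM, p near 0 gives GM, p = 1
# gives AM, and p = 50 is close to max(xs) = 8.
```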

Theorem

\(\mathop {\lim }\limits_{p \to 0} M_{p} = M_{0}\), where \(M_{0} = \mathop \prod \nolimits_{i = 1}^{n} x_{i}^{{w_{i} }}\) is the weighted geometric mean.

Proof

\(M_{p} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = \exp \left\{ {\frac{{\ln \left( {\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} } \right)}}{p}} \right\}\), where the weights satisfy \(\sum {w_{i} } = 1\); the unweighted case corresponds to \(w_{i} = 1/n\).

$$\begin{aligned} \mathop {\lim }\limits_{p \to 0} M_{p} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) & = \mathop {\lim }\limits_{p \to 0} \exp \left\{ {\frac{{\ln \left( {\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} } \right)}}{p}} \right\} \\ & = \exp \left\{ {\mathop {\lim }\limits_{p \to 0} \left( {\frac{{\ln \left( {\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} } \right)}}{p}} \right)} \right\}. \end{aligned}$$

Applying L’Hôpital’s rule, we get

$$\begin{aligned} \exp \left\{ {\mathop {\lim }\limits_{p \to 0} \left( {\frac{{\ln \left( {\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} } \right)}}{p}} \right)} \right\} & = \exp \left\{ {\mathop {\lim }\limits_{p \to 0} \left( {\frac{{\frac{{\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} \ln x_{i} }}{{\mathop \sum \nolimits_{i = 1}^{n} w_{i} x_{i}^{p} }}}}{1}} \right)} \right\} \\ & = \exp \left( {\mathop \sum \limits_{i = 1}^{n} w_{i} \ln x_{i} } \right) = \mathop \prod \limits_{i = 1}^{n} x_{i}^{{w_{i} }} = M_{0} . \\ \end{aligned}$$

Q.E.D.
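The convergence asserted by the theorem can also be checked numerically in the weighted case: for small p, the weighted power mean should approach \(M_{0} = \mathop \prod \nolimits_{i = 1}^{n} x_{i}^{{w_{i} }}\). A minimal sketch of ours (arbitrary data, with weights summing to one):

```python
import math

xs = [2.0, 3.0, 5.0, 8.0]
ws = [0.1, 0.2, 0.3, 0.4]  # weights w_i with sum equal to 1

def weighted_power_mean(xs, ws, p):
    # Weighted generalized mean for nonzero p.
    return sum(w * x ** p for w, x in zip(ws, xs)) ** (1 / p)

m0 = math.prod(x ** w for x, w in zip(xs, ws))  # weighted GM, i.e. M_0
for p in (0.1, 0.01, 0.001):
    print(p, weighted_power_mean(xs, ws, p), m0)
# As p -> 0 the weighted power mean approaches M_0.
```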

In fact, the generalized mean can be further generalized by taking the quasi-arithmetic mean or generalized f-mean (Gf-M), also known as the Kolmogorov mean. The generalized f-mean of n numbers \(x_{1} ,x_{2} , \ldots ,x_{n}\) is defined as

$$M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = f^{ - 1} \left( {\frac{1}{n}\sum f \left( {x_{i} } \right)} \right),$$

where f is a continuous one-to-one function from an interval I of the real line to the real line, i.e., \(x_{1} ,x_{2} , \ldots ,x_{n} \in I\), and \(M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is a value in I. There are many interesting properties of the Gf-M. Some of these properties are given below:

  1. Continuity and Monotonicity: M(\(x_{1} , \ldots ,x_{n}\)) is continuous and increasing in each variable.

  2. Value preservation: M(x, x, …, x) = x.

  3. First order homogeneity: M(\(bx_{1} ,bx_{2} , \ldots ,bx_{n}\)) = bM(\(x_{1} ,x_{2} , \ldots ,x_{n}\)).

  4. Symmetry: M(\(x_{1} , \ldots ,x_{n}\)) is a symmetric function, i.e., the value of the function remains unchanged if we take any permutation of \(x_{1} , \ldots ,x_{n}\):

    $$M(x_{1} , \ldots ,x_{n} ) = M(x_{{i_{1} }} ,x_{{i_{2} }} , \ldots ,x_{{i_{n} }} ),$$

    where \(i_{1} , \ldots ,i_{n}\) is a permutation of (1, 2, …, n). There is an equivalent property known as ‘Invariance under exchange’, which may be written symbolically as \(M( \ldots ,x_{i} , \ldots ,x_{j} , \ldots ) = M( \ldots ,x_{j} , \ldots ,x_{i} , \ldots )\). This property guarantees anonymity.

  5. Averaging: Min(\(x_{1} ,x_{2} , \ldots ,x_{n}\)) ≤ M(\(x_{1} ,x_{2} , \ldots ,x_{n}\)) ≤ Max(\(x_{1} ,x_{2} , \ldots ,x_{n}\)).

  6. Partitioning: the mean of the whole is the mean of the means of equal-sized sub-blocks:

    $$\begin{aligned} M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{nk} } \right) & = M_{f} \left( {M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{k} } \right),M_{f} \left( {x_{k + 1} ,x_{k + 2} , \ldots ,x_{2k} } \right),} \right. \\ & \quad \left. { \ldots ,M_{f} \left( {x_{{\left( {n - 1} \right)k + 1}} ,x_{{\left( {n - 1} \right)k + 2}} , \ldots ,x_{nk} } \right)} \right). \end{aligned}$$

  7. Mean Preserving Subset: subsets of elements can be averaged a priori, without altering the mean, given that the multiplicity of elements is maintained:

    $$M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) = M_{f} \left( {m,m, \ldots ,m,x_{k + 1} , \ldots ,x_{n} } \right),$$

    where \(m = M_{f} \left( {x_{1} ,x_{2} , \ldots ,x_{k} } \right)\) is repeated k times.

  8. Invariance under Offsets and Scaling: the mean is invariant under affine transformations of f:

    $$\forall a\,\forall b \ne 0\,\left( {\left( {\forall t\,g(t) = a + b \cdot f(t)} \right) \Rightarrow \forall x\,M_{f} (x) = M_{g} (x)} \right).$$

  9. Monotonicity: if f is monotonic, then \(M_{f}\) is monotonic.

  10. Mediality: a property of two-variable means: M(M(x, y), M(z, w)) = M(M(x, z), M(y, w)).

  11. Self-distributive property: M(x, M(y, z)) = M(M(x, y), M(x, z)).

  12. The balancing property: M(M(x, M(x, y)), M(y, M(x, y))) = M(x, y).

The balancing property, together with the fixed-point, symmetry, monotonicity, and continuity properties, implies that the mean is a Gf-M, provided it is an analytic function (Aumann 1934, 1937).
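Several of these properties are easy to check numerically once a Gf-M is implemented. The following sketch is a minimal illustration of ours (the function names and sample values are arbitrary): it builds a quasi-arithmetic mean from a given f and its inverse, recovers GM and HM as special cases, and verifies the mediality and balancing properties (properties 10 and 12) for a two-variable mean with f(x) = ln x.

```python
import math

def f_mean(xs, f, f_inv):
    # Quasi-arithmetic (generalized f-) mean: f_inv of the
    # arithmetic average of f(x_i).
    return f_inv(sum(f(x) for x in xs) / len(xs))

# Special cases: f(x) = ln x gives the GM, f(x) = 1/x gives the HM.
gm = lambda xs: f_mean(xs, math.log, math.exp)
hm = lambda xs: f_mean(xs, lambda t: 1 / t, lambda t: 1 / t)

# A two-variable Gf-M with f(x) = ln x.
M = lambda x, y: f_mean([x, y], math.log, math.exp)

x, y, z, w = 2.0, 3.0, 5.0, 8.0
# Mediality: M(M(x, y), M(z, w)) = M(M(x, z), M(y, w)).
assert math.isclose(M(M(x, y), M(z, w)), M(M(x, z), M(y, w)))
# Balancing: M(M(x, M(x, y)), M(y, M(x, y))) = M(x, y).
m = M(x, y)
assert math.isclose(M(M(x, m), M(y, m)), m)
print(gm([x, y, z, w]), hm([x, y, z, w]))
```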

Kolmogorov (1930) proposed an axiomatic approach to arrive at the Gf-M (cited in de Carvalho 2016):

  A1. \(M\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is continuous and increasing in each variable.

  A2. \(M\left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is a symmetric function, i.e., the value of the function remains unchanged if we take any permutation of \(x_{1} ,x_{2} , \ldots ,x_{n}\).

  A3. \(M\left( {x,x, \ldots ,x} \right) = x\).

  A4. If a part of the arguments is replaced by its mean, the mean of the combined arguments remains unchanged: if \(m = M(x_{1} ,x_{2} , \ldots ,x_{r} )\), then \(M(x_{1} , \ldots ,x_{r} ,x_{r + 1} , \ldots ,x_{n} ) = M(m,m, \ldots ,m,x_{r + 1} , \ldots ,x_{n} )\), where m is repeated r times.

Kolmogorov (1930) proved that if conditions (A1) to (A4) hold, then the function M(x) has the form \(M_{g} \left( x \right) = g^{ - 1} \left( {\frac{1}{n}\sum\nolimits_{i = 1}^{n} {g\left( {x_{i} } \right)} } \right)\), where g is a continuous monotonic function and \(g^{ - 1}\) is its inverse function.
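Axiom (A4) can be checked numerically for any mean of this form. Below is a minimal sketch of ours (arbitrary data, using g(x) = ln x) that replaces the first r arguments by their mean m and confirms that the overall mean is unchanged.

```python
import math

def m_g(xs, g=math.log, g_inv=math.exp):
    # Kolmogorov form: g_inv of the average of g(x_i);
    # g(x) = ln x makes this the geometric mean.
    return g_inv(sum(g(x) for x in xs) / len(xs))

xs = [2.0, 3.0, 5.0, 8.0, 13.0]
r = 3
m = m_g(xs[:r])               # mean of the first r arguments
replaced = [m] * r + xs[r:]   # those arguments replaced by m
assert math.isclose(m_g(xs), m_g(replaced))  # axiom (A4) holds
print(m_g(xs), m_g(replaced))
```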

Characterization of the Gf-M may be done by combining the above properties (Aczél and Dhombres 1989, Chap. 17).
