
A Novel Set of Moment Invariants for Pattern Recognition Applications Based on Jacobi Polynomials

  • Rafael Augusto Rocha Angulo
  • Juan Martín Carpio
  • Alfonso Rojas-Domínguez
  • Manuel Ornelas-Rodríguez
  • Héctor Puga
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12088)

Abstract

A novel set of moment invariants for pattern recognition applications, based on Jacobi polynomials, is presented. These moment invariants are constructed for digital images by means of a combination with geometric moments, and are invariant under geometric transformations of the image plane such as rotation, translation and scaling. This invariance is tested on a sample of the MPEG-7 CE-Shape-1 dataset. The results show that the low-order moment invariants indeed exhibit low variance between images affected by the aforementioned geometric transformations.

Keywords

Jacobi polynomials · Orthogonal polynomials · Geometric moment invariants · Jacobi moment invariants · Moments

1 Introduction

Moment invariants are a concept very often used in pattern recognition. Hu [1] presented a set of invariant moments derived from the geometric moments of an image. Using Hu’s moments, Paschalakis and Lee presented a method for classifying images with these invariants [2].

Orthogonal moments have been frequently used in image processing. Shu et al. presented a new approach to compute 2-dimensional moments of binary and grayscale images using Chebyshev orthogonal polynomials [3]. Teague computed invariant moments of images with Zernike orthogonal polynomials, instead of the geometric moments used by Hu [4]. Benzzoubeir et al., using Legendre orthogonal polynomials and hypergeometric functions, presented a faster and more efficient way to perform 2-dimensional image analysis based on Legendre’s orthogonality properties [5].

Orthogonal moments have also been used to measure the quality of an image, i.e. to quantify how distorted or legible an image is. In [6], Abudhahir et al. presented an image quality assessment metric that detects and determines the level of distortion in images by computing and applying Chebyshev moments. Hosny presented a more efficient way to obtain moments of an image using Gegenbauer polynomials [7]; more recently, Hosny also presented invariants based on Gegenbauer polynomials combined with geometric moments, which can be applied to image recognition [8].

Finally, Herrera-Acosta et al., through the use of Gegenbauer polynomials, presented an image descriptor for the recognition of visual scenes and compared its performance against the popular SIFT image descriptor [9].

In this paper, a set of orthogonal moment invariants is presented. These moment invariants are obtained from the Jacobi orthogonal moments and can be used in pattern recognition applications. They are expressed as linear combinations of geometric moment invariants, and remain unchanged under geometric transformations of the image such as translation, rotation and scaling.

2 Theoretical Background

In this section, the basic concepts needed are introduced, including regular moment invariants (Subsect. 2.1) and the Jacobi polynomials (Subsect. 2.2).

2.1 Regular Moment Invariants (RMIs)

Regular moment invariants are image characteristics that remain unchanged when a geometric transformation like translation, rotation or scaling is applied on an image [10]. Invariance to translation is achieved by computing the position of the center of mass or centroid \( \left( {x_{c} , y_{c} } \right) \) of an image [8, 9]:
$$ x_{c} = {{\mu_{10 } } \mathord{\left/ {\vphantom {{\mu_{10 } } {\mu_{00 } }}} \right. \kern-0pt} {\mu_{00 } }},\;y_{c} = {{\mu_{01 } } \mathord{\left/ {\vphantom {{\mu_{01 } } {\mu_{00 } }}} \right. \kern-0pt} {\mu_{00 } }} $$
(1)
In general, the central geometric moments can be computed as [8, 10]:
$$ \mu_{p, q} = \int_{ - \infty }^{\infty } {\int_{ - \infty }^{\infty } {\left( {x - x_{c} } \right)^{p} \left( {y - y_{c} } \right)^{q} f\left( {x, y} \right) \,{\text{d}}x{\text{d}}y} } $$
(2)
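The authors implemented their method in MATLAB; as an illustration, for a discrete image Eqs. (1)–(2) reduce to sums over pixel coordinates. A minimal Python/NumPy sketch (the function name and interface are ours, not the authors’):

```python
import numpy as np

def central_moment(img, p, q):
    """Central geometric moment mu_{p,q} of a 2-D intensity array (Eq. 2),
    using the centroid of Eq. (1) for translation invariance."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    m00 = img.sum()
    xc = (x * img).sum() / m00  # Eq. (1): x_c = mu_10 / mu_00
    yc = (y * img).sum() / m00  # Eq. (1): y_c = mu_01 / mu_00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()
```

By construction, \( \mu_{1,0} = \mu_{0,1} = 0 \) for any image, which provides a quick sanity check of an implementation.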
Scale invariance can be achieved through scale factor elimination, computed by:
$$ \mu '_{p, q} = \frac{{\mu_{p, q} }}{{\left( {\mu_{0, 0} } \right)^{\gamma } }} $$
(3)
where \( \gamma = \frac{1}{2}\left( {p + q + 2} \right) \). Similarly, the rotation moment invariants can be defined as:
$$ M_{p,q}^{rot} = \int_{ - \infty }^{\infty } {\int_{ - \infty }^{\infty } { \left( {x\;\cos \left( \theta \right) + y \;\sin \left( \theta \right)} \right)^{p} \left( {y\;\cos \left( \theta \right) - x\; \sin \left( \theta \right)} \right)^{q} f\left( {x, y} \right)\,{\text{d}}x{\text{d}}y} } $$
(4)
where the rotation angle \( \theta \) is computed by:
$$ \theta = \frac{1}{2} \tan^{ - 1} \left( {\frac{{2\mu_{1,1} }}{{\mu_{2,0} - \mu_{0,2} }}} \right) $$
(5)
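In code, Eq. (5) is commonly implemented with the two-argument arctangent, which handles the case \( \mu_{2,0} = \mu_{0,2} \) gracefully; a small sketch (our naming, and the `atan2` variant is our choice rather than the paper’s formula verbatim):

```python
from math import atan2

def orientation(mu11, mu20, mu02):
    """Principal-axis rotation angle theta of Eq. (5). Using atan2 instead of
    arctan of the ratio avoids division by zero when mu20 == mu02."""
    return 0.5 * atan2(2.0 * mu11, mu20 - mu02)
```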
Normalized Regular Moment Invariants (RMIs) are obtained by dividing the rotation moments by the appropriate power of the zeroth-order moment:
$$ RMI_{p,q} = \frac{{M_{p,q}^{rot} }}{{\left( {M_{0,0} } \right)^{\gamma } }} $$
(6)
Finally, RMIs to translation, rotation and scale are given by:
$$ \begin{aligned} RMI_{p,q} = \frac{1}{{\mu_{0,0}^{\gamma } }} \mathop \sum \limits_{k = 0}^{p} \mathop \sum \limits_{m = 0}^{q} \left( {\begin{array}{*{20}c} p \\ k \\ \end{array} } \right)\left( {\begin{array}{*{20}c} q \\ m \\ \end{array} } \right) \left( { - 1} \right)^{m} \left( {\sin \left( \theta \right)} \right)^{k + m} \hfill \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times \,(\cos \left( \theta \right))^{p + q - k - m} \mu_{{\left( {p - k + m} \right), \left( {q - m + k} \right)}} \hfill \\ \end{aligned} $$
(7)
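Equation (7) translates directly into a double loop over the binomial expansion. A Python sketch under the assumption that `mu` is any callable returning the central moments \( \mu_{p,q} \) (the name and interface are ours):

```python
from math import comb, sin, cos

def rmi(mu, p, q, theta):
    """Translation-, rotation- and scale-invariant moment RMI_{p,q} (Eq. 7).
    `mu(p, q)` must return the central geometric moment mu_{p,q}."""
    gamma = (p + q + 2) / 2.0  # normalization exponent from Eq. (3)
    total = 0.0
    for k in range(p + 1):
        for m in range(q + 1):
            total += (comb(p, k) * comb(q, m) * (-1) ** m
                      * sin(theta) ** (k + m)
                      * cos(theta) ** (p + q - k - m)
                      * mu(p - k + m, q - m + k))
    return total / mu(0, 0) ** gamma
```

A useful check: \( RMI_{0,0} = 1 \) for any image and any angle, since the sum collapses to \( \mu_{0,0} \) and \( \gamma = 1 \).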

2.2 Orthogonal Jacobi Polynomial

The Jacobi polynomials are the most general of the classical orthogonal polynomials on the domain [−1, 1]. All the other classical orthogonal polynomials are special cases of the Jacobi polynomials, obtained by setting restrictions on the parameters \( \alpha \) and \( \beta \) [11]: for instance, \( \alpha = \beta = 0 \) defines the Legendre polynomials and, more generally, setting \( \alpha = \beta \) produces the Gegenbauer or ultraspherical polynomials. Thus, working with the Jacobi polynomials allows one to work with the other classical orthogonal polynomials simply by selecting the values of the parameters \( \alpha \) and \( \beta \).

The Jacobi orthogonal polynomials of order \( n \) are defined as follows [12]:
$$ P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \frac{1}{{2^{n} }} \mathop \sum \limits_{k = 0}^{n} \left( {\begin{array}{*{20}c} {n + \alpha } \\ k \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {n + \beta } \\ {n - k} \\ \end{array} } \right)\left( {x - 1} \right)^{n - k} \left( {x + 1} \right)^{k} $$
(8)
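Equation (8) can be evaluated directly; for non-integer \( \alpha, \beta \) the binomial coefficients generalize via the Gamma function. A Python sketch (helper names are ours), which for \( \alpha = \beta = 0 \) should reproduce the Legendre polynomials:

```python
import numpy as np
from math import gamma

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for real a and integer k >= 0."""
    return gamma(a + 1.0) / (gamma(k + 1.0) * gamma(a - k + 1.0))

def jacobi_poly(n, alpha, beta, x):
    """Jacobi polynomial P_n^{(alpha,beta)}(x), evaluated from the
    finite sum in Eq. (8)."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for k in range(n + 1):
        total += (gbinom(n + alpha, k) * gbinom(n + beta, n - k)
                  * (x - 1.0) ** (n - k) * (x + 1.0) ** k)
    return total / 2.0 ** n
```

For instance, \( P_1^{(0,0)}(x) = x \) and \( P_2^{(0,0)}(x) = (3x^2 - 1)/2 \), the first Legendre polynomials.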
The explicit expansion of \( P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) \) can be rewritten as [12]:
$$ P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \mathop \sum \limits_{r = 0}^{n} k_{r, n}^{{\left( {\alpha , \beta } \right)}} x^{r} $$
(9)
with the coefficient matrix \( k_{r, n}^{{\left( {\alpha , \beta } \right)}} \) defined as follows for two different cases:
If \( \alpha = \beta \):
$$ k_{r, n}^{{\left( {\alpha , \beta } \right)}} = \frac{{\left( { - 1} \right)^{n} \left( {\alpha + 1} \right)_{n} \left( { - n} \right)_{r} \left( {n + 2\alpha + 1} \right)_{r } \varGamma \left( {\alpha + 1 + r} \right)\varGamma \left( {\frac{1}{2}} \right)}}{{2^{r} r! \left( {\alpha + 1 } \right)_{r} \varGamma \left( {\frac{r - n + 1}{2}} \right)\varGamma \left( {\frac{r + n}{2} + \alpha + 1} \right)n!}} $$
(10)
Otherwise:
$$ k_{r,n}^{{\left( {\alpha ,\beta } \right)}} = \frac{{\left( { - 1} \right)^{n} \left( {\alpha + 1} \right)_{n} \left( { - n} \right)_{r} \left( {n + \lambda } \right)_{r} }}{{r!\left( {\alpha + 1} \right)_{r} 2^{r} n!}}\,{}_{2}{\text{F}}_{1} \left( { - n,n + \lambda + r,\alpha + 1 + r\left| {{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}} \right.} \right) $$
(11)
with two special cases:
$$ k_{n, n}^{{\left( {\alpha , \beta } \right)}} = \frac{{\left( {n + \lambda } \right)_{n} }}{{2^{n} n!}} $$
(12)
and
$$ k_{n - 1 , n}^{{\left( {\alpha , \beta } \right)}} = \frac{{\left( { \alpha - \beta } \right) \varGamma \left( {2n + \lambda - 1} \right)}}{{2^{n} \left( {n - 1} \right)! \varGamma \left( {n + \lambda } \right)}} $$
(13)

where \( r = 0, 1, \ldots , n \), and \( n \) is the maximum degree.
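The closed forms (10)–(13) are easy to mistype; a practical cross-check is to expand the product form of Eq. (8) numerically and read off the coefficients of Eq. (9). A Python sketch under that approach (helper names are ours):

```python
import numpy as np
from math import gamma
from numpy.polynomial import polynomial as P

def gbinom(a, k):
    # generalized binomial coefficient C(a, k) for real a, integer k >= 0
    return gamma(a + 1.0) / (gamma(k + 1.0) * gamma(a - k + 1.0))

def jacobi_coeffs(n, alpha, beta):
    """Coefficients k_{r,n}^{(alpha,beta)} of Eq. (9), index r = 0..n,
    obtained by expanding the product form of Eq. (8) instead of the
    closed forms (10)-(13)."""
    c = np.zeros(n + 1)
    for k in range(n + 1):
        term = gbinom(n + alpha, k) * gbinom(n + beta, n - k) / 2.0 ** n
        poly = P.polymul(P.polypow([-1.0, 1.0], n - k),   # (x - 1)^{n-k}
                         P.polypow([1.0, 1.0], k))        # (x + 1)^{k}
        c[: len(poly)] += term * np.asarray(poly)
    return c
```

The leading coefficient returned this way can be compared against Eq. (12), \( k_{n,n} = (n+\lambda)_n / (2^n n!) \), as a unit test.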

The Jacobi polynomials are orthogonal on [−1, 1]; for \( n \ne m \) they satisfy the relation:
$$ \int_{ - 1}^{1} {P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)P_{m}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) w^{{\left( {\alpha ,\beta } \right)}} \left( x \right) \,{\text{d}}x = 0} $$
(14)
with respect to the weight function defined by:
$$ w^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \left( {1 - x} \right)^{\alpha } \left( {1 + x} \right)^{\beta } $$
(15)
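The orthogonality relation (14)–(15) can be checked numerically: evaluating the sum form of Eq. (8) on a dense grid and applying a midpoint quadrature, the weighted inner product of two polynomials of different order should vanish up to quadrature error. A Python sketch under those assumptions (function names are ours):

```python
import numpy as np
from math import gamma

def gbinom(a, k):
    # generalized binomial coefficient C(a, k)
    return gamma(a + 1.0) / (gamma(k + 1.0) * gamma(a - k + 1.0))

def jacobi_poly(n, alpha, beta, x):
    # sum form of Eq. (8)
    total = np.zeros_like(x)
    for k in range(n + 1):
        total += (gbinom(n + alpha, k) * gbinom(n + beta, n - k)
                  * (x - 1.0) ** (n - k) * (x + 1.0) ** k)
    return total / 2.0 ** n

def weighted_inner(n, m, alpha, beta, cells=100000):
    """Midpoint-rule approximation of the integral in Eq. (14)
    with the weight function of Eq. (15)."""
    edges = np.linspace(-1.0, 1.0, cells + 1)
    xm = 0.5 * (edges[:-1] + edges[1:])            # cell midpoints
    w = (1.0 - xm) ** alpha * (1.0 + xm) ** beta   # Eq. (15)
    y = jacobi_poly(n, alpha, beta, xm) * jacobi_poly(m, alpha, beta, xm) * w
    return y.sum() * (edges[1] - edges[0])
```

The same experiment at high orders illustrates the loss-of-orthogonality issue discussed in Sect. 5: the computed inner product drifts away from zero as \( n \) grows while the grid stays fixed.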

3 Jacobi Moment Invariants

Following [8], the Jacobi-based 2-D moments of order \( n, m \) can be defined as:
$$ A_{n, m} = \frac{1}{{h_{n} \left( {\alpha ,\beta } \right)h_{m} \left( {\alpha , \beta } \right)}}\int_{ - 1}^{1} {\int_{ - 1}^{1} { f\left( {x, y} \right)P_{n}^{{\left( {\alpha , \beta } \right)}} \left( x \right)P_{m}^{{\left( {\alpha , \beta } \right)}} \left( y \right)w^{{\left( {\alpha , \beta } \right)}} \left( x \right)w^{{\left( {\alpha , \beta } \right)}} \left( y \right)\,{\text{d}}x{\text{d}}y} } $$
(16)
where \( f\left( {x,y} \right) \) represents a 2-D array (e.g. a digital image) and the Jacobi normalization function is:
$$ h_{n} \left( {\alpha , \beta } \right) = \frac{{2^{\lambda } \varGamma \left( {n + \alpha + 1} \right)\varGamma \left( {n + \beta + 1} \right)}}{{\left( {2n + \lambda } \right)n!\, \varGamma \left( {n + \lambda } \right)}} $$
(17)
and \( \lambda \equiv \alpha + \beta + 1 \).
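Equation (17) is a direct computation; for the Legendre case \( \alpha = \beta = 0 \) it reduces to the familiar \( h_n = 2/(2n+1) \), which provides a convenient unit test. A one-function Python sketch (our naming):

```python
from math import gamma, factorial

def h_jacobi(n, alpha, beta):
    """Jacobi normalization constant h_n(alpha, beta) of Eq. (17),
    with lambda = alpha + beta + 1."""
    lam = alpha + beta + 1.0
    return (2.0 ** lam * gamma(n + alpha + 1.0) * gamma(n + beta + 1.0)
            / ((2.0 * n + lam) * factorial(n) * gamma(n + lam)))
```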
Equation (16) can be rewritten in a more computationally efficient manner as:
$$ \begin{aligned} A_{n - m, m} = \frac{1}{{h_{n - m} \left( {\alpha ,\beta } \right)h_{m} \left( {\alpha , \beta } \right)}}\int_{ - 1}^{1} {\int_{ - 1}^{1} { f\left( {x, y} \right)P_{n - m}^{{\left( {\alpha , \beta } \right)}} \left( x \right)P_{m}^{{\left( {\alpha , \beta } \right)}} \left( y \right)} } \hfill \\ \;\;\;\;\;\;\;\;\;\;\; \times \,w^{{\left( {\alpha , \beta } \right)}} \left( x \right)w^{{\left( {\alpha , \beta } \right)}} \left( y \right)\,{\text{d}}x{\text{d}}y \hfill \\ \end{aligned} $$
(18)
Now, the Jacobi moments invariants (JMI) can be defined as:
$$ \begin{aligned} \hat{A}_{n - m, m} = \frac{1}{{h_{n - m} \left( {\alpha ,\beta } \right)h_{m} \left( {\alpha , \beta } \right)}} \mathop \sum \limits_{r = 0}^{n - m} \mathop \sum \limits_{q = 0}^{m} k_{r, n - m}^{{\left( {\alpha , \beta } \right)}} k_{q, m}^{{\left( {\alpha , \beta } \right)}} \hfill \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\; \times \int_{ - 1}^{1} {\int_{ - 1}^{1} { T\left( {x, y} \right) x^{r} y^{q} \,{\text{d}}x{\text{d}}y} } \hfill \\ \end{aligned} $$
(19)
where the so-called transformation function \( T\left( {x, y} \right) \) is:
$$ T\left( {x, y} \right) = \left( {1 - x} \right)^{\alpha } \left( {1 + x} \right)^{\beta } \left( {1 - y} \right)^{\alpha } \left( {1 + y} \right)^{\beta } f\left( {x, y} \right) $$
(20)
The integral in (19) corresponds to the RMIs of the intensity function of an image, \( f\left( {x, y} \right) \), given in Eq. (7), so that (19) can be rewritten as:
$$ \hat{A}_{n - m, m} = \frac{1}{{h_{n - m} \left( {\alpha ,\beta } \right)h_{m} \left( {\alpha , \beta } \right)}} \mathop \sum \limits_{r = 0}^{n - m} \mathop \sum \limits_{q = 0}^{m} k_{r, n - m}^{{\left( {\alpha , \beta } \right)}} k_{q, m}^{{\left( {\alpha , \beta } \right)}} RMI_{r, q} $$
(21)
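Once the RMIs and the coefficient matrices are available, Eq. (21) is just a normalized weighted double sum. A minimal Python sketch (the argument layout is our choice, not the authors’ MATLAB interface):

```python
def jmi(k_nm, k_m, h_nm, h_m, rmi_table):
    """Jacobi moment invariant A-hat_{n-m,m} of Eq. (21) as a weighted sum
    of regular moment invariants. `k_nm[r]` and `k_m[q]` hold the
    coefficients k_{r,n-m} and k_{q,m} of Eq. (9); `rmi_table[r][q]` holds
    RMI_{r,q} from Eq. (7); h_nm and h_m are the normalization constants
    of Eq. (17)."""
    acc = 0.0
    for r, kr in enumerate(k_nm):
        for q, kq in enumerate(k_m):
            acc += kr * kq * rmi_table[r][q]
    return acc / (h_nm * h_m)
```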

4 Methodology

Based on the theory described in the previous section, the Jacobi-based moment invariants were implemented in MATLAB. The pseudocode for the different functions implemented is shown in Algorithms 1 to 4.
The Jacobi orthogonal polynomials can be used to obtain moment invariants to geometric transformations on the plane, such as translation, rotation and scaling. We show this application on a set of images from the MPEG-7 CE-Shape-1 dataset, which was created to evaluate the performance of 2-D shape descriptors. This dataset contains 1,440 shapes grouped into 70 classes, each containing 20 similar objects [13]. A small sample of these images is shown in Fig. 1. One fourth of the images in the CE-Shape-1 dataset (i.e. 5 images per class) were selected. These images were scaled down to 25% of their original size and zero-padded so that all images are \( 301 \times 301 \) pixels in size. Afterwards, the images were modified through geometric transformations: translation, scaling and rotation with the values in Table 1.
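The zero-padding step of the preprocessing can be sketched as follows in Python/NumPy; note that centering the scaled shape inside the padded frame is our reading of the procedure, as the paper does not state the exact placement:

```python
import numpy as np

def pad_to(img, size=301):
    """Zero-pad a 2-D array to size x size, centering the content
    (centered placement is an assumption; the paper only states
    that images were zero-padded to 301 x 301 pixels)."""
    h, w = img.shape
    top = (size - h) // 2
    left = (size - w) // 2
    out = np.zeros((size, size), dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out
```

Zero-padding leaves all geometric moments of the shape unchanged except for a shift of the centroid, which the central moments of Eq. (2) absorb.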
Fig. 1.

Subset of images from the MPEG-7 CE-Shape-1

Table 1. Values used for geometric transformations of test images.

  Geometric transformation | Applied values
  -------------------------|----------------------
  Translation of centroid  | (30, 30), (−30, −30)
  Scaling                  | 0.75, 1.25
  Rotation                 | −45°, 45°

This procedure produces two extra sets of images for each transformation in Table 1. Next, the JMIs of each of these sets and of the original images are computed through Eq. (21), and the differences between the transformed sets and the original set are recorded. The parameters used to compute the JMIs are \( \alpha = \beta = 0.5 \), and the moments were computed from order 0 to 21. Several of these moments are always zero and were eliminated from the reported results, so that we finally end up with 14 non-zero moment invariants for each of the geometric transformations. Numerical results are reported in the next section.

5 Experimental Results and Discussion

The differences between the moment invariants of the original images and those of the transformed images were computed in order to observe their variation. The results are shown as boxplots in Figs. 2, 3 and 4. For clarity, the outliers were not plotted. The boxplots show the differences between the moment invariants computed on the original images and those computed on the modified (translated, rotated or scaled) images. For full disclosure, it must be reported that three of the computed moment invariants produce numerical values several orders of magnitude larger than the rest; their differences were not plotted because doing so would hinder the appreciation of the remaining values.
Fig. 2.

Translation invariance of Jacobi-based moment invariants.

Fig. 3.

Scale invariance of Jacobi-based moment invariants.

Fig. 4.

Rotation invariance of Jacobi-based moment invariants.

The boxplots show that for low-order moment invariants the differences are quite small (indicating that these are truly invariant to the changes introduced by the geometric transformations); however, as the order increases the differences become more significant. We believe that several factors can explain this behavior. One factor is the image resolution, which directly affects the precision with which the Jacobi polynomials are approximated. As the degree of the polynomials increases while the image resolution is kept fixed, an increasingly larger error between the true value of the polynomials and their computational approximation is introduced.

A second factor that is also related to the precision with which the polynomials are approximated is how well the orthogonality condition is satisfied by the polynomials as their order increases. For low order polynomials the condition is easily satisfied, but as the order increases orthogonality is no longer guaranteed. This in turn means that image descriptors, such as the moments described, gradually lose their descriptive power and are more prone to be affected by image differences like those introduced by the geometric transformations in our experiments. Finally, a third factor is that, whenever an approximation is employed, the addition of polynomials beyond a certain order is detrimental to the reconstruction ability of this technique.

6 Conclusions

A novel set of moment invariants for pattern recognition applications, based on Jacobi orthogonal polynomials, was described. Experiments were performed on images from the MPEG-7 CE-Shape-1 dataset: translation, scaling and rotation were applied to the images, and the Jacobi-based image moments were computed. The differences between the image moments of the original and the transformed images were reported to demonstrate the invariance of the proposal. Invariance was confirmed and clearly observed for low-order moments (differences were nearly zero in most cases), while the differences of higher-order moments were more noticeable. We believe that the precision with which the polynomials are computationally approximated is responsible for this behavior. Although low-order moment invariants can already be used for pattern-recognition applications, we would like to explore possible solutions to increase the order at which invariance is achieved. In conclusion, the Jacobi polynomial-based moment invariants work well, but more work is necessary to improve the polynomial approximation and to define the optimal degree to use for each particular image, so as to obtain more exact descriptors and more uniformly invariant moments.


Acknowledgements

This work was supported by the National Council of Science and Technology of Mexico, through Research Grant CÁTEDRAS-2598 (A. Rojas) and Postgraduate Scholarship 712960 (Rafael A. Rocha).

References

  1. Hu, M.K.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962)
  2. Paschalakis, S.: Pattern recognition in grey level images using moment based invariant features. In: 7th International Conference on Image Processing and its Applications, Manchester, UK, pp. 245–249, July 1999
  3. Shu, H., Zhang, H., Chen, B., Haigron, P., Luo, L.: Fast computation of Tchebichef moments for binary and grayscale images. IEEE Trans. Image Process. 19(12), 3171–3180 (2010)
  4. Teague, M.R.: Image analysis via the general theory of moments. J. Opt. Soc. Am. 70, 920–930 (1980)
  5. Benzzoubeir, S.: Image analysis by hypergeometric function of Legendre moments. In: MELECON 2006 - 2006 IEEE Mediterranean Electrotechnical Conference, pp. 506–509 (2006)
  6. Abudhahir, A., Begum, A.H.R., Manimegalai, D.: Tchebichef moment based image quality measure. In: 2014 International Conference on Electronics and Communication Systems, ICECS, pp. 1–5 (2014)
  7. Hosny, K.M.: Image representation using accurate orthogonal Gegenbauer moments. Pattern Recogn. Lett. 32(6), 795–804 (2011)
  8. Hosny, K.M.: New set of Gegenbauer moment invariants for pattern recognition applications. Arab. J. Sci. Eng. 39(10), 7097–7107 (2014). https://doi.org/10.1007/s13369-014-1336-8
  9. Herrera-Acosta, A., Rojas-Domínguez, A., Carpio, J.M., Ornelas-Rodríguez, M., Puga, H.: Gegenbauer-based image descriptors for visual scene recognition. In: Castillo, O., Melin, P., Kacprzyk, J. (eds.) Intuitionistic and Type-2 Fuzzy Logic Enhancements in Neural and Optimization Algorithms: Theory and Applications. SCI, vol. 862, pp. 629–643. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-35445-9_43
  10. Flusser, J., Suk, T., Zitová, B.: Moments and Moment Invariants in Pattern Recognition. Wiley, Chichester (2009)
  11. Spencer Doman, B.G.: The Classical Orthogonal Polynomials. World Scientific, New Jersey (2016)
  12. Luke, Y.L.: Mathematical Functions and their Applications. Academic Press, New York (1975)
  13. Nunes, J.F.: Shape based image retrieval and classification. In: 5th Iberian Conference on Information Systems and Technologies, pp. 1–6 (2010)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Tecnológico Nacional de México - Instituto Tecnológico de León, León, Mexico
