
Removing data redundancy while preserving structure and visual appearance in a database

  • Ali Asghar Sharifi Najafabadi
  • Farah Torkamani Azar
Original Paper

Abstract

One of the most important challenges in the preparation and maintenance of databases such as face-image collections is the large number of variables in the raw data. Reducing the size of each observation so that its features are preserved and the reduced data remain uniquely tied to the original data is a basic requirement for working with such databases. Principal component analysis and its improved variants are common techniques for reducing the dimensionality of such datasets while increasing interpretability and minimizing information loss. In this paper, our purpose is to define a nonuniform-sampling preprocessing step that preserves the appearance of the raw data and improves the performance of dimensionality-reduction methods. We use the properties of sparse principal component analysis to identify the locations of less important values in the raw data, values whose removal does not interfere with the data's features. Using the sparse eigenvectors, two algorithms are presented to remove redundancy from the raw data in the one-dimensional and two-dimensional cases. After the redundancy is removed, the resulting data can be used in other applications such as database recognition and compression. Simulation results show that this preprocessing step reduces memory usage and also yields a higher recognition rate.
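The abstract does not spell out the two algorithms, but the core idea it describes (use sparse eigenvectors to locate less important values, then drop those locations) can be illustrated with a short sketch. The sketch below is an assumption-laden illustration, not the authors' method: it uses scikit-learn's SparsePCA (the paper may use a different SPCA solver), treats a variable as redundant when all of its sparse loadings are zero (a simple rule chosen here for illustration), and the helper name redundancy_mask and all parameter values are hypothetical.

    # Minimal sketch of SPCA-based nonuniform sampling for the 1-D case,
    # assuming sklearn's SparsePCA and an "all loadings zero => redundant" rule.
    import numpy as np
    from sklearn.decomposition import SparsePCA

    def redundancy_mask(X, n_components=8, alpha=1.0):
        """Return a boolean mask over variables (e.g. pixels of vectorised
        images): True where at least one sparse eigenvector has a nonzero
        loading, False where the variable is treated as redundant.

        X : (n_samples, n_variables) data matrix, one observation per row.
        """
        spca = SparsePCA(n_components=n_components, alpha=alpha, random_state=0)
        spca.fit(X)
        # components_ has shape (n_components, n_variables); a variable whose
        # loadings are all zero contributes nothing to the sparse components.
        return np.any(spca.components_ != 0, axis=0)

    # Toy usage: rows stand in for vectorised face images.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 64))
    keep = redundancy_mask(X, n_components=4)
    X_reduced = X[:, keep]          # nonuniformly sampled data
    print(X.shape, "->", X_reduced.shape)

The 2-D case described in the abstract would presumably apply the same masking idea along the rows and columns of the image matrix rather than to a single vectorised axis.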

Keywords

Nonuniform sampling · Dimension reduction · Principal component analysis · Sparse principal component analysis


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. DiSPLay Laboratory, Department of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
  2. DiSPLay Laboratory, Communication Department, Electrical Engineering Faculty, Shahid Beheshti University, Tehran, Iran
