2D-human face recognition using SIFT and SURF descriptors of face’s feature regions

Abstract

Face recognition is the process of identifying people from facial images. It has become vital for security and surveillance applications and is required everywhere, including institutions, organizations, offices, and social places. Face recognition presents a number of challenges, which include face pose, age, gender, illumination, and other variable conditions. Another challenge is that the database size for these applications is usually small, which makes training and recognition difficult. Face recognition methods can be divided into two major categories: appearance-based methods and feature-based methods. In this paper, the authors present a feature-based method for 2D face images. Speeded-up robust features (SURF) and the scale-invariant feature transform (SIFT) are used for feature extraction. Five public datasets, namely Yale2B, Face 94, M2VTS, ORL, and FERET, are used for the experimental work. Various combinations of SIFT and SURF features with two classification techniques, namely decision tree and random forest, have been experimented with in this work. A maximum recognition accuracy of 99.7% is reported by the authors with a combination of SIFT (64 components) and SURF (32 components).
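The feature-combination pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes SIFT (128-dimensional) and SURF (64-dimensional) descriptors have already been extracted and aggregated into one fixed-length vector per image (simulated here with synthetic data), reduces them to 64 and 32 components respectively via PCA, concatenates the results, and classifies with a random forest.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-image descriptor vectors:
# 10 subjects, 20 images each, with class-dependent means plus noise.
n_subjects, per_subject = 10, 20
labels = np.repeat(np.arange(n_subjects), per_subject)
sift = rng.normal(labels[:, None] * 0.5, 1.0, size=(len(labels), 128))
surf = rng.normal(labels[:, None] * 0.5, 1.0, size=(len(labels), 64))

# Reduce each descriptor type to the component counts used in the paper
# (SIFT -> 64 components, SURF -> 32 components), then concatenate.
sift64 = PCA(n_components=64).fit_transform(sift)
surf32 = PCA(n_components=32).fit_transform(surf)
features = np.hstack([sift64, surf32])  # 96-dimensional combined vector

# Train/test split and random-forest classification.
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"combined-feature accuracy: {acc:.2f}")
```

On real data the synthetic descriptor matrices would be replaced by SIFT/SURF descriptors pooled per face image; the component reduction and classifier stages are unchanged.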

Author information

Correspondence to Munish Kumar.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

The authors have presented an efficient approach for human face recognition using SIFT and SURF features. For the experimental results, the authors considered five public datasets: Face 94, Yale2B, ORL, FERET, and M2VTS.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Gupta, S., Thakur, K. & Kumar, M. 2D-human face recognition using SIFT and SURF descriptors of face’s feature regions. Vis Comput (2020). https://doi.org/10.1007/s00371-020-01814-8

Keywords

  • Face recognition
  • SURF
  • SIFT
  • Decision tree
  • Random forest