
Landmarks-SIFT Face Representation for Gender Classification

  • Yomna Safaa El-Din
  • Mohamed N. Moustafa
  • Hani Mahdi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8157)

Abstract

Existing methods for gender classification from facial images mostly rely on either shape or texture cues. This paper presents a novel face representation that combines both shape and texture information for gender classification. We propose extracting Scale Invariant Feature Transform (SIFT) descriptors at specific facial landmark positions, thereby encoding both the face shape and local texture information. Moreover, we propose a decision-level fusion framework that combines this Landmarks-SIFT representation with the Local Binary Patterns (LBP) descriptor extracted from the whole face image; LBP is known to be tolerant of uncontrolled image-capturing conditions. Competitive correct classification rates were achieved with the proposed decision-level fusion on both controlled (97% on FERET) and uncontrolled (95% on LFW and 94% on KinFace) benchmark datasets.
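
The sketch below is a minimal, hypothetical illustration (not the authors' code) of the ingredients named in the abstract: SIFT descriptors computed at given facial-landmark positions, a holistic LBP descriptor for the whole face, and a simple decision-level fusion of two classifiers' scores. Landmark coordinates are assumed to come from an external detector (e.g. STASM); the patch size, LBP grid, SVM experts and equal fusion weight are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC


def landmarks_sift(gray, landmarks, patch_size=16.0):
    """Stack SIFT descriptors computed at fixed landmark positions (shape + local texture)."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for (x, y) in landmarks]
    _, descriptors = sift.compute(gray, keypoints)   # one 128-D descriptor per landmark
    return descriptors.flatten()


def lbp_histogram(gray, points=8, radius=1, grid=(7, 7)):
    """Concatenate uniform-LBP histograms over a grid of cells covering the whole face."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    bins = points + 2                                 # uniform patterns plus one non-uniform bin
    cells = []
    for rows in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=bins, range=(0, bins), density=True)
            cells.append(hist)
    return np.concatenate(cells)


def fuse_scores(p_sift, p_lbp, w=0.5):
    """Decision-level fusion: weighted average of the two experts' posterior probabilities."""
    return w * p_sift + (1.0 - w) * p_lbp


# Training sketch with hypothetical data: X_sift and X_lbp hold per-image features, y holds 0/1 labels.
# clf_sift = SVC(kernel="rbf", probability=True).fit(X_sift, y)
# clf_lbp  = SVC(kernel="rbf", probability=True).fit(X_lbp, y)
# p = fuse_scores(clf_sift.predict_proba(x_sift)[:, 1], clf_lbp.predict_proba(x_lbp)[:, 1])
# prediction = (p >= 0.5).astype(int)
```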

Keywords

Gender classification · SIFT · Facial landmarks · LBP · Fusion

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yomna Safaa El-Din (1)
  • Mohamed N. Moustafa (2)
  • Hani Mahdi (1)
  1. Department of Computer and Systems Engineering, Ain Shams University, Cairo, Egypt
  2. Department of Computer Science and Engineering, American University in Cairo, New Cairo, Egypt