Feature Disentangling Machine - A Novel Approach of Feature Selection and Disentangling in Facial Expression Analysis

  • Ping Liu
  • Joey Tianyi Zhou
  • Ivor Wai-Hung Tsang
  • Zibo Meng
  • Shizhong Han
  • Yan Tong
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8692)

Abstract

Studies in psychology show that not all facial regions are equally important for recognizing facial expressions, and that different facial regions contribute differently to different expressions. Motivated by this, a novel framework, named Feature Disentangling Machine (FDM), is proposed to effectively select active features characterizing facial expressions. More importantly, the FDM aims to disentangle these selected features into non-overlapping groups: common features that are shared across different expressions, and expression-specific features that are discriminative only for a target expression. Specifically, the FDM integrates a sparse support vector machine and multi-task learning in a unified framework, where a novel loss function and a set of constraints are formulated to precisely control the sparsity and naturally disentangle active features. Extensive experiments on two well-known facial expression databases demonstrate that the FDM outperforms state-of-the-art methods for facial expression analysis. More importantly, the FDM achieves impressive performance in a cross-database validation, which demonstrates the generalization capability of the selected features.
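
The abstract describes the FDM only at a high level; below is a minimal sketch of the idea it conveys, assuming a simple formulation in which each expression (task) classifier combines a shared sparse weight vector with an expression-specific sparse weight vector, so that the surviving nonzero dimensions split naturally into common and expression-specific feature groups. The squared hinge loss, the soft-thresholding (L1 proximal) updates, and every function and parameter name here are illustrative assumptions, not the authors' actual loss function or constraints.

```python
# Hypothetical sketch of sparse multi-task feature disentangling
# (not the authors' exact FDM objective or constraints).
import numpy as np

def soft_threshold(w, lam):
    """Element-wise soft-thresholding: proximal operator of the L1 norm."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def fit_disentangled(X_tasks, y_tasks, lam_c=0.05, lam_s=0.05, lr=0.01, iters=500):
    """X_tasks[t]: (n_t, d) features for expression t; y_tasks[t]: labels in {-1, +1}."""
    T = len(X_tasks)
    d = X_tasks[0].shape[1]
    w_c = np.zeros(d)        # common weights shared across all expressions
    V = np.zeros((T, d))     # expression-specific weights, one row per expression
    for _ in range(iters):
        grad_c = np.zeros(d)
        for t in range(T):
            X, y = X_tasks[t], y_tasks[t]
            margin = y * (X @ (w_c + V[t]))
            active = margin < 1.0           # samples violating the margin
            # Gradient of the squared hinge loss w.r.t. the combined weights.
            g = -2.0 * (y[active][:, None] * (1.0 - margin[active])[:, None]
                        * X[active]).sum(axis=0) / len(y)
            grad_c += g
            # Proximal step keeps the expression-specific part sparse.
            V[t] = soft_threshold(V[t] - lr * g, lr * lam_s)
        # Proximal step keeps the common part sparse.
        w_c = soft_threshold(w_c - lr * grad_c, lr * lam_c)
    return w_c, V

# Toy usage on random 40-D features for two hypothetical expression tasks.
rng = np.random.default_rng(0)
X_tasks = [rng.standard_normal((200, 40)) for _ in range(2)]
y_tasks = [np.sign(X[:, 0] + X[:, t + 1]) for t, X in enumerate(X_tasks)]
w_c, V = fit_disentangled(X_tasks, y_tasks)
common_idx = np.flatnonzero(np.abs(w_c) > 1e-6)                 # shared features
specific_idx = [np.flatnonzero(np.abs(v) > 1e-6) for v in V]    # per-expression features
```

In this sketch the L1 penalties stand in for the paper's sparsity-controlling constraints: lam_c governs how many dimensions are retained as common features, while lam_s governs how many remain specific to each expression.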

Keywords

Feature Selection, Facial Expression, Local Binary Pattern, Facial Expression Recognition, Target Expression

Supplementary material

Electronic Supplementary Material: 978-3-319-10593-2_11_MOESM1_ESM.pdf (93 KB)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Ping Liu (1)
  • Joey Tianyi Zhou (2)
  • Ivor Wai-Hung Tsang (3)
  • Zibo Meng (1)
  • Shizhong Han (1)
  • Yan Tong (1)
  1. Department of Computer Science, University of South Carolina, USA
  2. Center for Computational Intelligence, Nanyang Technological University, Singapore
  3. Center for Quantum Computation and Intelligent Systems, University of Technology, Sydney, Australia
