
A Compositional Exemplar-Based Model for Hair Segmentation

  • Nan Wang
  • Haizhou Ai
  • Shihong Lao
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6494)

Abstract

Hair is an important part of human appearance, yet robust and accurate hair segmentation is difficult because of the challenging variation in hair color and shape. In this paper, we propose a novel Compositional Exemplar-based Model (CEM) for hair style segmentation. CEM automatically generates an adaptive hair style (a probabilistic mask) for the input image in a divide-and-conquer manner, which naturally splits into a decomposition stage and a composition stage. In the decomposition stage, we learn a strong ranker from a group of weak similarity functions that effectively emphasizes Semantic Layout Similarity (SLS); in the composition stage, we introduce a Neighbor Label Consistency (NLC) constraint to reduce the ambiguity between data representation and semantic meaning, and then recompose the hair style using the alpha-expansion algorithm. The final segmentation result is obtained by a dual-level Conditional Random Field. Experimental results on face images from the Labeled Faces in the Wild data set show the effectiveness of the approach.
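The decomposition stage's "strong ranker built from weak similarity functions" follows the RankBoost-style recipe of reference 14: each weak function scores one aspect of similarity between a query patch and an exemplar patch, and the strong ranker is their learned weighted sum. The sketch below illustrates that combination only; the feature names, similarity functions, and weights are illustrative assumptions, not the paper's actual features or learned parameters:

```python
import math

# Hypothetical weak similarity functions between a query patch and an
# exemplar patch. Each patch is a dict of scalar features here; the real
# model uses richer descriptors over local image patches.
def color_sim(query, exemplar):
    # Similarity in [0, 1], decaying with the color-feature distance.
    return math.exp(-abs(query["color"] - exemplar["color"]))

def texture_sim(query, exemplar):
    # Similarity in [0, 1], decaying with the texture-feature distance.
    return math.exp(-abs(query["texture"] - exemplar["texture"]))

WEAK_SIMS = [color_sim, texture_sim]

def strong_rank_score(query, exemplar, alphas):
    # RankBoost-style strong ranker: weighted sum of weak similarities.
    # The weights `alphas` are assumed to have been learned from ranked
    # training pairs; fixed values are used below for illustration.
    return sum(a * h(query, exemplar) for a, h in zip(alphas, WEAK_SIMS))

def rank_exemplars(query, exemplars, alphas):
    # Return exemplars ordered from most to least similar to the query.
    return sorted(exemplars,
                  key=lambda e: strong_rank_score(query, e, alphas),
                  reverse=True)
```

For a query patch, an exemplar with identical features receives the maximal score and ranks first, which is the behavior the decomposition stage relies on when retrieving matching hair exemplars.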

Keywords

Face Image · Segmentation Result · Decomposition Stage · Local Patch · Normalized Discounted Cumulative Gain
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
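One of these keywords, Normalized Discounted Cumulative Gain (NDCG), is the ranking-quality metric of reference 22 and is a natural fit for evaluating the exemplar ranker. A minimal sketch of the metric follows; the log2 discount is the standard formulation and not necessarily the exact variant used in the paper:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: gains at later ranks are discounted
    # logarithmically (rank i, 0-based, is discounted by log2(i + 2)).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores 1.0; any misordering of unequal relevances scores strictly below 1.0.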


References

  1. Paris, S., Briceño, H.M., Sillion, F.X.: Capture of hair geometry from multiple images. In: SIGGRAPH, Los Angeles, CA, vol. 23, pp. 712–719 (2004)
  2. Paris, S., Chang, W., Kozhushnyan, O.I., Jarosz, W., Matusik, W., Zwicker, M., Durand, F.: Hair photobooth: Geometric and photometric acquisition of real hairstyles. In: SIGGRAPH, vol. 27 (2008)
  3. Ward, K., Bertails, F., Kim, T.Y., Marschner, S.R., Cani, M.P., Lin, M.C.: A survey on hair modeling: Styling, simulation, and rendering. IEEE Transactions on Visualization and Computer Graphics 13, 213–233 (2007)
  4. Yacoob, Y., Davis, L.S.: Detection and analysis of hair. PAMI 28, 1164–1169 (2006)
  5. Lee, K.C., Anguelov, D., Sumengen, B., Gokturk, S.B.: Markov random field models for hair and face segmentation. In: AFG, Amsterdam, pp. 1–6 (2008)
  6. Shotton, J., Winn, J., Rother, C., Criminisi, A.: TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 1–15. Springer, Heidelberg (2006)
  7. Borenstein, E., Ullman, S.: Combined top-down/bottom-up segmentation. PAMI 30, 2109–2125 (2007)
  8. Wang, X., Tang, X.: Face photo-sketch synthesis and recognition. PAMI 31, 1955–1967 (2009)
  9. Jojic, N., Perina, A., Cristani, M., Murino, V., Frey, B.: Stel component analysis: Modeling spatial correlations in image class structure. In: CVPR (2009)
  10. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? PAMI 26, 147–159 (2004)
  11. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. PAMI 23, 1222–1239 (2001)
  12. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. PAMI 26, 1124–1137 (2004)
  13. Zhang, W., Shan, S., Gao, W., Chen, X., Zhang, H.: Local Gabor binary pattern histogram sequence (LGBPHS): A novel non-statistical model for face representation and recognition. In: ICCV (2005)
  14. Freund, Y., Iyer, R., Schapire, R.E., Singer, Y.: An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research 4, 933–969 (2003)
  15. Kohli, P., Ladický, L., Torr, P.H.S.: Robust higher order potentials for enforcing label consistency. In: CVPR, Anchorage, AK (2008)
  16. Larlus, D., Jurie, F.: Combining appearance models and Markov random fields for category level object segmentation. In: CVPR, Anchorage, AK, pp. 1–7 (2008)
  17. Pantofaru, C., Schmid, C., Hebert, M.: Object recognition by integrating multiple image segmentations. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part III. LNCS, vol. 5304, pp. 481–494. Springer, Heidelberg (2008)
  18. Deng, Y., Manjunath, B.S.: Unsupervised segmentation of color-texture regions in images and video. PAMI 23, 800–810 (2001)
  19. Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Technical Report, University of Massachusetts, Amherst (2007)
  20. Huang, C., Ai, H., Li, Y., Lao, S.: High-performance rotation invariant multiview face detection. PAMI 29, 671–686 (2007)
  21. Zhang, L., Ai, H., Xin, S., Huang, C., Tsukiji, S., Lao, S.: Robust face alignment based on local texture classifiers. In: ICIP, vol. 2, pp. 354–357 (2005)
  22. Järvelin, K., Kekäläinen, J.: Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems 20, 422–446 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Nan Wang (1)
  • Haizhou Ai (1)
  • Shihong Lao (2)
  1. Computer Science & Technology Department, Tsinghua University, Beijing, China
  2. Core Technology Center, Omron Corporation, Kyoto, Japan
