A Unified Contour-Pixel Model for Figure-Ground Segmentation

  • Ben Packer
  • Stephen Gould
  • Daphne Koller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6315)

Abstract

The goal of this paper is to provide an accurate pixel-level segmentation of a deformable foreground object in an image. We combine state-of-the-art local image segmentation techniques with a global object-specific contour model to form a coherent energy function over the outline of the object and the pixels inside it. The energy function includes terms from a variant of the TextonBoost method, which labels each pixel as either foreground or background. It also includes terms over landmark points from a LOOPS model [1], which combines global object shape with landmark-specific detectors. We allow the pixel-level segmentation and object outline to inform each other through energy potentials so that they form a coherent object segmentation with globally consistent shape and appearance. We introduce an inference method to optimize this energy that proposes moves within the complex energy space based on multiple initial oversegmentations of the entire image. We show that this method achieves state-of-the-art results in precisely segmenting articulated objects in cluttered natural scenes.
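
As a rough schematic of the model just described (the notation below is ours, not taken from the paper; the precise potentials and their learned weights are defined in the paper itself), let S denote the binary foreground/background labeling of the pixels and L the landmark locations along the object outline. The joint energy can then be read as decomposing into three groups of terms:

    E(S, L) \;=\; E_{\text{pix}}(S) \;+\; E_{\text{lm}}(L) \;+\; E_{\text{couple}}(S, L)

Under this reading, E_pix collects the TextonBoost-style appearance terms over individual pixel labels, E_lm collects the LOOPS terms scoring global object shape and landmark-specific detections, and E_couple ties the two components together by favoring configurations in which pixels inside the landmark outline are labeled foreground and pixels outside it are labeled background. Inference then proposes moves that relabel whole regions drawn from multiple oversegmentations of the image, keeping a proposal when it lowers this joint energy (the acceptance rule is our paraphrase of the proposal-based inference described above).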

Keywords

Joint Model, Appearance Model, Foreground Object, Landmark Point, Foreground Segmentation

References

  1. Heitz, G., Elidan, G., Packer, B., Koller, D.: Shape-based object localization for descriptive classification. In: NIPS (2008)
  2. Cootes, T., Edwards, G., Taylor, C.: Active appearance models. In: Burkhardt, H., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1407, p. 484. Springer, Heidelberg (1998)
  3. Thayananthan, A., Stenger, B., Torr, P., Cipolla, R.: Shape context and chamfer matching in cluttered scenes. In: CVPR (2003)
  4. Shotton, J., Blake, A., Cipolla, R.: Contour-based learning for object detection. In: ICCV (2005)
  5. Ferrari, V., Jurie, F., Schmid, C.: Accurate object detection with deformable shape models learnt from images. In: CVPR (2007)
  6. Leibe, B., Leonardis, A., Schiele, B.: Combined object categorization and segmentation with an implicit shape model. In: ECCV Workshop on Statistical Learning in Computer Vision (2004)
  7. Kumar, M.P., Torr, P., Zisserman, A.: OBJ CUT. In: CVPR (2005)
  8. Levin, A., Weiss, Y.: Learning to combine bottom-up and top-down segmentation. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 581–594. Springer, Heidelberg (2006)
  9. Gould, S., Fulton, R., Koller, D.: Decomposing a scene into geometric and semantically consistent regions. In: ICCV (2009)
  10. Winn, J., Jojic, N.: LOCUS: Learning object classes with unsupervised segmentation. In: ICCV (2005)
  11. Ramanan, D.: Learning to parse images of articulated objects. In: NIPS (2006)
  12. Bray, M., Kohli, P., Torr, P.H.S.: PoseCut: Simultaneous segmentation and 3D pose estimation of humans using dynamic graph-cuts. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 642–655. Springer, Heidelberg (2006)
  13. Chen, Y., Zhu, L., Yuille, A.L., Zhang, H.: Unsupervised learning of probabilistic object models (POMs) for object classification, segmentation and recognition using knowledge propagation. PAMI (2009)
  14. Fei-Fei, L., Fergus, R., Perona, P.: Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In: CVPR (2004)
  15. Fink, M., Ullman, S.: From aardvark to zorro: A benchmark of mammal images. IJCV (2008)
  16. Torralba, A., Murphy, K., Freeman, W.: Contextual models for object detection using boosted random fields. In: NIPS (2005)
  17. Shotton, J., Winn, J., Rother, C., Criminisi, A.: TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 1–15. Springer, Heidelberg (2006)
  18. Greig, D.M., Porteous, B.T., Seheult, A.H.: Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, Series B (1989)
  19. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (1977)
  20. Comaniciu, D., Meer, P.: Mean shift: A robust approach toward feature space analysis. PAMI (2002)
  21. Elidan, G., McGraw, I., Koller, D.: Residual belief propagation: Informed scheduling for asynchronous message passing. In: UAI (2006)
  22. Komodakis, N., Paragios, N., Tziritas, G.: MRF optimization via dual decomposition: Message-passing revisited. In: ICCV (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Ben Packer (1)
  • Stephen Gould (2)
  • Daphne Koller (1)
  1. Computer Science Department, Stanford University, USA
  2. Electrical Engineering Department, Stanford University, USA
