Improved Reconstruction of Deforming Surfaces by Cancelling Ambient Occlusion

  • Thabo Beeler
  • Derek Bradley
  • Henning Zimmer
  • Markus Gross
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7572)

Abstract

We present a general technique for improving space-time reconstructions of deforming surfaces captured in a video-based reconstruction scenario under uniform illumination. Our approach simultaneously improves both the acquired shape and the tracked motion of the deforming surface. The method is based on factoring out surface shading, computed by a fast approximation to global illumination called ambient occlusion. This improves the performance of optical flow tracking, which mainly relies on the constancy of image features such as intensity. While cancelling the local shading, we also optimize the surface shape to minimize the residual between the ambient occlusion of the 3D geometry and that of the image, yielding more accurate surface details in the reconstruction. Our enhancement is independent of the actual space-time reconstruction algorithm. We quantitatively measure the improvements produced by our algorithm on a synthetic example of deforming skin, where ground-truth shape and motion are available, and further demonstrate our enhancement on a real-world sequence of human face reconstruction.
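The abstract's core operation, dividing out a local shading term so that brightness constancy holds better for optical flow, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-pixel ambient-occlusion map `ao_map` (e.g. rendered from the current 3D geometry estimate), the helper names, and the clamping constant `eps` are all assumptions for the sketch.

```python
import numpy as np

def cancel_ambient_occlusion(image, ao_map, eps=1e-6):
    """Divide out the per-pixel shading predicted by an ambient
    occlusion map, recovering an albedo-like image in which
    brightness constancy holds better for optical flow tracking."""
    return image / np.maximum(ao_map, eps)

def ao_residual(image_ao, geometry_ao):
    """Per-pixel residual between the ambient occlusion observed in
    the image and that rendered from the 3D geometry; minimizing such
    a residual is what drives the shape refinement described above."""
    return image_ao - geometry_ao

# Synthetic check: a constant-albedo surface shaded by varying AO.
albedo = 0.8
ao = np.linspace(0.2, 1.0, 16).reshape(4, 4)   # synthetic AO map
image = albedo * ao                             # shaded observation
flat = cancel_ambient_occlusion(image, ao)      # ~ constant albedo
```

After cancellation the image is approximately constant, so an intensity-constancy data term in an optical flow solver no longer mistakes shading changes (e.g. from forming wrinkles) for motion.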

Keywords

Optical Flow · Input Shape · Uniform Illumination · Photometric Stereo · Brightness Change
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Thabo Beeler (1, 2)
  • Derek Bradley (1)
  • Henning Zimmer (2)
  • Markus Gross (1, 2)
  1. Disney Research Zurich, Switzerland
  2. ETH Zurich, Switzerland