Illumination Recovery for Realistic Fluid Re-simulation

  • Hongyan Quan
  • Zilong Song
  • Xinquan Zhou
  • Shishan Xue
  • Changbo Wang
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 946)

Abstract

Previous studies in fluid re-simulation have been devoted to reducing computational complexity, and little attention has been paid to realism. This paper presents a linear approach that estimates illumination from video examples for coherent, photorealistic re-simulation. In contrast to previous work on light detection, our method couples the reconstructed fluid geometry with surface appearance and estimates the illumination parameters linearly, avoiding the much higher computational cost of iterative optimization. The parameters of the Blinn-Phong shading model (BSM) are recovered hierarchically: after fitting the ambient and diffuse components from particles with lower intensities, the reflectance is clustered from observations of high-intensity surface particles. We demonstrate the effectiveness of both steps through extensive quantitative and qualitative evaluation, relighting the fluid surface both from ground-truth fluid video and from re-simulation. Photorealistic, coherently illuminated visual effects consistent with the fluid surface geometry are obtained.
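The linear estimation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes that at low-intensity (non-specular) surface points the Blinn-Phong model reduces to I = k_a·L_a + k_d·L_d·max(N·L, 0), which is linear in the combined ambient term (k_a·L_a) and diffuse term (k_d·L_d), so both can be recovered by ordinary least squares. All function and variable names here are hypothetical.

```python
import numpy as np

def estimate_ambient_diffuse(normals, light_dir, intensities):
    """Fit combined ambient and diffuse coefficients by linear least squares.

    normals:     (n, 3) unit surface normals at low-specular points
    light_dir:   (3,) unit light direction
    intensities: (n,) observed intensities at those points
    Returns (ambient_term, diffuse_term) = (k_a*L_a, k_d*L_d).
    """
    ndotl = np.clip(normals @ light_dir, 0.0, None)    # Lambertian shading term
    A = np.column_stack([np.ones_like(ndotl), ndotl])  # design matrix [1, N.L]
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

# Synthetic check: generate intensities from known parameters, then recover them.
rng = np.random.default_rng(0)
n = rng.normal(size=(200, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)          # random unit normals
L = np.array([0.0, 0.0, 1.0])                          # overhead light
I = 0.2 + 0.7 * np.clip(n @ L, 0.0, None)              # ambient 0.2, diffuse 0.7
ambient, diffuse = estimate_ambient_diffuse(n, L, I)
```

With noiseless synthetic data the least-squares fit recovers the ambient and diffuse terms exactly; the remaining specular component would then be estimated separately from the high-intensity points, as the abstract describes.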

Keywords

Illumination · Blinn-Phong model · Reflectance · Fluid re-simulation

Notes

Acknowledgements

We thank the DynTex dataset for providing the rich fluid videos used in our study, and we give special thanks to the reviewers for their valuable comments and suggestions.

Funding

This study was funded by NSFC Grants No. 61672237 and No. 61532002, and by the National High-tech R&D Program of China (863 Program) under Grant 2015AA016404.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Hongyan Quan (1, corresponding author)
  • Zilong Song (2)
  • Xinquan Zhou (3)
  • Shishan Xue (1)
  • Changbo Wang (1)
  1. The School of Computer Science and Software Engineering, East China Normal University, Shanghai, China
  2. Harbin No. 1 High School, Harbin, China
  3. The College of Business, City University of Hong Kong, Hong Kong, China