Adaptive Pixel-wise and Block-wise Stereo Matching in Lighting Condition Changes

  • Yong-Jun Chang
  • Yo-Sung Ho


Depth information plays an important role in the production of three-dimensional (3D) video content. One way to acquire this information is stereo matching. Stereo matching searches for correspondences between the two viewpoint images of a stereo pair and then estimates depth by computing the disparity between each pair of corresponding points. In general, the correspondence search is relatively accurate when the stereo pair is captured under uniform illumination and exposure conditions. However, accurate correspondences are difficult to estimate when the two viewpoint images are captured under different illumination or exposure conditions. In this paper, we analyze conventional pixel-wise and block-wise stereo matching methods that are robust to lighting condition changes. Based on this analysis, we then propose an adaptive pixel-wise and block-wise stereo matching method.
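The correspondence search described above can be illustrated with a minimal block-wise sketch. This is not the method proposed in the paper; it is a generic brute-force baseline using zero-mean normalized cross correlation (ZNCC), one common block-wise similarity measure that is invariant to gain/offset intensity changes and therefore tolerates simple exposure differences between the two views. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def zncc(a, b, eps=1e-8):
    """Zero-mean normalized cross correlation between two equal-size blocks.
    Subtracting the mean and dividing by the norms makes the score invariant
    to affine (gain/offset) intensity changes between the two views."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return (a * b).sum() / denom

def block_match_disparity(left, right, max_disp=16, half=2):
    """Brute-force block matching on a rectified grayscale pair: for each
    left-image pixel, slide a (2*half+1)^2 window along the same row of the
    right image and keep the disparity with the highest ZNCC score."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_score, best_d = -np.inf, 0
            # A left pixel at column x appears at column x - d in the right view.
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                score = zncc(ref, cand)
                if score > best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp
```

A quick way to exercise the sketch is to shift a synthetic image horizontally by a known disparity, apply a gain/offset change to simulate an exposure difference, and verify that the recovered disparities match in the image interior.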


Keywords: Stereo matching · Disparity map · Lighting condition · Pixel-wise matching · Block-wise matching



This work was supported by the ‘Civil-Military Technology Cooperation Program’ grant funded by the Korea government.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea
