Robust Structured Light System Against Subsurface Scattering Effects Achieved by CNN-Based Pattern Detection and Decoding Algorithm

  • Ryo Furukawa (email author)
  • Daisuke Miyazaki
  • Masashi Baba
  • Shinsaku Hiura
  • Hiroshi Kawasaki
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11129)


To reconstruct the 3D shapes of real objects, structured-light techniques have been widely used for practical purposes such as inspection, industrial modeling, and medical diagnosis, because of their simplicity, stability, and high precision. Among them, one-shot scanning, which requires only a single image for reconstruction, is important for capturing moving objects. One open problem of one-shot scanning is its instability when the captured pattern is degraded, for example by strong specularity, subsurface scattering, or inter-reflection. Live subjects, including human bodies and organ tissue, are important targets for one-shot scanning and exhibit subsurface scattering. In this paper, we propose a learning-based approach to the pattern degradation caused by subsurface scattering in one-shot scanning. Since the projected patterns are significantly blurred by subsurface scattering, a robust decoding technique is required; we achieve this by separating the decoding process into two parts, pattern detection and ID recognition, both implemented with CNNs. To achieve robust pattern detection efficiently, we convert the line-detection task into a segmentation problem. For robust ID recognition, we segment the entire region into per-ID regions using U-Net. Experiments show that our technique is robust against strong subsurface scattering compared with a state-of-the-art technique.
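The two-stage decoding idea in the abstract (first detect the projected lines, then recover their IDs) can be illustrated with a minimal toy sketch. This is not the paper's CNN/U-Net method: the blur standing in for subsurface scattering, the thresholded "segmentation" detector, and the left-to-right ID assignment are all simplifying assumptions made purely to show the pipeline structure in 1D.

```python
import numpy as np

def make_pattern(n=200, stripe_pos=(30, 70, 110, 150), width=3):
    # Ideal projected pattern: a few narrow stripes, each carrying an ID.
    x = np.zeros(n)
    for p in stripe_pos:
        x[p - width:p + width + 1] = 1.0
    return x

def blur(signal, sigma=4.0):
    # Gaussian convolution as a crude stand-in for subsurface scattering.
    k = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    g = np.exp(-k ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return np.convolve(signal, g, mode="same")

def detect_stripes(signal, thresh=0.2):
    # Stage 1 (pattern detection): per-pixel classification of stripe
    # pixels, then the center of each connected run. The paper solves
    # this stage with a CNN that treats line detection as segmentation.
    mask = signal > thresh
    edges = np.diff(mask.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    return [(s + e) // 2 for s, e in zip(starts, ends)]

def assign_ids(centers):
    # Stage 2 (ID recognition): here IDs simply follow left-to-right
    # order; the paper instead segments ID regions with a U-Net.
    return {c: i for i, c in enumerate(centers)}

observed = blur(make_pattern())
centers = detect_stripes(observed)
ids = assign_ids(centers)
```

Even with heavy blurring, the detection stage only needs to locate stripes, and the ID stage only needs to label the regions between them; decoupling the two is what makes the decoding robust when the pattern itself is no longer sharp.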



This work was supported by JSPS/KAKENHI 16H02849, 16KK0151, 18H04119, 18K19824, and MSRA CORE14.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ryo Furukawa (1, email author)
  • Daisuke Miyazaki (1)
  • Masashi Baba (1)
  • Shinsaku Hiura (1)
  • Hiroshi Kawasaki (2)

  1. Hiroshima City University, Hiroshima, Japan
  2. Kyushu University, Fukuoka, Japan
