
Adaptively Merging Large-Scale Range Data with Reflectance Properties

  • Ryusuke Sagawa
  • Ko Nishino
  • Katsushi Ikeuchi

In this chapter, we tackle the problem of geometric and photometric modeling of large, intricately shaped objects, typically cultural heritage objects. When constructing models of such objects, we face several important issues that have not been addressed in the past, issues that arise mainly from the sheer amount of data that must be handled. We propose two novel approaches to handle such large amounts of data efficiently: a highly adaptive algorithm for merging range images and an adaptive nearest neighbor search to be used with that algorithm. We construct an integrated mesh model of the target object at adaptive resolution, taking into account the geometric and/or photometric attributes associated with the range images; we use surface curvature as the geometric attribute and (laser) reflectance values as the photometric attribute. This adaptive merging framework significantly reduces the required computational resources. Furthermore, the resulting adaptive mesh models are of great use for applications such as texture mapping, as we briefly demonstrate. We also propose an additional test for the k-d tree nearest neighbor search algorithm, which omits back-tracking adaptively depending on the distance to the nearest neighbor. Since the nearest neighbor search dominates the computational cost, the proposed algorithm leads to a significant speed-up of the whole merging process. We present the theories and algorithms of our approaches with pseudo code and apply them to several real objects, including large-scale cultural assets.
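
The adaptive back-tracking test can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration of the idea summarized above, not the authors' implementation: a standard k-d tree nearest neighbor search in which descent into the far subtree (back-tracking) is skipped once the current best distance is already below a tolerance eps, which in the merging context could, for example, be tied to the local voxel size. The data, the eps parameter, and all function names are assumptions introduced for illustration only.

    class KDNode:
        """Node of a k-d tree over 3-D points."""
        __slots__ = ("point", "axis", "left", "right")
        def __init__(self, point, axis, left, right):
            self.point, self.axis, self.left, self.right = point, axis, left, right

    def build_kdtree(points, depth=0):
        """Recursively build a k-d tree by splitting at the median along x, y, z in turn."""
        if not points:
            return None
        axis = depth % 3
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        return KDNode(points[mid], axis,
                      build_kdtree(points[:mid], depth + 1),
                      build_kdtree(points[mid + 1:], depth + 1))

    def nearest(node, query, eps, best=None, best_d2=float("inf")):
        """Approximate nearest-neighbor search.

        Back-tracking into the far subtree is performed only while the current
        best squared distance still exceeds eps**2 (hypothetical adaptive
        tolerance); otherwise the current candidate is accepted as-is.
        """
        if node is None:
            return best, best_d2
        d2 = sum((a - b) ** 2 for a, b in zip(query, node.point))
        if d2 < best_d2:
            best, best_d2 = node.point, d2
        diff = query[node.axis] - node.point[node.axis]
        near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
        best, best_d2 = nearest(near, query, eps, best, best_d2)
        # Additional test: back-track only if the answer is not yet "good enough"
        # and the splitting plane is closer than the current best distance.
        if best_d2 > eps ** 2 and diff ** 2 < best_d2:
            best, best_d2 = nearest(far, query, eps, best, best_d2)
        return best, best_d2

    if __name__ == "__main__":
        pts = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2), (0.3, 0.9, 0.8)]
        tree = build_kdtree(pts)
        print(nearest(tree, (0.9, 0.6, 0.1), eps=0.05))

In this sketch, a larger eps prunes more back-tracking and thus trades accuracy for speed; the chapter's algorithm chooses this tolerance adaptively rather than using a fixed value.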

Keywords

Signed Distance, Mesh Model, Neighbor Point, Range Image, Query Point



Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Ryusuke Sagawa
  • Ko Nishino
  • Katsushi Ikeuchi
  1. Institute of Industrial Science, The University of Tokyo, Meguro-ku, Japan
