
Visual Data Fusion for Objects Localization by Active Vision

  • Grégory Flandin
  • François Chaumette
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2353)

Abstract

Visual sensors provide only uncertain and partial knowledge of a scene. In this article, we present a scene knowledge representation that makes the integration and fusion of new, uncertain, and partial sensor measurements possible. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation mainly relies on ellipsoidal calculus, by means of a normal assumption for stochastic laws and ellipsoidal outer or inner bounding for uniform laws. These approximations allow us to build an efficient estimation process that integrates visual data online. Based on this estimation scheme, optimal exploratory motions of the camera can be determined automatically. Real-time experimental results validating our approach are finally given.
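The estimation scheme is only summarized above; as a rough illustration of its stochastic half, the sketch below fuses two Gaussian (normally approximated) estimates of a 3-D point in information form and scores candidate viewpoints by the trace of the fused covariance. This is a minimal sketch under a purely Gaussian assumption: the function names, the trace criterion, and the candidate-view parameterization are illustrative, and the paper's full scheme additionally carries ellipsoidal set-membership bounds, which are not reproduced here.

```python
import numpy as np

def fuse_gaussian(x1, P1, x2, P2):
    """Fuse two independent Gaussian estimates of the same 3-D point
    (information-filter form); the fused covariance can only shrink."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

def best_next_view(x, P, candidate_views, meas_cov):
    """Hypothetical exploratory-motion criterion: pick the candidate
    camera orientation whose simulated measurement most reduces the
    trace of the fused covariance (uncertainty minimization)."""
    best_R, best_trace = None, np.inf
    for R in candidate_views:              # R: 3x3 rotation, camera to scene
        P_meas = R @ meas_cov @ R.T        # measurement noise in scene frame
        _, P_fused = fuse_gaussian(x, P, x, P_meas)
        if np.trace(P_fused) < best_trace:
            best_R, best_trace = R, np.trace(P_fused)
    return best_R, best_trace

# Two views of an object centre: each constrains the directions
# transverse to its optical axis well but the depth poorly.
x1, P1 = np.zeros(3), np.diag([0.01, 0.01, 0.25])
x2, P2 = np.zeros(3), np.diag([0.25, 0.01, 0.01])
x, P = fuse_gaussian(x1, P1, x2, P2)
print(np.diag(P))  # all three axes are now well constrained
```

Fusing complementary views in this way is what makes exploratory motion worthwhile: a viewpoint orthogonal to the current one reduces the fused trace far more than a repeated view, which is the behaviour the trace criterion above selects.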

Keywords

Object Localization · Camera Motion · Visual Data · Active Vision · Visual Servoing


Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Grégory Flandin (1)
  • François Chaumette (1)
  1. IRISA/INRIA Rennes, Rennes cedex, France
