Self-localisation and Map Building for Collision-Free Robot Motion

  • Oytun Akman
  • Pieter Jonker

Abstract

Robotic manipulation for order picking is one of the big challenges for future warehouses. In every phase of the picking process (object detection and recognition, grasping, transport, and deposition), avoiding collisions is crucial for successful operation. This holds across warehouse designs: robot arms and autonomous vehicles need to know their 3D pose (position and orientation) in the environment to perform their tasks using collision-free path planning and visual servoing. On-line 3D map generation of the immediate environment makes it possible to adapt a standard static map to dynamic surroundings. In this chapter, a novel framework for pose tracking and map building for collision-free robot and autonomous-vehicle motion in context-free environments is presented. First, the system requirements and related work are discussed, after which the developed system and its experimental results are described. In the final section, conclusions are drawn and future research directions are outlined.
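To make the idea of on-line map building for collision checking concrete, the Python sketch below fuses pose-transformed 3D points (e.g. from a stereo pair) into a coarse occupancy grid and tests a candidate path against it. This is a minimal sketch under assumed interfaces, not the chapter's implementation; all names (OccupancyGrid, camera_to_world) and parameters are illustrative.

```python
# Minimal sketch (illustrative, not the chapter's method): fuse 3D points
# observed in the camera frame into a world-frame occupancy grid, given a
# tracked camera pose, then check a planned path for collisions.
import numpy as np

class OccupancyGrid:
    """A coarse 3D occupancy grid over the robot's immediate workspace."""
    def __init__(self, size_m=2.0, resolution_m=0.05):
        self.res = resolution_m
        n = int(size_m / resolution_m)
        self.cells = np.zeros((n, n, n), dtype=bool)

    def insert_points(self, points_world):
        """Mark cells containing observed 3D points as occupied."""
        idx = np.floor(points_world / self.res).astype(int)
        valid = np.all((idx >= 0) & (idx < self.cells.shape[0]), axis=1)
        idx = idx[valid]
        self.cells[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    def path_is_free(self, waypoints_world):
        """Return True if no in-bounds waypoint falls in an occupied cell."""
        idx = np.floor(np.asarray(waypoints_world) / self.res).astype(int)
        in_bounds = np.all((idx >= 0) & (idx < self.cells.shape[0]), axis=1)
        hits = self.cells[idx[in_bounds, 0], idx[in_bounds, 1], idx[in_bounds, 2]]
        return not hits.any()

def camera_to_world(points_cam, R_wc, t_wc):
    """Transform camera-frame points to the world frame using the
    tracked pose (rotation R_wc, translation t_wc)."""
    return points_cam @ R_wc.T + t_wc

# Usage: one update cycle of the on-line map.
grid = OccupancyGrid()
R_wc, t_wc = np.eye(3), np.array([1.0, 1.0, 0.5])   # pose from the tracker (stub)
points_cam = np.random.rand(500, 3)                  # stereo point cloud (stub)
grid.insert_points(camera_to_world(points_cam, R_wc, t_wc))
print(grid.path_is_free([[0.2, 0.2, 0.2], [0.4, 0.4, 0.2]]))
```

Repeating this cycle as the pose tracker and stereo pipeline deliver new data is what lets a static workspace map adapt to a dynamic environment.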

Keywords

Path planning · Automated guided vehicle · Stereo pair · Visual servoing · Order picking

Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  1. Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands