Exploratory Research on Application of Different Vision System on Warehouse Robot Using Selective Algorithm

  • Wan Chew Tan
  • Kian Meng Yap
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10645)


Warehouse robots rely on navigation algorithms to maneuver in the warehouse. One available class of navigation algorithms uses vision technology, which requires depth cameras to act as the "eyes" of the robot. Cameras, however, depend on lighting conditions to operate; vision-powered warehouse robots are known to fail under extreme lighting. This paper therefore discusses an approach that enables warehouse robots to operate in intense lighting conditions by combining two vision technologies. Two depth cameras based on different technologies were used: a stereoscopic camera and an infrared-based time-of-flight camera. The paper first studies how lighting affects the performance of both cameras; both were then tested under extreme lighting conditions. The results are analyzed and used to construct a selective algorithm. This exploratory study is a fundamental contribution toward a fully robot-operated warehouse in the future.
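The selective algorithm described above can be sketched as a simple rule that picks whichever camera is expected to perform better under the measured ambient light. This is a minimal illustration, not the authors' implementation: the lux thresholds and the `select_camera` function are assumptions, reflecting only the general behavior that stereoscopic cameras degrade in darkness while infrared time-of-flight cameras can saturate under strong sunlight.

```python
# Illustrative sketch of a lighting-based selective algorithm
# choosing between two depth-camera technologies.
# Thresholds below are hypothetical, not values from the paper.

STEREO = "stereoscopic"
TOF = "infrared time-of-flight"

def select_camera(ambient_lux: float) -> str:
    """Return the depth camera expected to perform better
    at the measured ambient light level (in lux)."""
    if ambient_lux < 100:
        # Dim lighting: stereo matching lacks visible texture,
        # so the IR time-of-flight camera is preferred.
        return TOF
    if ambient_lux > 10_000:
        # Very bright light (e.g. direct sunlight): ambient IR can
        # swamp the ToF sensor, so the stereoscopic camera is preferred.
        return STEREO
    # Normal indoor warehouse lighting: either camera works;
    # default to the stereoscopic camera here.
    return STEREO

print(select_camera(50))      # dim aisle
print(select_camera(50_000))  # sunlit loading bay
```

In practice the thresholds would come from the camera performance measurements the paper reports, and the robot would re-evaluate the selection as it moves between lighting zones.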


Keywords: Infrared-based time-of-flight vision · Stereoscopic vision · Warehouse robots



This work was supported by the Small Grant Scheme – Sunway Lancaster (Grant No: SGSSL-FST-DCIS-0115-08) at Sunway University, Malaysia.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  Faculty of Science and Technology, Sunway University, Petaling Jaya, Malaysia
