
An Active Robotic Vision System with a Pair of Moving and Stationary Cameras

  • S. Pourya Hoseini A.
  • Janelle Blankenburg
  • Mircea Nicolescu
  • Monica Nicolescu
  • David Feil-Seifer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11845)

Abstract

Vision is one of the main potential sources of information for robots to understand their surroundings. For a vision system, a sufficiently clear and close view of objects or events, as well as an appropriate viewpoint angle, can be decisive in obtaining useful features for the vision task. To prevent performance drops caused by unfavorable camera positions and orientations, manipulating the cameras, an approach that falls under the domain of active perception, can be a viable option in a robotic environment.

In this paper, a robotic object detection system is proposed that is capable of determining the confidence of recognition after detecting objects in a camera view. When confidence is low, a secondary camera is moved toward the object and performs an independent detection round. After matching the objects in the two camera views and fusing their classification decisions through a novel transferable belief model, the final detection results are obtained. Real-world experiments show the efficacy of the proposed approach in improving object detection performance, especially in the presence of occlusion.
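The fusion step described above builds on Smets' transferable belief model. As a rough illustration only (not the paper's actual fusion model, whose details are not given here), the unnormalized conjunctive rule for combining two basic belief assignments, with conflict mass kept on the empty set under the open-world assumption, can be sketched as follows. The object classes and mass values are hypothetical.

```python
def conjunctive_combine(m1, m2):
    """Unnormalized (TBM) conjunctive rule of combination.

    m1, m2: dicts mapping frozenset focal elements to mass values.
    Conflict mass is assigned to the empty frozenset (open world),
    rather than being renormalized away as in classical Dempster-Shafer.
    """
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b  # intersection of focal elements
            out[inter] = out.get(inter, 0.0) + wa * wb
    return out

# Hypothetical example: two camera views classifying one object
cup, bottle = frozenset({"cup"}), frozenset({"bottle"})
both = cup | bottle  # ignorance: mass on the whole frame

m_cam1 = {cup: 0.6, both: 0.4}               # primary view, partly occluded
m_cam2 = {cup: 0.7, bottle: 0.2, both: 0.1}  # secondary view after repositioning

fused = conjunctive_combine(m_cam1, m_cam2)
# mass on "cup" is reinforced; the cup/bottle conflict lands on frozenset()
```

In the TBM, a decision would then be taken on the normalized pignistic probabilities derived from the fused masses; the sketch stops at the combination step.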

Keywords

Active perception · Active vision · Robotics · PR2 · Dual-camera · Transferable belief model · Dempster-Shafer · Occlusion


Acknowledgment

This work has been supported by the Office of Naval Research, Award #N00014-16-1-2312.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Nevada, Reno, USA
