An Evidential Filter for Indoor Navigation of a Mobile Robot in Dynamic Environment

  • Quentin Labourey (email author)
  • Olivier Aycard
  • Denis Pellerin
  • Michèle Rombaut
  • Catherine Garbay
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 610)


Robots are destined to live with humans and perform tasks for them. To do so, they need an adapted representation of the world that includes human detection. Evidential grids enable a robot to handle partial information and ignorance, which is useful in many situations. This paper presents an audiovisual perception scheme for a robot in indoor environments (apartments, houses, etc.). As the robot moves, it must take into account its environment and the humans present. The article describes the key stages of the multimodal fusion: an evidential grid is built from each modality using a modified Dempster combination, and a temporal fusion is performed by an evidential filter based on an adapted version of the generalized Bayesian theorem. This enables the robot to keep track of the state of its environment. A decision can then be made on the robot's next move, depending on its mission and the extracted information. The system is tested in a simulated environment under realistic conditions.
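The Dempster combination mentioned in the abstract merges mass functions from different sources (here, two sensing modalities) over a common frame of discernment, renormalizing away conflicting mass. The sketch below is a minimal, generic implementation of the standard (unmodified) Dempster rule for a single grid cell; the frame {O, F} for occupied/free and the example mass values are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) over the
    same frame of discernment using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # product mass goes to the intersection of focal sets
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:      # empty intersection: conflicting mass, removed by renormalization
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical frame for one cell: O = occupied, F = free;
# Omega = {O, F} carries the ignorance mass.
O, F, Omega = frozenset("O"), frozenset("F"), frozenset("OF")
m_vision = {O: 0.6, Omega: 0.4}           # assumed vision-based masses
m_audio  = {O: 0.3, F: 0.2, Omega: 0.5}   # assumed audio-based masses
m = dempster_combine(m_vision, m_audio)   # fused cell masses, summing to 1
```

Note how the ignorance mass on Omega lets a weak or silent modality avoid overriding the other one: mass on Omega combines with any focal set without generating conflict.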


Keywords: Active multimodal perception · Evidential filtering · Mobile robot



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Quentin Labourey (1) (email author)
  • Olivier Aycard (1)
  • Denis Pellerin (1)
  • Michèle Rombaut (1)
  • Catherine Garbay (1)

  1. Univ. Grenoble Alpes, Grenoble, France
