M-ROSE: A Multi Robot Simulation Environment for Learning Cooperative Behavior

  • Sebastian Buck
  • Michael Beetz
  • Thorsten Schmitt


The development of high-performance autonomous multi-robot control systems requires intensive experimentation in controllable, repeatable, and realistic robot settings. The need for experimentation is even greater in applications where the robots must automatically learn substantial parts of their controllers. We propose to solve such learning tasks as a three-step process. First, we learn a simulator of the robots’ dynamics. Second, we perform the learning tasks using the learned simulator. Third, we port the learned controller to the real robot and cross-validate the performance gains obtained by the learned controllers. In this paper, we describe M-ROSE, our learning simulator, and provide empirical evidence that it is a powerful tool for learning sophisticated control modules for real robots.
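The three-step pipeline from the abstract can be illustrated with a toy sketch: fit a dynamics model from logged (state, action, next-state) data, then evaluate candidate controllers entirely inside the learned simulator before porting the best one to hardware. Everything below is our own illustrative assumption, not M-ROSE's actual implementation: we substitute a least-squares linear dynamics model for whatever learned model the system uses, and the toy dynamics, cost function, and proportional controllers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: learn a simulator from logged (state, action, next_state) data ---
# True (unknown) dynamics we pretend were logged on the real robot:
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])
S = rng.normal(size=(500, 2))              # observed states
U = rng.normal(size=(500, 1))              # commanded actions
S_next = S @ A_true.T + U @ B_true.T       # logged successor states

X = np.hstack([S, U])                      # regressors: [state, action]
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)   # least-squares dynamics fit

def learned_sim(s, u):
    """One step of the learned simulator: predicts the next state."""
    return np.hstack([s, u]) @ W

# --- Step 2: run the learning/evaluation task inside the learned simulator ---
def rollout_cost(policy, s0, steps=20):
    """Accumulated quadratic state cost of a policy in the learned simulator."""
    s, cost = s0, 0.0
    for _ in range(steps):
        u = policy(s)
        s = learned_sim(s, u)
        cost += float(s @ s)
    return cost

# Compare two candidate proportional controllers without touching the robot;
# step 3 (porting the winner to the real robot) would follow.
cost_weak = rollout_cost(lambda s: np.array([-0.1 * s[0]]), np.array([1.0, 1.0]))
cost_strong = rollout_cost(lambda s: np.array([-1.0 * s[0]]), np.array([1.0, 1.0]))
```

Because the logged data here is noise-free and linear, the least-squares fit recovers the dynamics essentially exactly; with real robot logs, the fidelity of the learned simulator is precisely what the third cross-validation step is meant to check.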


Keywords: Mobile Robot · Real Robot · Robot Behavior · Robot Soccer · Current State Data





Copyright information

© Springer-Verlag Tokyo 2002

Authors and Affiliations

  • Sebastian Buck
  • Michael Beetz
  • Thorsten Schmitt

  Department of Computer Science IX, Munich University of Technology, Munich, Germany
