Learning to Drive and Simulate Autonomous Mobile Robots

  • Alexander Gloye
  • Cüneyt Göktekin
  • Anna Egorova
  • Oliver Tenchio
  • Raúl Rojas
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3276)

Abstract

We show how to apply learning methods to two robotics problems, namely the optimization of the on-board controller of an omnidirectional robot, and the derivation of a model of the physical driving behavior for use in a simulator.

We show that optimal control parameters for several PID controllers can be learned adaptively by driving an omnidirectional robot on a field while evaluating its behavior, using a reinforcement learning algorithm. After training, the robots can follow the desired path faster and more elegantly than with manually adjusted parameters.
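The idea of the first contribution can be sketched as follows. A minimal, illustrative example, not the authors' implementation: the toy plant, the cost function, and the random-perturbation search below are all assumptions standing in for the real robot, the path-following evaluation, and the reinforcement learning algorithm used in the paper.

```python
import random

class PID:
    """Discrete PID controller with gains kp, ki, kd."""
    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def episode_cost(gains, steps=200):
    """Drive a toy first-order plant toward a setpoint and accumulate
    squared tracking error (stands in for evaluating the robot's
    path-following behavior on the field)."""
    pid = PID(*gains)
    x, setpoint, cost = 0.0, 1.0, 0.0
    for _ in range(steps):
        u = pid.step(setpoint - x)
        x += (u - 0.5 * x) * pid.dt  # assumed simple plant dynamics
        cost += (setpoint - x) ** 2
    return cost

def tune(gains, iters=300, sigma=0.05, seed=0):
    """Random-perturbation hill climbing: keep a perturbed gain vector
    whenever it lowers the episode cost."""
    rng = random.Random(seed)
    best, best_cost = list(gains), episode_cost(gains)
    for _ in range(iters):
        cand = [max(0.0, g + rng.gauss(0, sigma)) for g in best]
        c = episode_cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

tuned_gains, tuned_cost = tune([1.0, 0.1, 0.01])
```

The search only ever accepts gain vectors that reduce the episode cost, so the tuned controller tracks the setpoint at least as well as the hand-picked starting gains.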

Secondly, we show how to learn the physical behavior of a robot. Our system learns to predict the future positions of the robots from their reactions to the commands sent to them. We use the learned behavior in the simulation of the robots instead of adjusting the physical simulation model whenever the mechanics of the robot changes. The updated simulation then reflects the modified physics of the robot.
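The second contribution amounts to fitting a forward model that maps the current state and the sent command to the next state, and substituting that model for the hand-built physics in the simulator. A minimal sketch under assumed conditions: the "logged" data is generated from a hidden linear plant, and a linear least-squares fit stands in for whatever model class the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_physics(state, command):
    """Hidden ground-truth dynamics (an assumption for this sketch):
    state = (position, velocity), scalar command."""
    A = np.array([[1.0, 0.02], [0.0, 0.9]])
    B = np.array([0.0, 0.1])
    return A @ state + B * command

# Pretend logged driving data: observed states, sent commands,
# and the resulting next states.
states = rng.normal(size=(500, 2))
commands = rng.normal(size=500)
next_states = np.array([true_physics(s, c) for s, c in zip(states, commands)])

# Fit next_state ~ W @ [state, command] by least squares.
X = np.hstack([states, commands[:, None]])
W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

def predict(state, command):
    """Learned forward model: replaces the hand-tuned physics
    in the simulator."""
    return np.concatenate([state, [command]]) @ W

err = np.abs(predict(np.array([0.5, -0.2]), 0.3)
             - true_physics(np.array([0.5, -0.2]), 0.3)).max()
```

Because the model is fit directly to logged behavior, re-learning after a mechanical change only requires collecting fresh driving data, not re-deriving the physical simulation model.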


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Alexander Gloye (1)
  • Cüneyt Göktekin (1)
  • Anna Egorova (1)
  • Oliver Tenchio (1)
  • Raúl Rojas (1)

  1. Freie Universität Berlin, Berlin, Germany
