Real-data-based Car-following with Adaptive Neural Control

  • Walter Weber
  • Jost Bernasch
Conference paper


Our goal is to train a neural network to control the longitudinal dynamics of a real car. The input signals to the neural controller are the car's own speed, its own acceleration, the distance to the car ahead, and the change of that distance. The network's outputs are control signals for throttle and brake. The acceleration of our own car and the distance to the car ahead are computed in discrete time steps; therefore, all networks were trained with discrete-time data.
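The controller's discrete-time inputs can be sketched as follows. The sampling interval, signal names, and finite-difference formulation are assumptions for illustration, not values from the paper:

```python
import numpy as np

DT = 0.1  # assumed sampling interval in seconds

def controller_input(speed, prev_speed, distance, prev_distance):
    """Assemble the four input signals for one time step.

    Acceleration and change of distance are finite differences over
    consecutive samples, matching a discrete-time formulation.
    """
    acceleration = (speed - prev_speed) / DT
    delta_distance = (distance - prev_distance) / DT
    return np.array([speed, acceleration, distance, delta_distance])

x = controller_input(speed=20.0, prev_speed=19.5,
                     distance=30.0, prev_distance=31.0)
print(x)  # [ 20.   5.  30. -10.]
```

The resulting four-element vector would then be fed to the network at each time step.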

We performed experiments with several architectures, including a Jordan net, an Elman net, a fully recurrent network, and a recurrent network with a slightly different architecture (which also uses ‘normal’ hidden units without recurrent connections), in order to analyze the different behaviour and efficiency of these network types. We used standard backpropagation to train the Jordan- and Elman-style networks and the backpropagation-through-time algorithm to train the recurrent nets.
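A minimal Elman-style forward pass illustrates the recurrent structure involved; the layer sizes and output nonlinearity below are assumptions, since the paper does not report its network dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2   # assumed: 4 inputs, 2 outputs (throttle, brake)

W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
W_ho = rng.normal(scale=0.1, size=(n_out, n_hid))

def elman_step(x, context):
    """One time step: hidden layer sees the input plus the previous
    hidden activations fed back as context units (Elman-style)."""
    h = np.tanh(W_ih @ x + W_hh @ context)
    y = 1.0 / (1.0 + np.exp(-(W_ho @ h)))   # sigmoid outputs in [0, 1]
    return y, h

context = np.zeros(n_hid)
for x in np.zeros((5, n_in)):      # dummy 5-step input sequence
    y, context = elman_step(x, context)
print(y.shape)  # (2,)
```

A Jordan net differs mainly in that the previous *output*, rather than the hidden state, is fed back as context.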

Currently, we are working on a two-level predictor hierarchy, in which a higher-level network is intended to solve problems the lower-level net cannot. This is achieved by restricting the high-level network's input to those time steps at which the low-level net had difficulty producing a matching output.
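The gating idea can be sketched as a filter over the training data. The absolute-error measure and threshold below are assumptions; the paper does not specify its selection criterion:

```python
import numpy as np

THRESHOLD = 0.05  # assumed error threshold

def select_hard_steps(low_level_outputs, targets, threshold=THRESHOLD):
    """Return the indices of time steps where the low-level net's output
    missed the target badly; only these are shown to the high-level net."""
    errors = np.abs(np.asarray(low_level_outputs)
                    - np.asarray(targets)).max(axis=1)
    return np.flatnonzero(errors > threshold)

outs = np.array([[0.20, 0.00], [0.50, 0.10], [0.90, 0.00]])
tgts = np.array([[0.21, 0.00], [0.30, 0.10], [0.90, 0.02]])
print(select_hard_steps(outs, tgts))  # [1]
```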

The most promising results have been achieved with a variant of backpropagation through time in which the look-back window is restricted to a fixed number of time steps. We call this backpropagation through truncated time.
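A toy sketch of such a truncated gradient computation for a simple recurrent net follows; the squared-error loss at the final step, the network sizes, and the truncation length are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, TRUNC = 3, 5, 1, 4   # TRUNC = assumed look-back window

W_ih = rng.normal(scale=0.3, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.3, size=(n_hid, n_hid))
W_ho = rng.normal(scale=0.3, size=(n_out, n_hid))

def truncated_bptt_grads(xs, target):
    """Forward over the full sequence; backward through at most TRUNC steps."""
    hs = [np.zeros(n_hid)]
    for x in xs:                              # forward pass, store states
        hs.append(np.tanh(W_ih @ x + W_hh @ hs[-1]))
    y = W_ho @ hs[-1]
    dy = y - target                           # d(0.5*||y - target||^2)/dy
    gW_ho = np.outer(dy, hs[-1])
    dh = W_ho.T @ dy
    gW_ih = np.zeros_like(W_ih)
    gW_hh = np.zeros_like(W_hh)
    T = len(xs)
    for t in range(T - 1, max(T - 1 - TRUNC, -1), -1):
        dpre = dh * (1.0 - hs[t + 1] ** 2)    # back through tanh
        gW_ih += np.outer(dpre, xs[t])
        gW_hh += np.outer(dpre, hs[t])
        dh = W_hh.T @ dpre                    # stops after TRUNC steps
    return gW_ih, gW_hh, gW_ho

xs = rng.normal(size=(10, n_in))
g_ih, g_hh, g_ho = truncated_bptt_grads(xs, target=np.array([0.5]))
print(g_ih.shape, g_hh.shape)  # (5, 3) (5, 5)
```

The key point is the bounded backward loop: error signals simply stop propagating once the window is exhausted, which caps the cost per update regardless of sequence length.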

Throughout our experiments we observed that the way data is encoded has an important influence on the performance of a network. Several runs confirmed that it has a stronger impact than any of the well-known parameters of backpropagation (e.g. learning rate, momentum). Backpropagation through truncated time and our way of encoding data both led to far better results than conventional approaches.
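One common encoding choice, shown purely as an illustration with assumed physical ranges (the paper's actual encoding scheme is not reproduced here), is to rescale each signal into a comparable interval before presenting it to the net:

```python
import numpy as np

# Assumed physical ranges for the four input signals.
RANGES = {"speed": (0.0, 50.0), "accel": (-10.0, 10.0),
          "dist": (0.0, 100.0), "ddist": (-20.0, 20.0)}

def encode(raw):
    """Map each raw signal into [-1, 1] using its assumed range, so that
    no single input dominates the hidden units' weighted sums."""
    out = []
    for key, value in raw.items():
        lo, hi = RANGES[key]
        out.append(2.0 * (value - lo) / (hi - lo) - 1.0)
    return np.array(out)

print(encode({"speed": 25.0, "accel": 0.0, "dist": 50.0, "ddist": -20.0}))
# [ 0.  0.  0. -1.]
```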







Copyright information

© Springer-Verlag/Wien 1993

Authors and Affiliations

  • Walter Weber (1)
  • Jost Bernasch (1)
  1. Bavarian Research Center of Knowledge Based Systems (FORWISS), Munich 40, Germany
