Abstract
In this paper, we construct a planning mechanism composed of distributed agents for autonomous vehicle navigation in an unknown workspace. Each agent decides the direction in which the vehicle should move without any communication with the other agents, relying only on observations of the workspace and of the other agents. Each agent includes a reinforcement learning mechanism that generates navigation rules. These rules, however, depend on the method used to observe states. To generalize the rules, an inductive decision tree is introduced into the agent. In a new workspace, the agent plans a path efficiently by learning rules specific to that workspace while reusing the generalized rules. Computational simulations were carried out to verify the proposed agent.
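The paper's reinforcement learning mechanism is not detailed on this page; as a rough illustration, the rule-generation step can be sketched as tabular Q-learning (the Watkins–Dayan algorithm) on a grid navigation task. The 5×5 grid, reward scheme, and hyperparameters below are illustrative assumptions, not the paper's actual setup.

```python
import random

# Actions: right, left, down, up on a small grid.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
SIZE, GOAL = 5, (4, 4)

def step(state, action):
    """Move within the grid walls; reward 1 on reaching the goal, else 0."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: q maps (state, action index) to an estimated return."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            # Q-learning update: move the estimate toward r + gamma * max_a' Q(s', a').
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best - q.get((s, a), 0.0))
            s = s2
            if done:
                break
    return q

def greedy_path(q, max_steps=50):
    """Follow the learned greedy policy from the start cell to the goal."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

The table of state–action values learned here plays the role of the per-workspace rules; the paper's contribution is to generalize such rules across workspaces with an inductive decision tree, which this sketch does not attempt.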
Copyright information
© 1994 Springer-Verlag Tokyo
About this paper
Cite this paper
Naruse, K., Kakazu, Y. (1994). Rule Generation and Generalization by Inductive Decision Tree and Reinforcement Learning. In: Asama, H., Fukuda, T., Arai, T., Endo, I. (eds) Distributed Autonomous Robotic Systems. Springer, Tokyo. https://doi.org/10.1007/978-4-431-68275-2_9
Publisher Name: Springer, Tokyo
Print ISBN: 978-4-431-68277-6
Online ISBN: 978-4-431-68275-2
eBook Packages: Springer Book Archive