
Design of Adaptive Self-Navigated Airship in Simulated Environment

Chapter in Operations Research/Management Science at Work

Abstract

The ultimate goal of this research is to realize a small airship robot that can accomplish a given task autonomously. The airship is subject to strong inertial forces and air resistance. Although reinforcement learning methods could be expected to control a small airship, the airship's instability has prevented such methods from achieving control of it.

In designing an automatically controlled airship, sensory information is especially important. We assume the use of ultrasonic transducers, which have been widely adopted as a cheap and lightweight way to provide mobile robots with accurate range finding. This paper examines how the control performance of the airship differs across a variety of sensory setups. On the assumption that ultrasonic transducers will be used later, we simulated a small airship controlled by the Cerebellar Model Articulation Controller (CMAC), a reinforcement learning method able to deal with generalization problems.
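As a rough illustration of how a CMAC generalizes, the sketch below implements tile coding over a one-dimensional state: several offset grids cover the state space, a query activates one tile per grid, and the prediction is the sum of the active tiles' weights. All names and parameters here (number of tilings, tile resolution, step size) are illustrative assumptions, not values taken from the chapter:

```python
class CMAC:
    """Minimal CMAC (tile-coding) function approximator for a 1-D state.

    Several overlapping grids ("tilings") cover [low, high]; each tiling is
    staggered by a fraction of one tile width.  A query activates exactly one
    tile per tiling, and the output is the sum of the active tiles' weights,
    so nearby inputs share tiles and hence generalize to one another.
    """

    def __init__(self, n_tilings=8, tiles_per_dim=10, low=0.0, high=1.0, alpha=0.1):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.low, self.high = low, high
        self.alpha = alpha / n_tilings  # split the step size across tilings
        # one weight table per tiling (+1 tile to absorb the stagger offset)
        self.weights = [[0.0] * (tiles_per_dim + 1) for _ in range(n_tilings)]

    def _active_tiles(self, x):
        """Yield (tiling index, tile index) for the one active tile per tiling."""
        width = (self.high - self.low) / self.tiles_per_dim
        for t in range(self.n_tilings):
            offset = (t / self.n_tilings) * width  # stagger each tiling
            idx = int((x - self.low + offset) / width)
            yield t, min(max(idx, 0), self.tiles_per_dim)

    def predict(self, x):
        return sum(self.weights[t][i] for t, i in self._active_tiles(x))

    def update(self, x, target):
        """Move the prediction at x toward `target` (e.g. a TD target)."""
        error = target - self.predict(x)
        for t, i in self._active_tiles(x):
            self.weights[t][i] += self.alpha * error


cmac = CMAC()
for _ in range(50):
    cmac.update(0.3, 1.0)       # repeatedly train one state toward value 1.0
print(cmac.predict(0.3))        # close to 1.0
print(cmac.predict(0.9))        # distant state shares no tiles: stays near 0
```

Because each update spreads its correction over every tile the input activates, states that fall in overlapping tiles are adjusted together; this is the generalization property the abstract refers to.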

The experimental results showed that learning performance was not always proportional to the amount of sensory information, and that different behaviors were acquired depending on the sensory setup.



Editor information

Erhan Kozan, Azuma Ohuchi


Copyright information

© 2002 Springer Science+Business Media New York

About this chapter

Cite this chapter

Motoyama, K., Suzuki, K., Kawamura, H., Yamamoto, M., Ohuchi, A. (2002). Design of Adaptive Self-Navigated Airship in Simulated Environment. In: Kozan, E., Ohuchi, A. (eds) Operations Research/Management Science at Work. International Series in Operations Research & Management Science, vol 43. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-0819-9_13


  • DOI: https://doi.org/10.1007/978-1-4615-0819-9_13

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-5254-9

  • Online ISBN: 978-1-4615-0819-9

  • eBook Packages: Springer Book Archive
