On Refractory Parameter of Chaotic Neurons in Incremental Learning

  • Conference paper
Knowledge-Based Intelligent Information and Engineering Systems (KES 2004)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3214)

Abstract

This paper develops incremental learning with chaotic neurons, a method that was called “on-demand learning” when it was first proposed. Incremental learning unites the learning process and the recall process of an associative memory. The method exploits two features of the chaotic neuron model first developed by Prof. Aihara: the spatio-temporal summation of inputs and the refractoriness of neurons. Owing to the temporal summation of inputs, the network can learn from noisy inputs. However, it is not obvious that refractoriness is needed for incremental learning. In this paper, computer simulations investigate the role that refractoriness plays in incremental learning. The results show that refractoriness is an essential factor, but that excessively strong refractoriness causes patterns to fail to be learned.
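
For readers unfamiliar with the model, the sketch below illustrates an Aihara-style chaotic neuron update in Python. It is not taken from the paper: the split into three internal states, the parameter names (kf, kr, ke, alpha, a, eps), and their default values are illustrative assumptions about this family of models, in which alpha scales the refractory term whose strength the paper's simulations vary.

    import numpy as np

    def chaotic_network_step(x, eta, zeta, xi, W, A,
                             kf=0.2, kr=0.9, ke=0.0,
                             alpha=1.0, a=0.0, eps=0.015):
        # One synchronous update of an Aihara-style chaotic neural network
        # (illustrative sketch; parameter names and values are assumptions).
        # x    : current outputs of the N neurons (values in [0, 1])
        # eta  : internal state accumulating mutual (feedback) inputs
        # zeta : internal state accumulating refractoriness
        # xi   : internal state accumulating external inputs
        # W    : N x N connection-weight matrix, A : external input vector
        eta = kf * eta + W @ x               # temporal sum of mutual inputs
        zeta = kr * zeta - alpha * x + a     # refractory (self-inhibitory) term
        xi = ke * xi + A                     # temporal sum of external inputs
        y = eta + zeta + xi                  # total internal potential
        x_next = 1.0 / (1.0 + np.exp(-y / eps))   # sigmoid output
        return x_next, eta, zeta, xi

In this kind of formulation, raising alpha (or slowing its decay kr) strengthens the self-inhibition of a recently firing neuron; the abstract's finding that overly strong refractoriness prevents learning corresponds to this term dominating the internal potential.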

References

  1. Asakawa, S., Deguchi, T., Ishii, N.: On-Demand Learning in Neural Network. In: Proc. of the ACIS 2nd Intl. Conf. on Software Engineering, Artificial Intelligence, Networking & Parallel/Distributed Computing, pp. 84–89 (2001)

  2. Watanabe, M., Aihara, K., Kondo, S.: Automatic learning in chaotic neural networks. In: Proc. of 1994 IEEE Symposium on Emerging Technologies and Factory Automation, pp. 245–248 (1994)

  3. Aihara, K., Takabe, T., Toyoda, M.: Chaotic neural networks. Phys. Lett. A 144(6–7), 333–340 (1990)

  4. Osana, Y., Hagiwara, M.: Successive learning in hetero-associative memories using chaotic neural networks. Intl. Journal of Neural Systems 9(4), 285–299 (1999)

  5. Deguchi, T., Ishii, N.: Simulation results on the rate of success in chaotic search of patterns in neural networks. Intl. Journal of Chaos Theory and Applications 2(1), 47–57 (1997)

  6. Deguchi, T., Ishii, N.: Search of general patterns in the chaotic neural network by using pattern translation. Intl. Journal of Knowledge-Based Intelligent Engineering Systems 3(4), 205–214 (1999)

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Deguchi, T., Ishii, N. (2004). On Refractory Parameter of Chaotic Neurons in Incremental Learning. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2004. Lecture Notes in Computer Science (LNAI), vol 3214. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30133-2_14

  • DOI: https://doi.org/10.1007/978-3-540-30133-2_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23206-3

  • Online ISBN: 978-3-540-30133-2

  • eBook Packages: Springer Book Archive
