On Appropriate Refractoriness and Weight Increment in Incremental Learning

  • Conference paper
Adaptive and Natural Computing Algorithms (ICANNGA 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7824)

Abstract

Neural networks can learn more patterns with incremental learning than with correlative learning. Incremental learning is a method for composing an associative memory using a chaotic neural network. The capacity of such a network has been found to grow with its size, that is, the number of neurons, and to exceed the capacity obtained with correlative learning. In earlier work, the capacity grew more than in direct proportion to the network size for suitable pairs of the refractory parameter and the learning parameter. In this paper, these two parameters are investigated through computer simulations in which both are varied. The simulations show that the appropriate parameter pairs lie near the origin of the parameter plane and that an approximate relation holds between them.
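
As a rough illustration of where these two parameters enter the model, the sketch below follows the Aihara-style chaotic neuron [4] and the incremental (automatic) learning idea of [2, 3, 5]: each stored pattern is applied as an external input, and connection weights are incremented by the learning parameter whenever a neuron's recalled internal state conflicts with its external input. This is a minimal sketch under simplifying assumptions; the decay constants k_f and k_r, the output function, the conflict test, and the specific parameter values are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

# Illustrative network and parameter values (assumptions, not from the paper)
N = 100          # number of neurons (network size)
k_f = 0.5        # decay of the internal state from mutual connections
k_r = 0.8        # decay of the refractory internal state
alpha = 0.5      # refractory parameter (strength of refractoriness)
delta_w = 0.05   # learning parameter (weight increment)
epsilon = 0.015  # steepness of the output function

def f(u):
    # Continuous output function of the chaotic neuron
    return np.tanh(u / epsilon)

def incremental_learning(patterns, steps_per_pattern=50):
    """Present each bipolar (+1/-1) pattern as external input and increment
    weights whenever a neuron's feedback internal state disagrees in sign
    with its external input (a simplified incremental-learning rule)."""
    W = np.zeros((N, N))
    eta = np.zeros(N)    # internal state from mutual connections
    zeta = np.zeros(N)   # internal state from refractoriness
    x = np.zeros(N)      # neuron outputs
    for p in patterns:
        for _ in range(steps_per_pattern):
            eta = k_f * eta + W @ x
            zeta = k_r * zeta - alpha * x
            x = f(eta + zeta + p)  # external input applied additively
            # Increment weights of neurons whose recall conflicts with the input
            conflict = np.sign(eta) != np.sign(p)
            W[conflict, :] += delta_w * np.outer(p, p)[conflict, :]
            np.fill_diagonal(W, 0.0)
    return W

# Example: store 20 random bipolar patterns in a 100-neuron network
rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((20, N)))
W = incremental_learning(patterns)
```

In this sketch, alpha plays the role of the refractory parameter and delta_w the weight increment, which correspond, under these assumptions, to the parameter pair varied in the paper's simulations.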


References

  1. Asakawa, S., Deguchi, T., Ishii, N.: On-Demand Learning in Neural Network. In: Proc. of the ACIS 2nd Intl. Conf. on Software Engineering, Artificial Intelligence, Networking & Parallel/Distributed Computing, pp. 84–89 (2001)

  2. Deguchi, T., Ishii, N.: On Refractory Parameter of Chaotic Neurons in Incremental Learning. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds.) KES 2004, Part II. LNCS (LNAI), vol. 3214, pp. 103–109. Springer, Heidelberg (2004)

  3. Watanabe, M., Aihara, K., Kondo, S.: Automatic learning in chaotic neural networks. In: Proc. of 1994 IEEE Symposium on Emerging Technologies and Factory Automation, pp. 245–248 (1994)

  4. Aihara, K., Tanabe, T., Toyoda, M.: Chaotic neural networks. Phys. Lett. A 144(6,7), 333–340 (1990)

  5. Deguchi, T., Matsuno, K., Ishii, N.: On Capacity of Memory in Chaotic Neural Networks with Incremental Learning. In: Lovrek, I., Howlett, R.J., Jain, L.C. (eds.) KES 2008, Part II. LNCS (LNAI), vol. 5178, pp. 919–925. Springer, Heidelberg (2008)

  6. Deguchi, T., Matsuno, K., Kimura, T., Ishii, N.: Error Correction Capability in Chaotic Neural Networks. In: 21st IEEE International Conference on Tools with Artificial Intelligence, Newark, New Jersey, USA, pp. 687–692 (2009)

  7. Matsuno, K., Deguchi, T., Ishii, N.: On Influence of Refractory Parameter in Incremental Learning. In: Lee, R. (ed.) Computer and Information Science 2010. SCI, vol. 317, pp. 13–21. Springer, Heidelberg (2010)


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Deguchi, T., Fukuta, J., Ishii, N. (2013). On Appropriate Refractoriness and Weight Increment in Incremental Learning. In: Tomassini, M., Antonioni, A., Daolio, F., Buesser, P. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2013. Lecture Notes in Computer Science, vol 7824. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37213-1_1

  • DOI: https://doi.org/10.1007/978-3-642-37213-1_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37212-4

  • Online ISBN: 978-3-642-37213-1

  • eBook Packages: Computer Science, Computer Science (R0)
