The weights initialization methodology of unsupervised neural networks to improve clustering stability

  • Seongchul Park
  • Sanghyun Seo
  • Changhoon Jeong
  • Juntae Kim

Abstract

Research on the initialization of the connection weights of neural networks is increasingly needed because deep neural networks based on deep learning have recently attracted much attention. However, studies on the relation between the connection weight values, the output values of the activation function, and the learning performance of the network have been conducted mainly on supervised learning models. This paper focuses on improving the efficiency of unsupervised neural network models by studying connection weight initialization, which has so far been examined mostly for supervised models. Adaptive resonance theory (ART) is a major unsupervised neural network model that tries to resolve the stability–plasticity dilemma by using bottom-up weights and top-down weights. The conventional weight initialization method for ART sets all weights uniformly; the proposed method instead initializes them with pre-trained weights. Experiments show that an ART network whose connection weights are initialized by the proposed method performs clustering more stably.
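The proposed change is easiest to see next to ART-1's conventional scheme, where every bottom-up weight starts at L/(L-1+n) and every top-down weight at 1. The sketch below contrasts that with initialization from pre-trained weights; it is a minimal illustration in Python, and the normalization of the pre-trained matrix, the choice of L = 2, and the function names are assumptions, since the abstract does not specify the exact transfer procedure.

    import numpy as np

    def init_art_weights_uniform(n_inputs, n_categories, L=2.0):
        # Conventional ART-1 initialization: every category starts identical.
        # Bottom-up: b_ij = L / (L - 1 + n); top-down templates: t_ji = 1.
        bottom_up = np.full((n_categories, n_inputs), L / (L - 1.0 + n_inputs))
        top_down = np.ones((n_categories, n_inputs))
        return bottom_up, top_down

    def init_art_weights_pretrained(pretrained, L=2.0):
        # Proposed-style initialization (sketch): seed ART with weights
        # learned on related data, e.g. by a SOM or an autoencoder.
        # `pretrained` is assumed to be a (n_categories, n_inputs) matrix.
        p = np.clip(pretrained, 0.0, 1.0)
        n_inputs = p.shape[1]
        # Keep bottom-up values in the usual ART-1 range so that no
        # single category dominates the competition before learning.
        bottom_up = p * (L / (L - 1.0 + n_inputs))
        # Top-down templates start near 1 but are biased toward the
        # pre-trained pattern instead of being all-ones.
        top_down = 0.5 + 0.5 * p
        return bottom_up, top_down

Starting the category templates from structure already present in related data, rather than from identical uniform values, is what the abstract credits for the more stable cluster assignments.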

Keywords

Unsupervised neural network · Transfer learning · Weights initialization · Adaptive resonance theory · Self-organizing map

Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07049988) and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C4A7083279).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Dongguk University, Seoul, Korea