Hidden Node Pruning of Multilayer Perceptrons Based on Redundancy Reduction

  • Sang-Hoon Oh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6935)

Abstract

Among the many approaches to choosing a proper size for a neural network, one popular strategy is to start with an oversized network and then prune it to a smaller size, so as to attain better performance at lower computational cost. In this paper, a new hidden node pruning method is proposed based on redundancy reduction among hidden nodes. Redundancy is measured by the correlation coefficients among hidden-node activations, which keeps the computational cost of the pruning procedure itself low. Experimental results demonstrate the effectiveness of the proposed method.
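The paper's exact pruning and compensation rules are not reproduced on this page; the following is a minimal sketch of correlation-based redundancy pruning under stated assumptions. The function name prune_redundant_hidden_nodes, the threshold parameter, and the weight-folding step are illustrative choices, not the authors' implementation: when two hidden nodes are nearly linearly related, one is removed and its output-layer contribution is folded into the other so the network output stays approximately unchanged.

```python
import numpy as np

def prune_redundant_hidden_nodes(H, W_out, b_out, threshold=0.95):
    """Prune hidden nodes whose activations are highly correlated.

    H         : (n_samples, n_hidden) hidden-node activations over the training set
    W_out     : (n_hidden, n_outputs) hidden-to-output weights
    b_out     : (n_outputs,) output biases
    threshold : |correlation| above which a node is treated as redundant (assumed value)

    Returns the indices of kept nodes, the compensated output weights, and biases.
    """
    C = np.corrcoef(H, rowvar=False)   # pairwise correlation coefficients among hidden nodes
    keep = list(range(H.shape[1]))
    W, b = W_out.copy(), b_out.copy()
    i = 0
    while i < len(keep):
        j = i + 1
        while j < len(keep):
            a, r = keep[i], keep[j]
            if abs(C[a, r]) >= threshold:
                # Node r is nearly a linear function of node a:
                #   h_r ≈ slope * h_a + intercept.
                # Fold its output contribution into node a and the output
                # bias, then drop it (assumes non-constant activations).
                slope = C[a, r] * H[:, r].std() / H[:, a].std()
                intercept = H[:, r].mean() - slope * H[:, a].mean()
                W[a, :] += slope * W[r, :]
                b += intercept * W[r, :]
                keep.pop(j)
            else:
                j += 1
        i += 1
    return keep, W[keep, :], b
```

In this sketch, because correlations are computed once from stored activations, the cost of deciding which nodes to prune is small compared with sensitivity-based criteria that require extra forward or backward passes; this mirrors the computational-saving argument in the abstract.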

Keywords

Multilayer perceptron · Hidden node pruning · Redundancy reduction

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Sang-Hoon Oh
    1. Mokwon University, Daejeon, Korea