Abstract
Using entropy as a cost function in the neural network learning phase usually implies that, in the back-propagation algorithm, training is done in batch mode. Besides the higher computational complexity of the batch mode, this approach is known to have some limitations compared with the sequential mode. In this paper we present a way of combining both modes when entropic criteria are used. We report experiments that validate the proposed method and compare it with the pure batch mode algorithm.
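Entropic criteria need a whole set of errors at once (the Parzen-window entropy estimate is computed over pairs of error samples), which is why training is normally done in batch mode; a batch-sequential scheme instead shuffles the training set each epoch, splits it into subsets, and performs one batch update per subset. The paper's exact algorithm is not reproduced here; the following is only a minimal sketch under those assumptions, with hypothetical names (`information_potential`, `batch_sequential_epochs`, `update_fn`) and Rényi's quadratic entropy estimated via a Gaussian Parzen window with smoothing parameter `h`:

```python
import numpy as np

def information_potential(errors, h):
    # Parzen-window estimate of the information potential V over a batch
    # of error samples (shape: n_samples x n_outputs). Maximizing V is
    # equivalent to minimizing Renyi's quadratic entropy H2 = -log V,
    # since V is the estimate of E[p(e)].
    n, d = errors.shape
    diffs = errors[:, None, :] - errors[None, :, :]          # pairwise differences
    k = np.exp(-np.sum(diffs ** 2, axis=2) / (4 * h ** 2))   # Gaussian kernel values
    norm = (4 * np.pi * h ** 2) ** (d / 2)                   # kernel normalization
    return k.sum() / (n ** 2 * norm)

def batch_sequential_epochs(x, y, update_fn, n_subsets=4, n_epochs=10, rng=None):
    # Batch-sequential loop: each epoch, shuffle the training set, split it
    # into n_subsets subsets, and perform one batch update per subset
    # (sequential over subsets, batch within each subset).
    rng = rng or np.random.default_rng(0)
    n = len(x)
    for _ in range(n_epochs):
        order = rng.permutation(n)
        for idx in np.array_split(order, n_subsets):
            update_fn(x[idx], y[idx])   # e.g. gradient step on -log V
```

Here `update_fn` stands in for whatever entropic gradient step the network uses; the point of the loop is only that the pairwise entropy estimate is evaluated inside each subset, which keeps its quadratic cost bounded while retaining the more frequent weight updates of sequential training.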
This work was supported by the Portuguese Fundação para a Ciência e Tecnologia (project POSI/EIA/56918/2004).
An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163 .
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Santos, J.M., de Sá, J.M., Alexandre, L.A. (2005). Batch-Sequential Algorithm for Neural Networks Trained with Entropic Criteria. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_15
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28755-1
Online ISBN: 978-3-540-28756-8