
Neural Network Classification: Maximizing Zero-Error Density

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 3686)

Abstract

We propose a new cost function for neural network classification: the error density at the origin. This cost yields a simple objective function that can be easily plugged into the usual backpropagation algorithm, giving a simple and efficient learning scheme. Experimental work shows the effectiveness and superiority of the proposed method over the usual mean square error criterion on four well-known datasets.

This work was supported by the Portuguese FCT-Fundação para a Ciência e a Tecnologia (project POSI/EIA/56918/2004). The first author is also supported by FCT grant SFRH/BD/16916/2004.
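To make the abstract's criterion concrete: with errors e_i = t_i - y_i, the method maximizes a Parzen-window estimate of the error density evaluated at the origin, f̂(0) = (1/N) Σ_i K_h(e_i), so that probability mass is pushed toward zero error. Below is a minimal NumPy sketch of one gradient-ascent step of such a scheme for a one-hidden-layer MLP. The Gaussian kernel, the tanh units, the bandwidth h and the learning rate eta are illustrative assumptions, not details taken from the paper.

import numpy as np

def zero_error_density(E, h):
    # Parzen-window estimate of the error density at the origin,
    # f_hat(0) = (1/N) * sum_i G(e_i; 0, h^2 I), with a Gaussian kernel.
    N, K = E.shape
    norm = N * (h * np.sqrt(2.0 * np.pi)) ** K
    return np.exp(-np.sum(E ** 2, axis=1) / (2.0 * h ** 2)).sum() / norm

def zedm_step(params, X, T, h=1.0, eta=0.1):
    # One gradient-ascent step on f_hat(0) for a tanh MLP (assumed
    # architecture; targets T coded as +/-1 to match tanh outputs).
    W1, b1, W2, b2 = params
    N, K = T.shape
    H = np.tanh(X @ W1 + b1)           # hidden layer
    Y = np.tanh(H @ W2 + b2)           # output layer
    E = T - Y                          # errors e_i = t_i - y_i
    # Chain rule: dF/dy_i = kernel(e_i) * e_i / (N h^2 (h sqrt(2 pi))^K),
    # so each example is weighted by its own kernel value.
    kern = np.exp(-np.sum(E ** 2, axis=1, keepdims=True) / (2.0 * h ** 2))
    dY = kern * E / (N * h ** 2 * (h * np.sqrt(2.0 * np.pi)) ** K)
    # standard backpropagation through the tanh nonlinearities
    dZ2 = dY * (1.0 - Y ** 2)
    dZ1 = (dZ2 @ W2.T) * (1.0 - H ** 2)
    # ascent (the density at zero is maximized), hence '+=' not '-='
    W2 += eta * H.T @ dZ2; b2 += eta * dZ2.sum(axis=0)
    W1 += eta * X.T @ dZ1; b1 += eta * dZ1.sum(axis=0)
    return zero_error_density(E, h)

One property visible in the sketch: each example's gradient contribution is scaled by exp(-||e_i||^2 / 2h^2), so samples with very large errors influence the weights only weakly, which suggests less sensitivity to outliers than the mean square error criterion.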




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Silva, L.M., Alexandre, L.A., de Sá, J.M. (2005). Neural Network Classification: Maximizing Zero-Error Density. In: Singh, S., Singh, M., Apte, C., Perner, P. (eds) Pattern Recognition and Data Mining. ICAPR 2005. Lecture Notes in Computer Science, vol 3686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11551188_14


  • DOI: https://doi.org/10.1007/11551188_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28757-5

  • Online ISBN: 978-3-540-28758-2

  • eBook Packages: Computer Science (R0)
