
Abstract

The paper describes a new concept for training neural networks, realized through a modification of a classical architecture. The idea underlying this new architecture is borrowed from machine learning, namely that defining a concept requires both positive and negative examples. Neural networks are typically trained only with positive examples, and new examples are then recognized through the learned model. Training neural networks with negative examples as well aims at suppressing the features specific to those examples and thereby at obtaining better recognition of the positive examples. The architecture developed through this method is generic and can be applied to any type of neural network. For simplicity, and in order to obtain immediate results, a multilayer perceptron was chosen for testing. The results achieved with this network are encouraging and open new possibilities for future study.
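The abstract does not give the training details, but the general idea of including negative examples can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes a small multilayer perceptron with sigmoid outputs trained by gradient descent on squared error, where positive examples get a one-hot target and negative examples get an all-zero target, so that every output unit is pushed down on the negative region. All data and network sizes here are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "positive" examples belong to class 0 or class 1;
# "negative" examples belong to neither class (all-zero target).
X_pos = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
Y_pos = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
X_neg = np.array([[0.5, 0.5], [0.45, 0.55]])
Y_neg = np.zeros((2, 2))          # negatives: push every output toward 0

X = np.vstack([X_pos, X_neg])
Y = np.vstack([Y_pos, Y_neg])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, plain full-batch gradient descent on squared error.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

lr = 0.5
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)          # hidden activations
    O = sigmoid(H @ W2 + b2)          # output activations
    dO = (O - Y) * O * (1 - O)        # squared-error gradient at the output
    dH = (dO @ W2.T) * H * (1 - H)    # backpropagated to the hidden layer
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

def predict(x):
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

print(predict(np.array([1.0, 0.0])))   # strong response for class 0
print(predict(np.array([0.5, 0.5])))   # suppressed on the negative region
```

Because the negatives are trained toward an all-zero output, the network learns not only where the positive classes are but also where they are not, which is the intuition behind using negative examples to sharpen recognition of the positives.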




Copyright information

© 2008 Springer Science+Business Media B.V.


Cite this paper

Cernăzanu-Glăvan, C., Holban, Ş. (2008). Improving Neural Network Performances - Training with Negative Examples. In: Sobh, T., Elleithy, K., Mahmood, A., Karim, M.A. (eds) Novel Algorithms and Techniques In Telecommunications, Automation and Industrial Electronics. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-8737-0_10


  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-8736-3

  • Online ISBN: 978-1-4020-8737-0

  • eBook Packages: Engineering (R0)
