
Negative Correlation Learning with Difference Learning

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 575)

Abstract

In order to fit a given data set, a learning system often has to learn too much on some data points in order to learn the rest of the data well. Such unnecessary learning can lead to both higher complexity and overfitting in the learning system. To control the complexity of neural network ensembles, difference learning is introduced into negative correlation learning. The idea of difference learning is to let each individual in an ensemble learn to be different from the ensemble on selected data points whenever the outputs of the ensemble are too close to the target values at those points. It has been found that such difference learning can control not only overfitting in an ensemble, but also weakness among the individuals in the ensemble. Experiments were conducted to show how such difference learning can create rather weak learners in negative correlation learning.
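To make the mechanism concrete, the sketch below shows one way such a rule could sit on top of the standard negative correlation learning gradient for an ensemble of M networks with a simple-average output. The closeness threshold `eps` and the exact form of the difference-learning update are illustrative assumptions; the abstract above does not specify them.

```python
import numpy as np

def ncl_difference_gradients(outputs, target, lam=0.5, eps=0.05):
    """Gradients of the error w.r.t. each individual's output on one point.

    outputs -- np.ndarray of shape (M,): outputs f_i of the M individuals
    target  -- float: target value y for this data point
    lam     -- negative-correlation penalty strength (lambda)
    eps     -- assumed closeness threshold that triggers difference learning
    """
    f_bar = outputs.mean()  # simple-average ensemble output

    if abs(f_bar - target) < eps:
        # Difference learning (assumed form): the ensemble is already too
        # close to the target on this point, so each individual instead
        # learns to be different from the ensemble, i.e. it descends
        # -(f_i - f_bar)^2 / 2, which pushes f_i away from f_bar.
        return -(outputs - f_bar)

    # Standard negative correlation learning gradient of
    # E_i = (f_i - y)^2 / 2 - lam * (f_i - f_bar)^2 / 2.
    return (outputs - target) - lam * (outputs - f_bar)

# Toy usage with three individuals on a single data point.
outputs = np.array([0.9, 1.1, 1.0])
print(ncl_difference_gradients(outputs, target=1.0))  # ensemble fits: push individuals apart
print(ncl_difference_gradients(outputs, target=2.0))  # ensemble is off: ordinary NCL step
```

On the first call the ensemble average already matches the target, so the gradients move the individuals away from the ensemble; on the second, training proceeds as ordinary negative correlation learning toward the target.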



Author information

Corresponding author

Correspondence to Yong Liu.



Copyright information

© 2016 Springer Science+Business Media Singapore

About this paper

Cite this paper

Liu, Y. (2016). Negative Correlation Learning with Difference Learning. In: Li, K., Li, J., Liu, Y., Castiglione, A. (eds) Computational Intelligence and Intelligent Systems. ISICA 2015. Communications in Computer and Information Science, vol 575. Springer, Singapore. https://doi.org/10.1007/978-981-10-0356-1_27


  • DOI: https://doi.org/10.1007/978-981-10-0356-1_27


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-0355-4

  • Online ISBN: 978-981-10-0356-1

  • eBook Packages: Computer Science, Computer Science (R0)
