Improving Neural Networks Classification through Chaining

  • Khobaib Zaamout
  • John Z. Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7553)

Abstract

We present a new ensemble technique, chaining neural networks, as part of our effort to improve neural network classification. We show that feeding the predictions of one neural network as additional input to another neural network trained on the same dataset improves classification. We propose two variations of this approach: single-link and multi-link chaining. Both variations incorporate the predictions of previously trained neural networks into the construction and training of a new network, and retain those trained networks for use at prediction time. In this initial work, the effectiveness of our proposed approach is demonstrated through a series of experiments on real and synthetic datasets.
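The single-link chaining idea described above can be sketched as follows. This is an illustrative reconstruction under assumptions, not the authors' implementation: a first network's class-probability outputs are appended to the raw features, and a second network is trained on the augmented input. The dataset, network sizes, and scikit-learn API choices here are illustrative.

```python
# Sketch of single-link chaining: net2 is trained on the original
# features plus net1's predictions over the same dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the paper's datasets.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First network: trained on the raw features only.
net1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net1.fit(X_tr, y_tr)

# Chained network: raw features augmented with net1's class probabilities.
# At prediction time, net1 must be kept around to produce these extra inputs.
X_tr_chain = np.hstack([X_tr, net1.predict_proba(X_tr)])
X_te_chain = np.hstack([X_te, net1.predict_proba(X_te)])
net2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net2.fit(X_tr_chain, y_tr)

print(f"base net accuracy:    {net1.score(X_te, y_te):.3f}")
print(f"chained net accuracy: {net2.score(X_te_chain, y_te):.3f}")
```

Multi-link chaining would extend this by stacking the predictions of several trained networks into the augmented input rather than just one.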

Keywords

Neural networks, classification, ensemble, chaining

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Khobaib Zaamout
  • John Z. Zhang

  Department of Mathematics and Computer Science, University of Lethbridge, Lethbridge, Canada
