Part of the book series: Operations Research/Computer Science Interfaces Series (ORCS, volume 8)

Abstract

The N-Tuple Neural Network (NTNN) is a fast, efficient memory-based neural network capable of performing non-linear function approximation and pattern classification. The random nature of the N-tuple sampling of the input vectors makes precise analysis difficult. Here, the NTNN is considered within the unifying framework of the General Memory Neural Network (GMNN), a family of networks that includes such important types as radial basis function networks. Discussing the NTNN within this framework gives a clearer understanding of its operation and of how to apply it efficiently. The nature of the intrinsic tuple distances, and of the resultant kernel, is also discussed, together with techniques for handling non-binary input patterns. An example of a tuple-based network, which is a simple extension of the conventional NTNN, is shown to yield the best estimate of the underlying regression function, E(Y|x), for a finite training set. Finally, the pattern classification capabilities of the NTNN are considered.
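The chapter itself develops this estimator formally; as a purely illustrative sketch (not the authors' implementation), the Python fragment below shows how a tuple-based memory network of this kind can be realised: each tuple samples a fixed random set of bit positions, the sampled bits address a memory bank, and recall returns a ratio-of-sums estimate of E(Y|x) whose implicit kernel is the number of tuples on which two patterns agree. The class name NTupleRegressor and all parameter choices (number of tuples, tuple size) are hypothetical.

```python
import numpy as np

class NTupleRegressor:
    """Illustrative sketch of a tuple-based regression network.

    Each of the m tuples samples k randomly chosen bit positions of a
    binary (0/1 integer) input vector and uses those bits as an address
    into its own memory bank. Training accumulates (sum of targets,
    visit count) at each addressed cell; recall pools these over all
    tuples and returns their ratio, a kernel estimate of E(Y|x).
    """

    def __init__(self, n_bits, m_tuples=50, k=4, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random sampling of input bit positions, one row per tuple.
        self.maps = np.stack([rng.choice(n_bits, size=k, replace=False)
                              for _ in range(m_tuples)])
        self.weights = np.array([1 << j for j in range(k)])  # bits -> address
        self.y_sum = np.zeros((m_tuples, 1 << k))  # accumulated targets
        self.count = np.zeros((m_tuples, 1 << k))  # visit counts

    def _addresses(self, x):
        # The k sampled bits of x form each tuple's memory address.
        return (x[self.maps] * self.weights).sum(axis=1)

    def fit(self, X, y):
        rows = np.arange(len(self.maps))
        for xi, yi in zip(X, y):
            a = self._addresses(xi)
            self.y_sum[rows, a] += yi
            self.count[rows, a] += 1

    def predict(self, x):
        rows = np.arange(len(self.maps))
        a = self._addresses(x)
        c = self.count[rows, a].sum()
        return self.y_sum[rows, a].sum() / c if c > 0 else 0.0
```

In this sketch, the pooled count in the denominator equals the sum over stored patterns of the number of tuples they share with the query, so the prediction is a weighted average of training targets with the tuple-match count playing the kernel role the abstract describes.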

Copyright information

© 1997 Springer Science+Business Media New York

About this chapter

Cite this chapter

Allinson, N.M., Kolcz, A.R. (1997). N-Tuple Neural Networks. In: Ellacott, S.W., Mason, J.C., Anderson, I.J. (eds) Mathematics of Neural Networks. Operations Research/Computer Science Interfaces Series, vol 8. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-6099-9_1

  • DOI: https://doi.org/10.1007/978-1-4615-6099-9_1

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7794-8

  • Online ISBN: 978-1-4615-6099-9

  • eBook Packages: Springer Book Archive
