
The Concept and Properties of Sigma-if Neural Network

  • Conference paper
Adaptive and Natural Computing Algorithms

Abstract

Our recent work on artificial neural networks points to the possibility of extending the activation function of the standard artificial neuron model with a conditional signal accumulation technique, thus significantly enhancing the capabilities of neural networks. We present a new artificial neuron model, called Sigma-if, which can dynamically tune the size of the decision space under consideration thanks to a novel activation function. The paper discusses the construction of the proposed neuron as well as the training of Sigma-if feedforward neural networks on well-known sample classification problems.
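The abstract does not spell out the accumulation mechanism itself. The sketch below is a minimal illustration of one plausible reading of conditional signal accumulation, assuming the weighted inputs are split into importance-ordered groups and aggregation stops once the partial sum reaches a threshold; the function name, grouping, and all parameter values are illustrative assumptions, not the authors' definition.

    import numpy as np

    def sigma_if_activation(x, w, groups, threshold, beta=1.0):
        """Illustrative conditional signal accumulation for a Sigma-if-style neuron.

        x         -- input vector
        w         -- weight vector (same length as x)
        groups    -- list of index arrays, ordered from most to least important (assumed)
        threshold -- partial-activation level at which accumulation stops early (assumed)
        beta      -- sigmoid steepness (illustrative parameter)
        """
        u = 0.0
        for idx in groups:                      # scan input groups in importance order
            u += float(np.dot(w[idx], x[idx]))  # accumulate this group's weighted inputs
            if abs(u) >= threshold:             # conditional stop: remaining inputs are ignored,
                break                           # shrinking the decision space actually considered
        return 1.0 / (1.0 + np.exp(-beta * u))  # standard sigmoid applied to the partial sum

    # usage example (illustrative values)
    x = np.array([0.9, 0.1, 0.4, 0.7])
    w = np.array([1.2, -0.5, 0.3, 0.8])
    groups = [np.array([0, 3]), np.array([1, 2])]   # two importance-ordered groups
    y = sigma_if_activation(x, w, groups, threshold=1.0)

Under this reading, the early break is what lets the neuron ignore low-importance inputs when the accumulated signal is already decisive, which is one way to interpret the claim about dynamically tuning the size of the decision space.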




Copyright information

© 2005 Springer-Verlag/Wien

About this paper

Cite this paper

Huk, M., Kwasnicka, H. (2005). The Concept and Properties of Sigma-if Neural Network. In: Ribeiro, B., Albrecht, R.F., Dobnikar, A., Pearson, D.W., Steele, N.C. (eds) Adaptive and Natural Computing Algorithms. Springer, Vienna. https://doi.org/10.1007/3-211-27389-1_4

  • DOI: https://doi.org/10.1007/3-211-27389-1_4

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-24934-5

  • Online ISBN: 978-3-211-27389-0
