
Method Enabling the First Hidden Layer of Multilayer Perceptrons to Make Division of Space with Various Hypercurves

  • Conference paper
In: Artificial Intelligence and Soft Computing (ICAISC 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9692)


Abstract

In this paper, a method for increasing the number of multilayer perceptron inputs is proposed. Three kinds of additional input variables have been tested. They enable neurons in the first layer of multilayer perceptrons to separate the data with hypercurves of various shapes in two selected dimensions. With the additional inputs, single neurons in the first hidden layer are capable of solving some non-linearly separable problems, e.g. the XOR function. Depending on the weight values of these neurons, they may, in some dimensions, realise transformations similar to those of neurons in the hidden layer of RBF networks, or separate the data with hyperplanes or hyperparabolas. The proposed procedure does not require implementing a new network training algorithm from scratch. Classification results on three very popular UCI benchmarks, which contain real-world data, are presented.
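The abstract's claim that a single first-hidden-layer neuron can solve XOR once extra inputs are added can be illustrated with a minimal sketch. The specific extra variable used here, the product x1*x2, is an assumption for illustration; the paper proposes three kinds of additional input variables whose exact definitions are not given in this abstract.

```python
import numpy as np

def augment(X):
    """Append the pairwise product x1*x2 as an extra input column.

    This is one plausible choice of additional input variable; the
    paper's exact variables may differ.
    """
    return np.hstack([X, (X[:, 0] * X[:, 1])[:, None]])

def neuron(X_aug, w, b):
    """A single perceptron-style neuron with a step activation."""
    return (X_aug @ w + b > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

# Hand-picked weights: the boundary x1 + x2 - 2*x1*x2 = 0.5 is a
# quadratic curve in the original (x1, x2) plane, so one neuron
# separates the two classes of XOR, which no hyperplane in (x1, x2) can.
w = np.array([1.0, 1.0, -2.0])
b = -0.5

print(neuron(augment(X), w, b))  # → [0 1 1 0], i.e. XOR
```

Because the augmentation happens at the input layer, any standard training algorithm (e.g. scaled conjugate gradient) can be applied unchanged to the enlarged input vector, which matches the abstract's remark that no new training algorithm needs to be implemented.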



Author information

Correspondence to Krzysztof Halawa.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Halawa, K. (2016). Method Enabling the First Hidden Layer of Multilayer Perceptrons to Make Division of Space with Various Hypercurves. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2016. Lecture Notes in Computer Science(), vol 9692. Springer, Cham. https://doi.org/10.1007/978-3-319-39378-0_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-39377-3

  • Online ISBN: 978-3-319-39378-0

  • eBook Packages: Computer Science (R0)
