Generating Linear Regression Rules from Neural Networks Using Local Least Squares Approximation

  • Conference paper

Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence (IWANN 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2084)

Abstract

Neural networks are often selected as the tool for solving regression problems because of their capability to approximate any continuous function with arbitrary accuracy. A major drawback of neural networks is that the complex mapping they compute is not easily understood by the user. This paper describes a method that generates decision rules from trained neural networks for regression problems. The networks have a single layer of hidden units with the hyperbolic tangent activation function and a single output unit with a linear activation function. The crucial step in the method is the approximation of each hidden unit's activation function by a 3-piece linear function, obtained by minimizing the sum of squared deviations between the hidden unit's activation values and the values of the approximating function. Rules are then generated from the network with its activation functions replaced by these piecewise linear approximations. The conditions of the rules divide the input space of the data into subspaces, while the consequent of each rule is a linear regression function. Our experimental results indicate that the method generates more accurate rules than those produced by similar methods.
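
The paper itself contains no code, but the crucial approximation step described above is easy to sketch. The snippet below is a minimal illustration, not the author's implementation: it assumes the squared deviations are measured at the hidden unit's net-input values observed on the training samples, that the 3-piece function has two breakpoints chosen by a simple grid search, and that the three linear pieces are fitted independently (the paper may constrain them differently, e.g. to be continuous at the breakpoints). The function names and the grid-search strategy are assumptions of this sketch.

    import numpy as np

    def fit_3piece_linear(z, t1, t2):
        """Fit a 3-piece linear approximation to tanh over the observed
        net inputs z, with breakpoints t1 < t2. Each piece is a 1-D
        least-squares line; returns the (slope, intercept) pairs and
        the total sum of squared deviations from tanh(z)."""
        y = np.tanh(z)
        masks = [z < t1, (z >= t1) & (z <= t2), z > t2]
        pieces, sse = [], 0.0
        for m in masks:
            zm, ym = z[m], y[m]
            if zm.size < 2:                   # degenerate piece: fall back to a constant
                a, b = 0.0, (float(ym.mean()) if ym.size else 0.0)
            else:
                a, b = np.polyfit(zm, ym, 1)  # least-squares line fit
            pieces.append((float(a), float(b)))
            sse += float(np.sum((a * zm + b - ym) ** 2))
        return pieces, sse

    def best_3piece(z, grid):
        """Grid search over candidate breakpoint pairs for the smallest
        sum of squared deviations (the search strategy is an assumption
        of this sketch, not the paper's procedure)."""
        best = None
        for i, t1 in enumerate(grid):
            for t2 in grid[i + 1:]:
                pieces, sse = fit_3piece_linear(z, t1, t2)
                if best is None or sse < best[0]:
                    best = (sse, t1, t2, pieces)
        return best

    # Example: net inputs of one hidden unit, collected on training samples.
    rng = np.random.default_rng(0)
    z = rng.normal(0.0, 2.0, size=500)
    sse, t1, t2, pieces = best_3piece(z, np.linspace(-2.0, 2.0, 21))

Once every hidden unit's tanh has been replaced by such a 3-piece function, the network output, being a linear combination of the hidden activations, is itself linear wherever each net input stays within a single piece. Each choice of piece per hidden unit therefore defines a subspace of the input space, which becomes a rule condition, and substituting the corresponding slopes and intercepts into the output unit's weights yields the linear regression function that serves as that rule's consequent.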

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Setiono, R. (2001). Generating Linear Regression Rules from Neural Networks Using Local Least Squares Approximation. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_31

  • DOI: https://doi.org/10.1007/3-540-45720-8_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42235-8

  • Online ISBN: 978-3-540-45720-6

  • eBook Packages: Springer Book Archive
