
Quantifying Relevance of Input Features

  • Conference paper in Intelligent Data Engineering and Automated Learning — IDEAL 2002 (IDEAL 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2412)

Abstract

Identifying and quantifying the relevance of input features is particularly useful in data mining when dealing with ill-understood, real-world problems. Conventional methods, such as statistical and correlation analysis, tend to be less effective because the data in such problems usually contain high levels of noise and the true distributions of the attributes are unknown. This paper presents a neural-network-based method to identify relevant input features and to quantify their general and specified relevance. An application to a real-world problem, osteoporosis prediction, demonstrates that the method can quantify the impact of individual risk factors and then select the most salient ones for training neural networks, improving prediction accuracy.
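The abstract does not spell out the paper's particular relevance measure. As a generic illustration of the overall idea (train a network on all inputs, quantify each input's impact on the trained model, then keep only the salient ones), the sketch below uses permutation importance on a minimal one-layer network with synthetic data; all data, feature counts, and weights here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data: only the first two of five
# input features actually determine the label (a hypothetical stand-in
# for risk factors in a prediction task).
n, d = 400, 5
X = rng.normal(size=(n, d))
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a minimal one-layer network (logistic regression) by gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

def accuracy(Xm):
    return np.mean((sigmoid(Xm @ w + b) > 0.5) == y)

base = accuracy(X)

# Permutation importance: shuffle one feature at a time and measure how
# much accuracy drops; a large drop marks a salient input, while an
# irrelevant input scores near zero and could be removed before retraining.
importance = np.empty(d)
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = base - accuracy(Xp)

print(np.round(importance, 3))  # the two informative features dominate
```

A feature-selection step in the spirit of the abstract would then retain only the inputs whose importance exceeds some threshold and retrain the network on that reduced set.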




Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wang, W. (2002). Quantifying Relevance of Input Features. In: Yin, H., Allinson, N., Freeman, R., Keane, J., Hubbard, S. (eds) Intelligent Data Engineering and Automated Learning — IDEAL 2002. IDEAL 2002. Lecture Notes in Computer Science, vol 2412. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45675-9_89


  • DOI: https://doi.org/10.1007/3-540-45675-9_89


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44025-3

  • Online ISBN: 978-3-540-45675-9

  • eBook Packages: Springer Book Archive
