Abstract
A new method of feature weighting, also useful for feature extraction, is described. It is efficient and yields accurate results, and the weighting algorithm may be combined with any learning algorithm. Here it was used with a k-nearest neighbors model to estimate the best feature basis for a given distance measure. Results obtained with this algorithm on several benchmark tests clearly show its superior performance.
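The full paper is not included here, so the following is only an illustrative sketch of the general idea the abstract describes: per-feature weights inside a k-nearest-neighbors distance, tuned by a simple discrete coordinate-wise search scored with leave-one-out accuracy. The function names, the weight grid, and the greedy search are all assumptions standing in for the paper's actual quasi-gradient procedure.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_test, weights, k=3):
    """Classify each test point by majority vote among its k nearest
    training points under a feature-weighted Euclidean distance."""
    preds = []
    for x in X_test:
        # weights scale each feature's contribution to the distance
        d = np.sqrt((((X_train - x) ** 2) * weights).sum(axis=1))
        nearest = np.argsort(d)[:k]
        vals, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

def discrete_weight_search(X, y, k=3, steps=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Greedy coordinate-wise search over a discrete grid of weights,
    scored by leave-one-out k-NN accuracy. This is a hypothetical
    stand-in for the paper's discrete quasi-gradient algorithm."""
    n = len(X)

    def loo_accuracy(w):
        correct = 0
        for i in range(n):
            mask = np.arange(n) != i
            pred = weighted_knn_predict(X[mask], y[mask], X[i:i + 1], w, k)[0]
            correct += pred == y[i]
        return correct / n

    weights = np.ones(X.shape[1])
    for j in range(X.shape[1]):
        best_w, best_acc = weights[j], loo_accuracy(weights)
        for s in steps:
            trial = weights.copy()
            trial[j] = s
            acc = loo_accuracy(trial)
            if acc > best_acc:
                best_w, best_acc = s, acc
        weights[j] = best_w  # keep the best grid value for this feature
    return weights
```

A weight driven to zero effectively removes that feature from the distance measure, which is why such weighting also serves as feature extraction.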
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Jankowski, N. (2003). Discrete Quasi-gradient Features Weighting Algorithm. In: Rutkowski, L., Kacprzyk, J. (eds) Neural Networks and Soft Computing. Advances in Soft Computing, vol 19. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1902-1_26
DOI: https://doi.org/10.1007/978-3-7908-1902-1_26
Publisher Name: Physica, Heidelberg
Print ISBN: 978-3-7908-0005-0
Online ISBN: 978-3-7908-1902-1
eBook Packages: Springer Book Archive