A Full Explanation Facility for an MLP Network that Classifies Low-Back-Pain Patients and for Predicting its Reliability
This paper presents a full explanation facility developed for any standard MLP network with binary input neurons that performs a classification task. The interpretation of any input case is represented by a non-linear ranked data relationship of key inputs, in both text and graphical forms. The knowledge that the MLP has learned is represented by average ranked class profiles or as a set of rules induced from all training cases. The full explanation facility discovers the MLP knowledge bounds as the hidden layer decision regions containing classified training examples. Novel inputs are detected when an input case falls in a decision region outside the knowledge bounds. Results using the facility are presented for a 48-dimensional real-world MLP that classifies low-back-pain patients. Using the full explanation facility, it is shown that the MLP preserves the continuity of the classifications in separate contiguous threads of decision regions across the 48-dimensional input space, thereby demonstrating the consistency and predictability of the classifications within the knowledge bounds.
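The novelty-detection idea described above can be illustrated with a small sketch. This is not the paper's implementation: the weights below are random stand-ins for a trained network, and the names (`decision_region`, `knowledge_bounds`, `is_novel`) are hypothetical. A decision region is identified here by the sign pattern of the hidden-layer pre-activations, i.e. which side of each hidden neuron's hyperplane the input lies on; the knowledge bounds are the set of regions occupied by training cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained MLP: 48 binary inputs -> 8 hidden neurons.
W_hidden = rng.normal(size=(48, 8))
b_hidden = rng.normal(size=8)

def decision_region(x):
    """Identify a hidden-layer decision region by the sign pattern of
    the pre-activations (which side of each hyperplane x lies on)."""
    return tuple((x @ W_hidden + b_hidden > 0).astype(int))

# Knowledge bounds: the decision regions occupied by classified training cases.
X_train = rng.integers(0, 2, size=(100, 48))
knowledge_bounds = {decision_region(x) for x in X_train}

def is_novel(x):
    """An input case is novel when its decision region contains no training case."""
    return decision_region(x) not in knowledge_bounds
```

By construction, every training case lies inside the knowledge bounds, so `is_novel` returns `False` for each of them; a new input whose hidden-layer sign pattern matches no training case is flagged as novel.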
Keywords: Hidden Layer, Hidden Neuron, Decision Region, Novelty Detection, Training Case