
A Full Explanation Facility for a MLP Network that Classifies Low-Back-Pain Patients and for Predicting its Reliability

  • M. L. Vaughn
  • S. J. Cavill
  • S. J. Taylor
  • M. A. Foy
  • A. J. B. Fogg
Part of the Advances in Soft Computing book series (AINSC, volume 14)

Abstract

This paper presents a full explanation facility that has been developed for any standard MLP network with binary input neurons that performs a classification task. The interpretation of any input case is represented as a non-linear ranked data relationship of key inputs, in both text and graphical form. The knowledge that the MLP has learned is represented as average ranked class profiles or as a set of rules induced from all training cases. The full explanation facility discovers the MLP knowledge bounds as the hidden-layer decision regions containing classified training examples. Novel inputs are detected when the input case is positioned in a decision region outside the knowledge bounds. Results are presented for a real-world MLP that classifies low-back-pain patients from 48 binary inputs. Using the full explanation facility, it is shown that the MLP preserves the continuity of the classifications in separate contiguous threads of decision regions across the 48-dimensional input space, thereby demonstrating the consistency and predictability of the classifications within the knowledge bounds.
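
The novelty-detection step described above lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering, not the authors' implementation: it assumes that a hidden-layer decision region can be identified by the sign pattern of the hidden-neuron pre-activations (i.e. which side of each hidden neuron's hyperplane an input case falls on), and the function names are invented for the example.

```python
import numpy as np

def hidden_signature(x, W, b):
    # One bit per hidden neuron, recording which side of that neuron's
    # hyperplane the input lies on; each distinct pattern labels one
    # hidden-layer decision region (an assumption of this sketch).
    return tuple((W @ x + b > 0).astype(int))

def knowledge_bounds(X_train, W, b):
    # Knowledge bounds: the set of decision regions that contain at least
    # one classified training case.
    return {hidden_signature(x, W, b) for x in X_train}

def is_novel(x, regions, W, b):
    # An input case is flagged as novel when its decision region lies
    # outside the knowledge bounds.
    return hidden_signature(x, W, b) not in regions

# Toy usage with random weights (48 binary inputs, 10 hidden neurons).
rng = np.random.default_rng(0)
W, b = rng.normal(size=(10, 48)), rng.normal(size=10)
X_train = rng.integers(0, 2, size=(100, 48)).astype(float)
regions = knowledge_bounds(X_train, W, b)
x_new = rng.integers(0, 2, size=48).astype(float)
print("novel input:", is_novel(x_new, regions, W, b))
```

Note that, in this sketch, the novelty check is a simple set-membership test, so its cost scales with the number of distinct training-case signatures rather than with the size of the 48-dimensional input space.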

Keywords

Hidden Layer, Hidden Neuron, Decision Region, Novelty Detection, Training Case

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • M. L. Vaughn (1)
  • S. J. Cavill (1)
  • S. J. Taylor (2)
  • M. A. Foy (2)
  • A. J. B. Fogg (2)
  1. Cranfield University (RMCS), Shrivenham, Swindon, UK
  2. Princess Margaret Hospital, Swindon, UK
