A Posteriori Corrections to Classification Methods

  • Włodzisław Duch
  • Łukasz Itert
Conference paper
Part of the Advances in Soft Computing book series (AINSC, volume 19)


A posteriori corrections are computationally inexpensive and may improve the accuracy, confidence, sensitivity, or specificity of a model, or correct for differences between the a priori (training) and real (test) class distributions. Such corrections are applicable to neural networks and any other classification model.
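One such correction, adjusting classifier outputs when the test-set class priors differ from the training-set priors, can be sketched as follows. This is a minimal illustration of the general idea, not the authors' specific procedure; the function name and inputs are illustrative.

```python
def correct_priors(posteriors, train_priors, test_priors):
    """Rescale each p(class|x) by test_prior/train_prior, then renormalize.

    posteriors   -- model outputs p(class|x) for one sample (sums to 1)
    train_priors -- class frequencies seen during training
    test_priors  -- class frequencies expected at deployment
    """
    # Reweight each class posterior by the ratio of new to old priors.
    scaled = [p * (te / tr)
              for p, tr, te in zip(posteriors, train_priors, test_priors)]
    # Renormalize so the corrected posteriors again sum to 1.
    total = sum(scaled)
    return [s / total for s in scaled]

# Example: a model trained on balanced classes, deployed where class 0
# is four times more common than class 1 (hypothetical numbers).
adjusted = correct_priors([0.6, 0.4], [0.5, 0.5], [0.8, 0.2])
```

Because the correction only rescales the model's outputs, it requires no retraining, which is what makes such adjustments computationally inexpensive.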


Keywords: Cost Function · Rejection Rate · Class Distribution · True Class · Logical Rule





Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Włodzisław Duch 1
  • Łukasz Itert 1
  1. Department of Informatics, Nicholas Copernicus University, Toruń, Poland
