
Optimizing Classifiers for Handwritten Digits by Genetic Algorithms

  • J. Schäfer
  • H. Braun

Abstract

We present the first large real-world application of the neural network optimizing genetic algorithm Enzo. The networks had several thousand links, and the training data comprised over 200,000 patterns. We evolved networks for a classification task that have an order of magnitude fewer free parameters than commonly used polynomial classifiers while maintaining the same performance.

To achieve this, we implemented several significant enhancements and minor improvements to the original algorithm.

We also show how to use Enzo as an efficient tool for creating networks that satisfy task-specific constraints.
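
The abstract only names the approach; as a rough illustration (and not the authors' ENZO implementation), the sketch below shows how a topology-evolving loop of this kind can be organised: each individual is a connection mask over a small feedforward net, every offspring is briefly trained by gradient descent, and selection rewards low error and few links. All names, parameters, and the toy data here are assumptions made for this example.

    # Minimal, illustrative sketch of topology evolution (NOT the paper's ENZO code).
    import numpy as np

    rng = np.random.default_rng(0)

    def train(mask, X, y, hidden=16, epochs=200, lr=0.5):
        """Train a one-hidden-layer net whose input->hidden weights are masked."""
        n_in = X.shape[1]
        W1 = rng.normal(0, 0.1, (n_in, hidden)) * mask
        W2 = rng.normal(0, 0.1, (hidden, 1))
        for _ in range(epochs):
            h = np.tanh(X @ W1)
            p = 1 / (1 + np.exp(-(h @ W2)))          # sigmoid output
            g2 = h.T @ (p - y) / len(X)
            gh = (p - y) @ W2.T * (1 - h**2)
            g1 = (X.T @ gh / len(X)) * mask          # pruned links stay at zero
            W1 -= lr * g1
            W2 -= lr * g2
        err = np.mean((p > 0.5) != y)
        return err, W1, W2

    def fitness(err, mask, size_penalty=1e-3):
        return err + size_penalty * mask.sum()       # accuracy first, then sparsity

    # Toy data: two Gaussian blobs instead of the 200,000-pattern digit set.
    X = np.vstack([rng.normal(-1, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
    y = np.vstack([np.zeros((100, 1)), np.ones((100, 1))])

    hidden = 16
    population = [rng.random((X.shape[1], hidden)) < 0.9 for _ in range(8)]

    for gen in range(10):
        scored = []
        for mask in population:
            err, *_ = train(mask.astype(float), X, y, hidden)
            scored.append((fitness(err, mask), err, mask))
        scored.sort(key=lambda t: t[0])
        best_fit, best_err, best_mask = scored[0]
        print(f"gen {gen}: error={best_err:.3f}, links={int(best_mask.sum())}")
        # Offspring: copy a good parent and prune a few surviving links at random.
        parents = [m for _, _, m in scored[:4]]
        population = parents + [p & (rng.random(p.shape) > 0.05) for p in parents]

A real run would replace the toy blobs with the actual digit patterns and the plain gradient step with a faster training procedure, but the evolve-train-select structure stays the same.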

Keywords

Genetic Algorithm · Handwritten Digit · Polynomial Classifier · Handwritten Digit Recognition · Salient Pattern



Copyright information

© Springer-Verlag/Wien 1995

Authors and Affiliations

  • J. Schäfer (1, 2)
  • H. Braun (1, 2)
  1. Institut für Logik, Komplexität und Deduktionssysteme, Universität Karlsruhe, Germany
  2. ILKD, Universität Karlsruhe, Karlsruhe, Germany
