Introduction: Four Periods in the Research of the Learning Problem

  • Vladimir N. Vapnik

Abstract

In the history of research on the learning problem one can identify four periods, each characterized by a bright event:
  (i) constructing the first learning machines,
  (ii) constructing the fundamentals of the theory,
  (iii) constructing neural networks,
  (iv) constructing the alternatives to neural networks.

Keywords

Learning problem · Generalization ability · Inductive inference · Statistical learning theory · Inductive principle

References

  1. Note that discriminant analysis as proposed in the 1930s by Fisher actually did not consider the problem of inductive inference (the problem of estimating the discriminant rules using the examples). This happened later, after Rosenblatt’s work. In the 1930s discriminant analysis was considered a problem of constructing a decision rule separating two categories of vectors using given probability distribution functions for these categories of vectors.
  2. V. Vapnik and A. Chervonenkis, Theory of Pattern Recognition (in Russian), Nauka, Moscow, 1974.
  2a. German translation: W. N. Wapnik, A. Ja. Tscherwonenkis, Theorie der Zeichenerkennung, Akademia-Verlag, Berlin, 1979.
  3. V. N. Vapnik, Estimation of Dependencies Based on Empirical Data (in Russian), Nauka, Moscow, 1979. English translation: Vladimir Vapnik, Estimation of Dependencies Based on Empirical Data, Springer, New York, 1982.
  4. Convergence in probability to the best possible result. An exact definition of consistency is given in Section 2.1.
  5. The back-propagation method was actually found in 1963 for solving some control problems (Bryson, Denham, and Dreyfus, 1963) and was rediscovered for Perceptrons.
  6. Of course it is very interesting to know how humans learn. However, this is not necessarily the best way to create an artificial learning machine. It has been noted that the study of how birds fly was not very useful for constructing the airplane.
  7. L. G. Valiant, 1984, “A theory of learnability”, Commun. ACM 27(11), 1134–1142.
  8. “If the computational requirement is removed from the definition then we are left with the notion of nonparametric inference in the sense of statistics, as discussed in particular by Vapnik.” (L. Valiant, 1991, “A view of computational learning theory”, in Computation and Cognition, Society for Industrial and Applied Mathematics, Philadelphia, p. 36.)

Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

  • Vladimir N. Vapnik
    1. AT&T Bell Laboratories, Holmdel, USA
