© 1995

The Nature of Statistical Learning Theory


Table of contents

  1. Front Matter
    Pages i-xv
  2. Vladimir N. Vapnik
    Pages 15-32
  3. Vladimir N. Vapnik
    Pages 33-64
  4. Vladimir N. Vapnik
    Pages 119-166
  5. Vladimir N. Vapnik
    Pages 167-175
  6. Back Matter
    Pages 177-188

About this book


The aim of this book is to discuss the fundamental ideas behind the statistical theory of learning and generalization. It considers learning from the general point of view of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on the main results of learning theory and their connections to fundamental problems in statistics. These include:

  - the general setting of learning problems and the general model of minimizing the risk functional from empirical data;
  - a comprehensive analysis of the empirical risk minimization principle, showing how it yields necessary and sufficient conditions for consistency;
  - non-asymptotic bounds on the risk achieved using the empirical risk minimization principle;
  - principles for controlling the generalization ability of learning machines using small sample sizes;
  - a new type of universal learning machine that controls the generalization ability.
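The risk-minimization setting the blurb refers to can be summarized by two classical formulas (a sketch of the standard definitions, not taken from this page): the expected risk over an unknown distribution, and its empirical surrogate over the training sample.

```latex
% Expected risk: the quantity the learner would like to minimize,
% where L is a loss function, f(x, \alpha) a parametrized function,
% and P(x, y) the unknown joint distribution of inputs and outputs.
R(\alpha) = \int L\bigl(y, f(x, \alpha)\bigr)\, dP(x, y)

% Empirical risk: the observable average loss over n training pairs (x_i, y_i).
R_{\mathrm{emp}}(\alpha) = \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i, \alpha)\bigr)
```

The empirical risk minimization principle selects the parameter \(\alpha\) that minimizes \(R_{\mathrm{emp}}(\alpha)\); the consistency analysis mentioned above asks under what conditions this also drives the expected risk \(R(\alpha)\) to its minimum.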


Keywords: algorithms, boundary element method, construction, controlling, convergence, function, functional, learning, learning algorithm, learning theory, model, proof, statistical theory, statistics

Authors and affiliations

  1. AT&T Bell Laboratories, Holmdel, USA

Bibliographic information

Industry Sectors
IT & Software
Finance, Business & Banking
Energy, Utilities & Environment
Oil, Gas & Geosciences


"This interesting book helps a reader to understand the interconnections between various streams in the empirical modeling realm and may be recommended to any reader who feels lost in modern terminology." V.V. Fedorov, Oak Ridge National Laboratory, USA