Te-Won Lee


This chapter introduces the basics of Bayesian probability theory, information theory, artificial neural networks, and statistical signal processing. The material presented here was selected from several textbooks. The goal of this chapter is to cover the basic notions and terminology used throughout the thesis; it is not intended to give a broad treatment of the theories but rather to recall some definitions and their relations to each other.


Keywords: Mutual Information, Learning Rule, Central Moment, Conditional Entropy, Joint Entropy


  1. It is available online via his homepage:
  2. Note that throughout the book, H(·) is referred to as differential entropy; h(x) was merely used in this section to distinguish between the entropy of a discrete random variable (RV) and that of a continuous RV.
  3. The three terms can be used interchangeably in this context.
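The notational distinction drawn in the second footnote can be made explicit. The following are the standard definitions (not reproduced from the chapter itself): Shannon entropy for a discrete RV and differential entropy for a continuous RV with density p(x):

```latex
% Discrete (Shannon) entropy of a random variable X with pmf P(x):
H(X) = -\sum_{x} P(x) \log P(x)

% Differential entropy of a continuous random variable x with pdf p(x):
h(x) = -\int p(x) \log p(x)\, dx
```

Unlike H(X), the differential entropy h(x) can be negative and is not invariant under a change of variables, which is one reason a separate symbol is sometimes used for the continuous case.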

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • Te-Won Lee
  1. Computational Neurobiology Laboratory, The Salk Institute, La Jolla, USA