This chapter introduces the basics of Bayesian probability theory, information theory, artificial neural networks, and statistical signal processing. The material presented here was selected from several textbooks. The goal of this chapter is to cover the basic notions and terminology used throughout the thesis. It is not intended to give a broad treatment of these theories but rather to recall some definitions and their relations to each other.
Keywords: Mutual Information, Learning Rule, Central Moment, Conditional Entropy, Joint Entropy
- 1. It is available online via his homepage: http://wol.ra.phy.cam.ca.uk/mackay/itprnn/#book
- 2. Note that throughout the book, H(·) is referred to as differential entropy; h(x) was merely used in this section to distinguish between the entropy of a discrete random variable (RV) and that of a continuous RV.
- 3. The three terms can be used interchangeably in this context.
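The distinction drawn in footnote 2, between the entropy of a discrete RV and the differential entropy of a continuous RV, can be sketched numerically. This is a minimal illustration, not code from the thesis; the function names are ours, and the continuous case assumes a Gaussian RV, whose differential entropy has the closed form 0.5 · log2(2πeσ²).

```python
import math

def discrete_entropy(p):
    """Shannon entropy H(X) = -sum p_i log2 p_i of a discrete RV, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def gaussian_differential_entropy(sigma):
    """Differential entropy h(X) = 0.5 * log2(2*pi*e*sigma^2) of a
    Gaussian RV with standard deviation sigma, in bits."""
    return 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

# A fair coin carries exactly 1 bit of entropy.
print(discrete_entropy([0.5, 0.5]))        # 1.0

# Unlike discrete entropy, differential entropy can be negative
# (here, for a narrow Gaussian with sigma = 0.1).
print(gaussian_differential_entropy(0.1))
```

The second call makes the key point of the footnote concrete: differential entropy is not a simple limit of discrete entropy and need not be non-negative, which is why the two notions deserve separate symbols.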