## Abstract

In this chapter and the next we develop the basic coding theorems of information theory. As is traditional, we first consider two important special cases and later combine them to form the overall result. In the first case we assume that the channel is noiseless but constrained in the sense that it can pass only *R* bits per input symbol to the receiver. Since this is usually insufficient for the receiver to recover the source sequence perfectly, we attempt to code the source so that the receiver can recover it with as little distortion as possible. This leads to the theory of *source coding*, also called *source coding subject to a fidelity criterion* or *data compression*, where the latter name reflects the fact that sources with infinite or very large entropy are “compressed” to fit across the given communication link. In the next chapter we ignore the source, focus on a discrete alphabet channel, construct codes that can communicate any of a finite number of messages with small probability of error, and quantify how large the message set can be. This operation is called *channel coding* or *error control coding*. We then develop *joint source and channel codes*, which combine source coding and channel coding to code a given source for communication over a given channel so as to minimize average distortion. This *ad hoc* division into two forms of coding is convenient and will permit performance near that of the OPTA function for the codes considered.

## Keywords

Source Code, Block Code, Block Length, OPTA Function, Entropy Rate