
Lossy Data Compression and Transmission

Chapter in: An Introduction to Single-User Information Theory

Abstract

In a number of situations, one may need to compress a source to a rate less than the source entropy, which, as we saw in Chap. 3, is the minimum lossless data compression rate.


Notes

  1.

    A twin result to the above Wyner–Ziv lower bound, which consists of an upper bound on the capacity-cost function of channels with stationary additive noise, is shown in [16, Corollary 1]. This result, which is expressed in terms of the nth-order capacity-cost function and the amount of memory in the channel noise, illustrates the “natural duality” between the information rate–distortion and capacity-cost functions originally pointed out by Shannon [345].

  2.

    For example, the boundedness assumption in the theorems can be replaced by assuming that there exists a reproduction symbol \(\hat{z}_0 \in \widehat{\mathcal{Z}}\) such that \(E[\rho (Z,\hat{z}_0)] < \infty \) [42, Theorems 7.2.4 and 7.2.5]. This assumption can accommodate the squared error distortion measure and a source with finite second moment (including continuous-alphabet sources such as Gaussian sources); see also [135, Theorem 9.6.2 and p. 479]. (A worked instance is given after these notes.)

  3.

    The asymptotic tightness of this bound as D approaches zero is studied in [249].

  4.

    Note that, as pointed out in Sect. 4.6, n, \(f^{(sc)}\), and \(g^{(sc)}\) are all functions of m.

  5.

    Note that \(\mathcal{Z}\) and \(\widehat{\mathcal{Z}}\) can also be continuous alphabets with an unbounded distortion function. In this case, the theorem still holds under appropriate conditions (e.g., [42, Problem 7.5], [135, Theorem 9.6.3]) that can accommodate, for example, the important class of Gaussian sources under the squared error distortion function (e.g., [135, p. 479]).

  6.

    The channel can have either finite or continuous alphabets. For example, it can be the memoryless Gaussian (i.e., AWGN) channel with input power P; in this case, \(C=C(P)\).

  7.

    In other words, the source emits symbols at a rate of \(1/T_s\) source symbols per second and the channel accepts inputs at a rate of \(1/T_c\) channel symbols per second.

  8.

    If the strict inequality \(R(D) < \frac{1}{R_{sc}}\, C\) always holds, then the Shannon limit is \(D_{SL} = D_{min} := E\left[ \min _{\hat{z}\in \widehat{\mathcal{Z}}} \rho (Z,\hat{z})\right] \).

  9.

    Other similar quantities used in the literature are the optimal performance theoretically achievable (OPTA) [42] and the limit of the minimum transmission ratio (LMTR) [87].

  10.

    This example appears in various sources including [205, Sect. 11.8], [87, Problem 2.2.16], and [266, Problem 5.7].

  11.

    Source–channel systems with rate \(R_{sc}=1\) are typically referred to as systems with matched source and channel bandwidths (or signaling rates). Also, when \(R_{sc} < 1\) (resp., \(>1\)), the system is said to have bandwidth compression (resp., bandwidth expansion); e.g., cf. [274, 314, 358].

  12.

    Uncoded transmission schemes are also referred to as scalar or single-letter codes.

  13.

    In other words, the code’s encoding and decoding functions, \(f^{(sc)}\) and \(g^{(sc)}\), respectively, are both equal to the identity mapping.

  14.

    Note that in this system, since the source is incompressible, no source coding is actually required. Still, the separate coding scheme will consist of a near-capacity-achieving channel code.

  15.

    For example, if the Markov source is binary symmetric, then its rate–distortion function is given by (6.3.9) for \(D\le D_c\), and the Shannon limit for sending this source over, say, a BSC or an AWGN channel can be calculated. If the distortion region \(D>D_c\) is of interest, then (6.3.8) or the right side of (6.3.9) can be used as a lower bound on R(D), yielding in turn a lower bound on the Shannon limit. (The sketch following these notes illustrates the corresponding calculation in the simpler memoryless case.)
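A worked instance of the condition in Note 2 (a sketch under the stated assumptions of squared error distortion and a real-valued source with finite second moment; it is not taken from the text): the reproduction symbol \(\hat{z}_0 = 0\) already satisfies the requirement, since

\[
  \rho(z,\hat{z}) = (z-\hat{z})^2, \qquad \hat{z}_0 = 0
  \;\Longrightarrow\;
  E\!\left[\rho(Z,\hat{z}_0)\right] = E[Z^2] < \infty ,
\]

and, in particular, for a Gaussian source \(Z \sim \mathcal{N}(\mu,\sigma^2)\) one has \(E[Z^2] = \sigma^2 + \mu^2 < \infty\).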
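The sketch below illustrates numerically how a Shannon limit of the kind discussed in Notes 8 and 15 is obtained in the simplest matched setting: a memoryless binary uniform source under the Hamming distortion measure sent over a BSC with crossover probability \(\varepsilon \le 1/2\) at one channel use per source symbol (\(R_{sc}=1\)). Here \(R(D) = 1 - h_b(D)\) for \(0 \le D \le 1/2\) and \(C = 1 - h_b(\varepsilon)\), so solving \(R(D_{SL}) = C\) gives \(D_{SL} = \varepsilon\), which is exactly the distortion achieved by the uncoded (identity-map) scheme of Notes 12 and 13. This is an illustrative sketch only, not code from the book, and the function names are mine; it solves \(R(D)=C\) by bisection and checks the answer by simulating uncoded transmission.

```python
import random
from math import log2


def hb(p: float) -> float:
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)


def shannon_limit_binary_uniform_over_bsc(eps: float) -> float:
    """Solve R(D) = C for a binary uniform source with Hamming distortion
    over a BSC(eps), eps <= 1/2, at matched rates (R_sc = 1):
        1 - hb(D) = 1 - hb(eps)  =>  hb(D) = hb(eps),  0 <= D <= 1/2,
    using bisection on [0, 1/2], where hb is increasing."""
    target = hb(eps)
    lo, hi = 0.0, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if hb(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def uncoded_distortion(eps: float, n: int = 200_000, seed: int = 0) -> float:
    """Empirical Hamming distortion of uncoded (identity-map) transmission:
    each equiprobable source bit is sent directly through the BSC(eps)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        z = rng.randint(0, 1)          # source bit
        flip = rng.random() < eps      # channel error event
        z_hat = z ^ int(flip)          # received bit used as the reproduction
        errors += (z != z_hat)
    return errors / n


if __name__ == "__main__":
    eps = 0.1
    d_sl = shannon_limit_binary_uniform_over_bsc(eps)
    d_uncoded = uncoded_distortion(eps)
    print(f"Shannon limit D_SL     : {d_sl:.4f} (solves R(D) = C)")
    print(f"Uncoded transmission D : {d_uncoded:.4f} (empirical)")
```

Running it with \(\varepsilon = 0.1\) returns a Shannon limit of \(0.1\) and an empirical uncoded distortion close to it, showing that in this matched case the single-letter scheme already attains the Shannon limit, with no source or channel coding at all.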

Author information

Correspondence to Fady Alajaji.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Alajaji, F., Chen, PN. (2018). Lossy Data Compression and Transmission. In: An Introduction to Single-User Information Theory. Springer Undergraduate Texts in Mathematics and Technology. Springer, Singapore. https://doi.org/10.1007/978-981-10-8001-2_6
