Abstract
The generic communications system of interest in this book is depicted in Figure 2.1. This model contains most of the elements that will be discussed in this and subsequent chapters. The output of an information source is first encoded into a digital, usually binary, data stream. The encoding operation may involve several steps. If the source is analog, sampling and quantization by an analog-to-digital (A/D) converter are required. The next step may be source coding in the form of redundancy removal; Huffman coding and the Lempel-Ziv algorithm, discussed in Section 2.5, are examples of this operation. Redundancy in the form of parity bits may then be added in order to guard against channel errors; we shall discuss redundancy encoding in Chapter 3. Once a digital stream is available, the encoder may also perform encryption for security. The stream may also be scrambled to randomize the transmitted signal, which facilitates adaptive equalization and timing recovery in the receiver. Scrambling is discussed in Section 6.7.

The function of the modulator is to put the digital data stream into a form suitable for transmission over the physical channel. This may involve simply translating a sequence of binary digits into a sequence of pulses. It may also involve baseband pulse encoding to combat intersymbol interference (see Section 4.6). In the case of passband systems, the data sequence may be used to modulate a carrier, thereby placing the signal into the passband of the channel (see Chapter 5). The receiver reverses the operations performed at the transmitter.
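As a sketch of the redundancy-removal step mentioned above, the following shows Huffman coding for a toy source. This is an illustrative implementation, not the chapter's own development in Section 2.5; the symbol frequencies and the tie-breaking rule are assumptions made for the example.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies.

    Repeatedly merges the two least-probable subtrees, prepending '0' to
    codewords in one subtree and '1' to those in the other.
    """
    # Each heap entry: (weight, tiebreak index, {symbol: partial codeword})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # least-probable subtree
        w2, _, c2 = heapq.heappop(heap)  # next least-probable subtree
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "abracadabra"                         # example source output
code = huffman_code(Counter(text))           # e.g. frequent 'a' gets 1 bit
encoded = "".join(code[ch] for ch in text)   # 23 bits vs. 88 bits in ASCII
```

Note that the code is prefix-free: no codeword is a prefix of another, so the bit stream can be decoded unambiguously without symbol delimiters.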
References
H. L. Van Trees, Detection, Estimation, and Modulation Theory, Wiley, 1968.
J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering, Wiley, 1965.
J. B. Thomas, Statistical Communication Theory, Wiley, 1969.
G. L. Turin, Notes on Digital Communications, Van Nostrand Reinhold, 1969.
S. Sherman, “Non-Mean-Square Error Criteria,” IRE Trans. on Information Theory, Vol. IT-4, No. 3, pp. 125–126, 1958.
D. O. North, “An Analysis of Factors Which Determine Signal/Noise Discrimination in Pulse Carrier Systems,” RCA Report PTC-6C, 1943.
G. L. Turin, “An Introduction to Matched Filters,” IRE Trans. on Information Theory, Vol. IT-6, pp. 311–329, 1960.
C. E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal, Vol. 27, Part I, pp. 379–423, Part II, pp. 623–656, 1948.
N. Abramson, Information Theory and Coding, McGraw-Hill, 1963.
F. M. Ingels, Information and Coding Theory, Intext Educational Publishers, 1971.
R. G. Gallager, Information Theory and Reliable Communication, Wiley, 1968.
R. E. Blahut, Principles and Practice of Information Theory, Addison-Wesley, 1987.
R. J. McEliece, The Theory of Information and Coding, Addison-Wesley, 1977.
R. W. Lucky, Silicon Dreams, St. Martin’s Press, 1989.
A. Khinchin, Mathematical Foundations of Information Theory, Dover, 1957.
W. Feller, An Introduction to Probability and Its Applications, Vol. 1, Wiley, 1950.
F. M. Reza, An Introduction to Information Theory, McGraw-Hill, 1961.
D. A. Huffman, “A Method for the Construction of Minimum Redundancy Codes,” Proc. IRE, Vol. 40, pp. 1098–1101, 1952.
R. Hunter and A. H. Robinson, “International Digital Facsimile Coding Standards,” Proc. IEEE, Vol. 68, No. 7, pp. 854–867, July, 1980.
F. Jelinek, “Buffer Overflow in Variable Length Coding of Fixed Rate Sources,” IEEE Trans. on Information Theory, Vol. IT-14, No. 3, pp. 490–501, May, 1968.
J. Ziv and A. Lempel, “A Universal Algorithm for Sequential Data Compression,” IEEE Trans. on Information Theory, Vol. IT-23, No. 3, pp. 337–343, May, 1977.
J. Ziv and A. Lempel, “Compression of Individual Sequences Via Variable-Rate Coding,” IEEE Trans. on Information Theory, Vol. IT-24, No. 5, pp. 530–536, September, 1978.
T. A. Welch, “A Technique for High Performance Data Compression,” IEEE Computer, Vol. 17, No. 6, pp. 8–19, June, 1984.
T. Berger, Rate Distortion Theory, Prentice-Hall, 1971.
T. J. Goblick and J. L. Holsinger, “Analog Source Digitization: A Comparison of Theory and Practice,” IEEE Trans. on Information Theory, Vol. IT-13, pp. 323–326.
S. Shamai (Shitz) and I. Bar-David, “Upper Bounds on Capacity of a Constrained Gaussian Channel,” IEEE Trans. on Information Theory, Vol. 35, No. 9, pp. 1079–1084, September, 1989.
I. Kalet and S. Shamai (Shitz), “On the Capacity of the Twisted-Wire Pair: Gaussian Model,” IEEE Trans. on Information Theory, Vol. 38, No. 3, pp. 379–383, March, 1990.
W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, Wiley, 1950.
A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1965.
© 1992 Springer Science+Business Media New York
Gitlin, R.D., Hayes, J.F., Weinstein, S.B. (1992). Theoretical Foundations of Digital Communications. In: Data Communications Principles. Applications of Communications Theory. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3292-7_2
Print ISBN: 978-1-4613-6448-1
Online ISBN: 978-1-4615-3292-7