Data compression is achieved by reducing redundancy, but this also makes the data less reliable and more prone to errors. Making data more reliable, on the other hand, is done by adding check bits and parity bits (Appendix E), a process that increases the size of the codes and thereby increases redundancy. Data compression and data reliability are thus opposites, and it is interesting to note that the latter is a relatively recent field, whereas the former existed even before the advent of computers. The sympathetic telegraph, discussed in the Preface, the Braille code of 1820 (Section 1.1.1), and the Morse code of 1838 (Table 2.1) use simple forms of compression. It therefore seems that reducing redundancy comes naturally to anyone who works on codes, but increasing it is something that "goes against the grain" in humans.

This section discusses simple, intuitive compression methods that have been used in the past. Today these methods are mostly of historical interest, since they are generally inefficient and cannot compete with the modern compression methods developed during the last 15–20 years.
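To make the trade-off concrete, here is a minimal sketch (not from the text, which treats parity in Appendix E) of a single even-parity bit: appending it stretches every 8-bit byte to 9 bits, so the code grows by 12.5%, and in return any single-bit error becomes detectable.

```python
def parity_bit(byte: int) -> int:
    """Return the even-parity bit for an 8-bit value."""
    return bin(byte).count("1") % 2

def encode(byte: int) -> int:
    """Append the parity bit, producing a 9-bit codeword (more redundancy)."""
    return (byte << 1) | parity_bit(byte)

def check(codeword: int) -> bool:
    """A valid codeword has an even number of 1-bits overall."""
    return bin(codeword).count("1") % 2 == 0

word = 0b10110010            # 8 data bits, four 1-bits, so parity bit is 0
cw = encode(word)            # 9-bit codeword
assert check(cw)             # intact codeword passes the check
assert not check(cw ^ 0b10)  # flipping any one bit is detected
```

Compression works in the opposite direction: it strips exactly this kind of redundancy out, which is why a single bit error in compressed data can corrupt far more than one symbol.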