Joint source/channel iterative arithmetic decoding with JPEG 2000 image transmission application
Motivated by recent results in Joint Source/Channel coding and decoding, we consider the decoding problem of Arithmetic Codes (AC). In this article, we provide different approaches that unify the arithmetic decoding and error correction tasks. A novel length-constrained arithmetic decoding algorithm based on Maximum A Posteriori sequence estimation is proposed. It relies on soft-input decoding using a priori knowledge of the source-symbol sequence and compressed bit-stream lengths. Performance over an Additive White Gaussian Noise channel is evaluated in terms of Packet Error Rate. Simulation results show that the proposed decoding algorithm yields significant performance gains while exhibiting very low complexity. The proposed soft-input arithmetic decoder can also generate additional information regarding the reliability of the compressed bit-stream components. We consider the serial concatenation of the AC with a Recursive Systematic Convolutional Code and perform iterative decoding. We show that, compared to tandem and trellis-based Soft-Input Soft-Output decoding schemes, the proposed decoder exhibits the best performance/complexity tradeoff. Finally, the practical relevance of the presented iterative decoding system is validated within an image transmission scheme based on the JPEG 2000 standard, and excellent results in terms of decoded image quality are obtained.
Keywords: Packet Error Rate; Additive White Gaussian Noise Channel; Extrinsic Information; Arithmetic Code; Iterative Decoding
1 Introduction

Joint Source/Channel (JSC) coding and decoding have become an area of strong interest because the separation between source and channel coding has turned out to be unjustified in practical systems, owing to limited block lengths and to the residual redundancy that remains in the data bits after source encoding.
On the other hand, the hostile nature of the communication channel requires some form of error protection. Bandwidth limitations necessitate efficient, entropy-approaching source codes, generally variable-length codes (Huffman or arithmetic codes, ACs). However, variable-length compressed bit-streams are susceptible to error propagation, which increases the need for error protection. In this context, JSC decoding for variable-length coding is receiving increasing attention.
Arithmetic coding [1, 2] is currently deployed in a growing number of compression standards, such as JPEG 2000 for still pictures and H.264 for video sequences. Arithmetic coding yields higher compression performance than other lossless compression methods since it can allocate fractional numbers of bits to input symbols. However, the arithmetic decoder has poor resynchronization properties, which has motivated the development of JSC techniques based on ACs.
The first contributions, classified as resilient entropy coding techniques, tended to prevent error propagation by using proper synchronization markers or framing techniques. A scheme reserving space for an extra symbol (called the forbidden symbol) that is not in the source alphabet, and hence never transmitted, has also been proposed. The technique has been shown to provide excellent error detection while introducing redundancy in the compressed bit-stream; this extra rate is small relative to the error detection capability it provides. The forbidden symbol technique was then integrated in different decoding schemes to provide error correction. Combined with an automatic repeat request (ARQ) protocol, it was used for error correction in [7, 8].
More recent studies apply soft-input sequence estimation algorithms for arithmetic decoding instead of hard-input classical decoding. The proposed schemes consider finite-state representations of the arithmetic decoding machine so as to apply well-known channel decoding algorithms such as Viterbi and List-Viterbi [9, 10, 11, 12, 13, 14, 15]. In [9], an AC that embeds channel coding is presented to enforce a minimum Hamming distance between encoded sequences, and a Maximum A Posteriori (MAP) estimator is proposed for arithmetic decoding. In [10, 11], sequential decoding schemes were applied on binary trees, and a path-pruning technique based on forbidden-symbol error detection was used. Sayir [12] used an arithmetic encoder that adds redundancy to the compressed bit-stream by introducing gaps in the coding space, and performed sequential decoding. Other authors used Bayesian networks to model the quasi-arithmetic encoder and considered adding redundancy by introducing synchronization markers. A three-dimensional bit-synchronized trellis representing the finite-precision AC was recently proposed. A similar trellis was used to compute bounds on the error probability obtained with an AC using the forbidden symbol technique.
Recent contributions have exploited the efficiency of turbo decoding by integrating the arithmetic decoder in an iterative decoding process [13, 16, 17, 18]. These contributions used finite-state machine representations of the AC to apply Soft-Input Soft-Output (SISO) decoding algorithms. Iterative decoding is then performed by concatenating the AC with a Recursive Systematic Convolutional Code (RSCC). It is worth pointing out that the techniques introduced in [13, 16] were applied to JPEG 2000 compressed image transmission. All of these techniques rely on specific trellis constructions, possibly combined with pruning, which results in various degrees of error correction performance. However, in the proposed trellises the number of states grows with the source-symbol sequence length L and the source alphabet size U, so the decoding complexity becomes intractable for large values of L and U. Recently, some contributions have considered JSC decoding methods that include a Low-Density Parity-Check (LDPC) code for channel coding, with application to image transmission: rate-compatible LDPC codes were used to apply unequal error protection to the compressed JPEG 2000 bit-stream, and extra information provided by the error-resilience mode of the JPEG 2000 encoder was delivered to the LDPC decoder to perform iterative decoding.
This article is devoted to a different low-complexity algorithm for soft-input decoding of ACs transmitted over a noisy channel. The main objective is the development of a SISO arithmetic decoder that improves error correction performance with reasonable complexity and efficient compression behavior. First, we propose a new low-complexity arithmetic decoder that assumes the decoder knows the source-symbol sequence length L and the compressed bit-stream size l. The decoding task is then the search for the MAP sequence among length-valid sequences (bit-streams of length l decoding exactly L symbols). The proposed algorithm is inspired by the Chase algorithm and is called the Chase-like arithmetic decoder. The second contribution of this study is a new scheme for SISO arithmetic decoding, obtained through a slight modification of the Chase-like arithmetic decoder, which generates an additional reliability measure for the decoded bits. Results for iterative decoding of the serial concatenation of an AC with an RSCC are presented and compared to tandem decoding and to a trellis-based iterative decoding scheme [17, 18]. The last major contribution of the article is the implementation of the proposed SISO arithmetic decoder within the JPEG 2000 decoder and the analysis of the improvements obtained by iterative JSC decoding. The proposed SISO arithmetic decoder is applied to the JPEG 2000 entropy coding stage, which uses an adaptive binary AC (the MQ coder).
The article is organized as follows. Section 2 briefly introduces the principles of arithmetic coding. In Section 3, the system model and the MAP sequence decoding metric are presented; the Chase-like arithmetic decoder is also detailed and its performance is compared with a recent solution using a trellis representation of the AC with Viterbi-like decoding. Section 4 addresses a new scheme for low-complexity SISO arithmetic decoding, with numerical results for iterative JSC decoding discussed and compared to tandem decoding and to the trellis-based arithmetic decoding of [17, 18]. In Section 5, the application of the proposed iterative decoding approach to a JPEG 2000 image communication system is described. Finally, Section 6 draws our conclusions and offers directions for future work.
2 Overview of arithmetic coding
Arithmetic coding maps a source-symbol sequence s = (s1,..., s L ) to a subinterval I(s) of [0, 1) whose width equals the probability of s. Once all L symbols have been processed, the output b corresponds to the shortest binary string contained in I(s). Decoding follows the dual process.
For long source sequences, such an algorithm needs an infinite-precision machine (the interval I(s) shrinks as encoding proceeds). In , the authors proposed an implementation that made arithmetic coding feasible in practice, using integer representations of probabilities together with scaling techniques. The initial interval [0, 1) is replaced by [0, W), where W = 2^p and p >= 2 is the bit size of the interval registers. Scaling is performed by doubling the size of the interval I(s) = [Low, High) when one of the following conditions holds:
E1: 0 ≤ High < W/2: we double Low and High and output 0 followed by U3 ones; U3 is then reset to 0.
E2: W/2 ≤ Low < W: we double Low and High after subtracting W/2 and output 1 followed by U3 zeros; U3 is then reset to 0.
E3: W/4 ≤ Low < W/2 ≤ High < 3W/4: we double Low and High after subtracting W/4 and increase U3 by 1 (no output).
Note that U3 counts the E3 scalings performed since the last output bit, and is initialized to 0.
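The rescaling rules E1-E3 can be illustrated with a minimal integer-precision encoder for a static binary source. This is a sketch under stated assumptions, not the article's implementation: p0 is an assumed static probability of symbol 0, and the final two-bit termination is one common convention rather than the EoB strategy described later.

```python
def arithmetic_encode(bits, p0=0.7, p=16):
    """Sketch of integer binary arithmetic encoding with E1/E2/E3 rescaling.

    low, high, u3 mirror Low, High and U3 in the text; W = 2**p.
    """
    W = 1 << p
    half, quarter = W // 2, W // 4
    low, high = 0, W          # current interval [low, high)
    u3 = 0                    # pending E3 rescalings
    out = []

    def emit(b):
        nonlocal u3
        out.append(b)
        out.extend([1 - b] * u3)   # flush U3 opposite bits
        u3 = 0

    for s in bits:
        width = high - low
        # split point proportional to p0; clamped so both halves stay nonempty
        split = low + max(1, min(width - 1, int(width * p0)))
        if s == 0:
            high = split           # symbol 0 takes the lower sub-interval
        else:
            low = split
        while True:
            if high <= half:                              # E1
                emit(0)
            elif low >= half:                             # E2
                emit(1)
                low -= half
                high -= half
            elif low >= quarter and high <= 3 * quarter:  # E3
                u3 += 1
                low -= quarter
                high -= quarter
            else:
                break
            low *= 2               # doubling step shared by E1/E2/E3
            high *= 2

    # termination: one disambiguating bit plus pending E3 bits (a convention)
    u3 += 1
    emit(0 if low < quarter else 1)
    return out
```

For a skewed source the output is much shorter than the input, as expected from the fractional bits-per-symbol property mentioned earlier.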
The described AC is based on the binary source statistics, and it is essential that, for each encoded symbol index i, the encoder and the decoder use the same probabilities p0 and p1. Static arithmetic coding assumes that the source statistics are transmitted to the decoder without error, which costs additional bits and hence some compression loss. This drawback is avoided with adaptive arithmetic coding, where p0 and p1 are initialized to 0.5 and updated after every symbol encoding step. Such a scheme induces no noticeable compression loss when long source-symbol sequences are used. In the following, we address a new soft-input decoding scheme that applies to both adaptive and static ACs.
To manage AC sensitivity to errors, the use of an extra symbol μ with probability ε > 0 to detect transmission errors has been proposed. This symbol is introduced into the source alphabet but never transmitted. The forbidden symbol technique implies a reduction of the coding space by a factor of (1 - ε) and thus reduces compression efficiency; the added rate redundancy is R_ac = -log2(1 - ε) bits/symbol. In the presence of a transmission error, due to the low resynchronization probability, the decoder will encounter a forbidden symbol after a delay that is inversely proportional to ε.
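The redundancy formula above is easy to evaluate numerically; the following helper (illustrative, not from the article) makes the rate/detection trade-off concrete:

```python
import math

def forbidden_symbol_redundancy(eps):
    """Rate redundancy R_ac = -log2(1 - eps) bits/symbol added by reserving
    probability eps for the forbidden symbol (formula from the text)."""
    return -math.log2(1.0 - eps)
```

For instance, ε = 0.2 costs about 0.32 bits/symbol, while ε = 0.5 would cost a full bit per symbol; larger ε shortens the error-detection delay at the price of more redundancy.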
3 Low-complexity soft-input arithmetic decoding method
3.1 Soft-input arithmetic decoding
The considered system consists of a finite-alphabet source, a classical arithmetic encoder, an Additive White Gaussian Noise (AWGN) channel and an arithmetic decoder. The source generates packets of L symbols s = (s1,..., s L ). Each packet is then compressed using the arithmetic encoder, and the resulting binary stream b = (b1,..., b l ) is transmitted over an AWGN channel, which delivers the sequence r = (r1,..., r l ). Binary Phase Shift Keying (BPSK) modulation is considered, so r_j can be written as r_j = h_j + n_j, j = 1,..., l, where h_j = sqrt(E_b)(2b_j - 1) is the BPSK-modulated value of b_j, E_b is the energy per information bit, and n_j is a Gaussian noise sample with zero mean and variance σ^2.
where b^(k) is the bit-stream resulting from the arithmetic encoding of the source-symbol sequence s^(k), and d_E(x, y) denotes the Euclidean distance between x and y.
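The channel model and the Euclidean distance underlying the decoding metric can be sketched as follows. This is an illustrative sketch: the E_b and σ values are assumptions, and metric (7) also involves a priori source terms not reproduced here.

```python
import math
import random

def bpsk(bits, Eb=1.0):
    """Map bit b_j to h_j = sqrt(Eb) * (2*b_j - 1), as in the system model."""
    return [math.sqrt(Eb) * (2 * b - 1) for b in bits]

def awgn(x, sigma, rng):
    """Add zero-mean Gaussian noise of standard deviation sigma."""
    return [v + rng.gauss(0.0, sigma) for v in x]

def euclidean_metric(r, bits, Eb=1.0):
    """Squared Euclidean distance d_E(r, h)^2 between the received vector r
    and the BPSK image of a candidate bit-stream."""
    return sum((rj - hj) ** 2 for rj, hj in zip(r, bpsk(bits, Eb)))
```

A candidate equal to the transmitted stream has zero distance to the noiseless channel output, and at moderate noise levels it remains closer to r than any distant competitor.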
The exhaustive decoding approach calculates the metric M_k for all possible pairs (s^(k), b^(k)) in order to select the best binary stream. In this study, we assume that the source-symbol sequence length L and the compressed bit-stream length l are known at the decoder side. This information is used to reject invalid binary sequences: the search space is limited to streams of l bits that yield exactly L symbols by arithmetic decoding. Evaluating the MAP metric (7) for all possible candidates is clearly infeasible. On the one hand, for practical values of l, it is impossible to store all combinations of l bits in order to select the length-valid sequences; on the other hand, evaluating all valid sequences entails a huge decoding delay because of the large cardinality of the search space.
Thus, a suboptimal decoding algorithm is necessary to reduce the size of the search space. In the following, we describe a suboptimal soft-input arithmetic decoder inspired by the Chase algorithm, which was initially proposed for soft-input decoding of linear block codes.
3.2 The proposed decoding algorithm
The proposed decoding algorithm aims to find the compressed bit-stream that is length-valid and whose corresponding decoded sequence has the best metric M_k. We use a Chase II-type algorithm in order to achieve low-complexity, suboptimal soft-input arithmetic decoding: complexity is reduced by restricting the search space to the Q most probable test sequences.
We recall that the arithmetic decoding machine is a recursive procedure that terminates when all l bits have been processed or when L symbols have been obtained in the decoded sequence. The proposed decoder, however, must use the information about L to detect erroneous sequences and improve decoding performance. To address this requirement, a proper AC termination strategy is implemented: the arithmetic encoder terminates each input sequence with an End-of-Block (EoB) symbol. The same rule is enforced at the decoder, and only sequences that decode exactly L symbols and whose EoB symbol is confirmed by the last two bits are considered correct. This supplementary error detection tool improves the arithmetic decoder performance since it reduces the size of the search space and, consequently, increases the Hamming distance between candidates.
Evaluate the hard-decision vector y = (y1,..., y l ) and the corresponding reliability vector Λ = (|r1|,..., |r l |),
determine the positions of the q least reliable binary elements of y based on Λ,
form the test patterns t i , 0 < i ≤ Q, where Q = 2^q: all l-element binary vectors t i = (t i,1 ,..., t i,l ) of weight at most q, covering all possible bit combinations in the least reliable positions,
form the test sequences z i , 0 < i ≤ Q, with z i,j = y j ⊕ t i,j for 0 < j ≤ l, where ⊕ is the XOR function,
decode all the test sequences z i using classical arithmetic decoding. If a sequence z i decodes exactly L symbols and its EoB symbol is correct, we compute its metric using (7) and append it to the subset ψ of competing valid sequences.
Finally, the decoded bit-stream corresponds to the sequence having the best metric M k in the subset ψ.
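The steps above can be sketched as follows. This is a minimal illustration: `hard_decode` and `metric` are hypothetical callbacks standing in for the classical arithmetic decoder with the length/EoB validity checks and for metric (7), respectively.

```python
import itertools

def chase_like_decode(r, q, hard_decode, metric):
    """Chase-like search: flip the q least reliable bits in all 2**q ways,
    keep length-valid candidates, return the one with the best (lowest) metric.

    r           -- soft channel outputs (positive values map to bit 1)
    hard_decode -- returns a decoded symbol sequence, or None if invalid
    metric      -- metric(r, bits): lower is better
    """
    l = len(r)
    y = [1 if rj > 0 else 0 for rj in r]            # hard decisions
    order = sorted(range(l), key=lambda j: abs(r[j]))
    weak = order[:q]                                # q least reliable positions
    best, best_m = None, float("inf")
    # 2**q test patterns: all bit combinations in the weak positions
    for flips in itertools.product((0, 1), repeat=q):
        z = y[:]
        for pos, f in zip(weak, flips):
            z[pos] ^= f
        if hard_decode(z) is None:                  # not length-valid: discard
            continue
        m = metric(r, z)
        if m < best_m:
            best, best_m = z, m
    return best                                      # None if no valid candidate
```

With a toy validity rule (even parity standing in for the length/EoB check), flipping the single least reliable bit recovers a corrupted codeword, mirroring the q = 1 case discussed below.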
3.3 Chase-like arithmetic decoding performance
Simulations consider a memoryless quaternary source with entropy equal to 1 and symbol probabilities p k .
Figure 2 shows that the proposed soft-input arithmetic decoder improves on the classical arithmetic decoder. With a single additional AC decoding test sequence (q = 1 bit), we achieve a gain of 1.2 dB at a PER of 10^-3. Increasing the test pattern weight q further improves performance, with gains reaching 2 dB at a PER of 10^-2.
We also notice that for low to medium signal-to-noise ratios, increasing q yields a remarkable performance improvement. For high values of E_b/N_0, a low value of q is sufficient to achieve the maximum performance; in this case, increasing q only increases complexity without improving performance. In the following, we use q = 4 bits, since it represents a good trade-off between performance and complexity: for a complexity of 16 hard AC decoding operations, q = 4 bits yields an improvement of 1.6 dB at a PER of 10^-3.
3.4 Comparison with trellis-based arithmetic decoding
In this section, we compare the proposed soft-input arithmetic decoder with a trellis-based arithmetic decoding scheme using the forbidden symbol technique with probability ε.
The simulation results show that for a very noisy channel (E_b/N_0 < 7 dB) the trellis-based arithmetic decoding with ε = 0.2 is more efficient than the Chase-like arithmetic decoding. For medium to high E_b/N_0 values, however, the Chase-like soft-input arithmetic decoder presents the best performance, exhibiting a considerable gain of about 1.1 dB over the best configuration of the trellis-based Viterbi decoding with ε = 0.2.
This gain is essentially due to the additional information used by the proposed Chase-like algorithm. In the trellis proposed by Ben-Jamaa et al., all binary paths of length l are valid candidates; the constraint on L is not considered. In contrast, all candidates considered by Chase-like decoding are length-valid sequences, which results in a greater Hamming distance between competitors and, consequently, better performance.
On the other hand, the Chase-like arithmetic decoding complexity remains constant as the source sequence length L and the alphabet cardinality U increase: it can be approximated by 2^q classical arithmetic decoding operations and depends only on q. In contrast, the number of states representing the trellis-based arithmetic encoding machine grows with L and U. Furthermore, the trellis construction requires the transmission of the source statistics as side information; consequently, trellis-based decoding is very hard to apply with adaptive ACs (the trellis changes with the symbol probabilities). The proposed soft-input arithmetic decoder is very simple and can easily be extended to adaptive context-based ACs.
4 Iterative decoding method for the serially concatenated ACs
In the previous section, we showed that the proposed soft-input arithmetic decoder benefits from the bit reliabilities at the channel output, which reduces the PER with respect to classical arithmetic decoding. In this section, the decoder is modified to provide additional information on the reliability of the compressed bit-stream components, and is then applied in an iterative decoding scheme composed of an AC and an RSCC. The system performance is evaluated in terms of PER.
4.1 Concatenated AC and RSCC transmission system description
At the receiver, we apply iterative decoding based on information exchange between a low-complexity SISO Chase-like arithmetic decoder, detailed below, and the RSCC decoder using the optimal MAP algorithm. The performance of the iterative decoding scheme involving the Chase-like algorithm is evaluated and compared to tandem decoding results. As mentioned, major JSC iterative decoding contributions consider trellis-based algorithms for SISO arithmetic decoding. To evaluate the efficiency of our decoder with respect to such schemes, a comparison to an iterative decoding scheme with a trellis-based arithmetic decoder is proposed. The reference decoder was presented in [17, 18], where the authors used a bi-dimensional bit-clock trellis to model the arithmetic encoding machine and a modified SOVA [22] algorithm to generate soft bit-reliability estimates.
4.2 Low-complexity SISO arithmetic decoding
As mentioned in the previous section, the proposed soft-input arithmetic decoder does not use a finite-state machine to model the AC; thus, the BCJR or SOVA algorithms are not applicable. The main idea is not to compute exact a posteriori LLRs for the bits, but to define a bit-reliability factor.
Contributions dealing with channel decoding often assume an AWGN channel and equal a priori probabilities P(b j = 1) = P(b j = 0), and use the expression given in Equation (8). However, in the case of iterative channel and arithmetic decoding, the components of the decoded sequence do not all have the same reliability, and the a priori term can consequently be refined.
We have seen in the previous section that the proposed Chase-like arithmetic decoder computes the J (J ≤ Q) most likely length-valid compressed bit-streams. The decoded sequence corresponds to the best of the J candidates according to the metric given in Equation (7). Clearly, the positions j where all candidate sequences carry the same bit are the most reliable. In the following, we treat such bits as reliably decoded and assign them a constant extrinsic information ±β. Note that a similar reliability definition was proposed for SISO decoding of linear block codes, and that the value of β is determined experimentally. All other bits are considered unreliable and are assigned an extrinsic information equal to zero. It is worth noticing that in some cases, especially over a relatively noisy channel, the Chase-like arithmetic decoder may not find a length-valid codeword among the Q test sequences. In this case, we fall back to a default decoding rule and assign an extrinsic information of zero to all components of the output.
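The agreement rule described above can be sketched as follows (an illustrative sketch; the sign convention maps an agreed bit 1 to +β and an agreed bit 0 to -β, and the empty-candidate case corresponds to the fallback in which all extrinsic values are zero):

```python
def extrinsic_info(candidates, beta):
    """Assign +/-beta extrinsic information to bit positions where all J
    candidate length-valid bit-streams agree, and 0 where they disagree.

    candidates -- list of equal-length bit lists (the J survivors)
    Returns None if no valid candidate exists (caller falls back to zeros).
    """
    if not candidates:
        return None
    l = len(candidates[0])
    ext = []
    for j in range(l):
        bits = {c[j] for c in candidates}
        if len(bits) == 1:              # all candidates agree: reliable bit
            b = bits.pop()
            ext.append(beta if b == 1 else -beta)
        else:                           # ambiguous bit: no extrinsic info
            ext.append(0.0)
    return ext
```

With two candidates agreeing on the first and last bits only, the middle position carries zero extrinsic information, exactly as the rule prescribes.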
4.3 Iterative arithmetic decoding performance
Simulation results show that significant gains are obtained with respect to the tandem scheme for all the considered decoding algorithms. However, the gain varies with the channel signal-to-noise ratio E_b/N_0. For low values, the Chase-like arithmetic decoder is less efficient than the trellis-based decoder with ε = 0.2; for medium to high values, the Chase-like algorithm performs better than the best configuration of trellis-based decoding using the modified SOVA algorithm. At a PER of 10^-3, an improvement of 0.3 dB in favor of the List-SOVA algorithm is obtained compared to Chase-like decoding.
Note that the main advantage of Chase-like decoding is that its complexity remains constant as the source sequence length L and the alphabet cardinality U increase. As mentioned in the previous section, the decoding complexity is determined by the number of hard decoding operations, 2^q. In the evaluated scheme we have q = 4 bits, so 16 classical arithmetic decoding operations are required, which is a reasonable complexity. Moreover, the Chase-like algorithm is very simple and can easily be implemented for different types of ACs, such as the contextual adaptive AC, whereas the trellis-based arithmetic decoder is very hard to implement for such ACs.
In the next section, we validate the advantages of Chase-like arithmetic decoding in the context of an image transmission system using the JPEG 2000 standard, which relies on a contextual adaptive AC.
5 Application to JPEG 2000 coded images transmitted over a noisy channel
The objective of this section is to improve the performance of an image transmission system using the JPEG 2000 standard for compression and a convolutional code for channel coding.
The considered coding scheme uses the JPEG 2000 encoder, which compresses the source image at Ds bits per pixel (bpp). The 9/7 filters are used for the wavelet transform and the number of resolution levels is equal to five. The wavelet domain is divided into rectangular regions called code-blocks. The JPEG 2000 encoding machine defines multiple quality layers that allow image reconstruction at different rates (scalability); in the following experiment, we consider a single quality layer for simplicity, although the scheme generalizes to multiple quality layers. The values Ds = 0.4 bpp and Ds = 1 bpp are considered. The bit-stream generated by the JPEG 2000 encoder is composed of headers describing the coding parameters followed by a sequence of packets containing the encoded data; we assume that the header data are transmitted without error. JPEG 2000 uses a context-based binary adaptive AC called the MQ coder, and the code-blocks are independently encoded by the AC.
The experimental setup is as follows. The 512 × 512 test image Lena, initially coded at 8 bpp, is considered. By analogy with the system proposed in the previous section, each bit-stream resulting from the compression of P = 4 code-blocks forms the message b. The latter is scrambled and then coded by an 8-state RSCC. Finally, the coded image is transmitted over an AWGN channel using BPSK modulation. In our simulations, we used an open-source implementation of JPEG 2000 called OpenJPEG; more details about the implementation are available at http://www.openjpeg.org.
At the receiver side, we apply the iterative decoding described in the previous section, based on information transfer between the JPEG 2000 decoder and the channel decoder. The JPEG 2000 decoder uses the proposed SISO arithmetic decoder described in Section 4.2, with the test pattern weight q fixed to 4 bits. The value of the extrinsic information β delivered by the SISO arithmetic decoder is optimized experimentally; values of β = 2.5 and β = 4.0 were used for the two rate configurations. The BCJR algorithm is applied for RSCC decoding with soft inputs and outputs.
Remarkable gains are obtained in terms of average PSNR. For Ds = 0.4 bpp, the proposed algorithm exhibits a PSNR gain of 5 dB, and iterative decoding yields a significant gain of 6.2 dB over the tandem scheme. Moreover, at the same bit rate, the studied system reaches an average PSNR of approximately 35 dB at an E_b/N_0 lower than that required by the tandem scheme; iterative decoding thus allows a gain of 1.5 dB in terms of E_b/N_0.
For Ds = 1 bpp, an average PSNR gain of 3.8 dB with respect to the tandem scheme is obtained at the first iteration; this gain increases over the iterations to reach 8.5 dB. Moreover, the average PSNR obtained at the third iteration is approximately 39 dB, a value reached by the tandem scheme only at a higher E_b/N_0; iterative decoding thus enables a gain of 1.5 dB in terms of E_b/N_0.
In this article, we have proposed novel low-complexity decoding algorithms for ACs based on the Chase algorithm. The schemes were tested for transmission over an AWGN channel with BPSK signaling and show several significant advantages. First, the soft-input arithmetic decoder achieves good error correction performance, has low complexity and can easily be extended to adaptive ACs, unlike trellis-based arithmetic decoders. Second, the Chase-like algorithm can be slightly modified to generate additional information on the reliability of the decoded bits, which enables iterative decoding of the serial concatenation of an AC with a channel code. As a second experiment, the concatenation of an AC with an RSCC was considered and iterative decoding results were investigated; the scheme is a JSC decoding approach in which the AC embeds compression and error correction in a single stage. Simulation results show significant performance improvements when compared to tandem decoding and to our previous iterative decoding scheme using a trellis-based SISO arithmetic decoder. Moreover, the presented iterative system has been profitably exploited for JPEG 2000 image transmission, where improvements in terms of average PSNR and visual quality were observed compared to standard JPEG 2000 decoding.
9. Elmasry G, Shi Y: MAP symbol decoding of arithmetic coding with embedded channel coding. In Proceedings of the IEEE Wireless Communications and Networking Conference, New Orleans, LA, USA, 1999; 2:988-992.
12. Sayir J: Arithmetic coding for noisy channels. In Proceedings of the IEEE Information Theory Workshop, Kruger National Park, South Africa, 1999; 69-71.
14. Dongsheng B, Hoffman W, Sayood K: State machine interpretation of arithmetic codes for joint source and channel coding. In Proceedings of the IEEE Data Compression Conference, Snowbird, UT, USA, 2006; 143-152.
17. Zribi A, Zaibi S, Pyndiah R, Bouallegue A: Low-complexity joint source/channel turbo decoding of arithmetic codes with image transmission application. In Proceedings of the IEEE Data Compression Conference, Snowbird, UT, USA, 2009; 472.
18. Zribi A, Zaibi S, Pyndiah R, Bouallegue A: Low-complexity joint source/channel turbo decoding of arithmetic codes. In Proceedings of the International Symposium on Turbo Codes and Related Topics, Lausanne, Switzerland, 2009; 385-389.
22. Hagenauer J, Hoeher P: A Viterbi algorithm with soft-decision outputs and its applications. In Proceedings of the IEEE GLOBECOM, Dallas, TX, USA, 1989; 11-17.