# Distributed joint source-channel coding for relay systems exploiting source-relay correlation and source memory


## Abstract

In this article, we propose a distributed joint source-channel coding (DJSCC) technique that simultaneously exploits the source-relay correlation and the source memory structure for transmitting binary Markov sources in a one-way relay system. The relay only *extracts* and forwards the source message to the destination, which implies imperfect decoding at the relay. The probability of the errors occurring in the source-relay link can be regarded as the source-relay correlation, which can be estimated at the destination node and utilized in the iterative processing. In addition, the memory structure of the Markov source is also utilized at the destination. A modified version of the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm is derived to exploit the memory structure of the Markov source. Extrinsic information transfer (EXIT) chart analysis is then performed to investigate the convergence property of the proposed technique. Results of simulations conducted to evaluate the bit-error-rate (BER) performance, together with the EXIT chart analysis, show that, by exploiting the source-relay correlation and the source memory simultaneously, our proposed technique achieves a significant performance gain compared with the case where the correlation knowledge is not fully used.

## Keywords

Relay Node · Convolutional Code · Source Memory · Relay System · Extrinsic Information

## Introduction

Wireless mesh and/or sensor networks consisting of a great number of low-power wireless nodes (e.g., small relays and/or micro cameras) have attracted a lot of attention, and a variety of their potential applications has been considered recently [1]. The fundamental challenge of wireless mesh and/or sensor networks is how energy- and spectrum-efficiently, as well as how reliably, multiple sources can transmit their originating information to multiple destinations. However, such multi-terminal systems face two practical limitations: (1) the wireless channel suffers from various impairments, such as interference, distortion, and/or deep fading; (2) the signal processing complexity as well as the transmit power has to be kept as low as possible due to the power, bandwidth, and/or size restrictions of the wireless nodes.

Cooperative communication techniques provide a potential solution to the problems described above, owing to the transmit diversity they offer for fading mitigation [2]. One simple form of cooperative wireless communication is a single relay system, which consists of one source, one relay, and one destination. The role of the relay is to provide an alternative communication route for transmission, hence improving the probability of successful reception of the source information sequence at the destination. In this relay system, the information sent from the source and relay nodes is correlated, which in this article is referred to as source-relay correlation. Furthermore, the information collected at the source node contains a memory structure, according to the dynamics that govern the temporal behavior of the originator (or sensing target). The source-relay correlation and the memory structure of the transmitted data can be regarded as redundant information, which can be used for source compression and/or error correction in distributed joint source-channel coding (DJSCC).

There are many excellent coding schemes that achieve efficient node-cooperative communications, such as [3, 4], where the decode-and-forward (DF) relay strategy is adopted and the source-relay link is assumed to be error free. In practice, when the signal-to-noise ratio (SNR) of the source-relay link falls below a certain threshold, successful decoding at the relay may become impossible. Besides, to completely correct the errors at the relay, strong codes such as turbo codes or low-density parity-check (LDPC) codes with iterative decoding are required, which imposes a heavy computational burden on the relay. As a result, several coding strategies assuming that the relay cannot always correctly decode the information from the source have been presented in [5, 6, 7].

Joint source-channel coding (JSCC) has been widely used to exploit the memory structure inherent in the source information sequence. In the majority of approaches to JSCC design, a variable-length code (VLC) is employed as the source encoder, and the implicit residual redundancy remaining after source encoding is additionally used for error correction in the decoding process. Some related studies can be found in [8, 9, 10, 11]. There are also studies that focus on exploiting the memory structure of the source directly; e.g., approaches combining a hidden Markov model (HMM) or Markov chain (MC) with the turbo code design framework are presented in [12, 13, 14].

In the schemes mentioned above, the exploitation of the source-relay correlation and that of the source memory structure have been addressed separately. Not much attention has been paid to relay systems exploiting the source-relay correlation and the source memory simultaneously. A similar study can be found in [15], where the memory structure of the source is represented by a very simple model, bit-flipping between the current information sequence and its previous counterpart, which is not reasonable in many practical scenarios. When the source memory has a more generic structure, the problem of code design for relay systems jointly exploiting the source-relay correlation and the source memory structure is still open.

In this article, we propose a new DJSCC scheme for transmitting binary Markov sources in a one-way single relay system, based on [7, 14]. The proposed technique makes efficient use of the source-relay correlation as well as the source memory structure simultaneously to achieve additional coding gain. The rest of this article is organized as follows. Section ‘System model’ introduces the system model. The proposed decoding algorithm is described in Section ‘Proposed decoding scheme’. Section ‘EXIT chart analysis’ shows the results of extrinsic information transfer (EXIT) chart analysis conducted to evaluate the convergence property of the proposed system. Section ‘Convergence analysis and BER performance evaluation’ shows the bit-error-rate (BER) performance of the system based on EXIT chart analysis. The simulation results for image transmission using the proposed technique are presented in Section ‘Application to image transmission’. Finally, conclusions are drawn in Section ‘Conclusion’ with some remarks.

## System model

### One-way single relay system

In this article, a single-source single-relay system is considered where all links are assumed to suffer from Additive White Gaussian Noise (AWGN). The relay system operates in a half-duplex mode. During the first time interval, the source node broadcasts the signal to both the relay and destination nodes. After receiving signals from the source, the relay *extracts* the data even though it may contain some errors, re-encodes, and then transmits the extracted data to the destination node in the second time interval.

The geometric gain *G*_{ xy } of the link between nodes *x* and *y* can be defined as

$$G_{xy} = \left(\frac{d_{sd}}{d_{xy}}\right)^{l},$$

where *d*_{ xy } denotes the distance of the link between nodes *x* and *y*. The path-loss exponent *l* is empirically set at 3.52 [4]. Note that the geometric gain of the source-destination link *G*_{ sd } is normalized to 1 without loss of generality.

The signals received at the relay and the destination can be expressed as

$$\mathbf{y}_{sr} = \sqrt{G_{sr}}\,\mathbf{x} + \mathbf{n}_{r}, \qquad \mathbf{y}_{sd} = \mathbf{x} + \mathbf{n}_{d}, \qquad \mathbf{y}_{rd} = \sqrt{G_{rd}}\,\mathbf{x}_{r} + \mathbf{n}_{d},$$

where **x** and **x**_{ r } represent the symbol vectors transmitted from the source and the relay, respectively. Notations **n**_{ r } and **n**_{ d } represent the zero-mean AWGN noise vectors at the relay and the destination, with variances ${\sigma}_{r}^{2}$ and ${\sigma}_{d}^{2}$, respectively. The SNRs of the source-relay and relay-destination links for the three relay location scenarios shown in Figure 1 can be determined as: for location A, SNR_{ sr } = SNR_{ rd } = SNR_{ sd }; for location B, SNR_{ sr } = SNR_{ sd } + 21.19 dB and SNR_{ rd } = SNR_{ sd } + 4.4 dB; for location C, SNR_{ sr } = SNR_{ sd } + 4.4 dB and SNR_{ rd } = SNR_{ sd } + 21.19 dB.
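As a numerical sanity check of the stated SNR offsets, the geometric-gain relation can be evaluated directly. The relay placement used below (the relay at one quarter of the source-destination distance for location B) is our assumption, chosen because it reproduces the 21.19 dB and 4.4 dB figures:

```python
import math

PATH_LOSS_EXP = 3.52  # path-loss exponent l used in the article

def geometric_gain_db(d_sd, d_xy, l=PATH_LOSS_EXP):
    """Geometric gain G_xy = (d_sd / d_xy)**l in dB, with G_sd normalized to 1."""
    return 10.0 * l * math.log10(d_sd / d_xy)

# Hypothetical location-B geometry: relay at 1/4 of the source-destination distance.
print(round(geometric_gain_db(1.0, 0.25), 2))  # source-relay gain: 21.19 dB
print(round(geometric_gain_db(1.0, 0.75), 2))  # relay-destination gain: 4.4 dB
```

Swapping the two distances reproduces the location-C offsets, consistent with the symmetry of the stated SNR values.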

### Source-relay correlation

The source information sequence **u** is first encoded by a recursive systematic convolutional (RSC) code, interleaved by *π*_{ s }, encoded by a doped accumulator (ACC) with a doping rate *K*_{ s } [16], and then modulated using binary phase-shift keying (BPSK) to obtain the coded sequence **x**. After obtaining the received signal **y**_{ sr } from the source, the relay performs the decoding process only once (i.e., no iterative processing at the relay) to retrieve **u**_{ r }, which is used as an estimate of **u**. **u**_{ r } is first interleaved by *π*_{0} and then encoded following the same encoding process as in the originating node, with a doping rate *K*_{ r }, to generate the coded sequence **x**_{ r }.

*Errors may occur between* **u** *and* **u**_{ r }, since no iterative decoding is performed at the relay. Figure 3 compares the BER performance at the relay with that of the cases where iterative decoding is performed; apparently, with more iterations, better BER performance can be achieved at the relay node. However, this advantage becomes negligible in low SNR_{ sr } scenarios. Therefore, in the proposed scheme, the estimate of the source information sequence is simply *extracted* by performing the corresponding channel decoding process just once. Consequently, the relay complexity can be significantly reduced without causing any significant performance degradation, as detailed in Section ‘Proposed decoding scheme’.

The source-relay correlation indicates the correlation between **u** and **u**_{ r }, which can be represented by a bit-flipping model, as shown in Figure 2. **u**_{ r } can be defined as **u**_{ r } = **u** ⊕ **e**, where **e** is an independent binary random variable and ⊕ indicates modulo-2 addition. The correlation between **u** and **u**_{ r } is characterized by *p*_{ e }, where *p*_{ e } = Pr(**e** = 1) = Pr(**u** ≠ **u**_{ r }) [6].
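The bit-flipping model can be sketched in a few lines; the flip rate 0.05 below is an arbitrary illustrative value, not a parameter from the article:

```python
import random

def relay_estimate(u, p_e, rng):
    """Model u_r = u XOR e, where e ~ Bernoulli(p_e) captures the residual
    decoding errors of the source-relay link."""
    return [bit ^ (rng.random() < p_e) for bit in u]

rng = random.Random(0)
u = [rng.randint(0, 1) for _ in range(100000)]
u_r = relay_estimate(u, p_e=0.05, rng=rng)

# The empirical disagreement rate approximates p_e = Pr(u != u_r).
p_hat = sum(a != b for a, b in zip(u, u_r)) / len(u)
print(round(p_hat, 2))  # close to 0.05
```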

### Markov source

Consider an order-1 binary Markov source **u** = *u*_{1}*u*_{2}…*u*_{ t }…, of which the transition matrix is:

$$A = \left[\begin{array}{cc} p_{1} & 1-p_{1} \\ 1-p_{2} & p_{2} \end{array}\right],$$

where *a*_{ i,j } is the transition probability defined by

$$a_{i,j} = \Pr\left(u_{t} = j \mid u_{t-1} = i\right), \quad i, j \in \{0, 1\}.$$

The entropy rate of the source [17] is

$$H(S) = -\sum_{i} \mu_{i} \sum_{j} a_{i,j} \log_{2} a_{i,j},$$

where {*μ*_{ i }} is the stationary state probability. The memory structure of the Markov source can be characterized by the state transition probabilities *p*_{1} and *p*_{2}, 0 < *p*_{1}, *p*_{2} < 1: *p*_{1} = *p*_{2} = 0.5 indicates a memoryless source, while *p*_{1} ≠ 0.5 or *p*_{2} ≠ 0.5, and hence *H*(*S*) < 1, indicates a source with memory.
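The stationary probabilities and the entropy rate of such a source are easy to compute; the sketch below assumes the labeling *p*_{1} = Pr(0→0) and *p*_{2} = Pr(1→1):

```python
import math

def markov_entropy_rate(p1, p2):
    """Entropy rate H(S) of a binary Markov source with transition matrix
    [[p1, 1-p1], [1-p2, p2]], assuming p1 = Pr(0->0) and p2 = Pr(1->1)."""
    # Stationary distribution solves mu = mu * A.
    mu0 = (1 - p2) / ((1 - p1) + (1 - p2))
    mu1 = 1 - mu0

    def row_entropy(p):
        return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

    return mu0 * row_entropy(p1) + mu1 * row_entropy(p2)

# p1 = p2 = 0.5: memoryless source, H(S) = 1 bit;
# p1 = p2 = 0.9: the strongly correlated source used later in the simulations.
print(round(markov_entropy_rate(0.5, 0.5), 2))  # 1.0
print(round(markov_entropy_rate(0.9, 0.9), 2))  # 0.47
```

The values 0.88, 0.72, and 0.47 quoted in the performance sections match this computation for *p*_{1} = *p*_{2} = 0.7, 0.8, and 0.9, respectively.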

## Proposed decoding scheme

Let *D*_{ s } and *D*_{ r } denote the decoders of *C*_{ s } and *C*_{ r }, respectively. In order to exploit the knowledge of the memory structure of the Markov source, the source and *C*_{ s } are treated as a single constituent code. Hence, it is reasonable to represent the code structure by a super trellis combining the trellis diagram of the source and that of *C*_{ s }. A modified version of the BCJR algorithm is derived to jointly perform source and channel decoding over this super trellis at *D*_{ s }. However, *D*_{ r } cannot exploit the source memory due to the additional interleaver *π*_{0}, as shown in Figure 2.

At the destination node, the received signals from the source and the relay are first converted to *log-likelihood ratio* (*LLR*) sequences *L*(**y**_{ sd }) and *L*(**y**_{ rd }), respectively, and then decoded via two horizontal iterations (*HI*), as shown in Figure 4. Then the extrinsic *LLR*s generated by *D*_{ s } and *D*_{ r } in the two *HI*s are further exchanged by several vertical iterations (*VI*) through an *LLR* updating function *f*_{ c }, whose role is detailed in the following section. This process is performed iteratively until the convergence point is reached. Finally, a hard decision is made based on the a posteriori *LLR*s obtained from *D*_{ s }.

### *LLR* updating function

The error probability *p*_{ e } is estimated at the destination using the a posteriori *LLR*s of the uncoded bits, ${L}_{p,{D}_{s}}^{u}$ and ${L}_{p,{D}_{r}}^{u}$, from the decoders *D*_{ s } and *D*_{ r }, as

$${\widehat{p}}_{e} = \frac{1}{N}\sum_{k=1}^{N} \frac{e^{{L}_{p,{D}_{s}}^{u}(k)} + e^{{L}_{p,{D}_{r}}^{u}(k)}}{\left(1 + e^{{L}_{p,{D}_{s}}^{u}(k)}\right)\left(1 + e^{{L}_{p,{D}_{r}}^{u}(k)}\right)},$$

where *N* indicates the number of the a posteriori *LLR* pairs from the two decoders with sufficient reliability. Only the *LLR* s with their absolute values larger than a given threshold can be used in calculating${\widehat{p}}_{e}$.
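A sketch of such an estimator, under the assumption (made here for illustration) that the two a posteriori *LLR*s of a bit pair are independent, so that the pairwise disagreement probability factorizes; the function name and threshold default are ours:

```python
import math

def estimate_pe(llr_s, llr_r, threshold=1.0):
    """Estimate p_e from pairs of a posteriori LLRs (one from D_s, one from D_r).
    Only pairs in which both LLRs exceed the reliability threshold are used.
    The pairwise disagreement probability (e^L1 + e^L2) / ((1+e^L1)(1+e^L2))
    is invariant to the LLR sign convention."""
    acc, n = 0.0, 0
    for l1, l2 in zip(llr_s, llr_r):
        if abs(l1) > threshold and abs(l2) > threshold:
            acc += ((math.exp(l1) + math.exp(l2)) /
                    ((1 + math.exp(l1)) * (1 + math.exp(l2))))
            n += 1
    return acc / n if n else 0.5  # no reliable pairs: fall back to "no knowledge"

# Two strongly reliable, agreeing LLRs imply a flip probability near zero.
print(estimate_pe([8.0], [8.0]) < 0.01)  # True
```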

The probability of each bit of **u** can be updated from **u**_{ r } as

$$\Pr\left({u}^{k} = 1\right) = \left(1 - {\widehat{p}}_{e}\right)\Pr\left({u}_{r}^{k} = 1\right) + {\widehat{p}}_{e}\Pr\left({u}_{r}^{k} = 0\right),$$

where *u*^{ k } and ${u}_{r}^{k}$ denote the *k*th elements of **u** and **u**_{ r }, respectively. This leads to the *LLR* updating function [6] for **u**:

$$L\left({u}^{k}\right) = \ln\frac{\left(1 - {\widehat{p}}_{e}\right){e}^{L\left({u}_{r}^{k}\right)} + {\widehat{p}}_{e}}{\left(1 - {\widehat{p}}_{e}\right) + {\widehat{p}}_{e}\,{e}^{L\left({u}_{r}^{k}\right)}}.$$

Similarly, the *LLR* updating function for **u**_{ r } can be expressed as:

$$L\left({u}_{r}^{k}\right) = \ln\frac{\left(1 - {\widehat{p}}_{e}\right){e}^{L\left({u}^{k}\right)} + {\widehat{p}}_{e}}{\left(1 - {\widehat{p}}_{e}\right) + {\widehat{p}}_{e}\,{e}^{L\left({u}^{k}\right)}}.$$

The generic form of the *LLR* updating function *f*_{ c }, as shown in Figure 4, is given as

$${f}_{c}(L) = \ln\frac{\left(1 - {\widehat{p}}_{e}\right){e}^{L} + {\widehat{p}}_{e}}{\left(1 - {\widehat{p}}_{e}\right) + {\widehat{p}}_{e}\,{e}^{L}},$$

where *L* denotes the input a priori *LLR*s. The output of *f*_{ c } is the updated *LLR*s obtained by exploiting ${\widehat{p}}_{e}$ as the source-relay correlation. The *VI* operations of the proposed decoder can be expressed as

$${\mathbf{L}}_{{\mathbf{a},D}_{\mathbf{s}}}^{\mathbf{u}} = {f}_{c}\left({\pi}_{0}^{-1}\left({\mathbf{L}}_{{\mathbf{e},D}_{\mathbf{r}}}^{\mathbf{u}}\right)\right), \qquad {\mathbf{L}}_{{\mathbf{a},D}_{\mathbf{r}}}^{\mathbf{u}} = {f}_{c}\left({\pi}_{0}\left({\mathbf{L}}_{{\mathbf{e},D}_{\mathbf{s}}}^{\mathbf{u}}\right)\right),$$

where *π*_{0}(·) and${\pi}_{0}^{-1}(\xb7)$ denote interleaving and de-interleaving functions corresponding to *π*_{0}, respectively.${\mathbf{L}}_{{\mathbf{a},D}_{\mathbf{s}}}^{\mathbf{u}}$ and${\mathbf{L}}_{{\mathbf{e},D}_{\mathbf{s}}}^{\mathbf{u}}$ denote the a priori *LLR* s fed into, and extrinsic *LLR* s generated by *D*_{ s }, respectively, both for the uncoded bits. Similar definitions should apply to${\mathbf{L}}_{{\mathbf{a},D}_{\mathbf{r}}}^{\mathbf{u}}$ and${\mathbf{L}}_{{\mathbf{e},D}_{\mathbf{r}}}^{\mathbf{u}}$ for *D*_{ r }.
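The correlation-based *LLR* attenuation of [6] performed by *f*_{ c } can be sketched as follows; the two limiting cases in the comments follow directly from the closed form:

```python
import math

def f_c(llr, pe_hat):
    """LLR updating function f_c: attenuates an LLR according to the estimated
    source-relay flip probability pe_hat before it is passed to the other
    decoder as a priori information."""
    num = (1 - pe_hat) * math.exp(llr) + pe_hat
    den = (1 - pe_hat) + pe_hat * math.exp(llr)
    return math.log(num / den)

# pe_hat = 0 (error-free relay decoding): the LLR passes through unchanged.
# pe_hat = 0.5 (no correlation): the output collapses to 0, i.e., no information.
print(round(f_c(4.0, 0.0), 6))  # 4.0
print(round(f_c(4.0, 0.5), 6))  # 0.0
```

For intermediate values of ${\widehat{p}}_{e}$, the magnitude of the *LLR* is shrunk while its sign is preserved, which matches the role of *f*_{ c } in the *VI* loop.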

### Joint decoding of Markov source and channel encoder *C*_{ s }

#### Representation of super trellis

Assume *C*_{ s } is a convolutional code with memory length *v*. There are 2^{ v } states in the trellis diagram of this code, which are indexed by *m*, *m* = 0,1,…,2^{ v }−1. The state of *C*_{ s } at the time index *t* is denoted as ${S}_{t}^{c}$. Similarly, there are two states in an order-1 binary Markov source, and the state at the time index *t* is denoted as ${S}_{t}^{s}$, with ${S}_{t}^{s}\in \{0,1\}$. For the binary Markov model described in Section ‘System model’, the source model and its corresponding trellis diagram are illustrated in Figure 5a. The output value at a time instant *t* from the source is the same as the state value of ${S}_{t}^{s}$. The trellis branches represent the state transitions, whose probabilities have been defined by (6). On the other hand, for *C*_{ s }, the branches in its trellis diagram indicate input/output characteristics.

At time instant *t*, the state of the source and the state of *C*_{ s } can be regarded as a new state $({S}_{t}^{s},{S}_{t}^{c})$, which leads to the super trellis diagram. A simple example of combining a binary Markov source with a recursive systematic convolutional (RSC) code with generator polynomials (*G*_{ r },*G*) = (3,2)_{8} is depicted in Figure 5. At each state $({S}_{t}^{s},{S}_{t}^{c})$, the input to the outer encoder is determined, given the state of the Markov source. In fact, the new trellis branches can be regarded as combinations of the branches of the Markov source and those of the trellis diagram of *C*_{ s }. Hence, the new trellis branches represent both the state transition probabilities of the Markov source and the input/output characteristics of *C*_{ s } defined in its trellis diagram.

It should be noted that a drawback of this approach is the exponentially growing number of states in the super trellis. However, if *C*_{ s } is only a short-memory convolutional code, the complexity increase is due mainly to the number of Markov source states. In fact, it is shown in Section ‘Convergence analysis and BER performance evaluation’ that even a memory-1 code used as *C*_{ s } can achieve excellent performance. Therefore, the complexity is largely an issue of source modeling, depending on the application.
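The construction above can be sketched by enumerating the combined states. The RSC state update below (next state = input XOR current state, our reading of the recursive polynomial 1 + *D* in (*G*_{ r },*G*) = (3,2)_{8}) is an assumption, as are the transition values *p*_{1} = *p*_{2} = 0.8:

```python
from itertools import product

P1, P2 = 0.8, 0.8  # assumed Markov parameters Pr(0->0) and Pr(1->1)

def markov_prob(prev_bit, bit):
    """Branch probability contributed by the Markov source."""
    if prev_bit == 0:
        return P1 if bit == 0 else 1 - P1
    return P2 if bit == 1 else 1 - P2

def rsc_next_state(state, bit):
    """Assumed memory-1 RSC state update (feedback polynomial 1 + D)."""
    return state ^ bit

# Super-trellis branches: ((S^s, S^c), (S'^s, S'^c), transition probability).
# The encoder input equals the next Markov state S'^s.
branches = []
for s_src, s_code in product((0, 1), (0, 1)):
    for u in (0, 1):
        branches.append(((s_src, s_code),
                         (u, rsc_next_state(s_code, u)),
                         markov_prob(s_src, u)))

print(len(branches))  # 8 branches: 4 super states x 2 inputs
```

This illustrates the complexity remark: the super trellis has 2 × 2^{ v } states, so a memory-1 code only doubles the state count relative to the Markov source alone.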

#### Modified BCJR algorithm for super trellis

In this section, we modify the standard BCJR algorithm [18] for the decoding performed over the super trellis constructed in the previous section. Here, we momentarily ignore the serially concatenated structure and focus only on the decoding process performed over the super trellis diagram. For a convolutional code with memory length *v*, there are 2^{ v } states in its trellis diagram, indexed by *m*, *m* = 0,1,…,2^{ v }−1. The input sequence to the encoder, **u** = *u*_{1}*u*_{2}…*u*_{ t }…*u*_{ L }, which is also a series of the states of the Markov source, is assumed to have length *L*. The output of the encoder is denoted as **x** = {**x**^{c 1},**x**^{c 2}}. The coded binary sequence is BPSK mapped and then transmitted over AWGN channels. The received signal is a noise-corrupted version of the BPSK-mapped sequence, denoted as **y** = {**y**^{c 1},**y**^{c 2}}. The received sequence from the time index *t*_{1} to *t*_{2} is denoted as ${\mathbf{y}}_{{t}_{1}}^{{t}_{2}}={\mathbf{y}}_{{t}_{1}},{\mathbf{y}}_{{t}_{1}+1},\dots ,{\mathbf{y}}_{{t}_{2}}$.

The modified BCJR algorithm computes the a posteriori *log-likelihood ratio (LLR)* of the coded bits $\left\{{x}_{t}^{c1}\right\}$, based on the whole received sequence ${\mathbf{y}}_{1}^{L}$, which is defined by

$$L\left({x}_{t}^{c1}\right) = \ln\frac{\sum_{(i,m)\in {B}_{t}^{1}}{\alpha}_{t}(i,m)\,{\beta}_{t}(i,m)}{\sum_{(i,m)\in {B}_{t}^{0}}{\alpha}_{t}(i,m)\,{\beta}_{t}(i,m)},$$

where ${B}_{t}^{k}$ denotes the set of states $\left\{\left({S}_{t}^{s}=i,{S}_{t}^{c}=m\right)\right\}$ yielding the systematic output ${x}_{t}^{c1}$ of *C*_{ s } being *k*, *k* = 0,1.

In the modified algorithm, the quantities *α*_{ t }(*i*,*m*), *β*_{ t }(*i*,*m*), and *γ*_{ t }(**y**_{ t },*i*^{′},*m*^{′},*i*,*m*) are functions of both the output of the Markov source and the states in the trellis diagram of *C*_{ s }. More specifically, *γ*_{ t }(**y**_{ t },*i*^{′},*m*^{′},*i*,*m*) represents the information of the input/output relationship corresponding to the state transition *S*_{t−1} = *m*^{′} → *S*_{ t } = *m* specified by the trellis diagram of *C*_{ s }, as well as the state transition probabilities of the Markov source. Therefore, *γ* can be decomposed as

$${\gamma}_{t}\left({\mathbf{y}}_{t},{i}^{\prime},{m}^{\prime},i,m\right) = p\left({\mathbf{y}}_{t}\mid {\mathbf{x}}_{t}\right)\Pr\left({S}_{t}^{s}=i\mid {S}_{t-1}^{s}={i}^{\prime}\right) = p\left({\mathbf{y}}_{t}\mid {\mathbf{x}}_{t}\right){a}_{{i}^{\prime},i}.$$

Notation *E*_{ t }(*i*,*m*) denotes the set of states {(*u*_{t−1},*S*_{t−1})} that have a trellis branch connected to the state (*u*_{ t } = *i*, *S*_{ t } = *m*) in the super trellis.

Once *γ* is obtained, *α* and *β* can be computed via the following recursive formulae:

$${\alpha}_{t}(i,m) = \sum_{\left({i}^{\prime},{m}^{\prime}\right)\in {E}_{t}(i,m)}{\alpha}_{t-1}\left({i}^{\prime},{m}^{\prime}\right)\,{\gamma}_{t}\left({\mathbf{y}}_{t},{i}^{\prime},{m}^{\prime},i,m\right),$$

$${\beta}_{t-1}\left({i}^{\prime},{m}^{\prime}\right) = \sum_{(i,m)}{\beta}_{t}(i,m)\,{\gamma}_{t}\left({\mathbf{y}}_{t},{i}^{\prime},{m}^{\prime},i,m\right).$$

The channel encoder always starts from state zero, while the Markov source starts from state “0” or state “1” with equal probability. Hence, the appropriate boundary condition for *α* is *α*_{0}(0,0) = *α*_{0}(1,0) = 1/2 and *α*_{0}(*i*,*m*) = 0 for *i* = 0,1; *m* ≠ 0. Similarly, the boundary condition for *β* is *β*_{ L }(*i*,*m*) = 1/2^{v + 1}, *i* = 0,1; *m* = 0,1,…,2^{ v }−1.

Finally, we obtain the extrinsic *LLR*s for ${x}_{t}^{c1}$ as

$${L}_{e}\left({x}_{t}^{c1}\right) = L\left({x}_{t}^{c1}\right) - {L}_{a}\left({x}_{t}^{c1}\right) - {L}_{ch}\left({x}_{t}^{c1}\right),$$

with ${L}_{a}\left({x}_{t}^{c1}\right)$, ${L}_{ch}\left({x}_{t}^{c1}\right)$, and ${L}_{e}\left({x}_{t}^{c1}\right)$ representing the a priori *LLR*, the channel *LLR*, and the extrinsic *LLR*, respectively. The same representation applies to $\left\{{x}_{t}^{c2}\right\}$.

## EXIT chart analysis

The EXIT chart analysis in this article focuses on the decoder *D*_{ s }, since the main aim is to successfully retrieve the information estimates $\widehat{\mathbf{u}}$. As shown in Figure 4, the decoder *D*_{ s } exploits two a priori *LLR*s: ${L}_{a,{D}_{s}}^{c}$ and ${L}_{a,{D}_{s}}^{u}$, the updated version of ${L}_{e,{D}_{r}}^{u}$. Therefore, the EXIT function of *D*_{ s } can be characterized as

$${I}_{e,{D}_{s}}^{c} = {T}_{{D}_{s}}^{c}\left({I}_{a,{D}_{s}}^{c},{I}_{e,{D}_{r}}^{u}\right),$$

where ${I}_{e,{D}_{s}}^{c}$ denotes the mutual information between the extrinsic *LLR*s ${L}_{e,{D}_{s}}^{c}$ generated by *D*_{ s } and the coded bits of *D*_{ s }. ${I}_{e,{D}_{s}}^{c}$ can be obtained by the histogram measurement [21]. Similar definitions apply to ${I}_{a,{D}_{s}}^{c}$ and ${I}_{e,{D}_{r}}^{u}$.

First, we show how *D*_{ s } utilizes the memory structure of the Markov source. We initially assume that the source-relay correlation is not exploited and focus only on the exploitation of the source memory. In this case, ${I}_{e,{D}_{r}}^{u}=0$ and the EXIT analysis of *D*_{ s } reduces to two dimensions. The EXIT curves with and without the modifications described in the previous section are illustrated in Figure 6. The code used in the analysis is a half-rate memory-1 RSC code with the generator polynomials (*G*_{ r },*G*) = (3,2)_{8}. It can be observed from Figure 6 that, compared to the standard BCJR algorithm, the EXIT curves obtained by using the modified BCJR algorithm are lifted up over the whole a priori input region, indicating that larger extrinsic information can be obtained. It is also worth noticing that the contribution of the source memory, represented by the increase in extrinsic mutual information, becomes larger as the entropy of the Markov source decreases.

Three-dimensional EXIT chart analysis is then performed for *D*_{ s } to evaluate the impact of the source-relay correlation, where the source memory is not exploited. The corresponding EXIT planes of *D*_{ s }, shown in gray, are illustrated in Figure 7. Two different scenarios are considered: a relatively strong source-relay correlation (corresponding to a small *p*_{ e } value) and a relatively weak source-relay correlation (corresponding to a large *p*_{ e } value). It can be seen from Figure 7a that, with a strong source-relay correlation, the extrinsic information ${I}_{e,{D}_{r}}^{u}$ provided by *D*_{ r } has a significant effect on ${T}_{{D}_{s}}^{c}(\cdot)$. On the contrary, when the source-relay correlation is weak, ${I}_{e,{D}_{r}}^{u}$ has a negligible influence on ${T}_{{D}_{s}}^{c}(\cdot)$, as shown in Figure 7b.

For the proposed DJSCC decoding scheme, both the source memory and the source-relay correlation are exploited in the iterative decoding process. The impact of the source memory and the source-relay correlation on *D*_{ s }, represented by the 3D EXIT planes shown in light blue, is presented in Figure 7. We can observe that higher extrinsic information can be achieved (the EXIT planes are lifted up) by exploiting the source memory and the source-relay correlation simultaneously, which helps the decoder *D*_{ s } perfectly retrieve the source information sequence even in low SNR_{ sd } scenarios.

## Convergence analysis and BER performance evaluation

A series of simulations was conducted to evaluate the convergence property, as well as the BER performance, of the proposed technique. The information sequences are generated from Markov sources with different state transition probabilities. The block length is 10000 bits, and 1000 different blocks were transmitted for the sake of keeping reasonable accuracy. The encoders used at the source and relay nodes, *C*_{ s } and *C*_{ r }, respectively, are both memory-1 half-rate RSC codes with generator polynomials (*G*_{ r },*G*) = (3,2)_{8}. Five *VI*s took place after every *HI*, with the aim of exchanging extrinsic information to exploit the source-relay correlation. The whole process was repeated 50 times. All three relay location scenarios were evaluated, with respect to the SNR of the source-destination link. The doping rates are set at *K*_{ s } = *K*_{ r } = 2 for location A, while *K*_{ s } = 1, *K*_{ r } = 16 for both locations B and C. The threshold for estimating ${\widehat{p}}_{e}$ [6] is set at 1.

### Convergence behavior with the proposed decoder

The convergence behavior for relay location A at SNR_{ sd } = −3.5 dB is illustrated in Figure 8. As described in Section ‘Proposed decoding scheme’, the decoding algorithms for *D*_{ s } and *D*_{ r } are not the same, and thus the upper and lower *HI*s are evaluated separately. It can be observed from Figure 8b that the EXIT planes of *D*_{ r } and the ACC decoder finally intersect with each other at about ${I}_{e,{D}_{r}}^{c}=0.52$, which corresponds to ${I}_{e,{D}_{r}}^{u}=0.59$. This observation indicates that *D*_{ r } can provide *D*_{ s } with ${I}_{e,{D}_{r}}^{u}=0.59$ a priori mutual information via the *VI*. Figure 8a shows that the convergence tunnel is closed when ${I}_{e,{D}_{r}}^{u}=0$, but it is slightly open when ${I}_{e,{D}_{r}}^{u}=0.59$. Therefore, through the extrinsic mutual information exchange between *D*_{ s } and *D*_{ r }, the trajectory of the upper *HI* can sneak through the convergence tunnel and finally reach the convergence point, while the trajectory of the lower *HI* gets stuck. It should be noted that, since ${\widehat{p}}_{e}$ is estimated and updated during every iteration, the trajectory of the upper *HI* does not match exactly with the EXIT planes of *D*_{ s } and the ACC decoder, especially in the first several iterations. A similar phenomenon is observed for the trajectory of the lower *HI*.

### Contribution of the source-relay correlation

The source-relay correlation is characterized by the error probability *p*_{ e }, as described in the previous section. Figure 9 shows the BER performance of the proposed technique when *p*_{ e } is known and unknown at the decoder, while the memory structure of the Markov source is not taken into account. It can be observed that for relay locations A and C, the BER performance of the proposed decoder is almost the same whether *p*_{ e } is known or unknown at the decoder. However, for relay location B, the convergence thresholds are −7.7 and −7.4 dB when *p*_{ e } is known and unknown at the decoder, respectively, which results in a performance degradation of 0.3 dB. It can also be seen from Figure 9 that the performance gains obtained by exploiting only the source-relay correlation (*p*_{ e } is assumed to be unknown at the decoder) for locations A, B, and C, over the conventional point-to-point (P2P) communication system where *relaying is not involved*, are 0.6, 5.4, and 2.6 dB, respectively. Among these three relay location scenarios, for the same SNR_{ sd }, the quality of the source-relay link is the worst with location A and the best with location B. This is consistent with the simulation results.

### Contribution of the source memory structure

The BER performance of the technique that exploits only the source memory (DJSCC/SM), where *relaying is not involved in the scenarios assumed in this section*, is provided in Figure 10, where *K*_{ s } = 1 was assumed. The BER curve of the conventional P2P communication system that does not exploit the memory structure of the source is also provided in the same figure. It can be observed that performance gains of 0.55, 1.5, and 3.6 dB are obtained by DJSCC/SM exploiting the memory structure of Markov sources with entropy *H*(*S*) of 0.88, 0.72, and 0.47, respectively. This is consistent with the fact that as the entropy of the source decreases, the performance gain increases.

BER performance comparison between DJSCC/SM and JSCTC (gains in dB over the conventional P2P system)

| *p*_{1} | *p*_{2} | *H*(*S*) | JSCTC | DJSCC/SM |
|---|---|---|---|---|
| 0.7 | 0.7 | 0.88 | 0.45 | 0.55 |
| 0.8 | 0.8 | 0.72 | 1.29 | 1.5 |
| 0.9 | 0.9 | 0.47 | 3.03 | 3.6 |

### BER performance of the proposed technique

BER performance gains (in dB) of the DJSCC over the technique that only exploits source-relay correlation

| *p*_{1} | *p*_{2} | *H*(*S*) | Location A | Location B | Location C |
|---|---|---|---|---|---|
| 0.7 | 0.7 | 0.88 | 0.3 | 0.45 | 0.45 |
| 0.8 | 0.8 | 0.72 | 1.2 | 1.4 | 0.9 |
| 0.9 | 0.9 | 0.47 | 2.2 | 3.05 | 2.6 |

## Application to image transmission

A binary image with the Markov parameters *p*_{1} = 0.9538 and *p*_{2} = 0.9480 is shown in Figure 12a as an example. The image data is encoded column by column. Figures 12b–e show the estimates of the image obtained as the result of decoding at SNR_{ sd } = −10 dB with the conventional P2P technique, DJSCC/SR, DJSCC/SM, and DJSCC, respectively. As can be seen from Figure 12, with the conventional P2P transmission, the estimated image quality is the worst, containing 43.8% pixel errors (see the figure caption), since neither the source-relay correlation nor the source memory is exploited. With DJSCC/SR and DJSCC/SM, the estimated images contain 19.4% and 8.1% pixel errors, respectively. The proposed DJSCC, which exploits both the source-relay correlation and the source memory, achieves perfect recovery of the image, with 0% pixel errors.

Figure 13a shows a binary image with weaker memory, *p*_{1} = 0.7167 and *p*_{2} = 0.6741. Figures 13b–e show the estimates of the image, obtained as the result of decoding at SNR_{ sd } = −7.5 dB with the conventional P2P, DJSCC/SR, DJSCC/SM, and DJSCC, respectively. It can be observed that the performance with DJSCC/SR (50.27% pixel errors) and DJSCC/SM (96.9% pixel errors) is better than that with the conventional P2P (98.1% pixel errors). However, by exploiting the source-relay correlation and the source memory simultaneously, the proposed DJSCC achieves perfect recovery of the image, with 0% pixel errors.

## Conclusion

In this article, we have presented a DJSCC scheme for transmitting binary Markov sources in a one-way relay system. The relay does not aim to completely eliminate the errors in the source-relay link. Instead, it only extracts and forwards the source information sequence to the destination, even though the extracted sequence may contain some errors. Since the error probability of the source-relay link can be regarded as the source-relay correlation, in our proposed technique the *LLR* updating function is adopted to estimate and exploit this correlation. Furthermore, to exploit the memory structure of the Markov source, the trellis of the Markov source and that of the channel encoder at the source node are combined to construct a super trellis. A modified version of the BCJR algorithm has been derived, based on this super trellis, to perform joint decoding of the Markov source and the channel code at the destination. By exploiting the source-relay correlation and the memory structure of the Markov source simultaneously, the proposed technique achieves significant gains over techniques that exploit only the source-relay correlation, which is verified through BER simulations as well as image transmission simulations.

## Notes

### Acknowledgements

This research was supported in part by the Japan Society for the Promotion of Science (JSPS) Grant under the Scientific Research KIBAN, (B) No. 2360170, (C) No. 2256037, and in part by Academy of Finland SWOCNET project.


## References

- 1. Xiong Z, Liveris AD, Cheng S: Distributed source coding for sensor networks. *IEEE Signal Process. Mag.* 2004, 21(5):80-94.
- 2. Li H, Zhao Q: Distributed modulation for cooperative wireless communications. *IEEE Signal Process. Mag.* 2006, 23(5):30-36.
- 3. Zhao B, Valenti MC: Distributed turbo coded diversity for relay channel. *Electron. Lett.* 2003, 39(10):786-787.
- 4. Youssef R, Graell i Amat A: Distributed serially concatenated codes for multi-source cooperative relay networks. *IEEE Trans. Wirel. Commun.* 2011, 10:253-263.
- 5. Si Z, Thobaben R, Skoglund M: On distributed serially concatenated codes. *Proc. IEEE 10th Workshop Signal Processing Advances in Wireless Communications SPAWC'09*, Perugia, Italy, 2009, 653-657.
- 6. Garcia-Frias J, Zhao Y: Near-Shannon/Slepian-Wolf performance for unknown correlated sources over AWGN channels. *IEEE Trans. Commun.* 2005, 53(4):555-559.
- 7. Anwar K, Matsumoto T: Accumulator-assisted distributed turbo codes for relay systems exploiting source-relay correlations. *IEEE Commun. Lett.* 2012, 16(7):1114-1117.
- 8. Thobaben R, Kliewer J: Low-complexity iterative joint source-channel decoding for variable-length encoded Markov sources. *IEEE Trans. Commun.* 2005, 53(12):2054-2064.
- 9. Kliewer J, Thobaben R: Parallel concatenated joint source-channel coding. *Electron. Lett.* 2003, 39(23):1664-1666.
- 10. Thobaben R, Kliewer J: On iterative source-channel decoding for variable-length encoded Markov sources using a bit-level trellis. *Proc. 4th IEEE Workshop Signal Processing Advances in Wireless Communications SPAWC*, Auckland, New Zealand, 2003, 50-54.
- 11. Jeanne M, Carlach JC, Siohan P: Joint source-channel decoding of variable-length codes for convolutional codes and turbo codes. *IEEE Trans. Commun.* 2005, 53:10-15.
- 12. Garcia-Frias J, Villasenor JD: Combining hidden Markov source models and parallel concatenated codes. *IEEE Commun. Lett.* 1997, 1(4):111-113.
- 13. Garcia-Frias J, Villasenor JD: Joint turbo decoding and estimation of hidden Markov sources. *IEEE J. Sel. Areas Commun.* 2001, 9:1671-1679.
- 14. Zhu GC, Alajaji F: Joint source-channel turbo coding for binary Markov sources. *IEEE Trans. Wirel. Commun.* 2006, 5(5):1065-1075.
- 15. Kobayashi K, Yamazato T, Okada H, Katayama M: Joint channel decoding of spatially and temporally correlated data in wireless sensor networks. *Proc. Int. Symp. Information Theory and Its Applications ISITA*, Rome, Italy, 2008, 1-5.
- 16. Anwar K, Matsumoto T: Very simple BICM-ID using repetition code and extended mapping with doped accumulator. *Wirel. Personal Commun.* 2011, 1-12.
- 17. Cover TM, Thomas JA: *Elements of Information Theory*, 2nd edn. John Wiley & Sons, New York, 2006.
- 18. Bahl L, Cocke J, Jelinek F, Raviv J: Optimal decoding of linear codes for minimizing symbol error rate. *IEEE Trans. Inf. Theory* 1974, 20(2):284-287.
- 19. Tuchler M: Convergence prediction for iterative decoding of threefold concatenated systems. *Proc. IEEE Global Telecommunications Conf. GLOBECOM'02* 2002, 2:1358-1362.
- 20. ten Brink S: Code characteristic matching for iterative decoding of serially concatenated codes. *Ann. Telecommun.* 2001, 56:394-408.
- 21. ten Brink S: Convergence behavior of iteratively decoded parallel concatenated codes. *IEEE Trans. Commun.* 2001, 49(10):1727-1737.

## Copyright information

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.