- Research
- Open Access

# On the complexity-performance trade-off in soft-decision decoding for unequal error protection block codes

*EURASIP Journal on Advances in Signal Processing*
**volume 2013**, Article number: 28 (2013)

## Abstract

Unequal error protection (UEP) codes provide selective levels of protection for different blocks of the information message. This study evaluates the effectiveness of two sub-optimum soft-decision decoding algorithms, namely generalized Chase-2 and weighted erasure decoding, for each protection class of UEP block codes. The performance of both algorithms is compared to that of the maximum likelihood algorithm in order to evaluate the performance loss of each protection class under the less complex algorithms, and their complexities are evaluated according to the number of arithmetic operations performed at each decoding step. Finally, numerical results and examples are provided which establish a trade-off between performance and complexity for each protection class. The results of this study can be used to select appropriate UEP coding and decoding schemes in applications that demand low energy consumption.

## 1 Introduction

One of the main challenges in the design of battery-supplied wireless devices is the minimization of their energy consumption [1–4]. It is known that forward error correction (FEC) decoders are responsible for a large part of energy consumption of such devices [5, 6]. Since maximum likelihood (ML) decoding is often infeasible due to its complexity of exponential order, it is of interest to investigate sub-optimum decoding techniques in search of less complex alternatives.

Concerning block codes, a class of sub-optimum algorithms that deserves attention is composed of reliability-based soft-decision decoding techniques [7]. In this category, the Chase-2 and weighted erasure decoding (WED) algorithms are recognized for their ease of implementation and reduced complexity when compared to the ML algorithm. The performance of Chase-2 decoding algorithms applied to Bose–Chaudhuri–Hocquenghem codes is analyzed in [8].

In a number of wireless protocols, the importance of different bits in the information sequence often varies, and certain blocks of this sequence need a higher protection level than other blocks. This property is called unequal error protection (UEP) and can be obtained either by hierarchical modulation techniques [9, 10] or by FEC schemes [11, 12]. Such UEP methods have been applied to wireless and mobile computing applications [13–15], in addition to several video and image coding standards, such as set partitioning in hierarchical trees [16], ITU-T H.264 [17] and its extensions [18], and Joint Photographic Experts Group 2000 (JPEG 2000) [19]. Concerning UEP coding, the analysis of sub-optimum decoding algorithms applied to UEP block codes has not been considered in the literature.

In this study, the effectiveness of sub-optimum soft-decision decoding algorithms (the generalized Chase-2 algorithm [20] and the WED algorithm [21]) for each protection class of UEP block codes is evaluated using binary transmission over an additive white Gaussian noise (AWGN) channel. The performance of both algorithms is compared to that of the ML algorithm in order to evaluate the performance loss of each protection class under the less complex algorithms. We also analyze the arithmetic complexity of each algorithm when decoding a received sequence. In addition, an analysis of the trade-off between performance and complexity of the algorithms is carried out for each protection class. Based on this analysis, we discuss the choice of the decoder parameters with the best complexity-performance trade-off, such as the number of test patterns, the error-correcting capability of the binary decoder, and the number of quantization levels.

The remainder of this article is structured as follows: In Section 2, concepts related to UEP coding are described. The soft-decision decoding algorithms are defined in Section 3, while the analysis of their decoding complexity in terms of mathematical operations is presented in Section 4. Section 5 presents simulation results. A trade-off between performance and complexity for both decoding algorithms is established in this section. Finally, conclusions are drawn in Section 6.

## 2 UEP block codes

Consider a binary linear code *C*_{j}(*n*,*k*,*d*), in which *n* is the codeword length, *k* is the dimension of the code, and *d* is the minimum Hamming distance of *C*_{j}. The generator matrix of *C*_{j} is denoted by **G**_{j}. Assume that *w*(**u** **G**_{j}) is the Hamming weight of the codeword **x** = **u** **G**_{j} related to the information vector **u**. The separation vector of *C*_{j}, ${\mathbf{s}}_{j}=[{s}_{j}^{0},\dots ,{s}_{j}^{i},\dots ,{s}_{j}^{k-1}]$, measures the UEP provided by the code *C*_{j} under ML decoding [22]. The *i*th position of **s**_{j} is given by [22]

${s}_{j}^{i}=\min\left\{w(\mathbf{u}\,{\mathbf{G}}_{j}) : \mathbf{u}\in \mathrm{GF}(2)^{k},\ {u}_{i}=1\right\},$

where GF(2) is the binary Galois field. The smallest element of **s**_{j} is the minimum Hamming distance of *C*_{j}. A code *C*_{j} is said to have equal error protection capability if all elements of **s**_{j} are equal; otherwise, *C*_{j} has the UEP property. The error-correcting capability of the code *C*_{j} is denoted by ${t}_{j}^{\ast}$.

To illustrate these concepts, consider the linear block codes *C*_{1}(16,5,5) and *C*_{2}(25,8,5) with generator matrices **G**_{1} and **G**_{2} (calculated using the method proposed in [23]) given by

Their separation vectors are **s**_{1} = [8,8,5,5,5] and **s**_{2} = [12,12,5,5,5,5,5,5], respectively. Thus, both codes are UEP codes with two distinct protection classes, denoted by cp_{1} (higher protection class) and cp_{2} (lower protection class).
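As an illustration of how a separation vector can be computed by exhaustive search (feasible only for small *k*), the following sketch evaluates each ${s}^{i}$ as the minimum weight of the codewords **u** **G** with *u*_{i} = 1. Since **G**_{1} and **G**_{2} appear only in the original figures, the systematic Hamming(7,4) generator matrix used here is a stand-in chosen for this sketch; Hamming codes have equal error protection, so every element of **s** equals *d* = 3.

```python
import itertools

def separation_vector(G):
    """Brute-force separation vector: s_i is the minimum Hamming weight
    over all codewords u*G generated by information vectors u with u_i = 1."""
    k, n = len(G), len(G[0])
    s = [n] * k
    for u in itertools.product([0, 1], repeat=k):
        if not any(u):
            continue  # skip the all-zero information vector
        x = [sum(u[j] * G[j][i] for j in range(k)) % 2 for i in range(n)]
        w = sum(x)
        for i in range(k):
            if u[i] == 1:
                s[i] = min(s[i], w)
    return s

# Systematic Hamming(7,4) generator matrix (a stand-in example code)
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
print(separation_vector(G))  # → [3, 3, 3, 3] (equal error protection)
```

A UEP code such as *C*_{1} would instead yield unequal entries, e.g., [8,8,5,5,5].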

## 3 Soft-decision decoding algorithms

Two decoding algorithms that deal with the least reliable positions of the received sequence, namely the generalized Chase-2 (GC-2) [20] and the WED [21] algorithms, are described in this section.

### 3.1 Generalized Chase-2 decoding algorithm

The GC-2 algorithm uses the sequence of real values observed at the output of the matched filter, **r** = [*r*_{0},*r*_{1},…,*r*_{n−1}], and the binary sequence **y** obtained by a hard quantization of **r**. For the AWGN channel, the real values of the sequence **r** correspond to the reliabilities *α*_{i} = |*r*_{i}|. Thus, the higher the value of *α*_{i}, the lower the probability that the corresponding symbol has been strongly affected by the noise.

Let *p* be the number of least reliable positions of the sequence **r**, i.e., the positions that have the smallest values of *α*_{i}. The value of *p* determines the set of test patterns *S*_{b} = {**b**_{i}}, *i* = 0,…,2^{p} − 1, with cardinality |*S*_{b}| = 2^{p}. At first, the GC-2 algorithm applies a binary decoder (with error-correcting capability *t*) to find an error pattern **z** associated with the sequence **y**^{i} = **y** ⊕ **b**_{i}, in which ⊕ denotes modulo-2 addition. If an error pattern **z** is obtained by the binary decoder,^{a} it is added to the test pattern **b**_{i}, resulting in the pattern **z**^{i} = **z** ⊕ **b**_{i}. After that, the analog weight *W*_{α} of the pattern **z**^{i} is computed according to

${W}_{\alpha}({\mathbf{z}}^{i})=\sum_{\ell=0}^{n-1}{z}_{\ell}^{i}\,{\alpha}_{\ell},$

i.e., the sum of the reliabilities over the nonzero positions of **z**^{i}.

If **z** is not found by binary decoding, the next test pattern **b**_{i} is selected. The objective of the GC-2 algorithm is to find the pattern ${\mathbf{z}}^{{i}^{\star}}$ with minimum analog weight *W*_{α} to estimate the transmitted codeword **x**, as $\hat{\mathbf{x}}=\mathbf{y}\oplus {\mathbf{z}}^{{i}^{\star}}$. When no pattern **z**^{i} is selected (for all test patterns), then $\hat{\mathbf{x}}=\mathbf{y}$.

A detailed description of the GC-2 algorithm is found in [20, 24] and a summary of its steps is presented in Table 1.

For a better understanding of the GC-2 algorithm, we consider the following example.

#### Example 1

Consider the Hamming code *C*(7,4,3) whose error-correcting capability is equal to one. Assume that the codeword **x** = [1,0,0,1,0,1,1] is BPSK modulated and is transmitted over the AWGN channel. Suppose that the received sequence is **r** = [1.5,0.05,−0.8,2.2,0.1,1.2,0.3].

According to the first step of the GC-2 algorithm, the sequence **y** = [1,1,0,1,1,1,1] is obtained by hard quantization of **r**. Let us assume that *p* = 2, so the two least reliable positions are the second and the fifth ones (*α*_{1} = 0.05 and *α*_{4} = 0.1). Thus, considering all the combinations of 0’s and 1’s in these two least reliable positions, we have four test patterns **b** according to the set

${S}_{b}=\{[0,0,0,0,0,0,0],\,[0,1,0,0,0,0,0],\,[0,0,0,0,1,0,0],\,[0,1,0,0,1,0,0]\}.$
To obtain the sequence **y**^{0}, the test pattern **b**_{0} is selected, resulting in **y**^{0} = **y** ⊕ **b**_{0} = [1,1,0,1,1,1,1]. After computing the syndrome associated with this **y**^{0}, we get the error pattern **z** = [0,0,1,0,0,0,0]. Since the error pattern **z** exists, the sequence **z**^{0} = **z** ⊕ **b**_{0} = [0,0,1,0,0,0,0] is obtained, and its analog weight is *W*_{α}(**z**^{0}) = 0.8. Repeating this procedure with the other test patterns from *S*_{b}, the algorithm stores ${\mathbf{z}}^{{i}^{\star}}={\mathbf{z}}^{2}=\left[0,1,0,0,1,0,0\right]$ as the sequence with the minimum analog weight (*W*_{α}(**z**^{2}) = 0.15). Finally, the estimate $\hat{\mathbf{x}}=\mathbf{y}\oplus {\mathbf{z}}^{2}=\left[1,0,0,1,0,1,1\right]$ is obtained, which is the correct codeword.
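The steps of Example 1 can be sketched in code. The parity-check matrix below is one systematic choice for the Hamming(7,4) code (an assumption of this sketch, since the article does not specify the matrix) under which **x** = [1,0,0,1,0,1,1] is a valid codeword; the binary decoder is a single-error syndrome lookup (*t* = 1).

```python
import itertools

# Assumed parity-check matrix of a systematic Hamming(7,4) code
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
n = 7

def syndrome(v):
    return tuple(sum(h[i] * v[i] for i in range(n)) % 2 for h in H)

# Single-error syndrome table (t = 1): syndrome -> error pattern z
table = {syndrome([1 if j == i else 0 for j in range(n)]):
         [1 if j == i else 0 for j in range(n)] for i in range(n)}
table[(0, 0, 0)] = [0] * n

def gc2(r, p):
    y = [1 if ri >= 0 else 0 for ri in r]          # hard quantization
    alpha = [abs(ri) for ri in r]                  # reliabilities
    lrp = sorted(range(n), key=lambda i: alpha[i])[:p]  # least reliable positions
    best, best_w = None, float('inf')
    for bits in itertools.product([0, 1], repeat=p):
        b = [0] * n
        for pos, bit in zip(lrp, bits):
            b[pos] = bit                           # test pattern b_i
        yi = [y[i] ^ b[i] for i in range(n)]
        z = table.get(syndrome(yi))
        if z is None:
            continue                               # binary decoding failed
        zi = [z[i] ^ b[i] for i in range(n)]
        w = sum(zi[i] * alpha[i] for i in range(n))  # analog weight W_alpha
        if w < best_w:
            best, best_w = zi, w
    return [y[i] ^ best[i] for i in range(n)] if best else y

r = [1.5, 0.05, -0.8, 2.2, 0.1, 1.2, 0.3]
print(gc2(r, p=2))  # → [1, 0, 0, 1, 0, 1, 1]
```

Running the sketch reproduces the minimum analog weight 0.15 at pattern [0,1,0,0,1,0,0] and recovers the transmitted codeword.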

### 3.2 WED Algorithm

The WED algorithm is based on the quantization of the sequence **r** into *Q* = 2^{m} regions that are uniformly spaced by the quantization step *δ*. Figure 1 illustrates the quantization regions (denoted by ${R}_{{D}_{j}},0\le j\le Q-1$) for *Q* = 8 (*m* = 3). The optimal value of *δ*, denoted by *δ*_{op}, that minimizes the bit error probability can be obtained algebraically [25] or through computer simulations.

Given **r** and *Q*, two sequences (**v** and **q**) are obtained. First, consider the sequence **v** = [*v*_{0},…,*v*_{ℓ},…,*v*_{m−1}], where each component *v*_{ℓ} is given by

${v}_{\ell}=\frac{{2}^{\,m-1-\ell}}{{2}^{m}-1}.$

The *Q*-ary sequence **q** = [*q*_{0},*q*_{1},…,*q*_{i},…,*q*_{n−1}], *q*_{i} ∈ {0,…,*Q*−1}, is defined such that *q*_{i} = *j* if ${r}_{i}\in {R}_{{D}_{j}}$. Then, a matrix **A** of dimensions *m* × *n* is determined such that the *i*th column of **A** is the binary representation of *q*_{i}.

Next, a matrix **A**’ with the same dimensions as **A** is obtained from the binary decoding of the rows of **A**. The syndrome of each row of **A** is computed in order to find its associated error pattern. If an error pattern is found, it is added to the row of **A** to generate the corresponding row of **A**’. Otherwise, the row of **A**’ is equal to the row of **A**.

We also define the vector **f** = [*f*_{0},…,*f*_{ℓ},…,*f*_{m−1}], where each component *f*_{ℓ} is the number of positions in which the *ℓ*th row of **A**’ differs from the *ℓ*th row of **A**. Using **f**, the reliability *R*_{ℓ} of the *ℓ*th row of **A**’ is computed as [21]

${R}_{\ell}=d-2{f}_{\ell}.$

In the WED algorithm proposed in [21], the error-correcting capability of the binary decoder is *t* = *t*^{∗} = ⌊(*d* − 1)/2⌋. To allow the use of an arbitrary value of *t*, we propose a new reliability ${R}_{\ell}^{\prime}$ given by

${R}_{\ell}^{\prime}=2(t-{f}_{\ell})+1.$
It is assumed that ${R}_{\ell}^{\prime}=0$ if the binary decoder cannot find the error pattern associated with the syndrome of the *ℓ*th row of **A**. This assumption is intended to reduce the reliability of rows in which a high number of errors made binary decoding impossible; candidate rows with fewer errors are thereby favored.

Let ${S}_{0}^{i}$ denote the set of indices of the rows of **A**’ containing the bit 0 in the *i*th column, and ${S}_{1}^{i}$ the corresponding set for the bit 1. The *i*th bit is decoded as 0 if

$\sum _{\ell \in {S}_{0}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}>\sum _{\ell \in {S}_{1}^{i}}{R}_{\ell}^{\prime}{v}_{\ell},$

or as 1 if

$\sum _{\ell \in {S}_{1}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}>\sum _{\ell \in {S}_{0}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}.$

If $\sum _{\ell \in {S}_{0}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}=\sum _{\ell \in {S}_{1}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}$, the *i*th bit is obtained by hard-decision decoding of the component *r*_{i}.

A detailed description of the WED algorithm is found in [21], and a summary of its steps is presented in Table 2. For a better understanding of this algorithm, we consider in the following example the same code, transmitted codeword, and received sequence of Example 1.

#### Example 2

Assume the mapping of **r** into four quantization regions (*Q* = 4) with *δ* = 0.2. According to the first step of the WED algorithm, the sequences **q** = [3,2,0,3,2,3,3] and **v** = [0.666,0.333] are obtained. Given **q** (with the most significant bit in the first row), the matrix **A** is obtained as

$\mathbf{A}=\left[\begin{array}{ccccccc}1&1&0&1&1&1&1\\ 1&0&0&1&0&1&1\end{array}\right].$

Next, applying binary decoding (with *t* = 1) to each row of **A**, we obtain the matrix **A**’ as

${\mathbf{A}}^{\prime}=\left[\begin{array}{ccccccc}1&1&1&1&1&1&1\\ 1&0&0&1&0&1&1\end{array}\right].$

From **A** and **A**’, we obtain **f** = [*f*_{0},*f*_{1}] = [1,0]. Assuming *t* = 1, the reliabilities of the rows of **A**’ are ${R}_{0}^{\prime}=1$ and ${R}_{1}^{\prime}=3$. For the first column of **A**’, we have ${S}_{0}^{0}=\varnothing$ and ${S}_{1}^{0}=\{0,1\}$, resulting in ${\widehat{x}}_{0}=1$. For the second column of **A**’, ${S}_{0}^{1}=\left\{1\right\}$ and ${S}_{1}^{1}=\left\{0\right\}$. Since ${R}_{1}^{\prime}{v}_{1}>{R}_{0}^{\prime}{v}_{0}$ (1 > 0.666), we have ${\widehat{x}}_{1}=0$. Continuing with the decoding, the estimate $\widehat{\mathbf{x}}=\left[1,0,0,1,0,1,1\right]$ is obtained. This is the correct codeword, as was also obtained with the GC-2 algorithm.

## 4 Arithmetic complexity of the GC-2 and WED algorithms

The complexity of both algorithms considered in this article is evaluated according to the number of arithmetic operations performed at each decoding step.^{b} Consider *N*_{s}, *N*_{g}, *N*_{m}, and *N*_{c} the numbers of additions, modulo-2 additions, multiplications, and comparisons, respectively.

Table 3 indicates the number of operations performed at each step of the GC-2 algorithm, as described in Table 1, for each decoded sequence. In Step 3, the multiplications and modulo-2 additions correspond to the syndrome computation. Also in Step 3, we assume that there are no arithmetic operations associated with the search for an error pattern **z** (a lookup table may be used for this purpose). Operations related to Steps 1, 5, and 6 are omitted because they either are not performed for each test pattern or do not require arithmetic operations; thus, they represent a very small percentage of the total number of operations.

It is noteworthy that the operations in Step 4 depend on the result obtained in Step 3, i.e., on the success of the binary decoder in finding an error pattern **z** associated with the sequence **y**^{i}. Thus, it is necessary to estimate the average number of operations performed in Step 4. To this end, we define the relative frequency of computing *W*_{α} as *f*_{A} = *N*_{W}/2^{p}, in which *N*_{W} is the number of times the analog weight *W*_{α} is computed in the main loop of the algorithm. This value is evaluated via computer simulations in the next section (see also [26]).

In the case of the WED algorithm, once *Q* and *δ* are defined, the implementation follows the steps described in Table 2. The computation of the sequence **v** depends only on *Q* and does not need to be executed for each received sequence. Therefore, these operations are not considered in Table 4, which lists the number of operations required to implement the WED algorithm for each decoded sequence. Step 1 assumes the binary tree mapping [27], in which, for *Q* regions, *m* comparisons are needed to quantize a component *r*_{i}. Step 2 is omitted because it does not require mathematical operations.

Finally, in Step 5, depending on the sequence being decoded, either (*m* − 1) or (*m* − 2) additions may be necessary to perform the comparison $\sum _{\ell \in {S}_{0}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}\gtrless \sum _{\ell \in {S}_{1}^{i}}{R}_{\ell}^{\prime}{v}_{\ell}$. The worst case is assumed for all *n* positions, totaling *n*(*m* − 1) additions per decoded sequence.

## 5 Numerical results

The performance of three decoding algorithms (ML, GC-2, and WED) is evaluated via computer simulations for the two UEP codes defined in Section 2 using binary transmission over the AWGN channel. Various configurations of the GC-2 and WED algorithms are considered by changing their parameters (*t* and *p* for GC-2; *t* and *Q* for WED), in order to compare their performance to that of the ML algorithm for each protection class. Using these results together with the operations in Tables 3 and 4, a trade-off between performance and complexity for both decoding algorithms is also established. In the following sections, the GC-2 and the WED algorithms will be denoted by GC-2 (*t*,*p*) and WED (*t*,*Q*), respectively.

### 5.1 GC-2 decoding algorithm

Figure 2 shows the curves of the bit error probability (*P*_{b}) versus signal-to-noise ratio (SNR) *E*_{b}/*N*_{0}, in which *E*_{b} is the energy per information bit and *N*_{0} is the power spectral density of the noise, of the GC-2(2,2) and GC-2(3,4) algorithms for both classes of the UEP code *C*_{1}. For this code, the maximum value of the error-correcting capability of the binary decoder (*t*) is assumed equal to 3, and the maximum value of *p* is such that the cardinality of *S*_{b} is always lower than the size of the search set of the ML algorithm (|*S*_{b}| < 2^{k} − 1).

For the GC-2(2,2) algorithm, we observe that there is virtually no performance difference between the classes cp_{1} and cp_{2}. In addition, considering *P*_{b} = 10^{−4}, the SNR difference compared to the ML algorithm is approximately 2 and 1.1 dB for the classes cp_{1} and cp_{2}, respectively. For the GC-2(3,4) algorithm, the SNR difference to the ML algorithm is 0.1 dB (cp_{1}) and 0.03 dB (cp_{2}).

To assess the complexity of the GC-2 algorithm, it is necessary to evaluate *f*_{A}, as mentioned in Section 4. Figure 3 illustrates the values of *f*_{A} as a function of *E*_{b}/*N*_{0} for the GC-2(2,2), GC-2(2,7), GC-2(3,2), GC-2(3,7), GC-2(6,2), and GC-2(6,7) algorithms applied to the UEP code *C*_{2}. For *p* = 2 and considering *t* = 2, *t* = 3, and *t* = 6, *f*_{A} reaches its maximum value (the analog weight is computed for all test patterns **b**_{i}) when *E*_{b}/*N*_{0} = 9.5, 8, and 4 dB, respectively. The reduction of the required SNR occurs due to the increased possibility of an error pattern **z** being found, a consequence of the increase in the error-correcting capability of the binary decoder. We observe that, for *t* = 2 and *t* = 3 (*p* = 7), there are test patterns **b** that do not produce computations of *W*_{α} (*f*_{A} < 1), even in regions of high SNR (*E*_{b}/*N*_{0} > 7.5 dB). For example, considering the GC-2(2,7) algorithm and *E*_{b}/*N*_{0} > 7.5 dB, it is very probable that the bit inversions resulting from the addition of the test patterns **b**_{i} cause errors in the sequence **y**. As the binary decoder used in this algorithm is able to correct only 2 errors, 31 computations of *W*_{α} (*N*_{W} = 31) occur, corresponding to the test patterns of low weight, resulting in *f*_{A} = 31/128 ≃ 0.242.

Finally, we analyze the trade-off between performance and complexity of the GC-2 algorithms in terms of the SNR difference with respect to the ML algorithm for the class cp_{i} (at *P*_{b} = 10^{−4}), denoted by *Δ*_{i} (dB), and the number of mathematical operations executed by the algorithm, defined as the 4-tuple MO = [*N*_{s}; *N*_{g}; *N*_{m}; *N*_{c}]. For the estimation of MO, it is necessary to determine the value of *f*_{A} used in Step 4 of Table 3 to weight the number of operations. Given the GC-2 algorithm and the protection class cp_{i}, *i* = 1,2, the SNR value corresponding to *P*_{b} = 10^{−4} is determined. With this SNR, we can identify the corresponding value of *f*_{A} (see Figure 3).

Tables 5 and 6 summarize the complexity-performance trade-off for various configurations of the GC-2 algorithm applied to the UEP codes *C*_{1} and *C*_{2}, respectively. For each intersection of a row (*t*) with a column (*p*), the values of *Δ*_{i} (dB) and MO required to achieve *P*_{b} = 10^{−4} are shown for each protection class. For both codes, these results indicate that an increase in *p* (for a fixed *t*) provides better performance but also increases the complexity, since each operation shown in Table 3 grows exponentially with *p*. On the other hand, an increase in *t* (for a fixed *p*) also results in improved performance, but with a smaller increase in complexity. For example, considering the class cp_{1} of the code *C*_{1} and the GC-2(2,2) algorithm, we have *Δ*_{1} = 2.0 dB and MO = [*N*_{s}; *N*_{g}; *N*_{m}; *N*_{c}] ≃ [58.3; 790; 770; 3.89]. Moreover, the GC-2(3,2) algorithm provides *Δ*_{1} = 0.9 dB and MO ≃ [59.5; 790; 770; 3.96], while the GC-2(2,4) algorithm yields *Δ*_{1} = 0.8 dB and MO ≃ [159.6; 3,100; 3,000; 10.6], which represents a significant complexity increase relative to the previous two cases, while its *Δ*_{1} is approximately the same as that obtained by the GC-2(3,2) algorithm. This analysis leads us to conclude that increasing the error-correcting capability of the binary decoder is more advantageous than increasing the number of test patterns of the GC-2 algorithm. It is also possible to observe (analyzing Tables 5 and 6) that, in most cases, *Δ*_{2} is smaller than *Δ*_{1}, indicating that the performance achieved by the class cp_{2} is closer to the ML performance than that achieved by the class cp_{1}.

### 5.2 WED algorithm

The WED(*t*,*Q*) algorithm uses the reliability ${R}_{\ell}^{\prime}$ defined in Section 3.2. The numbers of quantization regions considered are *Q* = 4, 16, and 1024. Table 7 lists the optimal value of the quantization step, *δ*_{op} (in the sense of minimizing *P*_{b}), for the WED(2,*Q*) algorithm (class cp_{1}) applied to the codes *C*_{1} and *C*_{2}. In general, increasing the number of quantization regions decreases the value of *δ*_{op}, further reducing the spacing between adjacent regions. In addition, we observe that the higher the value of *Q*, the lower the variation of *δ*_{op} over the range of SNR considered. This behavior indicates that, as *Q* increases, the optimal value of the quantization step becomes approximately constant.

Figure 4 shows the curves of *P*_{b} versus *E*_{b}/*N*_{0} of the WED(2,4) and WED(3,16) algorithms for both classes of the UEP code *C*_{1}. Similarly to what was observed for the GC-2(2,2) algorithm, there is no performance difference between the two classes for *t* = 2, while for *t* = 3, the SNR difference to the ML algorithm is 0.9 dB (cp_{1}) and 1.2 dB (cp_{2}).

Table 8 summarizes the complexity-performance trade-off for various configurations of the WED algorithm applied to the UEP codes *C*_{1} and *C*_{2}. For each intersection of a row (*t* and cp_{i}) with a column (*Q* and a code *C*_{j}), the value of *Δ*_{i} (dB) for *P*_{b} = 10^{−4} is shown. For code *C*_{1}, the error-correcting capability of the WED algorithm is *t* = 2, 3, and 4, while for code *C*_{2}, it is *t* = 2, 3, 4, 5, and 6, as was considered for the GC-2 algorithm. Thus, considering the complexity-performance trade-off, it is more advantageous to increase *t*, as in the GC-2(*t*,*p*) algorithm, than to increase the number of quantization regions *Q*.

Finally, we compare both soft-decision decoding algorithms for a specific protection class, such as the higher protection one (cp_{1}). To do this, we define a binary decoding ratio, denoted by *γ*, as

$\gamma =\frac{{2}^{p}}{{\log}_{2}Q}.$

The parameters *p* and *Q* are associated with the number of binary decodings that the GC-2(*t*,*p*) and WED(*t*,*Q*) algorithms, respectively, execute. When decoding a received sequence, the GC-2(*t*,*p*) algorithm performs 2^{p} binary decodings, while the WED(*t*,*Q*) algorithm performs log_{2} *Q* of them. Thus, for a fair comparison of the algorithms, we choose configurations in which *γ* ≅ 1. In this case, the WED algorithm can offer a performance closer to the ML curve (for the higher protection class), but at the price of increased complexity. For *γ* = 1 and code *C*_{2}, this can be seen by comparing the GC-2(5,2) and WED(5,16) algorithms (see Tables 6 and 8). For the GC-2(5,2) algorithm, *Δ*_{1} = 1.3 dB and MO ≃ [95.7; 1,800; 1,800; 3.99], while for the WED(5,16) algorithm, *Δ*_{1} = 1.0 dB and MO ≃ [179; 1,832; 1,808; 229]. Another example is obtained by comparing the GC-2(4,3) and WED(4,1024) algorithms (*γ* = 0.8 and code *C*_{2}). For the GC-2(4,3) algorithm, *Δ*_{1} = 1.7 dB and MO ≃ [183; 3,655; 3,591; 7.63], while for the WED one, *Δ*_{1} = 1.4 dB and MO ≃ [485; 4,580; 4,520; 535]. It should also be observed in Table 8 that the performance of the WED algorithm degrades when *t* is high. The authors conjecture that this behavior is due to some limitation of the adopted reliability ${R}_{\ell}^{\prime}$.
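The binary decoding ratio follows directly from the decoding counts quoted above (2^{p} for GC-2 and log_{2} *Q* for WED); a quick check of the two quoted configuration pairs:

```python
from math import log2

def gamma(p, Q):
    """Binary decoding ratio: 2^p decodings for GC-2(t,p)
    versus log2(Q) decodings for WED(t,Q)."""
    return 2 ** p / log2(Q)

print(gamma(2, 16))    # GC-2(5,2) vs WED(5,16)   → 1.0
print(gamma(3, 1024))  # GC-2(4,3) vs WED(4,1024) → 0.8
```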

## 6 Conclusions

In this study, the effectiveness of two sub-optimum soft-decision decoding algorithms (the GC-2(*t*,*p*) and WED(*t*,*Q*) algorithms) was investigated for each protection class of UEP block codes using binary transmission over an AWGN channel. The performance of both algorithms was compared to that of the ML algorithm. The behavior of the GC-2 algorithm in estimating the analog weight (Step 4) was investigated according to the variation of its parameters (*t* and *p*), while the WED algorithm was examined with a newly proposed reliability according to the variation of its parameters (*t* and *Q*). To estimate the complexity of each algorithm, the number of arithmetic operations per decoded sequence was computed. An analysis of the trade-off between performance and complexity of the algorithms was performed for each protection class, assuming various configuration options. These analyses led us to conclude that, when choosing the parameters of the algorithms, increasing the error-correcting capability of the binary decoder (*t*) is more advantageous in both cases. In addition, choosing the values of *p* and *Q* such that *γ* is close to one (for a fixed value of *t*), it was verified that the GC-2 algorithm is less complex, while the WED algorithm can offer (depending on the code adopted) a performance closer to the ML one.

## Endnotes

^{a}Error pattern **z** associated with the syndrome of the sequence **y**^{i}.

^{b}The evaluation of the complexity of decoding algorithms should take into consideration additional factors besides the arithmetic operations (such as memory reads and writes). Since these factors are architecture dependent, we omit their contribution in this article.

## References

- 1.
Lin T-H, Kaiser WJ, Pottie GJ: Integrated low-power communication system design for wireless sensor networks.

*IEEE Commun. Mag*2004, 42(12):142-150. - 2.
Niewiadomska-Szynkiewicz E, Kwasniewski P, Windyga I: Comparative study of wireless sensor networks energy-efficient topologies and power save protocols.

*J. Telecommun. Inf. Technol*2009, 3: 68-75. - 3.
Gómez-Vilardebó J, Pérez-Neira AI, Nájar M: Energy efficient communications over the AWGN relay channel.

*IEEE Trans. Wirel. Commun*2010, 9(1):32-37. - 4.
Zhu Y, Wu W, Pan J, Tang Y: An energy-efficient data gathering algorithm to prolong lifetime of wireless sensor networks.

*Comput. Commun*2010, 33(5):639-647. 10.1016/j.comcom.2009.11.008 - 5.
Howard SL, Schlegel C, Iniewski K: Error control coding in low-power wireless sensor networks: when is ECC energy-efficient?

*EURASIP J. Wirel. Commun. Netw*2006, 2: 1-14. - 6.
Kienle F, Wehn N, Meyr H: On complexity, energy- and implementation-efficiency of channel decoders.

*IEEE Trans. Commun*2011, 59(12):3301-3310. - 7.
Fossorier M, Lin S, Snyders J: Reliability-based syndrome decoding of linear block codes.

*IEEE Trans. Inf. Theory*1998, IT-44: 388-398. - 8.
Singh J, Pesch D: Application of energy efficient soft-decision error control in wireless sensor networks.

*Telecommun. Syst*(Springer Netherlands) 2011, pp. 1–11. http://dx.doi.org/10.1007/s11235-011-9588-z - 9.
Chang YC, Lee SW, Komiya R: A low complexity hierarchical QAM symbol bits allocation algorithm for unequal error protection of wireless video transmission.

*IEEE Trans. Consum. Electron*2009, 55(3):1089-1097. - 10.
Nguyen HX, Nguyen HH, Le-Ngoc T: Signal transmission with unequal error protection in wireless relay networks.

*IEEE Trans. Veh. Technol*2010, 59(5):2166-2178. - 11.
Pimentel C, Souza RD, Uchôa-Filho BF, Pellenz ME: Generalized punctured convolutional codes with unequal error protection.

*EURASIP J. Adv. Signal Process.*2008, 2008: Art. ID 280831, 1-6. - 12.
Borade S, Nakiboglu B, Zheng L: Unequal error protection: an information-theoretic perspective.

*IEEE Trans. Inf. Theory*2009, 55(12):5511-5539. - 13.
Zhang S, Lau VKN: A novel unequal error protection (UEP) scheme using D-STTD for multicast service.

*IEEE Trans. Wirel. Commun*2009, 8(2):978-984. - 14.
Arslan SS, Cosman PC, Milstein LB: Coded hierarchical modulation for wireless progressive image transmission.

*IEEE Trans. Veh. Technol*2011, 60(9):4299-4313. - 15.
Kang K, Jeon WJ: Differentiated protection to video layers to improve perceived quality.

*IEEE Trans. Mobi. Comput*2012, 11(2):292-304. - 16.
Thomos N, Boulgouris NV, Strintzis MG: Wireless image transmission using turbo codes and optimal unequal error protection.

*IEEE Trans. Image Process*2005, 14(11):1890-1901. - 17.
Qu Q, Modestino JW: An adaptive motion-based unequal error protection approach for real-time video transport over wireless IP networks.

*IEEE Trans. Multimed*2006, 8(5):1033-1044. - 18.
Ha H, Yim C: Layer-weighted unequal error protection for scalable video coding extension of H.264/AVC.

*IEEE Trans. Consum. Electron*2008, 54(2):736-744. - 19.
Zhang W, Shao X, Torki M, HajShirMohammadi A, Bajic IV: Unequal error protection codes for JPEG2000 images using short block length turbo codes.

*IEEE Commun. Lett*2011, 15(6):659-661. - 20.
Tendolkar NN, Hartman CRP: Generalization of Chase algorithms for soft decision decoding of binary linear codes.

*IEEE Trans. Inf. Theory*1984, IT-30(5):714-721. - 21.
Weldon Jr E: Decoding binary block codes on Q-ary output channels.

*IEEE Trans. Inf. Theory*1971, 17(6):713-718. 10.1109/TIT.1971.1054713 - 22.
Masnick B, Wolf J: On linear unequal error protection codes.

*IEEE Trans. Inf. Theory*1967, IT-13(4):600-607. - 23.
van Gils WJ: On linear unequal error protection codes. EUT-Rep-82-WSK-02, Department of Mathematical and Computing Science. Eindhoven University of Technology, 1982

- 24.
Chase D: A class of algorithms for decoding block codes with channel measurement information.

*IEEE Trans. Inf. Theory*1972, IT-18(1):170-182. - 25.
Chen WHJ, Fossorier MPC, Lin S: Optimum quantizer design for the weighted erasure decoding algorithm. In

*Proceedings of the IEEE International Conference on Communications (ICC)* (Vancouver, Canada; June 1999):838-842. - 26.
Albuquerque RC, Cunha DC, Pimentel C: An evaluation of the generalized Chase-2 algorithm applied to unequal error protection block codes. In

*Proceedings of the IEEE 3rd Latin-American Conference on Communications (LATINCOM)* (Belém-PA, Brazil; October 2011):1-6. - 27.
Tenenbaum AM, Langsam Y, Augenstein MJ:

*Data Structures Using C*. Facsimile edition: Prentice Hall; 1989.

## Acknowledgements

This study was supported in part by the State of Pernambuco Research Foundation (FACEPE) under Grant APQ-1060-3.04/10 and the Brazilian Council for Scientific and Technological Development (CNPq) under Grant 302535/2010-1.


## Additional information

### Competing interests

The authors declare that they have no competing interests.


## Rights and permissions

**Open Access**
This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (
https://creativecommons.org/licenses/by/2.0
), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

de Albuquerque, R.C., Cunha, D.C. & Pimentel, C. On the complexity-performance trade-off in soft-decision decoding for unequal error protection block codes.
*EURASIP J. Adv. Signal Process.* **2013**, 28 (2013). https://doi.org/10.1186/1687-6180-2013-28


### Keywords

- Block codes
- Bit error probability
- Unequal error protection
- Soft-decision decoding
- Computational complexity