Abstract
The first problem addressed in this chapter is how to find the number of errors when the capability of the code is not exceeded. It is shown that there is a polynomial (the error locator polynomial) whose roots point to the positions of the errors. This polynomial can be computed by solving a set of linear equations, and its roots are obtained by a method known as the Chien search. The error values are then found by solving another set of linear equations. Appendix E presents a fast way to compute the determinants needed. Several examples illustrate how, when the number of errors surpasses the capability of the code, the decoding algorithm may fail, thus allowing the detection of errors. It is also explained how to decode in the presence of errors and erasures. Finally, the Massey algorithm is presented to find the locator polynomial without having to solve a system of linear equations. The proof of the algorithm is in Appendix F.
Appendices
Appendix E: The “Condensation” Method for Evaluating Determinants
When done by hand, the evaluation of determinants is a very laborious task for large orders. For a (15, 3) RS code, for instance, to decide if 6 errors are present, we need to compute the value of a 6 × 6 determinant.
The cofactor expansion method for evaluating a determinant of order 6 involves the computation of 6 determinants of order 5, each of which in turn requires the evaluation of 5 determinants of order 4, and so on. Therefore, the number of 2 × 2 determinants that must be computed to find the value of a determinant of order 6 is 360 (6 × 5 × 4 × 3).
The method for evaluating determinants presented in this appendix reduces this task considerably. For instance, for a 6 × 6 determinant it only involves the computation of 55 determinants of order 2, a mere 15% of the work required by the cofactor expansion.
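These operation counts are easy to verify. The sketch below (Python; the helper names are mine) counts the 2 × 2 determinants evaluated by each approach:

```python
def cofactor_count(n):
    """2x2 determinants evaluated by full cofactor expansion: n * (n-1) * ... * 3 = n!/2."""
    total = 1
    for k in range(3, n + 1):
        total *= k
    return total

def condensation_count(n):
    """2x2 determinants evaluated by condensation: an initial step with
    (n-1)^2 minors, then (n-2)^2, ..., down to a final single one."""
    return sum(k * k for k in range(1, n))

print(cofactor_count(6), condensation_count(6))  # 360 and 55, as stated
```

For order 3 the two counts are 3 and 5, in agreement with the remark following Example 17.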
To get a flavor of the method, I start by presenting two examples.
Example 17
Say we want to compute the following determinant of real numbers
Two definitions before we begin: interior of a determinant and consecutive minor.
The interior of a determinant is the array (the matrix) resulting after we eliminate the first and last rows and columns. For a 3 × 3 determinant, its interior is a single number. In the above example, 4. For a 6 × 6 determinant, its interior is a 4 × 4 array and so on.
A consecutive minor is a minor whose rows and columns are adjacent in the original determinant. The minor
is a consecutive minor, but the minor
is not.
The method starts by “condensing” the above determinant into a 2 × 2 determinant whose elements are all the consecutive minors of the original determinant, that is: the minors “pivoting” on the elements a11, a12, a21, and a22. In the example,
After this initial condensation, with the “condensed” determinant
we do the following two things:
-
Condense it (evaluate it!) into a number, 40.
-
Divide this value by the “interior” (4) of the given determinant.
The result (10) is the value of the original determinant.
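The original 3 × 3 matrix is not reproduced in this extract, so the short sketch below uses a stand-in matrix of my own with the same interior (4), the same condensed value (40), and the same determinant (10):

```python
# Stand-in 3x3 matrix (the text's original is not reproduced here); it has
# the same interior (4), condensed determinant (40) and value (10).
A = [[2, 1, 1],
     [1, 4, 1],
     [1, 1, 2]]

# Condense: form the 2x2 array of all consecutive 2x2 minors of A.
m = [[A[i][j] * A[i + 1][j + 1] - A[i][j + 1] * A[i + 1][j] for j in range(2)]
     for i in range(2)]                               # [[7, -3], [-3, 7]]
condensed = m[0][0] * m[1][1] - m[0][1] * m[1][0]     # 49 - 9 = 40

interior = A[1][1]            # 4, must be nonzero
det_A = condensed // interior # 40 / 4 = 10 (division is exact)
```

Five 2 × 2 determinants are computed in total (four minors plus the final one), as the following remark points out.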
Remark
Two remarks are in order. First, the method requires that the interior of the 3 × 3 determinant be different from 0. This can always be arranged (by permuting rows/columns), except in trivial cases of no interest. Second: we had to compute five 2 × 2 determinants, whereas the usual expansion by minors requires only three. Therefore, for determinants of order 3, the method does not offer any advantage. However, the situation reverses for higher orders.
Example 18
The next determinant is a 4 × 4 Vandermonde determinant. We know how to compute its value from Chapter 3.
$$ {A}_0=\left|\begin{array}{cccc}1& 1& 1& 1\\ {}1& 2& 3& 4\\ {}1& 4& 9& 16\\ {}1& 8& 27& 64\end{array}\right|=12 $$
Remark
The reason for the negative signs is that we are working with real numbers.
To apply the “condensation” method, we start by “condensing” the above determinant into a determinant of order 3. For that, we compute its 9 “consecutive minors,” from
$$ \left|\begin{array}{cc}1& 1\\ {}1& 2\end{array}\right|=1 $$
to
$$ \left|\begin{array}{cc}2& 3\\ {}4& 9\end{array}\right|=6 $$
and finally
$$ \left|\begin{array}{cc}9& 16\\ {}27& 64\end{array}\right|=144 $$
The result is
$$ {A}_1=\left|\begin{array}{ccc}1& 1& 1\\ {}2& 6& 12\\ {}4& 36& 144\end{array}\right| $$
After this initial condensation step, we proceed with additional condensation/division (C/D) steps.
-
Condense A1 into \( {A}_1^{\prime } \)
$$ {A}_1^{\prime }=\left|\begin{array}{cc}4& 6\\ {}48& 432\end{array}\right| $$
-
Divide the elements of \( {A}_1^{\prime } \) by the corresponding elements of the “interior” of the determinant A0 (the array with entries 2, 3, 4, 9), which must be different from zero.
We obtain
$$ {A}_2=\left|\begin{array}{cc}2& 2\\ {}12& 48\end{array}\right| $$
Continue now as in Example 17: Condense A2 and divide it by the interior of A1 (that is, 6), which, again, must be different from zero.
The result is \( 72/6=12 \), as said before.
Let us recapitulate. The computation consists of an initial condensation (in this example, from 4 × 4 to 3 × 3) followed by two condensation/division pairs. The number of these condensation/division pairs increases with the order of the determinant we want to evaluate: to evaluate a determinant of order n, the number of C/D pairs is n − 2.
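The recapitulated procedure fits in a short routine. The sketch below (Python; the function name is mine) performs the initial condensation and then the C/D pairs, dividing each 2 × 2 minor by the corresponding interior entry of the matrix from two steps earlier, which it assumes to be nonzero. The Vandermonde determinant of Example 18, reconstructed here from the intermediate values shown (4, 6, 48, 432 and 72/6 = 12), serves as a check:

```python
from fractions import Fraction

def condensation_det(M):
    """Determinant by the condensation method.  Every interior entry met
    during the division steps must be nonzero (permute rows/columns
    beforehand if it is not)."""
    n = len(M)
    prev2 = [[Fraction(1)] * (n + 1) for _ in range(n + 1)]  # divide by 1 at first
    prev1 = [[Fraction(x) for x in row] for row in M]
    while len(prev1) > 1:
        m = len(prev1)
        # 2x2 consecutive minors of prev1, divided by the interior of prev2
        cur = [[(prev1[i][j] * prev1[i + 1][j + 1]
                 - prev1[i][j + 1] * prev1[i + 1][j]) / prev2[i + 1][j + 1]
                for j in range(m - 1)] for i in range(m - 1)]
        prev2, prev1 = prev1, cur
    return prev1[0][0]

# The 4x4 Vandermonde of Example 18 (nodes 1, 2, 3, 4, one node per column)
V = [[1, 1,  1,  1],
     [1, 2,  3,  4],
     [1, 4,  9, 16],
     [1, 8, 27, 64]]
print(condensation_det(V))  # 12
```

Each pass through the loop is one condensation step; the first divides by 1, reproducing the initial condensation, and a matrix of order n triggers exactly n − 2 C/D pairs afterwards.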
Remark
If any of the entries in the interior of A0 are 0, use elementary row/column operations to remove all zeros. Interchanging two rows or columns is enough in many cases.
Two more examples, related to the PGZ decoding of RS codes, follow.
Example 19
To find the locator polynomial of the (15, 3) RS code presented in Section 4.9, we need to compute six determinants of order 5. Say, for instance, we want to calculate L4
Let’s compute the numerator using the condensation method explained before.
The matrix is the following
The initial condensation produces the 4 × 4 matrix
The other matrices are
And finally
Therefore, we conclude that the value of the determinant, the numerator of \( {L}_4 \), is \( \alpha \).
Remark
Similarly, we can compute the denominator. The result is \( {\alpha}^6 \) and therefore \( {L}_4=\frac{\alpha }{\alpha^6}={\alpha}^{10} \) as said in Section 4.9.
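The final division is plain exponent arithmetic in GF(16), where the powers of α cycle with period 15:

```python
# alpha has order 15 in GF(16): alpha^a / alpha^b = alpha^((a - b) mod 15)
exponent = (1 - 6) % 15   # numerator alpha^1, denominator alpha^6
print(exponent)           # 10, i.e. L4 = alpha / alpha^6 = alpha^10
```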
Example 20
Before we proceed with the calculation of the locator polynomial, we need to know how many errors are present. The first step is to see if there are six errors. Therefore, we must find the value of the determinant of the 6 × 6 matrix
All the entries in the interior of A0 are nonzero. But, after a quick look at the elements of A0 we notice that, when we condense A0 into A1, the element a22 of A1 is zero. In fact,
This element belongs to the interior of A1, which makes the computation of A3 impossible. Interchanging rows or columns usually solves the problem. For instance, permuting columns 1 and 2, we have the matrix
We can now proceed with the computation of A1 and the other matrices.
The final result is 0. This implies that the number of errors present is not 6.
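All of this arithmetic happens in GF(16), where subtraction is the same as addition and signs disappear. The chapter's syndrome matrices are not reproduced in this extract, so the sketch below uses a stand-in (a 4 × 4 Vandermonde in powers of α, with GF(16) built on the primitive polynomial x⁴ + x + 1, an assumed choice) and checks condensation against the brute-force permutation sum:

```python
from itertools import permutations

# GF(16) built on the primitive polynomial x^4 + x + 1 (an assumed choice).
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):  # b must be nonzero
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def brute_det(M):
    """Leibniz sum; in characteristic 2 addition is XOR and signs vanish."""
    d = 0
    for p in permutations(range(len(M))):
        t = 1
        for i, j in enumerate(p):
            t = gmul(t, M[i][j])
        d ^= t
    return d

def condensation_det_gf16(M):
    """Condensation over GF(16); the interior entries met must be nonzero."""
    n = len(M)
    prev2 = [[1] * (n + 1) for _ in range(n + 1)]   # divide by 1 initially
    prev1 = [row[:] for row in M]
    while len(prev1) > 1:
        m = len(prev1)
        cur = [[gdiv(gmul(prev1[i][j], prev1[i + 1][j + 1]) ^
                     gmul(prev1[i][j + 1], prev1[i + 1][j]),
                     prev2[i + 1][j + 1])
                for j in range(m - 1)] for i in range(m - 1)]
        prev2, prev1 = prev1, cur
    return prev1[0][0]

# Stand-in: 4x4 Vandermonde in the nodes alpha, alpha^2, alpha^3, alpha^4
V = [[EXP[((j + 1) * i) % 15] for j in range(4)] for i in range(4)]
print(condensation_det_gf16(V) == brute_det(V))
```

A zero interior entry shows up here as a division by zero, and, as in the text, permuting two rows or columns of the stand-in before condensing removes it (the determinant is unchanged up to sign, which is irrelevant in characteristic 2).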
The computational effort required by the condensation method, when it can be made to work, compares very favorably to that of the usual cofactor expansion. The nice feature of the Massey algorithm is that it completely avoids the evaluation of determinants, and that it always works. In this same line of reducing computational complexity, I present in Chapter 5 a method, due to Forney, to find the error values without having to solve a system of linear equations, in sharp contrast to the methods I used in Sections 4.6 and 4.10.
Appendix F: The Massey Algorithm
The Massey algorithm iteratively solves the problem of finding the shortest shift register that produces a prescribed sequence of symbols. Therefore, it can be applied to finding the error locator polynomial or to problems in other areas, for instance cryptography.
A shift register is defined by giving its length, L, and its connection polynomial, L(D). Since the length may not be equal to the degree, g, of L(D), both L(D) and L (with L ≥ g) are needed to specify the circuit.
Equivalently, the register is also described appending L − g leading zeros to L(D), that is writing
instead of
It is a trivial task to find a register that produces the sequence s0, s1, … , sN, namely: L = N + 1 and any L(D). (In fact, we don’t care about the symbols the register generates, and therefore about the polynomial. They are irrelevant, since all the symbols we want to match are already loaded in the register!) However, finding a minimum length register requires a little work and much insight.
As stated, the Massey algorithm works iteratively. So, say we have found some register (Ln(D), Ln) (not necessarily of minimum length) that produces the first n terms of s0, s1, … , sN but not sn. Let’s call \( {s}_n^{\prime}\ne {s}_n \) the symbol output by the register. Keeping up with the idea of employing prior computations, to find a register (Ln + 1(D), Ln + 1) that also generates sn (and perhaps more terms, although this is not required), we’ll use Ln(D), adding to it a correction term constructed from the connection polynomial Lm(D) of a previously obtained register (Lm(D), Lm) with which we produced s0, s1, …, sm − 1 but not sm. Again, call \( {s}_m^{\prime } \) that register’s output.
The discrepancies are (remember: subtraction is the same as addition)
$$ {d}_n={s}_n+{s}_n^{\prime },\kern2em {d}_m={s}_m+{s}_m^{\prime } $$
Then, the following correcting term does the job
$$ \frac{d_n}{d_m}{D}^{n-m}{L}_m(D) $$
Thus
$$ {L}_{n+1}(D)={L}_n(D)+\frac{d_n}{d_m}{D}^{n-m}{L}_m(D) $$
I show below that the register with the connection polynomial given above generates s0, s1, … , sn − 1, sn. It may even produce n′ ≥ n + 1 symbols. However, we don’t care about that.
To avoid unnecessary complications, I’ll justify the above formula using an example. Say that (L8(D), L8) and (L5(D), L5) produce only the first 8 and 5 symbols with 5 cells and 3 cells, respectively (see Fig. 4.21). I underlined “only” to emphasize that symbol \( {s}_8^{\prime } \), output by (L8(D), L8), differs from s8, and the same happens with s5 and \( {s}_5^{\prime } \), the symbol generated by (L5(D), L5).
A full description of the circuits is provided by the polynomials L8(D) and L5(D)
$$ {L}_8(D)=1+{c}_1D+{c}_2{D}^2+{c}_3{D}^3+{c}_4{D}^4+{c}_5{D}^5 $$
$$ {L}_5(D)=1+{c}_1^{\prime }D+{c}_2^{\prime }{D}^2+{c}_3^{\prime }{D}^3 $$
Recall that c5 and \( {c}_3^{\prime } \) (and other coefficients, as well) may be zero.
After being initially loaded with s0, s1, s2, s3, s4, the first four symbols generated by (L8(D), L8) are
$$ {s}_5,\kern1em {s}_6,\kern1em {s}_7,\kern1em {s}_8^{\prime}\ne {s}_8 $$
Similarly, with register (L5(D), L5) we generate
$$ {s}_3,\kern1em {s}_4,\kern1em {s}_5^{\prime}\ne {s}_5 $$
after initially loading it with s0, s1, s2.
The connection polynomial for the circuit that (as we shall see) also produces (at least) s8 is:
$$ {L}_9(D)={L}_8(D)+\frac{d_8}{d_5}{D}^3{L}_5(D) $$
The circuit is represented in Fig. 4.22. Its length is 6, but its degree may be less than 6 (if \( {c}_3^{\prime }=0 \)).
To prove that the circuit generates s8 also (at least!), we’ll begin by looking at the equivalent circuit in Fig. 4.23. If the upper connections were not present, the lower connections would clearly produce the same symbols as (L8(D), L8), although with a longer register. Therefore, the lower part of the circuit can be called the generating part of the register.
Remark
From Fig. 4.23, it is apparent that, in general, the length of the circuit is
$$ {L}_{n+1}=\max \left({L}_n,\kern0.5em {L}_m+n-m\right) $$
In the example
$$ {L}_9=\max \left(5,\kern0.5em 3+8-5\right)=6 $$
Since the symbols produced by the upper and lower connections are added, let’s see what effect the symbols that come from the upper connections have.
From Fig. 4.23, the first symbol generated by the upper connections is
$$ \frac{d_8}{d_5}\left({s}_3+{c}_1^{\prime }{s}_2+{c}_2^{\prime }{s}_1+{c}_3^{\prime }{s}_0\right) $$
But
$$ {s}_3={c}_1^{\prime }{s}_2+{c}_2^{\prime }{s}_1+{c}_3^{\prime }{s}_0 $$
Therefore, the contribution is 0, and s6 enters the register unchanged (as if the upper connections didn’t exist). The same happens with s7 (see Fig. 4.24). Finally, from Fig. 4.25 we see that the third contribution from the upper part of the circuit is
$$ \frac{d_8}{d_5}\left({s}_5+{c}_1^{\prime }{s}_4+{c}_2^{\prime }{s}_3+{c}_3^{\prime }{s}_2\right)=\frac{d_8}{d_5}\left({s}_5+{s}_5^{\prime}\right)=\frac{d_8}{d_5}{d}_5={d}_8 $$
This is exactly what we need to correct the output provided by the lower part of the circuit. Thus, the upper part of the shift register can be called the correcting part of the circuit, in agreement with the name (correcting term) given to the second term of L9(D).
Let us recapitulate.
The construction just presented to synthesize a (Ln + 1(D), Ln + 1) register capable of producing (at least) the first n + 1 terms of a given sequence works with any pair (Ln(D), Ln), (Lm(D), Lm) that only output the first n and m (m < n) symbols, respectively. The feedback polynomial is
$$ {L}_{n+1}(D)={L}_n(D)+\frac{d_n}{d_m}{D}^{n-m}{L}_m(D) $$
with
$$ {d}_n={s}_n+{s}_n^{\prime },\kern2em {d}_m={s}_m+{s}_m^{\prime } $$
and the register length
$$ {L}_{n+1}=\max \left({L}_n,\kern0.5em {L}_m+n-m\right) $$
Suppose now that the registers (Ln(D), Ln) and (Lm(D), Lm) have the shortest possible lengths. Call these minimum lengths \( {\widehat{L}}_n \) and \( {\widehat{L}}_m \).
Now, two questions
-
Is there anything we can say about \( {\widehat{L}}_n \) and \( {\widehat{L}}_m \)?
-
If we have several choices for \( \left({\widehat{L}}_m(D),{\widehat{L}}_m\right) \), is there a choice that guarantees that (Ln + 1(D), Ln + 1) is also a shortest length register?
We’ll proceed in three steps.
First Step
The first step towards answering the questions above is to prove a lower bound on Ln + 1, namely:
$$ {L}_{n+1}\ge n+1-{L}_n $$
To show that, let’s consider the following problem: we have two registers, Reg1, of length 2, and Reg2, of length 5, both represented in Fig. 4.26. Reg1 is initially loaded with s0 and s1. The output sequence is
where
$$ {s}_2={c}_1{s}_1+{c}_2{s}_0,\kern2em {s}_3={c}_1{s}_2+{c}_2{s}_1 $$
and so on.
Reg2 is initialized with the first 5 symbols output by Reg1. How many of the symbols produced by Reg1 can be matched by Reg2? Clearly, at least 5, but perhaps more symbols depending on the connections.
Let’s write the equations required by the matching
$$ {s}_5={c}_1^{\prime }{s}_4+{c}_2^{\prime }{s}_3+{c}_3^{\prime }{s}_2+{c}_4^{\prime }{s}_1+{c}_5^{\prime }{s}_0\kern2em \left(4.{1}^{\prime}\right) $$
$$ {s}_6={c}_1^{\prime }{s}_5+{c}_2^{\prime }{s}_4+{c}_3^{\prime }{s}_3+{c}_4^{\prime }{s}_2+{c}_5^{\prime }{s}_1\kern2em \left(4.{2}^{\prime}\right) $$
$$ {s}_7={c}_1^{\prime }{s}_6+{c}_2^{\prime }{s}_5+{c}_3^{\prime }{s}_4+{c}_4^{\prime }{s}_3+{c}_5^{\prime }{s}_2\kern2em \left(4.{3}^{\prime}\right) $$
and so on.
Observe that Eq. (4.3′) is a linear combination of Eqs. (4.1′) and (4.2′). In fact, we have
$$ \mathrm{Eq}.\ \left(4.{3}^{\prime}\right)={c}_1\cdot \mathrm{Eq}.\ \left(4.{2}^{\prime}\right)+{c}_2\cdot \mathrm{Eq}.\ \left(4.{1}^{\prime}\right) $$
as can be seen from Eqs. (4.29)–(4.34). Similarly, we have:
$$ \mathrm{Eq}.\ \left(4.{4}^{\prime}\right)={c}_1\cdot \mathrm{Eq}.\ \left(4.{3}^{\prime}\right)+{c}_2\cdot \mathrm{Eq}.\ \left(4.{2}^{\prime}\right) $$
and so on.
This means that not only Eq. (4.3′), but also Eqs. (4.4′)–(4.7′), …, are linear combinations of Eqs. (4.1′) and (4.2′). Therefore, if Eqs. (4.1′) and (4.2′) are satisfied, all the other equations are satisfied. In other words, if the coefficients of Reg2 are chosen to output s5 and s6, then Reg2 produces not only the first 7 (= 5 + 2) symbols output by Reg1, but all of them.
Expressing this more generally, we can say that whenever the outputs of two registers of length L1 and L2 coincide in the first L1 + L2 symbols, they always coincide.
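This coincidence result can be checked by brute force over GF(2). The toy registers below are my own choice, not those of the figures: the lengths are 2 and 3, so agreement on the first 2 + 3 = 5 symbols must force agreement everywhere:

```python
from itertools import product

def run(conn, seed, n):
    """Output n symbols of a GF(2) shift register; conn[j] taps the symbol
    j + 1 positions back."""
    s = list(seed)
    while len(s) < n:
        s.append(sum(c & s[-j - 1] for j, c in enumerate(conn)) % 2)
    return s

reg1 = run([1, 1], [1, 0], 50)           # register of length L1 = 2
agreeing = []
for conn in product([0, 1], repeat=3):   # every register of length L2 = 3
    reg2 = run(conn, reg1[:3], 50)       # seeded with reg1's first symbols
    if reg2[:5] == reg1[:5]:             # agree on the first L1 + L2 = 5...
        assert reg2 == reg1              # ...then they agree everywhere
        agreeing.append(conn)
```

Two of the eight candidate registers pass the 5-symbol test, and both reproduce reg1’s output in full, as the result predicts.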
This is the key result needed to prove the lower bound on Ln + 1. But before we do that, here is a numerical example to illustrate the preceding argument.
Example 21
Refer to Fig. 4.26. Reg1 is loaded with s0 = 1 and s1 = α. The connection polynomial is
The register output is
Reg2 is loaded with
Equations (4.1′) and (4.2′) are
The values of \( {c}_4^{\prime } \) and \( {c}_5^{\prime } \) can be determined as functions of \( {c}_1^{\prime },{c}_2^{\prime },{c}_3^{\prime } \).
We obtain
There are multiple solutions. If, for instance, we set
we have
The symbols output by Reg2 are
This agrees with the symbols produced by Reg1.
Let’s proceed now to prove the lower bound on Ln + 1.
Assume the contrary, that is
$$ {L}_{n+1}\le n-{L}_n $$
Then
$$ {L}_n+{L}_{n+1}\le n $$
Since the outputs of registers (Ln(D), Ln) and (Ln + 1(D), Ln + 1) coincide in n symbols, and n is greater than or equal to the sum of the two lengths, their outputs must always coincide, which contradicts that (Ln + 1(D), Ln + 1) generates sn and (Ln(D), Ln) does not.
Second Step
This bound is valid in general and, therefore, also when (Ln(D), Ln) is a shortest length register. Thus
$$ {L}_{n+1}\ge n+1-{\widehat{L}}_n $$
Clearly, the above equation implies
$$ {\widehat{L}}_{n+1}\ge n+1-{\widehat{L}}_n $$
On the other hand
$$ {\widehat{L}}_{n+1}\ge {\widehat{L}}_n $$
(\( {L}_{n+1}<{\widehat{L}}_n \) contradicts that \( \left({\widehat{L}}_n(D),{\widehat{L}}_n\right) \) is a minimum length register that produces n symbols)
Consequently
$$ {\widehat{L}}_{n+1}\ge \max \left({\widehat{L}}_n,\kern0.5em n+1-{\widehat{L}}_n\right) $$
So, we can write
$$ {\widehat{L}}_{n+1}\ge \max \left({\widehat{L}}_n,\kern0.5em n+1-{\widehat{L}}_n\right)\kern2em (4.35) $$
Third Step
To continue, say we have iteratively constructed a sequence of registers that satisfy (4.35) with equality, that is: registers of the minimum possible length. Then, I shall prove that (Ln + 1(D), Ln + 1) also does if \( \left({\widehat{\boldsymbol{L}}}_{\boldsymbol{m}}\left(\boldsymbol{D}\right),{\widehat{\boldsymbol{L}}}_{\boldsymbol{m}}\right) \) is chosen properly.
The polynomial Ln + 1(D) is constructed using \( {\widehat{L}}_n(D) \) and \( {\widehat{L}}_m(D) \)
$$ {L}_{n+1}(D)={\widehat{L}}_n(D)+\frac{d_n}{d_m}{D}^{n-m}{\widehat{L}}_m(D) $$
As said before, the length of the circuit is
$$ {L}_{n+1}=\max \left({\widehat{L}}_n,\kern0.5em {\widehat{L}}_m+n-m\right) $$
If we could find a register such that
$$ {\widehat{L}}_m+n-m=n+1-{\widehat{L}}_n $$
we would have
$$ {L}_{n+1}=\max \left({\widehat{L}}_n,\kern0.5em n+1-{\widehat{L}}_n\right) $$
That is, a register of minimum length!
Choose as \( \left({\widehat{L}}_m(D),{\widehat{L}}_m\right) \) the register just prior to the whole run of registers of length \( {\widehat{L}}_n \). That is, the register that generates more symbols than any other register of length less than \( {\widehat{L}}_n \) (see Fig. 4.27).
Remark
Recall that the register \( \left({\widehat{L}}_m(D),{\widehat{L}}_m\right) \) produces s0 s1 … sm − 1 but not sm. Similarly, (\( {\widehat{L}}_{m^{\prime }}(D),{\widehat{L}}_{m^{\prime }} \)) outputs \( {s}_0{s}_1\dots {s}_{m-1}{s}_m\dots {s}_{m^{\prime }-1} \). That is: at least up to sm, but perhaps more. Therefore, m′ ≥ m + 1 and \( {d}_{m^{\prime }} \) are both known. Hence, it is more appropriate to write (\( {\widehat{L}}_{m^{\prime }}(D),\kern0.5em {\widehat{L}}_{m^{\prime }} \)) than (\( {\widehat{L}}_{m+1}(D),{\widehat{L}}_{m+1} \)).
We, then, have
$$ {\widehat{L}}_{m^{\prime }}={\widehat{L}}_n $$
Now, since (\( {\widehat{L}}_{m^{\prime }}(D),{\widehat{L}}_{m^{\prime }} \)) is minimum length and generates sm, we can write
$$ {\widehat{L}}_{m^{\prime }}=\max \left({\widehat{L}}_m,\kern0.5em m+1-{\widehat{L}}_m\right) $$
Therefore
$$ {\widehat{L}}_n=\max \left({\widehat{L}}_m,\kern0.5em m+1-{\widehat{L}}_m\right)=m+1-{\widehat{L}}_m $$
(The last equality holds because \( {\widehat{L}}_n>{\widehat{L}}_m \)).
Hence
$$ {L}_{n+1}=\max \left({\widehat{L}}_n,\kern0.5em {\widehat{L}}_m+n-m\right)=\max \left({\widehat{L}}_n,\kern0.5em n+1-{\widehat{L}}_n\right) $$
as desired.
Summarizing:
As we did in the examples of this chapter, we start the algorithm with two minimum length registers of different lengths, say L″ and L′ with L″ < L′, such that the register of length L″ generates more symbols than any other register of length less than L′. Then, the algorithm guarantees that the registers will be of minimum length at every step.
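The whole iteration can be condensed into a few lines. The sketch below is the standard Berlekamp–Massey formulation over GF(2), a simplification of the chapter's setting: over GF(16) the discrepancy ratio dn/dm matters, whereas over GF(2) it is always 1. The function names are mine; the register is returned as (connection polynomial, length) and checked by regenerating the input sequence:

```python
def massey(s):
    """Shortest GF(2) register (connection polynomial C, length L)
    generating the sequence s (standard Berlekamp-Massey iteration)."""
    n = len(s)
    C, B = [1] + [0] * n, [1] + [0] * n   # current / previous polynomials
    L, shift = 0, 1                       # shift = n - m, the D^(n-m) factor
    for i in range(n):
        # discrepancy between s[i] and the register's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d:                             # add the correction term D^shift * B(D)
            T = C[:]
            for j in range(n - shift + 1):
                C[j + shift] ^= B[j]
            if 2 * L <= i:                # the length must grow: L <- i + 1 - L
                L, B, shift = i + 1 - L, T, 1
            else:
                shift += 1
        else:
            shift += 1
    return C[:L + 1], L

def regenerate(C, L, s, n):
    """Load the first L symbols and run the register for n symbols."""
    out = list(s[:L])
    while len(out) < n:
        out.append(sum(C[j] & out[-j] for j in range(1, L + 1)) % 2)
    return out

# Sequence from a length-4 register with taps at positions 1 and 4
seq = [0, 0, 0, 1]
for i in range(4, 16):
    seq.append(seq[i - 1] ^ seq[i - 4])

C, L = massey(seq)
print(L, C)  # 4 [1, 1, 0, 0, 1], i.e. L(D) = 1 + D + D^4
```

The `2 * L <= i` test is exactly the lower bound (4.35): the length changes only when the bound forces it to, and then the new length is i + 1 − L.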
Remark
Once (\( {\widehat{L}}_{n+1}(D),{\widehat{L}}_{n+1} \)) has been computed, we know how many symbols it generates. Then, we incorporate the register into the “chain” depicted in Fig. 4.27 with the new “name” (\( {\widehat{L}}_{n^{\prime }}(D),{\widehat{L}}_{n^{\prime }} \)), where n′ ≥ n + 1 is the number of symbols produced. That information, together with \( {d}_{n^{\prime }} \), will be used in the construction of other registers with which to obtain more terms of the given sequence.
© 2019 Springer Nature Switzerland AG
Sanvicente, E. (2019). Decoding RS and BCH Codes (Part 1). In: Understanding Error Control Coding. Springer, Cham. https://doi.org/10.1007/978-3-030-05840-1_4
Print ISBN: 978-3-030-05839-5
Online ISBN: 978-3-030-05840-1