# Using Linear Block Codes To Correct Errors English Language Essay


In coding theory, a linear code is an error-correcting code for which any linear combination of code words is another codeword of the code. Linear codes can be divided into block codes and convolutional codes, although Turbo codes can be seen as a hybrid of these two types[1]. Linear codes allow for more efficient encoding and decoding algorithms than other codes.

A code is said to be linear if the modulo-2 sum of any two code words in the code is itself a code word of the code.
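This closure property is easy to check numerically. The sketch below (a minimal example, not from the essay) verifies it for the length-3 repetition code {000, 111}, using bitwise XOR as modulo-2 addition:

```python
# Closure check for the length-3 repetition code {000, 111}.
# Modulo-2 addition of bits is the XOR operation (^).
code = {(0, 0, 0), (1, 1, 1)}

for c1 in code:
    for c2 in code:
        s = tuple(a ^ b for a, b in zip(c1, c2))  # modulo-2 sum of two code words
        assert s in code, "closure violated"      # the sum must be a code word

print("the repetition code is linear")
```

Any pair sums to either 000 (when the two code words are equal) or 111, both of which are code words.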

Linear block codes:

Consider an (n,k) linear block code, where k is the number of message bits and n is the number of code bits in each block.

Here, k of the n code bits are identical to the message sequence to be transmitted, and the remaining n-k bits are computed from the message bits according to the encoding rule; these n-k bits are referred to as parity-check bits. Block codes in which the message bits are transmitted in unaltered form are called systematic codes. For applications requiring both error detection and error correction, the use of systematic block codes simplifies implementation of the decoder.

The code rate of such a block code is k/n.

Let m0, m1, …, mk-1 constitute a block of k arbitrary message bits, so there are 2^k distinct message blocks. This sequence of message bits is given to the encoder, which produces a sequence of n encoded bits; let b0, b1, …, bn-k-1 denote the n-k parity bits. For the code to possess a systematic structure, we divide each code word into two parts: the message bits and the parity bits. We can send the message bits first and the parity bits later, or vice versa.

In our representation we place the n-k parity bits on the left and the k message bits on the right, so we write

ci = bi,          i = 0, 1, …, n-k-1

ci = m(i+k-n),    i = n-k, n-k+1, …, n-1

The n-k parity bits are linear sums of the k message bits as shown by the generalized relation

bi = p0i m0 + p1i m1 + … + p(k-1)i m(k-1),    i = 0, 1, …, n-k-1

where the coefficient pji = 1 if bi depends on mj, and pji = 0 otherwise.

The coefficients pji are chosen in such a way that the rows of the generator matrix are linearly independent and the parity equations are unique.
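As a concrete illustration, the short sketch below computes the parity bits b = mP (modulo 2) for a hypothetical (7,4) code; the coefficient matrix P is an assumption chosen for this example, not taken from the essay:

```python
import numpy as np

# Hypothetical coefficient matrix P (k x (n-k) = 4 x 3) for a (7,4) code.
# Column i of P lists which message bits enter parity bit b_i.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])

m = np.array([1, 0, 1, 1])   # k = 4 message bits
b = m @ P % 2                # n-k = 3 parity bits, each a modulo-2 sum
print(b)                     # -> [1 0 0]
```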

Now, using matrix notation, let

m = [m0, m1, …, mk-1]

b = [b0, b1, …, bn-k-1]

c = [c0, c1, …, cn-1]

Note that all three vectors are row vectors.

b = mP

The code word c can then be expressed in the partitioned form c = [b : m]

Now we have

c = m[P : Ik]

where Ik is the k-by-k identity matrix.

Now let G = [P : Ik].

The generator matrix G is said to be in canonical form in that its k rows are linearly independent; that is, it is not possible to express any row of G as a linear combination of the remaining rows.

Now c = mG

The full set of 2^k code words, referred to simply as the code, is thereby generated. Moreover, the modulo-2 sum of any two code words is another code word. This basic property of linear codes is called closure:
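The generation of the full code via c = mG, and its closure, can be sketched as follows, using a hypothetical (7,4) parity matrix P chosen purely for illustration:

```python
import itertools
import numpy as np

# Hypothetical parity matrix P for a (7,4) systematic code.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])   # generator matrix G = [P : Ik]

# Generate the full code: c = mG (mod 2) for all 2^k message blocks.
code = {tuple(np.array(m) @ G % 2) for m in itertools.product([0, 1], repeat=4)}
print(len(code))                           # 2^4 = 16 distinct code words

# Closure: the modulo-2 sum of any two code words is again a code word.
for c1 in code:
    for c2 in code:
        assert tuple(np.array(c1) ^ np.array(c2)) in code
```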

ci + cj = (mi + mj)G

The modulo-2 sum of mi and mj represents a new message vector; correspondingly, the modulo-2 sum of ci and cj represents a new code word.

There is another way of expressing the relationship between the message bits and parity bits of a linear block code. Let H denote the (n-k)-by-n matrix defined as H = [In-k : P^T], where P^T is the transpose of P.

Now HG^T = 0

and cH^T = mGH^T = 0

The matrix H is called the parity-check matrix of the code, and the above equations are called the parity-check equations.
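The parity-check matrix is what makes error correction possible: for a received word r = c + e, the syndrome s = rH^T depends only on the error pattern e, and for a single-bit error it equals the corresponding column of H. The sketch below demonstrates this for a hypothetical (7,4) code (the matrix P is an assumption chosen for this example):

```python
import numpy as np

# Hypothetical (7,4) systematic code.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])        # G = [P : Ik]
H = np.hstack([np.eye(3, dtype=int), P.T])      # H = [I(n-k) : P^T]

assert not (H @ G.T % 2).any()                  # HG^T = 0, hence cH^T = 0

m = np.array([1, 0, 1, 1])
c = m @ G % 2                                   # transmitted code word
r = c.copy()
r[5] ^= 1                                       # single-bit error in transit

s = H @ r % 2                                   # syndrome (zero iff r is a code word)
pos = next(j for j in range(7) if (H[:, j] == s).all())
r[pos] ^= 1                                     # flip the located bit back
print((r == c).all())                           # -> True: error corrected
```

Because all columns of H are distinct and nonzero, every single-bit error produces a unique nonzero syndrome, which is exactly the single-error-correcting property.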

Turbo Coding:

Turbo codes derive their name from an analogy between the decoding algorithm and the turbo-engine principle. The turbo decoder operates on noisy versions of the systematic bits and the two sets of parity-check bits in two decoding stages to produce an estimate of the original message bits.

Each of the decoding stages uses the BCJR algorithm, originally invented by Bahl, Cocke, Jelinek, and Raviv to solve a maximum a posteriori probability (MAP) detection problem. The BCJR algorithm differs from the Viterbi algorithm in two fundamental respects:

The BCJR algorithm is a soft-input, soft-output decoding algorithm with two recursions, one forward and one backward, both of which involve soft decisions. In contrast, the Viterbi algorithm is a soft-input, hard-output decoding algorithm with a single forward recursion involving soft decisions; the recursion ends in a hard decision, whereby one particular survivor path among several is retained. In computational terms, the BCJR algorithm is therefore more complex than the Viterbi algorithm because of the backward recursion.

The BCJR algorithm is a MAP decoder in that it minimizes the bit error rate by estimating the a posteriori probabilities of the individual bits in a code word; to reconstruct the data sequence, the soft outputs of the BCJR algorithm are hard-limited.

Most important, the formulation of the BCJR algorithm rests on the fundamental assumption that the channel encoding (namely, the convolutional encoding performed in the transmitter) is modeled as a Markov process and that the channel is memoryless. In the context of the present discussion, the Markovian assumption means that if a code can be represented as a trellis, then the state of the trellis depends only on the past state and the input bit.

Before describing the operation of the two-stage turbo decoder, we find it desirable to introduce the notion of extrinsic information. The most convenient representation for this concept is the log-likelihood ratio, in which case the extrinsic information is computed as the difference between the log-likelihood ratio at the output of a decoding stage and the log-likelihood ratio at its input.

On this basis, we may depict the flow of information in the two-stage turbo decoder in a symmetric, extrinsic manner. The first decoding stage uses the BCJR algorithm to produce a soft estimate of each systematic bit xj, expressed as the log-likelihood ratio

L1(xj) = log[ P(xj = 1 | y) / P(xj = 0 | y) ]

Hence the extrinsic information about the message bits derived from the first decoding stage is

L1,e(x) = L1(x) - L2,e(x)

where L2,e(x) denotes the extrinsic information fed back to the first stage from the second decoding stage.
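Numerically, extrinsic information is just the difference of two log-likelihood ratios. The toy calculation below (with made-up probabilities) shows how the new knowledge gained by a decoding stage is isolated from the prior knowledge it was given:

```python
import math

def llr(p1):
    """Log-likelihood ratio log[P(x = 1) / P(x = 0)] of a binary symbol."""
    return math.log(p1 / (1.0 - p1))

# Made-up numbers: before decoding, the stage believes P(x_j = 1) = 0.6;
# after decoding, its a posteriori belief is P(x_j = 1) = 0.9.
L_in = llr(0.6)            # LLR at the decoder input (a priori)
L_out = llr(0.9)           # LLR at the decoder output (a posteriori)
L_ext = L_out - L_in       # extrinsic information: output LLR minus input LLR
print(round(L_ext, 3))
```

Only L_ext is passed (after interleaving) to the other stage, so each stage receives information it did not generate itself.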

Before being applied to the second stage, the extrinsic information is reordered to compensate for the pseudo-random interleaving introduced in the turbo encoder. In addition, the noisy parity-check bits generated by encoder 2 are used as input. Thus, using the BCJR algorithm, the second decoding stage produces a more refined soft estimate of the message bits x.

BCJR algorithm:

For a discussion of turbo decoding to be complete, a mathematical exposition of the BCJR algorithm for MAP estimation is in order.

Let x(t) be the input to the trellis encoder at time t, and let y(t) be the corresponding output observed at the receiver; note that y(t) may include more than one observation. Let y(1,t) denote the sequence of observations up to and including time t:

y(1,t) = [y(1), y(2), …, y(t)]

Let λm(t) denote the probability that the state s(t) of the trellis encoder equals m, where m = 1, 2, …, M. So we write

λ(t) = P[s(t) | y]

where λ(t) is an M-by-1 vector whose mth element is λm(t).

P(x(t) = 1 | y) = Σ(m∈A) λm(t)

where A is the set of states reached by transitions that correspond to the symbol '1' at the input, and λm(t) is the mth component of λ(t).

α(t) = P[s(t) | y(1,t)]

β(t) = P[s(t) | y(t+1,T)]

where T denotes the final time; α(t) and β(t) are estimates of the state probabilities based on the past and the future observations, respectively.

λ(t) = α(t)·β(t) / ||α(t)·β(t)||

The vector product α(t)·β(t) is defined element by element, and ||·|| denotes the sum of the elements, which normalizes λ(t) to a probability vector.

Let γ(t) = {γm′,m(t)} denote the M-by-M matrix of branch metrics, where γm′,m(t) combines the probability of the transition from state m′ to state m with the likelihood of the observation y(t).

We may then formulate the forward and backward recursions as follows:

α^T(t) = α^T(t-1)γ(t) / ||α^T(t-1)γ(t)||

β(t) = γ(t+1)β(t+1) / ||γ(t+1)β(t+1)||
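The two recursions can be sketched on a toy two-state trellis. The branch-metric values below are made-up numbers, not derived from any particular code; the point is only the normalized forward/backward flow and the elementwise combination into λ(t):

```python
import numpy as np

T = 4                                   # number of trellis steps
rng = np.random.default_rng(0)
# gamma[t] is the 2x2 matrix of branch metrics gamma_{m',m}(t); the values
# here are arbitrary positive numbers standing in for transition probability
# times observation likelihood.
gamma = rng.uniform(0.1, 1.0, size=(T + 1, 2, 2))

alpha = np.zeros((T + 1, 2))
beta = np.zeros((T + 1, 2))
alpha[0] = [1.0, 0.0]                   # encoder assumed to start in state 0
beta[T] = [0.5, 0.5]                    # final state unknown

# Forward recursion: alpha^T(t) = alpha^T(t-1) gamma(t), then normalize.
for t in range(1, T + 1):
    a = alpha[t - 1] @ gamma[t]
    alpha[t] = a / a.sum()

# Backward recursion: beta(t) = gamma(t+1) beta(t+1), then normalize.
for t in range(T - 1, -1, -1):
    b = gamma[t + 1] @ beta[t + 1]
    beta[t] = b / b.sum()

# State probabilities: lambda(t) = alpha(t) * beta(t), elementwise, normalized.
lam = alpha * beta
lam = lam / lam.sum(axis=1, keepdims=True)
print(lam)                              # each row sums to 1
```

The per-step normalization is what keeps the recursion numerically stable; it cancels out when λ(t) is renormalized at the end.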

Convolutional code:

In block coding, the encoder accepts a k-bit message block and generates an n-bit code word; thus, code words are produced on a block-by-block basis. Clearly, provision must be made in the encoder to buffer an entire message block before generating the associated code word. There are applications, however, where the message bits come in serially rather than in large blocks, in which case the use of a buffer may be undesirable. In such situations, the use of convolutional coding may be the preferred method. A convolutional coder generates redundant bits by using modulo-2 convolutions, hence the name.

Figure 1

The encoder of a binary convolutional code with rate 1/n, measured in bits per symbol, may be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders. An L-bit message sequence produces a coded output sequence of length n(L+M) bits. The code rate is therefore given by

r = L / [n(L + M)] bits per symbol

Typically, we have L >> M. Hence, the code rate simplifies to

r = 1/n bits per symbol

The constraint length of a convolutional code, expressed in terms of message bits, is defined as the number of shifts over which a single message bit can influence the encoder output. In an encoder with an M-stage shift register, the memory of the encoder equals M message bits, and K = M + 1 shifts are required for a message bit to enter the shift register and finally come out. Hence, the constraint length of the encoder is K. Figure 1 shows a convolutional encoder with n = 2 and K = 3; hence, the code rate of this encoder is 1/2.


The encoder of Figure 1 operates on the incoming message sequence, one bit at a time.
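A shift-register encoder of this kind is straightforward to simulate. The sketch below assumes the commonly used rate-1/2, K = 3 generator taps (1,1,1) and (1,0,1); whether these match Figure 1 cannot be verified from the text, so treat them as an assumption:

```python
def conv_encode(msg, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: two modulo-2 adders read the current
    bit plus an M-stage shift register; the taps g1, g2 give the connections."""
    M = len(g1) - 1                     # M = 2 memory stages, so K = M + 1 = 3
    bits = list(msg) + [0] * M          # append M zeros to flush the register
    sr = [0] * M                        # shift register, initially cleared
    out = []
    for b in bits:
        window = [b] + sr               # current bit followed by register contents
        out.append(sum(x * y for x, y in zip(window, g1)) % 2)
        out.append(sum(x * y for x, y in zip(window, g2)) % 2)
        sr = [b] + sr[:-1]              # shift the new bit in
    return out

msg = [1, 0, 1, 1]                      # L = 4 message bits
code = conv_encode(msg)
print(len(code))                        # n(L + M) = 2 * (4 + 2) = 12 output bits
```

The realized rate here is L / [n(L + M)] = 4/12; for L >> M it approaches 1/n = 1/2, in agreement with the rate formula above.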