Linear Block, Convolutional And Turbo Codes Computer Science Essay


This essay briefly compares linear block codes, convolutional codes and turbo codes on the basis of the research carried out. The properties and concepts of each code family that allow errors to be detected and corrected are explained. Diagrams show the path transitions, and particular equations are used for correcting the errors. The advantages and disadvantages of each code are described along with its principles, and examples are given for each. Encoder and decoder implementations, together with the relevant equations and diagrams, show how errors are detected and corrected.

A linear block code maps a block of K bits to a codeword of N bits. A group of input bits is mapped to a group of output bits, and the code is therefore called an (N, K) code. Since each block of K bits is mapped to an N-bit codeword, there are 2^K codewords. The idea behind an error-correcting code is to pick the 2^K codewords out of the 2^N total possible N-tuples in such a way that a certain number of errors can be detected and corrected.

Convolutional Codes:

In the case of convolutional codes, each block of K bits is mapped into a block of N bits. The N bits are determined not only by the current K bits but also by previous information bits, and this dependence can be described by a finite state machine.

Turbo Codes:

Turbo codes are a form of forward error-correction coding, introduced as recently as 1993. To achieve large coding gains, Forney proposed concatenated coding schemes that combine two or more simple building-block or constituent codes. The resulting code has the error-correction capability of a much longer code, while its structure keeps the decoding complexity manageable. Power-limited systems (for example, deep-space transmitters) use serial concatenation of codes; a Reed-Solomon outer code with a convolutional inner code is the most popular of these schemes. A turbo code can be regarded as an elaboration of the concatenated encoding structure together with an iterative algorithm for decoding the associated code sequence.

CORE OF THE PAPER:

LINEAR BLOCK CODES:

VECTOR SPACES:

The set of all binary n-tuples, Vn, is called a vector space over the binary field. The binary field has two operations, addition and multiplication (in this field subtraction is identical to addition), and the arithmetic operations are defined by the conventions of an algebraic field. The modulo-2 addition operation is denoted by the symbol ⊕. In binary, the rules of addition and multiplication are as follows:

0 ⊕ 0 = 0        0 · 0 = 0
0 ⊕ 1 = 1        0 · 1 = 0
1 ⊕ 0 = 1        1 · 0 = 0
1 ⊕ 1 = 0        1 · 1 = 1
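
As a minimal software sketch (not taken from the essay; the function names are illustrative), modulo-2 addition and multiplication correspond to the bitwise XOR and AND operations:

    # Modulo-2 (binary field) arithmetic: addition is XOR, multiplication is AND.
    def gf2_add(a: int, b: int) -> int:
        return a ^ b      # note 1 + 1 = 0

    def gf2_mul(a: int, b: int) -> int:
        return a & b

    for a in (0, 1):
        for b in (0, 1):
            print(f"{a} + {b} = {gf2_add(a, b)}    {a} * {b} = {gf2_mul(a, b)}")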

VECTOR SUBSPACES:

A vector subspace is a subset S of the vector space Vn that satisfies two conditions:

The all-zeros vector is in S.

The sum of any two vectors in S is also in S (this is known as the closure property).

Let Vi and Vj be two codewords of an (n, k) binary block code. The code is said to be linear if and only if Vi ⊕ Vj is also a code vector (a small check of this closure property is sketched just after the design goals below). The basic goals in choosing a particular code can be stated as follows:

Strive for coding efficiency by packing the space Vn with as many codewords as possible; this means expending only a small amount of redundancy (excess bandwidth).

Codewords should be as far apart from one another as possible, so that even if a transmitted vector is corrupted it can still be decoded correctly with high probability.
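
As a small illustrative check of the closure property (the toy (5, 2) code below is a hypothetical example, not one given in the essay):

    from itertools import combinations

    # Hypothetical toy (5, 2) linear code: 2^2 = 4 codewords of length 5.
    codewords = {
        (0, 0, 0, 0, 0),
        (1, 0, 1, 1, 0),
        (0, 1, 0, 1, 1),
        (1, 1, 1, 0, 1),   # modulo-2 sum of the two generators above
    }

    def xor(u, v):
        return tuple(a ^ b for a, b in zip(u, v))

    # Closure: the modulo-2 sum of any two codewords must itself be a codeword.
    assert all(xor(u, v) in codewords for u, v in combinations(codewords, 2))
    print("Closure property holds for every pair of codewords.")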

GENERATOR MATRIX:

If k is large, implementing the encoder as a look-up table of codewords becomes prohibitive, but the complexity can be reduced by generating the required codewords as needed rather than storing them all. The basis of a subspace is the smallest linearly independent set that spans the subspace, and the number of vectors in the basis set is the dimension of the subspace. A basis set of k linearly independent n-tuples V1, V2, ..., Vk can be used to generate the required linear block code vectors, since every codeword is a linear combination of V1, V2, ..., Vk.

Each codeword U in the set of 2^k codewords can be written as

U = m1V1 + m2V2 + ... + mkVk,

where mi = 0 or 1 (i = 1, ..., k) are the message digits.

We can also define a generator matrix G whose k rows are the basis vectors:

G = [ V1 ]
    [ V2 ]
    [ ...]
    [ Vk ]

The generated codeword U is written in matrix notation as the product of the message vector m and the generator matrix G:

U = mG.

In general, the matrix multiplication C = AB is performed element by element as

cij = ai1 b1j + ai2 b2j + ... + aik bkj,

with the additions and multiplications carried out modulo 2 for binary codes.

By convention, codewords are designated as row vectors.
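
As a minimal sketch of the encoding operation U = mG (using the well-known (7, 4) Hamming code in systematic form as an assumed example, since the essay does not fix a particular G):

    import numpy as np

    # Assumed example: systematic generator matrix G = [P | I4] of the (7, 4) Hamming
    # code, so each codeword consists of 3 parity bits followed by the 4 message bits.
    G = np.array([
        [1, 1, 0, 1, 0, 0, 0],
        [0, 1, 1, 0, 1, 0, 0],
        [1, 1, 1, 0, 0, 1, 0],
        [1, 0, 1, 0, 0, 0, 1],
    ], dtype=int)

    def encode(m):
        """Encode a k-bit message vector: U = mG, reduced modulo 2."""
        return np.array(m) @ G % 2

    m = [1, 0, 1, 1]
    print("codeword:", encode(m))     # one of the 2^4 = 16 codewords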

PARITY CHECK MATRIX:

Let H be the parity-check matrix, which will allow us to decode the received vectors. For every (k × n) generator matrix G there is an (n − k) × n matrix H. The rows of G are orthogonal to the rows of H, which is written as GH^T = 0, where H^T is the transpose of H and 0 is an all-zeros matrix. To fulfil this orthogonality for a systematic code, H is written as

H = [ I(n-k) | P^T ],

so that the H^T matrix is written as

H^T = [ I(n-k) ]
      [   P    ]

that is, the (n − k) × (n − k) identity matrix on top of the parity submatrix P. Using UH^T = (p1 ⊕ p1, p2 ⊕ p2, ..., p(n-k) ⊕ p(n-k)) = 0 we can test whether a received vector is a valid member of the codeword set: U is a codeword generated by the matrix G if and only if it satisfies UH^T = 0. The syndrome of r is defined by S = rH^T, where r = U + e and e = (e1, e2, ..., en) is an error vector or error pattern.

S = rH^T

S = (U + e)H^T

S = UH^T + eH^T     (UH^T = 0 for every codeword)

Hence S = eH^T.
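
Continuing the assumed (7, 4) example (a sketch, not a construction from the essay), a few lines confirm that the syndrome of the received vector depends only on the error pattern:

    import numpy as np

    # Parity submatrix P and parity-check matrix H = [I3 | P^T] for the assumed (7, 4) code.
    P = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 1, 1],
                  [1, 0, 1]], dtype=int)
    H = np.hstack([np.eye(3, dtype=int), P.T])

    U = np.array([1, 0, 0, 1, 0, 1, 1])   # a valid codeword, so U @ H.T % 2 is all zeros
    e = np.array([0, 0, 0, 0, 1, 0, 0])   # a single-bit error pattern
    r = (U + e) % 2                        # received vector r = U + e

    S_r = r @ H.T % 2                      # syndrome of the received vector
    S_e = e @ H.T % 2                      # syndrome of the error pattern alone
    assert np.array_equal(S_r, S_e)        # S = rH^T = eH^T, as derived above
    print("syndrome:", S_r)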

IMPORTANT PROPERTIES:

No column of H can be all zeros; otherwise an error in the corresponding codeword position would not affect the syndrome and would be undetectable.

All columns of H must be unique; if two columns were identical, errors in the two corresponding positions would be indistinguishable.

ERROR CORRECTION:

In the syndrome test above, we used the parity-check matrix to detect an error pattern. Here we arrange the 2^n n-tuples that represent all possible received vectors in an array called the standard array, in which the first row contains all the codewords, beginning with the all-zeros codeword, and the first column contains all the error patterns that can be corrected. Each row is called a coset and the error pattern in its first column is the coset leader. The standard array of an (n, k) code has the following format:

U1            U2              ...    U2^k
e2            U2 + e2         ...    U2^k + e2
e3            U2 + e3         ...    U2^k + e3
...
e2^(n-k)      U2 + e2^(n-k)   ...    U2^k + e2^(n-k)

where U1 is the all-zeros codeword.

There are 2^n / 2^k = 2^(n-k) cosets. If ej is the error pattern (coset leader) of the j-th coset, then Ui + ej is an n-tuple in that coset and its syndrome is

S = (Ui + ej)H^T = Ui H^T + ej H^T = ej H^T,

since Ui H^T = 0.

The syndrome of each coset is different from that of every other coset of the code, and this is what is used to identify the error pattern.
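
A short sketch of this idea for the assumed single-error-correcting (7, 4) code: each correctable error pattern is a coset leader, and its syndrome identifies it uniquely:

    import numpy as np

    P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]], dtype=int)
    H = np.hstack([np.eye(3, dtype=int), P.T])
    n = 7

    # Coset leaders: the all-zeros pattern plus every single-bit error pattern.
    leaders = [np.zeros(n, dtype=int)] + [np.eye(n, dtype=int)[i] for i in range(n)]

    # Map each coset leader's syndrome to the leader itself.
    syndrome_table = {tuple(e @ H.T % 2): e for e in leaders}

    for s, e in syndrome_table.items():
        print("syndrome", s, "-> coset leader", e)
    # 2^(n-k) = 8 distinct syndromes, one per coset, as stated above.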

DECODER IMPLEMENTATION:

This kind of decoder can be implemented directly when the code is short. The steps for implementing the decoder are:

Syndrome calculation.

Location of the error pattern.

Performing modulo-2 addition of the error pattern and the received vector.

As the figure given below shows, the circuit is made up of AND gates and OR gates and can produce the correction for any single-error pattern. The syndrome expressions are used for wiring the circuit. The exclusive-OR gate uses the same symbol as the identical operation of modulo-2 arithmetic, and a small circle at the termination of a line entering an AND gate indicates the logical complement of the signal.

In the decoder, the received (possibly corrupted) vector enters two places simultaneously: in the upper part the syndrome is computed, and in the lower part the syndrome is transformed into its corresponding error pattern. The error is corrected by adding this error pattern back to the received vector, which yields the correct codeword. For longer codes a parallel implementation becomes too complex, so a sequential approach is used instead. The circuit shown above detects and corrects only single-error patterns; correcting double-error patterns would require additional circuitry.
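
The same three steps can be sketched in software (again for the assumed single-error-correcting (7, 4) code, rather than the logic-gate circuit of the figure):

    import numpy as np

    P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]], dtype=int)
    H = np.hstack([np.eye(3, dtype=int), P.T])
    n = 7

    # Syndrome -> error-pattern table for the all-zeros and single-bit error patterns.
    leaders = [np.zeros(n, dtype=int)] + [np.eye(n, dtype=int)[i] for i in range(n)]
    table = {tuple(e @ H.T % 2): e for e in leaders}

    def decode(r):
        """Correct a single-bit error in the received vector r."""
        s = tuple(np.array(r) @ H.T % 2)           # step 1: syndrome calculation
        e = table.get(s, np.zeros(n, dtype=int))   # step 2: locate the error pattern
        return (np.array(r) + e) % 2               # step 3: modulo-2 correction

    r = [1, 0, 0, 1, 1, 1, 1]                       # codeword 1001011 with its fifth bit flipped
    print("corrected:", decode(r))                  # -> [1 0 0 1 0 1 1]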

CONVOLUTIONAL CODES:

DISTANCE PROPERTIES OF CONVOLUTIONAL CODES:

A simple encoder and its trellis diagram give the distance properties of a convolutional code. We need to evaluate the distance between every pair of codeword sequences; as with block codes, the minimum distance between codeword sequences is related to the error-correcting capability of the code. Since a convolutional code is a linear (group) code, there is no loss of generality in finding the minimum distance between each codeword sequence and the all-zeros sequence. Assume, then, that the all-zeros sequence is transmitted; the paths of interest are those that start and end in the 00 state and do not return to it in between. In an all-zeros transmission an error occurs whenever the all-zeros path does not survive, so a divergence from it means the decoder delivers an erroneous output. The Hamming distance between sequences of different lengths can be found by appending the required number of zeros to the shorter sequence to make the two sequences equal in length. The minimum distance over the set of all arbitrarily long paths that diverge from and remerge with the all-zeros path is called the minimum free distance, where the block-code minimum distance dmin is replaced by the free distance df.

The trellis diagram below shows the path transitions out of and back into the 00 state in more detail:

The trellis diagram represents, in shorthand, all the transitions of the finite state machine from start to finish, and it helps in assessing the coding gain obtained when error-correction coding is used. From the figure above we can see that the decoder cannot make an error in an arbitrary way: errors can occur only along the allowable paths of the trellis. The error-performance requirements of the decoder are expressed in terms of Eb/N0.

An easier way of finding the free distance, by means of a closed-form expression, is to use the state diagram. In the diagram given below, the exponent of D on each branch gives the Hamming distance between that branch word and the all-zeros branch. All paths originating at state a = 00 and terminating at state e = 00 can be traced in the state diagram. From the diagram, the path labelled D^5 is at Hamming distance 5 from the all-zeros path.

SYSTEMATIC AND NON-SYSTEMATIC CONVOLUTIONAL CODES:

When the input k-tuple appears as part of the output n-tuple associated with it, the code is known as a systematic convolutional code. A binary rate ½, K = 3 systematic encoder is shown in the figure below. For linear block codes, a non-systematic code can be converted into a systematic one without changing its distance properties; for convolutional codes this is not possible, because making a convolutional code systematic generally reduces the maximum free distance attainable for a given constraint length and rate.

The figure represents a systematic encoder with rate ½ and K = 3.

EXAMPLE FOR CONVOLUTIONAL CODES:

A rate ½ convolutional coder with k = 1 and n = 2, memory length 2 and constraint length 3.

Here the length of the shift register is 2, so the behaviour of the convolutional coder can be captured by a finite state machine with 4 different states: 00, 01, 10 and 11.

The encoding and decoding procedures can be seen in the trellis diagram.

Here the transfer function T(D), also called the generating function, can be expressed as T(D) = Xe/Xa. From the state diagram, the transfer function for this code is

T(D) = D^5 / (1 − 2D).

The transfer function can also be used to provide more detailed information than just the distance of the various paths.
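
As a brief sketch of that extra information: expanding the transfer function above as a power series, T(D) = D^5 (1 + 2D + 4D^2 + ...), shows that there is one path at Hamming distance 5 from the all-zeros path, two at distance 6, four at distance 7, and so on, the lowest exponent being the free distance:

    # Expand T(D) = D^5 / (1 - 2D) term by term; the coefficient of D^d counts the
    # paths at Hamming distance d from the all-zeros path (d = 5 is the free distance).
    for k in range(4):
        print(f"{2 ** k} path(s) at distance {5 + k}")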

ERROR-CORRECTING CAPABILITY OF CONVOLUTIONAL CODES:

The error-correcting capability t can be defined as the number of code-symbol errors that can be corrected, with maximum-likelihood decoding, within a block length of the code. When decoding convolutional codes, however, the error-correcting capability cannot be stated so concisely: it depends on how the errors are distributed over a given length, and for a given code and error pattern that length can be worked out from the transfer function of the code.

Input sequence 1 1 0 0

Output sequence will be 11 10 10 11

Errors in the received sequence can therefore be detected and corrected, up to the error-correcting capability of the decoder, by comparing Hamming distances against the allowable code sequences.
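
A small sketch of such an encoder (the generator polynomials are not stated in the essay; assuming the common constraint-length-3 pair, with the first output taken from 1 + D^2 and the second from 1 + D + D^2, which reproduces the output sequence quoted above):

    def conv_encode(bits, g=((1, 0, 1), (1, 1, 1))):
        """Rate-1/2, constraint-length-3 convolutional encoder.

        Each tuple in g gives the taps on (current bit, previous bit, bit before that).
        """
        m1 = m2 = 0                        # the two shift-register (memory) bits
        out = []
        for u in bits:
            window = (u, m1, m2)
            for taps in g:                 # one output bit per generator polynomial
                out.append(sum(t & w for t, w in zip(taps, window)) % 2)
            m1, m2 = u, m1                 # shift the register
        return out

    encoded = conv_encode([1, 1, 0, 0])
    print(" ".join(f"{a}{b}" for a, b in zip(encoded[0::2], encoded[1::2])))
    # prints: 11 10 10 11  (the sequence given in the text)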

SOFT-DECISION VITERBI DECODING:

There are two types of Viterbi decoding: soft-decision and hard-decision Viterbi decoding. The difference between them is that a soft-decision decoder cannot use the Hamming distance metric because of its limited resolution; instead, the demodulator output is quantised, so that binary 1 and 0 are represented, for example, by the octal levels 7 and 0. With such a metric we prefer maximising correlation to minimising distance.
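
A tiny sketch of the two branch metrics (an illustration of the idea only, not the full Viterbi algorithm; the 3-bit quantisation with 0 and 7 representing a confident 0 and a confident 1 follows the description above):

    # One trellis branch word and the corresponding received symbols.
    branch = [1, 0]             # code bits labelling the branch
    received_hard = [1, 1]      # hard decisions (second bit is wrong)
    received_soft = [6, 4]      # 8-level soft values: 6 ~ confident 1, 4 ~ weak 1

    # Hard decision: Hamming distance (smaller is better).
    hamming = sum(b != r for b, r in zip(branch, received_hard))

    # Soft decision: correlation with the branch word mapped to +/-1 levels
    # (larger is better), so weak, unreliable samples contribute less.
    correlation = sum((2 * b - 1) * (r - 3.5) for b, r in zip(branch, received_soft))

    print("Hamming metric:", hamming)            # 1
    print("Correlation metric:", correlation)    # 2.5 - 0.5 = 2.0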

TURBO CODES:

Forward error correction is the main role of a turbo code. The figure given below is an example of an encoder with three codes. Let π1, π2 and π3 be the three permuters (interleavers) feeding the encoder, and x0, x1, x2 and x3 its outputs. When turbo codes were first introduced, they were presented as a scheme achieving a bit-error probability of 10^-5 with a rate ½ code over an additive white Gaussian noise (AWGN) channel and BPSK modulation at an Eb/N0 of 0.7 dB.

For turbo codes to work properly, the decoding algorithm should not limit itself to passing hard decisions between the decoders; to make the best use of what each decoder learns, it must exchange soft decisions rather than hard decisions. For a system with two component codes, turbo decoding passes the soft decision from the output of one decoder to the input of the other decoder, and repeats this several times to produce more reliable decisions. The figure given below shows an R = 1/2 encoder.

CONCEPTS OF TURBO CODES:

LIKELIHOOD FUNCTIONS:

In communication engineering, applications involving the AWGN channel are of great interest. Bayes' theorem gives the a posteriori probability (APP) of a decision in terms of a continuous-valued random variable x as

P(d = i | x) = p(x | d = i) P(d = i) / p(x),        i = 1, ..., M.

Here P(d = i | x) is the APP, and d = i represents the data belonging to the i-th signal class out of M classes. p(x | d = i) represents the probability density function of the received continuous-valued data-plus-noise signal x, conditioned on the signal class d = i. P(d = i) is the a priori probability of the i-th signal class, and p(x) is a scaling factor obtained by averaging over all the classes. Lower-case p is used to designate the pdf of a continuous-valued random variable, and upper-case P is used to designate probability.

Other concepts of turbo codes include the two-signal class case, in which the binary logical elements 1 and 0 are represented electronically by the voltages +1 and −1, and d is used to represent the data bit. The maximum a posteriori decision rule then chooses d = +1 if P(d = +1 | x) > P(d = −1 | x) and d = −1 otherwise; equivalently, using Bayes' theorem, it compares p(x | d = +1) P(d = +1) with p(x | d = −1) P(d = −1).

The other concept, the log-likelihood ratio (LLR), is developed from the decision rule of the two-signal class case by taking the logarithm of the ratio of the two a posteriori probabilities:

L(d | x) = log [ P(d = +1 | x) / P(d = −1 | x) ]
         = log [ p(x | d = +1) / p(x | d = −1) ] + log [ P(d = +1) / P(d = −1) ],

or

L(d | x) = L(x | d) + L(d),

where L(x | d) is the LLR of the channel measurement and L(d) is the a priori LLR of the data bit.
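
As a hedged sketch of how these quantities are computed (assuming BPSK over AWGN with noise variance sigma^2, for which the channel LLR takes the standard form L(x | d) = 2x / sigma^2; the numerical values are illustrative only):

    import math

    def channel_llr(x: float, sigma2: float) -> float:
        """L(x|d) for BPSK (+1/-1) over AWGN: 2x / sigma^2."""
        return 2.0 * x / sigma2

    def apriori_llr(p_plus: float) -> float:
        """A priori LLR L(d) from the probability that d = +1."""
        return math.log(p_plus / (1.0 - p_plus))

    x = 0.8          # received noisy sample
    sigma2 = 0.5     # assumed noise variance
    p_plus = 0.5     # equally likely bits, so L(d) = 0

    L_total = channel_llr(x, sigma2) + apriori_llr(p_plus)   # L(d|x) = L(x|d) + L(d)
    print("L(d|x) =", L_total)      # positive, so decide d = +1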

TURBO ENCODER AND DECODER:

The turbo encoder and decoder are discussed below, each with its own diagram, and briefly explained.

Figure represents the turbo encoder.

Figure represents the turbo decoder.

PRINCIPLES OF ITERATIVE TURBO DECODING:

In a communication receiver, a demodulator is designed to produce soft decisions, which are then transferred to a decoder. The error-performance improvement of a system using such soft decisions, compared with hard decisions, is approximately 2 dB in AWGN, and a decoder of this kind is known as a soft-input/hard-output decoder, because the final decoding step must terminate in bits, which are hard decisions. With turbo codes, however, passing hard decisions between the component decoders degrades the system performance, so a soft-input/soft-output decoder is required to overcome this issue.

Figure shows the soft-input/soft-output decoder.

CONCLUSION:

Linear block codes, convolutional codes and turbo codes have been discussed above as error-correcting codes. All three have different properties and different mechanisms for finding and correcting errors, and they use different encoders and decoders built on different concepts. The work above, through various equations and diagrams such as the state, tree and trellis diagrams, shows the path transitions and how errors can be detected and corrected. Improvements to concatenated codes and to the concepts behind convolutional and turbo codes may make the error-detection process easier in the future. All of these codes play an important role in communications.
