Types Of Block Codes Computer Science Essay

Turbo codes are a form of forward error-correction (FEC) coding introduced in 1993. To achieve large coding gains, Forney proposed concatenated coding schemes that combine two or more simple building-block (constituent) codes. The resulting code has the error-correction capability of much longer codes, while its structure still allows relatively simple decoding. Serial concatenation of codes is typically used in power-limited systems such as transmitters. A popular scheme of this kind combines a Reed-Solomon outer code with a convolutional inner code. A turbo code can be viewed as an elaboration of the concatenated encoding structure together with an iterative algorithm for decoding the associated code sequence.

The distance properties of convolutional codes can be obtained from a simple encoder and its trellis diagram. We need to evaluate the distance between each pair of codeword sequences. As with block codes, the minimum distance between codeword sequences is related to the error-correcting capability of the code. Because a convolutional code is a group or linear code, there is no loss in generality in measuring the distance between each codeword sequence and the all-zeros sequence. Assume the all-zeros sequence is transmitted; the paths of interest are those that start and end in the 00 state and do not return to it in between. An error occurs whenever the all-zeros path does not survive, so a path that diverges from it and later remerges corresponds to a possible decoding error at the decoder output. The Hamming distance between two sequences of unequal length is found by appending the required number of zeros to the shorter sequence to make the two sequences equal in length. The minimum distance over the set of all arbitrarily long paths that diverge and remerge is called the minimum free distance, or simply the free distance; the block-code notation dmin is replaced here by df.
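
As a small illustration, a Hamming-distance routine of this kind might look as follows in Python; the codeword path shown is hypothetical and simply has weight 5, and the zero-padding behaves as described above when the sequences differ in length.

```python
def hamming_distance(seq_a, seq_b):
    """Hamming distance between two binary sequences.

    If the sequences differ in length, the shorter one is padded with
    zeros so that both have equal length, as described above.
    """
    length = max(len(seq_a), len(seq_b))
    a = list(seq_a) + [0] * (length - len(seq_a))
    b = list(seq_b) + [0] * (length - len(seq_b))
    return sum(x != y for x, y in zip(a, b))


# Weight of a hypothetical diverging/remerging path, measured against
# the all-zeros sequence (the empty list is padded out with zeros).
path = [1, 1, 1, 0, 1, 1]
print(hamming_distance(path, []))   # -> 5
```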

The trellis diagram below shows the path transitions through the 00 state:

The trellis diagram is a shorthand representation of all the transitions of the finite-state encoder from start to finish. It is helpful in understanding the coding gain obtained with error-correction coding. From the figure above we can see that the decoder cannot make errors in an arbitrary way: the error paths are confined to the allowable paths of the trellis. The decoder's error-performance requirements are specified in terms of Eb/N0.

An easier way to find the free distance is through a closed-form expression obtained from the state diagram. In the diagram given below, the exponent of D on a branch describes the Hamming distance between that branch word and the corresponding branch of the all-zeros path. All paths of interest originate at state a = 00 and terminate at state e = 00, which can be traced directly on the state diagram. From the diagram, the shortest such path accumulates the label D^5, i.e., it lies at Hamming distance 5 from the all-zeros path.

Here the transfer function T(D), which acts as a generating function for the path distances, is defined as T(D) = Xe/Xa. From the state diagram the transfer function can be written as

T(D) = D^5 / (1 - 2D).

The transfer function can also be used to provide more detailed information than just the distance of the various paths, such as the number of paths at each distance.
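
Expanding T(D) = D^5 / (1 - 2D) as a power series makes this concrete: the coefficient of D^d counts the paths that diverge and remerge at Hamming distance d, and the lowest nonzero power is the free distance. A minimal sketch of that expansion:

```python
def path_spectrum(max_distance=10):
    """Expand T(D) = D^5 / (1 - 2D) as a power series.

    Since 1 / (1 - 2D) = 1 + 2D + 4D^2 + ... , the coefficient of
    D^(5 + k) is 2**k: it counts the paths that diverge from the
    all-zeros path and remerge at Hamming distance 5 + k.
    """
    return {5 + k: 2 ** k for k in range(max_distance - 4)}


spectrum = path_spectrum()
d_free = min(spectrum)            # lowest nonzero power of D -> free distance 5
t = (d_free - 1) // 2             # roughly 2 channel errors correctable
print(d_free, t, spectrum)        # 5 2 {5: 1, 6: 2, 7: 4, ...}
```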

ERROR-CORRECTING CAPABILITY OF CONVOLUTIONAL CODES:

The error-correcting capability t of a block code is the number of code-symbol errors that can be corrected, with maximum-likelihood decoding, within a block length. For convolutional codes, however, the error-correcting capability cannot be stated so concisely: it depends on how the errors are distributed, since the code has no fixed block length. For a given code and error pattern, the capability can be evaluated with the aid of the transfer function.

SYSTEMATIC AND NON-SYSTEMATIC CONVOLUTIONAL CODES:

A systematic convolutional code is one in which the input k-tuple appears as part of the output n-tuple associated with that k-tuple. The figure below shows a binary rate-½, K = 3 systematic encoder. For linear block codes, a nonsystematic code can be transformed into a systematic code with the same distance properties; for convolutional codes this is not possible. Because the performance of a convolutional code depends on its free distance, making the code systematic generally reduces the maximum achievable free distance for a given constraint length and rate.

The figure represents a systematic encoder with rate ½ and K = 3.
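
Since the figure itself is not reproduced here, the sketch below assumes a parity generator of 111 (octal 7) purely for illustration. The first symbol of every output pair is the input bit itself, which is what makes the encoder systematic.

```python
def systematic_encode(bits, parity_taps=(1, 1, 1)):
    """Rate-1/2, K = 3 systematic convolutional encoder (sketch).

    The first symbol of every output pair is the input bit itself; the
    second is a parity bit formed from the current and two previous
    input bits.  The tap pattern 111 is an assumption, since the figure
    is not reproduced here.
    """
    s1 = s2 = 0                                  # shift-register contents
    out = []
    for u in bits:
        parity = (parity_taps[0] & u) ^ (parity_taps[1] & s1) ^ (parity_taps[2] & s2)
        out.append((u, parity))                  # (systematic bit, parity bit)
        s1, s2 = u, s1                           # shift in the new bit
    return out


print(systematic_encode([1, 1, 0, 0]))           # [(1, 1), (1, 0), (0, 0), (0, 1)]
```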

EXAMPLE FOR CONVOLUTIONAL CODES:

A rate-½ convolutional coder with k = 1, n = 2, memory length 2 and constraint length 3.

Here the shift register has length 2, giving 4 different states. The coder's behaviour can therefore be captured by a finite-state machine with the four states 00, 01, 10 and 11.

The encoding and decoding procedures can be traced on the trellis diagram.

Input sequence: 1 1 0 0

Output sequence: 11 10 10 11

The decoder can then detect and correct channel errors by comparing Hamming distances between the received sequence and the allowable paths, within the code's error-correcting capability.
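
A minimal encoder sketch reproduces this example. The generator polynomials 5 and 7 (octal) and their output order are assumptions, chosen because they yield exactly the output sequence quoted above for the input 1 1 0 0.

```python
def convolutional_encode(bits, g_first=(1, 0, 1), g_second=(1, 1, 1)):
    """Rate-1/2, K = 3 nonsystematic convolutional encoder (sketch).

    The generators 5 and 7 (octal) and their output order are assumed;
    they are the choice that reproduces the output sequence quoted
    above for the input 1 1 0 0.
    """
    s1 = s2 = 0                                   # the 4 states are (s1, s2)
    out = []
    for u in bits:
        window = (u, s1, s2)
        v1 = sum(g & w for g, w in zip(g_first, window)) % 2
        v2 = sum(g & w for g, w in zip(g_second, window)) % 2
        out.append(f"{v1}{v2}")
        s1, s2 = u, s1
    return out


print(convolutional_encode([1, 1, 0, 0]))         # ['11', '10', '10', '11']
```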

SOFT-DECISION VITERBI DECODING:

There are two types of Viterbi decoding: hard-decision and soft-decision Viterbi decoding. The difference is that a soft-decision decoder does not use the Hamming distance metric, because that metric has only limited resolution. Instead, with 3-bit quantization the binary values 1 and 0 are represented by the eight levels 7 down to 0, and with such a metric the decoder prefers the path with the maximum correlation rather than the minimum distance.
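
A small sketch of the two branch metrics follows, assuming 3-bit quantization with level 7 as the most confident '1' and level 0 as the most confident '0'; the received levels are invented for illustration.

```python
def hamming_metric(hard_bits, ref_bits):
    """Hard-decision branch metric: number of disagreements (smaller is better)."""
    return sum(h != r for h, r in zip(hard_bits, ref_bits))


def correlation_metric(soft_levels, ref_bits):
    """Soft-decision branch metric for 3-bit quantization (levels 0..7).

    Level 7 is the most confident '1', level 0 the most confident '0'.
    Each level is mapped to the range -7..+7 and correlated with the
    reference bit mapped to +/-1; the LARGEST metric wins, in contrast
    to the smallest Hamming distance used with hard decisions.
    """
    return sum((2 * lvl - 7) * (2 * r - 1) for lvl, r in zip(soft_levels, ref_bits))


received_soft = [6, 1, 4, 0]                                      # invented quantizer outputs
received_hard = [1 if lvl >= 4 else 0 for lvl in received_soft]   # -> [1, 0, 1, 0]

for ref in ([1, 0, 1, 1], [1, 0, 0, 0]):
    print(ref, hamming_metric(received_hard, ref), correlation_metric(received_soft, ref))
```

In this example both reference branch words are at Hamming distance 1 from the hard decisions, so the hard-decision metric cannot separate them, but the correlation metric clearly prefers the second, because the soft levels show that the third symbol was only a weak '1' while the fourth was a strong '0'.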

TURBO CODES:

Forward error correction is the primary role of a turbo code. The figure given below is an example of an encoder with three constituent codes: π1, π2 and π3 denote the interleavers (permutations) feeding the constituent encoders, and x0, x1, x2 and x3 are the outputs. When turbo codes were first introduced, they were presented as a scheme achieving a bit-error probability of 10^-5 with a rate-½ code over an additive white Gaussian noise (AWGN) channel and BPSK modulation at an Eb/N0 of 0.7 dB. For turbo codes to work properly, the decoding algorithm must not limit itself to passing hard decisions between decoders; the decoders must instead exchange soft decisions. In a system with two component codes, turbo decoding passes the soft decision from the output of one decoder to the input of the other, and repeats this several times to produce increasingly reliable decisions. The figure given below shows a rate R = 1/2 coder.
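
The constituent codes of the figure are not reproduced here, so the sketch below assumes two identical recursive systematic convolutional (RSC) encoders with feedback polynomial 7 and feedforward polynomial 5 (octal), joined by a random interleaver. Unpunctured, the parallel concatenation is a rate-1/3 code; transmitting the two parity streams alternately (puncturing) gives a rate-1/2 coder of the kind mentioned above.

```python
import random


def rsc_parity(bits):
    """Parity stream of a recursive systematic convolutional (RSC) encoder.

    Assumed polynomials: feedback 7 (1 + D + D^2), feedforward 5 (1 + D^2).
    The systematic stream is the input itself, so only parity is returned.
    """
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2            # feedback bit
        parity.append(a ^ s2)      # feedforward taps 1 0 1 applied to (a, s1, s2)
        s1, s2 = a, s1
    return parity


def turbo_encode(bits, interleaver):
    """Parallel concatenation: systematic bits plus two parity streams."""
    parity1 = rsc_parity(bits)
    parity2 = rsc_parity([bits[i] for i in interleaver])
    return bits, parity1, parity2


message = [1, 0, 1, 1, 0, 0, 1, 0]
pi = list(range(len(message)))
random.shuffle(pi)                 # a random interleaver, purely for illustration
print(turbo_encode(message, pi))
```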

CONCEPTS OF TURBO CODES:

LIKELIHOOD FUNCTIONS:

In communication engineering, applications involving the AWGN channel are of great interest. Bayes' theorem gives the a posteriori probability (APP) of a decision in terms of a continuous-valued random variable x as

P(d = i | x) = p(x | d = i) P(d = i) / p(x)

Here P(d = i | x) is the APP, and d = i means that the data belongs to the ith signal class. p(x | d = i) is the probability density function (pdf) of the received continuous-valued data-plus-noise signal x, conditioned on the signal class d = i, and P(d = i) is the a priori probability. p(x) is a scaling factor, since it is obtained by averaging over all the classes. Lower-case p designates the pdf of a continuous-valued random variable and upper-case P designates probability.

Another concept of turbo codes is the two-signal class case, in which the binary logical elements 1 and 0 are represented electronically by the voltages +1 and -1, and d is used to represent the data bit. The decision rule for the two-signal class case is the maximum a posteriori (MAP) rule: decide d = +1 if P(d = +1 | x) > P(d = -1 | x), and d = -1 otherwise; equivalently, compare p(x | d = +1) P(d = +1) with p(x | d = -1) P(d = -1).

The diagram shows the concept of the two-signal class case.
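
A minimal numeric sketch of this decision rule follows, assuming Gaussian likelihoods with means +1 and -1 (BPSK over AWGN); the received sample and noise level are invented for illustration.

```python
import math


def gaussian_pdf(x, mean, sigma):
    """p(x | d): Gaussian likelihood with the transmitted level as its mean."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))


def a_posteriori(x, sigma=1.0, prior_plus=0.5):
    """P(d = +1 | x) and P(d = -1 | x) from Bayes' rule."""
    joint_plus = gaussian_pdf(x, +1.0, sigma) * prior_plus
    joint_minus = gaussian_pdf(x, -1.0, sigma) * (1.0 - prior_plus)
    p_x = joint_plus + joint_minus            # the scaling factor p(x)
    return joint_plus / p_x, joint_minus / p_x


# MAP decision for a received sample x = 0.3 with equally likely data bits.
p_plus, p_minus = a_posteriori(0.3)
decision = +1 if p_plus > p_minus else -1
print(round(p_plus, 3), round(p_minus, 3), decision)   # -> 0.646 0.354 1 (decide d = +1)
```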

The next concept, the log-likelihood ratio (LLR), is developed by taking the logarithm of the ratio of the two a posteriori probabilities in the two-signal class case:

L(d | x) = log [ P(d = +1 | x) / P(d = -1 | x) ]

Substituting the equation of the two-signal class case (Bayes' rule) splits this into a channel term and an a priori term:

L(d | x) = log [ p(x | d = +1) / p(x | d = -1) ] + log [ P(d = +1) / P(d = -1) ]

or, in compact notation,

L(d | x) = L(x | d) + L(d).
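
For BPSK over AWGN with ±1 signalling and noise variance σ², the channel term reduces to 2x/σ². A minimal sketch of both terms, with the received sample and noise level invented for illustration:

```python
import math


def channel_llr(x, sigma):
    """Channel LLR for +/-1 signalling over AWGN.

    ln[p(x | d = +1) / p(x | d = -1)] reduces to 2x / sigma^2 when the two
    conditional densities are Gaussians with means +1 and -1 and variance
    sigma^2.
    """
    return 2.0 * x / sigma ** 2


def posterior_llr(x, sigma, prior_plus=0.5):
    """L(d | x) = L(x | d) + L(d): channel LLR plus a priori LLR."""
    prior_llr = math.log(prior_plus / (1.0 - prior_plus))
    return channel_llr(x, sigma) + prior_llr


# A received sample x = 0.3 with sigma = 1 and no a priori preference:
print(channel_llr(0.3, 1.0))       # 0.6 -> the positive sign mildly favours d = +1
print(posterior_llr(0.3, 1.0))     # also 0.6, since the a priori LLR is zero
```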

TURBO ENCODER AND DECODER:

The turbo encoder and decoder are shown in the figures below.

Figure represents the turbo encoder.

Figure represents the turbo decoder.

PRINCIPLES OF ITERATIVE TURBO DECODING:

In a communication receiver, the demodulator is often designed to produce soft decisions, which are then passed to the decoder. In AWGN, the error-performance improvement of a system using such soft decisions, compared with hard decisions, is approximately 2 dB. Such a decoder is called a soft-input/hard-output decoder, because the final decoding step must terminate in bits, which are hard decisions. With turbo codes, however, hard decisions passed from one component decoder to another degrade system performance; a soft-input/soft-output decoder is required to overcome this problem.

Figure shows the soft-input/soft-output decoder.

CONCLUSION:

Linear block codes, convolutional codes and turbo codes have been discussed above as error-correcting codes. The three families have different properties and different mechanisms for detecting and correcting errors, and they use different encoders and decoders built on different concepts. The equations and diagrams presented above, such as the state, tree and trellis diagrams, show the path transitions and how errors can be detected and corrected. Improvements to concatenated codes and to the concepts behind convolutional and turbo codes may make error correction easier in the future. All of these codes play an important role in communications.
