# Convolutional Codes And Turbo Codes Computer Science Essay

To improve communications performance, we transform signals using channel coding. Channel coding divides into two broad categories: waveform coding and structured sequences. In waveform coding we transform waveforms, for example with orthogonal coding techniques, so that they are less subject to errors. The major drawback of orthogonal coding is its inefficient use of bandwidth: for an orthogonal set of M = 2^k coded waveforms representing k-bit messages, the required transmission bandwidth is M/k times that needed for the uncoded case. In structured sequences we instead add redundancy to the data sequence so that fewer errors survive decoding.

Linear block codes (LBC) are a type of parity check code in which the modulo-2 sum of any two valid code words is also a code word. Because they are defined algebraically, linear block codes are easy to implement in hardware and easy to decode, and they usually have high code rates, often above 0.95. Retransmission and decoding of data with LBC are straightforward; their drawback is a limited error-correction capability.

Convolutional codes are more complicated than linear block codes because the encoding process involves a finite state machine. They usually have code rates below 0.90 and powerful error-correcting capabilities, but they are more difficult to implement, retransmission schemes are not practical, and decoding is harder because it must track the encoder's states and branching paths.

Turbo codes are among the most powerful, reliable, and high-performance forward error-correcting codes. They combine properties of both linear block codes and convolutional codes, and their performance is similar to that of low-density parity-check codes.


## Explanation

## Linear Block Codes

These codes are described by the (n, k) notation, where n and k represent the number of bits in a code word and in an input message respectively. The encoder transforms each k-bit message block into an n-bit code word. The k-bit messages can form 2^k combinations, referred to as k-tuples, and the encoder output is drawn from the 2^n possible n-tuples. The k-bit message vector m = (m1, m2, ..., mk) is multiplied by the generator matrix G to produce the n-bit code word, so that in matrix notation U = mG. The decoder uses the n − k redundant bits to detect errors and correct them with error-detection and correction techniques. At the receiver end, the reverse operations are performed using a parity check matrix H, designed to be orthogonal to the generator matrix G, so that every valid code word satisfies UH^T = 0.
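The encoding operation U = mG can be sketched with a small worked example. The generator matrix below is one standard systematic form of the (7, 4) Hamming code; the text above does not fix a particular code, so this choice of G is an illustrative assumption.

```python
# Encoding a (7, 4) linear block code as U = mG over GF(2).
# G is in systematic form [I_4 | P]: the first 4 bits of each code
# word are the message itself, the last 3 are parity bits.

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(m, G):
    """Multiply the k-bit message row vector m by G, modulo 2."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

m = [1, 0, 1, 1]       # one of the 2^k = 16 possible 4-tuples
U = encode(m, G)       # the corresponding 7-bit code word
print(U)               # [1, 0, 1, 1, 1, 0, 0]
```

The linearity property from the introduction is easy to check with this sketch: the modulo-2 sum of any two code words produced by `encode` is itself a valid code word.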

## Convolutional Codes

Unlike linear block codes, which operate algebraically on independent blocks, convolutional codes encode the data stream continuously: the encoder processes the entire stream without segmenting it into independent blocks. Convolutional codes are specified by three parameters (n, k, K), where k is the number of data bits shifted into the encoder at a time, n is the number of output bits per step, and K is a parameter known as the constraint length. The information bits pass through an encoder built around a shift register with K − 1 memory elements: the incoming bit occupies the first stage, and the previously received bits form the memory of the encoder. The encoder yields the code word U = G(m). Maximum-likelihood decoding, commonly implemented with the Viterbi algorithm, is the principal decoding technique for these codes.
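The shift-register structure described above can be sketched as a minimal rate-1/2 encoder with constraint length K = 3. The generator taps (7, 5 in octal) are a common textbook choice, not something specified in the text.

```python
# A rate-1/2 convolutional encoder sketch: k = 1 input bit per step,
# n = 2 output bits per step, constraint length K = 3.

def conv_encode(bits):
    s0 = s1 = 0                     # K - 1 = 2 memory elements, initially zero
    out = []
    for b in bits:
        out.append(b ^ s0 ^ s1)     # generator g1 = 111 (octal 7)
        out.append(b ^ s1)          # generator g2 = 101 (octal 5)
        s0, s1 = b, s0              # shift the register contents
    return out

print(conv_encode([1, 0, 1, 1]))    # [1, 1, 1, 0, 0, 0, 0, 1]
```

Note that, unlike the block encoder, the output for each input bit depends on the two previous bits as well: this memory is what a Viterbi decoder exploits when it searches the encoder's state trellis.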

## Turbo Codes

Turbo codes combine two component codes that work together to produce the required code words. A turbo encoder consists of two constituent encoders separated by an interleaver, a device that shuffles the order of the data bits in a prescribed manner. The parallel concatenation of the two encoders produces parity bits, which are then serialized with the input data stream to form a single turbo code word. The decoding algorithm in turbo codes outputs a continuous-valued estimate of each bit rather than a hard decision. Soft-output Viterbi-style decoders minimize errors by finding a maximum-likelihood estimate of the transmitted code word, and the number of component decoders equals the number of encoders used.


## Comparison

### A. Error-Correcting Capability

The Hamming distance between two code words u and v is the number of positions in which they differ. This equals the number of ones in the modulo-2 sum of the two code words, known as the Hamming weight: that is, d(u, v) = w(u + v). The error-correcting capability t of a code is the maximum number of errors guaranteed to be corrected per code word. For linear block codes it is given by

t = ⌊(dmin − 1)/2⌋

where dmin is the minimum distance between any two code words.

The error-correcting capability of convolutional codes is given by

t = ⌊(df − 1)/2⌋

where df is the free distance.

If dmin = df = 5, then the error-correcting capability of both the linear block code and the convolutional code is t = 2. Turbo codes use forward error correction, in which additional bits are added to the data bits by an algorithm; because of this added redundancy, forward error correction can be bandwidth-inefficient in some applications.
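The standard capability formula t = ⌊(d − 1)/2⌋ applies to both families, with d = dmin for block codes and d = df for convolutional codes, so the dmin = df = 5 comparison above reduces to a one-line computation:

```python
# Error-correcting capability t from a code's minimum (or free) distance d.

def capability(d):
    return (d - 1) // 2    # floor of (d - 1) / 2

print(capability(5))       # 2, matching the dmin = df = 5 example
```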

### B. Performance

The Hamming bound for a standard array in linear block codes relates the number of parity bits to the number of correctable error patterns:

n − k ≥ log2 [1 + C(n, 1) + C(n, 2) + … + C(n, t)], where C(n, j) = n! / (j! (n − j)!)

The sum of terms within the square brackets gives the minimum number of rows needed in the standard array to correct all error patterns of up to t bits. Let us choose the (127, 106) BCH code as an example. The array contains 2^n = 2^127 ≈ 1.70 × 10^38 n-tuples in the space. The topmost row contains the 2^k = 2^106 ≈ 8.11 × 10^31 code words, which is the number of columns in the array. The leftmost column contains the 2^(n−k) = 2^21 = 2,097,152 coset leaders, which is the number of rows. Hence there are at most 2,097,151 nonzero error patterns that can be corrected by this code. As there are 127 bits in a code word, there are 127 single-error patterns; the numbers of double- and triple-error patterns are C(127, 2) = 8001 and C(127, 3) = 333,375 respectively, giving cumulative totals of 8,129 and 341,504 coset leaders. Including four-bit errors would require a further C(127, 4) = 10,334,625 patterns, far more than the 2,097,152 available rows. Hence for this (127, 106) code, the Hamming bound guarantees correction of up to and including 3-bit errors.
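The arithmetic behind this bound can be checked directly with exact binomial coefficients:

```python
# Verifying the Hamming bound figures for the (127, 106) BCH code.

from math import comb

n, k = 127, 106
coset_leaders = 2 ** (n - k)                   # rows in the standard array
patterns = [comb(n, j) for j in range(5)]      # 0- through 4-bit error patterns

print(coset_leaders)        # 2097152
print(patterns[1:4])        # [127, 8001, 333375]
print(sum(patterns[:4]))    # 341504 <= 2097152, so 3-bit correction fits
print(sum(patterns[:5]))    # 10676129 > 2097152, so 4-bit correction does not
```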

The probability of bit error PB for a binary convolutional code using hard-decision decoding can be bounded as

PB ≤ dT(D, N)/dN evaluated at N = 1, D = 2√(p(1 − p))

where T(D, N) is the transfer function of the code and p is the channel symbol error probability. From this bound it is clear that the error probability decreases exponentially with constraint length, while decoder complexity increases exponentially with constraint length.

With the parallel concatenation of recursive systematic convolutional (RSC) codes and iterative feedback decoding, the error performance of a turbo code approaches the Shannon limit. The most important factors affecting the performance of turbo codes are the size of the interleaver and the overall code rate: performance improves as the interleaver grows, and it also improves at lower code rates. The performance of turbo codes thus minimizes the gap between theoretical channel capacity and what is achievable in practice.

### C. Complexities

Linear block codes can be implemented in hardware algebraically with simple modulo-2 adders, so they are the least complex of the three. Convolutional codes are more complicated because their encoders are state machines with branching paths. The complexity of turbo codes is moderate, owing to the interleavers and memory elements, such as shift registers, in their constituent encoders.

### D. Bit Error Rates

The bit error rate of linear block codes is relatively low compared with that of convolutional codes. In turbo codes, the interleaver between the encoders spreads error bursts so that the iterative decoders can detect and correct them.

### E. Coding Gain

The coding gain of these codes depends upon the input and output requirements. The coding gain of linear block codes and convolutional codes is relatively low compared with that of turbo codes. For these reasons, linear block and convolutional codes are used in band-limited applications, while turbo codes are used in applications where bandwidth is not the limiting factor.

## Conclusion

In this essay we have briefly discussed different types of structured sequences, their performances, complexities, and error-correcting capabilities. It is evident that linear block codes are very easy to implement and are useful where the bit error rate of the channel is relatively low and the available bandwidth is limited, whereas convolutional codes, with their higher error-correcting capabilities, are useful in satellite and space communications where bandwidth is plentiful. Turbo codes provide an advance in the area of power-efficient communications; the use of interleaving and memory elements makes communication with these codes reliable.
