Convolutional Codes and Turbo Codes: A Comparative Review


This paper summarizes the purpose and use of error coding techniques. A brief explanation of several error coding techniques, namely linear block codes, convolutional codes and turbo codes, is given. Linear block codes are a basic and very simple coding technique; their encoding and decoding are explained using the syndrome, the parity-check matrix and Hamming codes. Convolutional codes are explained along with their state diagram and trellis diagram representations. Since their introduction in 1993, turbo codes, which are much more powerful than convolutional codes and are referred to collectively as turbo and turbo-like codes, have eclipsed classical methods. Encoders and decoders, such as soft-in/soft-out decoders, are also explained, and the merits and demerits of each technique are specified.


The basic goal in digital communications is to transport bits of information without losing too much information along the way. The level of information loss that is acceptable varies for different applications. The loss is measured in terms of the bit error rate, or BER. We can think of the channel as a block that causes errors to occur when a signal passes through it. Regardless of the error source, we can describe the problem as follows: when the transmitted signal arrives at the receiver after passing through the channel, the received data will have some bits that are in error. The system designer would like to incorporate ways to detect and correct these errors. The field that covers such digital processing techniques is known as error control coding. There are different types of error control coding techniques that can detect and correct the errors. Some of the coding techniques explained here are Linear block codes, Convolutional codes and Turbo codes.



A linear block code over a field F is a linear subspace of F^n, where n is the blocklength of the code. (John Gill, Stanford University)

Some facts about linear block codes: the sum and difference of codewords are codewords, and scalar multiples of codewords are codewords. Every linear block code is a group code, but the converse does not hold. A binary group code is a linear block code because the only scalars are 0 and 1. These codes are based on algebraic or combinatorial techniques.

Linear block codes can also be defined as a set C ⊂ F2^n with cardinality 2^k that forms a subspace of the vector space F2^n. [1]

The all-zero word is always a codeword, and any linear combination of codewords is again a codeword.

In linear block codes the information bit stream is chopped into blocks of k bits, where each block is encoded into a larger block of n bits.

Fig-a [2]

The coded bits are modulated and sent over a channel and after that the same procedure is done in reverse at the receiver end.


n − k gives the number of redundant (parity) bits

R = k/n gives the code rate


By linear transformation


c = m ⋅ G = m0g0 + m1g1 + … + mk-1gk-1

Here in the above equation,

The code C is called a k-dimensional subspace.

G is called a generator matrix of the code; it is a k × n matrix of rank k with elements from GF(2), and gi is the i-th row vector of G. The rows of G are linearly independent since G is assumed to have rank k.
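The encoding equation c = m ⋅ G can be sketched in a few lines of NumPy. The matrix G below is one possible systematic (7,4) generator, chosen for illustration only (the essay's own G appears only in a lost figure):

```python
import numpy as np

# One assumed systematic (7,4) generator matrix over GF(2), for illustration.
G = np.array([
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
])

def encode(m, G):
    """Encode a k-bit message as c = m0*g0 + m1*g1 + ... (mod 2)."""
    return np.mod(np.asarray(m) @ G, 2)

c = encode([1, 0, 1, 1], G)
print(c)  # → [1 0 1 1 0 0 0]
```

Because the code is systematic, the first four positions of c reproduce the message bits and the last three are parity bits.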


(7, 4) Hamming code over GF(2)

The encoding equations for this code copy the message bits directly into the codeword, for example

c2 = m2

while each parity bit is a modulo-2 sum of message bits, for example

c6 = m0 + m1 + m3


An (n, k) linear systematic code is completely specified by a k × n generator matrix of the form G = [P | Ik], where Ik is the k × k identity matrix.


For G = [P | Ik], define the matrix H = [In-k | P^T] (the size of H is (n − k) × n).

It follows that GH^T = 0.


Since c = mG, it follows that cH^T = mGH^T = 0.
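The identity GH^T = 0, and hence cH^T = 0 for every codeword, can be checked directly. The parity part P below is an assumed example; any full-rank choice illustrates the construction:

```python
import numpy as np

# Sketch: build H = [I_{n-k} | P^T] from a systematic G = [P | I_k] and
# verify that every codeword has zero syndrome. P is an assumed example.
P = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 1],
])
k, r = P.shape                               # k = 4 message bits, r = n - k = 3
G = np.hstack([P, np.eye(k, dtype=int)])     # G = [P | I_k]
H = np.hstack([np.eye(r, dtype=int), P.T])   # H = [I_{n-k} | P^T]

print(np.mod(G @ H.T, 2).any())  # → False, i.e. GH^T = 0

# Hence every codeword c = mG satisfies cH^T = mGH^T = 0.
for m_int in range(2 ** k):
    m = np.array([(m_int >> i) & 1 for i in range(k)])
    c = np.mod(m @ G, 2)
    assert not np.mod(c @ H.T, 2).any()
```

The block check G H^T = P ⋅ I + I ⋅ P = 2P ≡ 0 (mod 2) is exactly what the loop confirms codeword by codeword.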


The Hamming weight wH(v) is the number of nonzero components of v.

Definition: The minimum (Hamming) weight of a block code is the weight of the nonzero codeword with smallest weight:

wmin = w* = min{ wH(c) : c ∈ C, c ≠ 0 } (John Gill, Stanford University)

In linear block codes the generator matrix is defined by the following array:


Fig-b [3]


Hamming codes are a class of single-error-correcting codes, defined by:

n = 2^r − 1, k = n − r, r > 2

Hamming codes are perfect codes and the minimum distance of the code dmin = 3.

The construction rule for the H matrix of a Hamming code of order r is that its columns are all the non-zero r-bit patterns.

Therefore the size of H is r × (2^r − 1) = (n − k) × n
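The construction rule above translates directly into code: enumerate all non-zero r-bit patterns and use them as columns. A sketch for order r = 3:

```python
import numpy as np

# Parity-check matrix of the order-r Hamming code: its columns are all
# non-zero r-bit patterns, so its size is r x (2^r - 1).
def hamming_H(r):
    cols = [[(j >> i) & 1 for i in range(r)] for j in range(1, 2 ** r)]
    return np.array(cols).T

H = hamming_H(3)
print(H.shape)  # → (3, 7)
```

For r = 3 this reproduces the (7, 4) case: 3 × 7 = (n − k) × n with n = 7, k = 4.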


Fig-c [3]

From the above figure, let c be the transmitted codeword and r the received word, where

r = c + e

e = error pattern = (e1, e2, …, en),

where ei = 1 if an error has occurred in the i-th location, and ei = 0 otherwise.

The weight of e gives the number of errors, and if the error pattern can be determined, decoding is achieved by:

c = r + e

Example: Consider the (7,4) code,

Let 1101000 be transmitted and 1100000 be received. Therefore e = 0001000 (Here error is in the fourth location)

Now let r = 1110100 be received. Several transmitted codewords could explain it, for example:

c e

#2 0110100 1000000 (weight 1)

#1 1101000 0011100 (weight 3)

#3 1011100 0101000 (weight 2)

Since low-weight error patterns are the most likely, scenario #2, a single-bit error, is the most probable.


Standard array decoding requires huge storage memory and searching time. The syndrome is therefore defined as

s = vH^T = (c + e)H^T = eH^T

The syndrome depends only on the error pattern and not on the transmitted codeword. Therefore, each coset in the array is associated with a unique syndrome.


For the received vector v, compute the syndrome s = vH^T. Using the syndrome table, identify the error pattern e.

Now Add e to v to recover the transmitted codeword c.

Example: v = 1110101 ==> s = 001 ==> e = 0010000 Then, c = 1100101

Syndrome decoding reduces the storage memory from n × 2^n to 2^(n−k)(2n − k) bits and also reduces the searching time considerably.
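The whole syndrome-decoding procedure can be sketched as follows. The H matrix here is an assumed standard choice (columns are the binary representations of 1 through 7), not necessarily the one in the essay's figures:

```python
import numpy as np

# Syndrome decoding for a (7,4) Hamming code with an assumed H whose
# columns are the binary representations of 1..7.
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

# Syndrome table: map each syndrome to its single-bit error pattern e.
table = {}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[tuple(np.mod(H @ e, 2))] = e

def decode(v):
    """Compute s = vH^T; if nonzero, add the matching e to recover c."""
    s = tuple(np.mod(H @ v, 2))
    return v if not any(s) else np.mod(v + table[s], 2)

c = np.array([0, 0, 1, 0, 1, 1, 0])  # a valid codeword: Hc^T = 0
r = c.copy()
r[3] ^= 1                            # inject a single-bit error
print(decode(r))  # → [0 0 1 0 1 1 0]
```

The table holds only the 2^(n−k) − 1 = 7 correctable single-bit patterns, which is the memory saving syndrome decoding provides over a full standard array.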


Fig-d [5]

From the above figure,

Let r = (r0, r1, r2, r3, r4, r5, r6) and s = (s0, s1, s2)

From the H matrix:

s0 = r0 + r3 + r5 + r6

s1 = r1 + r3 + r4 + r5

s2 = r2 + r4 + r5 + r6

From the table of syndromes and their corresponding correctable error patterns, a truth table can be constructed, and a combinational logic circuit with s0, s1, s2 as inputs and e0, e1, …, e6 as outputs can be designed.


The minimum distance d(C) is easy to compute if C is a linear code.

Linear codes have simple specifications.

Linear block coding is the simplest technique for detecting and correcting errors.

Extra parity bits do not convey any information by themselves, but they make it possible to detect and correct errors in the received message, i.e. the error probability is reduced.


The transmission bandwidth requirement is higher, since extra parity bits are added in this process.

These extra bits reduce the power available per information bit and also reduce the effective bit rate of the transmitter.[3]


Convolutional codes are used for error control coding and differ from block codes. A convolutional encoder encodes the entire data stream into a single code word. It is a machine with memory and does not need to segment the data stream into blocks of fixed size.


Fig-e [5]

Here k = 1 and n = 2: a rate-1/2 (2,1) convolutional code with a two-stage register (M = 2).

Each input bit influences the output for 3 intervals (K = 3),

where K = constraint length of the code = M + 1.


A convolutional code may be defined by a set of n generating polynomials for each input bit.

From the diagram above,

g1(D) = 1 + D + D2

g2(D) = 1 + D2

Now the set {gi(D)} defines the code completely. The degree of the highest-degree generator polynomial equals the number of memory stages of the shift register.
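The two generators above can be sketched directly as a two-stage shift register; this is a hypothetical straight translation of g1(D) = 1 + D + D^2 and g2(D) = 1 + D^2, with two tail bits appended to flush the memory:

```python
# Rate-1/2 convolutional encoder defined by g1(D) = 1 + D + D^2 and
# g2(D) = 1 + D^2, realized as a two-stage shift register (M = 2).
def conv_encode(bits):
    """Encode a bit list; two output bits per input bit, zero tail added."""
    s1 = s2 = 0                      # register contents, initially zero
    out = []
    for b in bits + [0, 0]:          # append M = 2 flush (tail) bits
        out.append(b ^ s1 ^ s2)      # g1: 1 + D + D^2
        out.append(b ^ s2)           # g2: 1 + D^2
        s1, s2 = b, s1               # shift the register
    return out

print(conv_encode([1, 0, 1]))  # → [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```

Note that L = 3 data bits produce 10 code bits, so the effective rate is 3/10 rather than 1/2, which is the tail-bit penalty discussed for the effective code rate.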

Convolutional codes map information to code bits sequentially by convolving a sequence of information bits with "generator" sequences. Convolutional codes are based on construction techniques. A convolutional encoder encodes k information bits to n > k code bits at each time step. These codes can be viewed as block codes for which the encoder has a structure that lets us express the encoding operation as a convolution.

The constraint length influences the output of the encoder and gives the number of k-bit shifts over which a single information bit can affect the output. For the first k stages of the register, k bits are shifted in; then all the bits in the register are shifted k stages to the right, and the outputs of the n adders are sequentially sampled in order to obtain the binary code symbols (bits) at each unit of time. The code rate is k/n message bits per code bit, where k < n, as there are n code bits for each input group of k message bits.


Fig-f [1]

Initially, the shift register holds all binary 0s. The input data bits are fed in continuously at a bit rate Rb, and the shift register is clocked at this rate. As the input moves through the register, the rightmost bit is shifted out so that there are always 3 bits held in the register. At the end of the message, three binary 0s are attached to return the shift register to its initial condition. The commutator at the output is switched at twice the input bit rate so that two output bits are generated for each input bit shifted in. At any one time the register holds 3 bits, which form the inputs to the exclusive-OR circuits.

Fig-g [2]


First, the memory is initialized to all zeros before encoding the first bit, and it is similarly cleared to all zeros after encoding the last bit. Hence a tail of zero bits is appended to the data bits, as shown in the figure below.

Fig-h [2]

Thereby the effective code rate can be defined as

R_eff = L / (2(L + 2))

where L is the number of data bits (two output bits are produced for each of the L + 2 bits shifted in, counting the tail).

Convolutional encoder can be represented in different ways like

a) Vector representation

b) Impulse response representation

c) Polynomial representation


A convolutional encoder is one type of finite-state machine, the usual name given to machines that have memory of past signals. The machine can occupy only a finite number of unique states. The state is the smallest amount of information that, combined with the current input, determines the output of the machine.

Fig-i [1]

In a Convolutional encoder, content of the memory represents the state. For 1/n convolutional encoder the contents of the rightmost K-1 stages represents the state.

A state diagram represents the encoder: it contains all the states and all possible transitions between them. For a binary encoder, only two transitions leave each state and only two transitions end up in each state.

The states, shown in the figure represent the rightmost K - 1 stages of the register, and the paths between the states represent the output branch words resulting from such state transitions.

The states of the register are designated S0 = 00, S1 = 01, S2 = 10 and S3 = 11; the diagram illustrates all the state transitions that are possible for the encoder.
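The full transition table of the state diagram can be enumerated mechanically. This sketch assumes the rate-1/2 encoder with g1 = 1 + D + D^2 and g2 = 1 + D^2 described earlier, with the state taken as the register contents (s1, s2), so (0,0), (0,1), (1,0), (1,1) correspond to S0, S1, S2, S3:

```python
# Next-state / output-branch-word table for the K = 3 encoder, state = (s1, s2).
def step(state, b):
    s1, s2 = state
    out = (b ^ s1 ^ s2, b ^ s2)      # (g1, g2) output branch word
    return (b, s1), out              # new state after the shift

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for b in (0, 1):
        ns, out = step(s, b)
        print(f"S{s} --{b}/{out}--> S{ns}")
```

Each state indeed has exactly two outgoing transitions (input 0 or 1) and, as the printed table shows, exactly two incoming ones.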


Trellis diagram is the expansion of state diagram in time.

Fig-j [1]

One time-interval section of a fully-formed encoding trellis structure completely defines the code. The only reason for showing several sections is to view a code-symbol sequence as a function of time. The state of the convolutional encoder is represented by the contents of the rightmost K − 1 stages in the encoder register. Some authors describe the state as the contents of the leftmost K − 1 stages; both descriptions are correct, being merely different conventions.


a) Convolutional codes make very low decoding error probabilities achievable.

b) One decoder can decode all codes in the family.[5]


a) These codes may be highly complex.

b) The resulting codes may be sub-optimum.[6]


Turbo codes were introduced in 1993 by Berrou, Glavieux, and Thitimajshima.[7] These codes are binary parallel concatenated recursive systematic convolutional codes capable of achieving near-Shannon-capacity performance (rate-1/2 turbo codes reach BER = 10^-5 at about 0.7 dB). Turbo codes work well at high rates or high levels of noise and can be thought of as a refinement of the concatenated coding structure together with an iterative algorithm for decoding the associated code sequence.[8]


Fig-k [11]

Here the codes are parallel concatenated: the k-bit block is encoded N times, each time in a different (interleaved) order. The probability that a sequence remains return-to-zero (RTZ) for all encoders is about 1/2^Nv. With two encoders this randomness yields error probabilities around 10^-5; the permutations (interleavers) are chosen to improve dmin.


Fig-l [10]

The inputs to the decoders are the log-likelihood ratios (LLRs) for the individual symbol d. The LLR value of symbol d is defined (Berrou) as

LLR(d) = ln [ P(d = 1 | observation) / P(d = 0 | observation) ]

The SISO decoder reevaluates the LLR utilizing the local Y1 and Y2 redundancies to improve the confidence.

The value z is the extrinsic value determined by the same decoder; it is negative if d is estimated as 0 and positive if d is estimated as 1. The updated LLR is fed into the other decoder, which calculates its own z and updates the LLR, and so on over several iterations. After several iterations, both decoders converge to a value for that symbol. For each data bit, the decoder calculates the LLR given the received sequence; the final LLR is then compared against zero to see whether the estimate leans towards 0 or 1, and a hard decision (HD) is taken.
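The sign convention above can be made concrete with a small sketch. It assumes BPSK signalling (bit 1 mapped to +1, bit 0 to −1) over an AWGN channel with noise variance sigma^2, for which the channel LLR 2r/sigma^2 is a standard result not taken from the essay's references:

```python
# Channel LLR and hard decision under the sign convention described above.
def channel_llr(r, sigma):
    """ln[P(d = 1 | r) / P(d = 0 | r)] for received BPSK sample r in AWGN."""
    return 2.0 * r / sigma ** 2

def hard_decision(llr):
    """Positive LLR -> bit 1, negative LLR -> bit 0."""
    return 1 if llr > 0 else 0

# Each SISO decoder adds its extrinsic value z to such an LLR before
# passing it on; only after the final iteration is a hard decision taken.
llr = channel_llr(0.8, 1.0)   # received sample 0.8 with sigma = 1
print(hard_decision(llr))     # → 1
```

The magnitude of the LLR expresses confidence: a sample near zero gives an LLR near zero, which the iterative exchange of extrinsic values is designed to push away from the decision boundary.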


A soft-in/soft-out (SISO) decoder receives as input a "soft" (i.e. real-valued) version of the signal. The decoder then outputs, for each data bit, an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of the values.

Information exchange can be iterated a number of times to enhance performance. At each round, the decoders re-evaluate their estimates using information from the other decoder, and only in the final stage are hard decisions made, i.e. each bit is assigned the value 1 or 0.[11]


Fig-m [9]


Deep space exploration (France SMART-1 probe)

JPL equipped Pathfinder 1997

Mobile 3G systems ( Currently used in Japan, UMTS and NTT DoCoMo)

Broadband Satellite Systems


Turbo codes spurred renewed interest in related codes such as low-density parity-check (LDPC) codes.

Work on these codes also aims at improving the decoding delay.

Turbo codes have been adopted as a required element for most 3G standards, such as CDMA2000 and W-CDMA.

Turbo codes are being used for low-power applications such as deep-space and satellite communications.

Turbo codes make it possible to increase the data rate without increasing the power of a transmission, or to decrease the amount of power used to transmit at a certain data rate.


Their main drawbacks are relatively high decoding complexity and relatively high latency, which make them unsuitable for some applications. For satellite use, however, this latency is of less concern, since the transmission distance itself introduces latency due to the finite propagation speed.


Even though linear block codes are the simplest technique for detecting and correcting errors, they use more transmission bandwidth than convolutional codes. The extra bandwidth carries the parity bits of the linear block code, which are not required in convolutional coding, and these extra parity bits reduce the power per information bit and the bit rate of the transmitter. Convolutional codes use a completely different approach to coding: instead of breaking the message into blocks, the entire message stream is converted into a single code word. Convolutional codes get their name because the encoding process can be viewed as convolving the message stream with the impulse response of the code.



Fig-n From Digital communication Lab-5


Fig-o From digital communication lab-5

The above graphs show the bit error rate for various error probabilities. We can observe that the performance characteristics are improved for the encoded transmissions compared with the uncoded one.


Turbo codes were developed as a refinement of convolutional codes, and greater performance improvements can be realized when two convolutional codes are used together. Turbo codes are a class of codes recently designed specifically to accomplish this goal: in short, a turbo code is two convolutional codes (inner and outer) with an interleaver in between, which iteratively bootstrap their mutual performance. Turbo codes have produced some truly remarkable improvements in performance compared to convolutional codes, and are therefore the better-performing of the two.[13]


In summary, errors in the information bits at the receiving end of the channel can be detected and corrected using error control coding techniques. Several such techniques, namely linear block codes, convolutional codes and turbo codes, have been explained. Linear block codes are the simplest technique and their minimum distance is easy to compute. Convolutional codes are more advanced and achieve lower decoding error rates than linear block codes. Turbo codes are a class of convolutional codes that obtain the properties of large block codes by using recursive coders and interleaving. From this we can conclude that turbo codes perform best compared with convolutional and linear block codes.