# Transmission signal


### Abstract

Channel coding refers to enhancing the performance of a transmitted signal so that it can fulfil its intended role without being corrupted by channel impairments such as noise and fading. The field can be divided into two areas of study, namely waveform coding and structured coding. This paper focuses, with a comparative eye, on linear block codes, convolutional codes and turbo codes, which fall under structured coding: rather than transforming waveforms as in waveform coding, structured coding maps data sequences onto sequences containing redundant bits. First a theoretical analysis of each code is made. Then a comparative analysis is made based on key parameters such as bit error rate and error detection and correction capability.

Index Terms: Linear block codes, Convolutional codes, Turbo codes, Code rate, Bit error rate

### INTRODUCTION

Information theory, founded by Claude Shannon, is mainly concerned with source coding and channel coding. Source coding refers to data compression, while channel coding adds a certain robustness to communicated signals so that errors can be detected and corrected at the receiving side, resulting in an essentially error-free transmission of data. In this paper we deal with the error detection and correction performance of some structured codes, which fall under channel coding. In channel coding, error detection and correction are made possible by adding redundant bits to the information stream so that the receiver can check whether an error has occurred. Basically there are two error control techniques, namely error detection with retransmission and forward error correction. The first adds parity bits to the data stream to determine whether an error has occurred but makes no attempt to correct it. The second designs the parity bits in such a way that errors are both detected and corrected, eliminating the need for the two-way link required by the first method [1].

The error correcting codes can be broadly divided into three classes, namely linear block codes, convolutional codes and turbo codes, which are the main focus of this paper. Linear block codes [1][2] can be thought of as parity codes in which the data stream to be transmitted is segmented into blocks; check bits are computed and appended to each block, forming a code word of length n. On reception, the receiver compares the check bits it recomputes with those that were received. If the two agree, the transmission is assumed to be error free; otherwise an error is assumed to have occurred. Convolutional codes, on the other hand, operate on a bit-by-bit basis rather than on blocks of data. In this coding technique the present output bits depend on previous input bits, so the encoder has memory, a characteristic not seen in linear block codes [1]. Turbo codes, a concept established by Berrou [3][4], can be viewed as error-correcting codes obtained by the parallel concatenation of two convolutional codes together with a feedback decoder. The spectral efficiency of transmission can be improved by combining turbo codes with a bandwidth-efficient modulation.

### THEORETICAL ANALYSIS

**Linear Block Codes**

Linear block codes belong to the family of parity check codes in which the bits in the information stream are not treated individually; rather, a block of bits is taken at a time. Suppose there is a k-bit information sequence and an n-bit code word. An m-bit sequence can form 2^m distinct patterns, so the k-bit information sequences and the n-bit code words form 2^k and 2^n distinct sequences respectively. The 2^k distinct message sequences (also called k-tuples) are linearly mapped onto 2^k of the 2^n possible code words (also known as n-tuples). The mapping can be realized via a look-up table.

Suppose that the set of all n-tuples forms a vector space denoted by Vn. A subset S of this vector space is a subspace if it satisfies two conditions: first, the all-zero code word must be in the subset, and second, the sum of any two code words in the subset must be another code word in the subset. This closure property means that a code word can never fall outside the subset.

If the number of message bits k is small, a look-up table for finding the code word is practical. But if k is large, a look-up table implementation no longer serves its purpose, and the generator matrix comes into play. Since a linear block code is a k-dimensional subspace of the n-dimensional vector space Vn, it is possible to find a set of far fewer than 2^k n-tuples (a basis of k rows) that can generate all 2^k code words.

Let the message m be written as the row vector

m = [m1, m2, ..., mk]

Then the generated code word U can be written as

U=mG

We can define a parity-check matrix H that helps us decode the received vectors. For every k × n generator matrix G there exists an (n − k) × n parity-check matrix H such that GH^T = 0; that is, the rows of G and the rows of H are orthogonal to each other [1].

Assume that the transmission of a code word U results in a received vector r = U + e, where e is an error vector. Then the syndrome of r, defined by S = rH^T, helps us find the position of the error.
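As an illustration of U = mG and S = rH^T, the following sketch uses a systematic (7, 4) Hamming code (the particular G and H below are an assumed textbook choice, not matrices given in this paper): it encodes a 4-bit message, injects a single-bit error, and locates the error by matching the syndrome against the columns of H.

```python
# Illustrative systematic (7,4) Hamming code, working over GF(2).
# G = [I_4 | P] and H = [P^T | I_3], so that G H^T = 0 (mod 2).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(m):
    """Code word U = mG (mod 2) for a 4-bit message m."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(r):
    """Syndrome S = rH^T (mod 2) for a received 7-bit vector r."""
    return [sum(r[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]

def correct(r):
    """Flip the single bit whose column of H matches the syndrome, if any."""
    s = syndrome(r)
    if any(s):
        pos = [[H[i][j] for i in range(3)] for j in range(7)].index(s)
        r = r[:]
        r[pos] ^= 1
    return r

m = [1, 0, 1, 1]
u = encode(m)
r = u[:]
r[2] ^= 1                        # introduce a single-bit channel error
assert syndrome(u) == [0, 0, 0]  # a valid code word has zero syndrome
assert correct(r) == u           # the single error is located and corrected
```

The key property exercised here is that the syndrome of a single-bit error equals the corresponding column of H, which is why all seven columns must be distinct and nonzero.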

**Cyclic Codes:**

Cyclic codes are a subclass of linear block codes in which encoding can be implemented using shift registers, and in which the syndrome calculation is much easier. An (n, k) linear block code is a cyclic code if it satisfies the following condition: if the n-tuple U = [u0, u1, ..., u(n-1)] is a code word of the subspace S, then the code word obtained by an end-around (cyclic) shift, [u(n-1), u0, ..., u(n-2)], is also a code word in S [1].
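The end-around shift condition can be checked numerically. The sketch below relies on the standard fact that the (7, 4) Hamming code is cyclic with generator polynomial g(x) = 1 + x + x^3 (this specific polynomial is an assumption of the example, not something stated in this paper); every code word is a GF(2) polynomial multiple of g(x).

```python
# Generate the (7,4) cyclic code from g(x) = 1 + x + x^3 and verify
# that the set of code words is closed under end-around shifts.
from itertools import product

def polymul_gf2(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]  # g(x) = 1 + x + x^3

# All 2^4 code words: U(x) = m(x) g(x), padded to length n = 7.
code = set()
for m in product([0, 1], repeat=4):
    u = polymul_gf2(list(m), g)
    code.add(tuple(u + [0] * (7 - len(u))))

# Every end-around shift of a code word is again a code word.
assert all(tuple(list(c)[-1:] + list(c)[:-1]) in code for c in code)
print(len(code))  # 16 distinct code words
```

Closure under cyclic shift holds here because g(x) divides x^7 + 1, so shifting a code word (multiplying by x modulo x^7 + 1) yields another multiple of g(x).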

**Most Common Block Codes:**

Hamming Codes: Hamming codes are characterized by the block lengths [1]

(n, k) = (2^m − 1, 2^m − 1 − m), m = 2, 3, ...

These codes have dmin = 3, so they can correct all single errors and detect a maximum of two errors in a block.

BCH Codes: These are a generalization of Hamming codes and can correct multiple errors.

**Convolutional Codes**

Block codes, as described above, take a block of information bits and transform it into a block of code-word bits. Convolutional coding takes a different approach: a binary convolutional encoder takes a stream of information bits and translates it into a stream of transmitted bits by shifting the input through a shift register. Convolutional codes can be represented by an (n, k, K) notation, where K, called the constraint length, is the number of stages in the shift register. The code rate is given by k/n.

To define a convolutional code we have to define the encoder function G(m) such that from a given input sequence m we can calculate the output U [1]. The encoder can be represented in several different but equivalent ways. The most popular representations are the connection diagram, the tree diagram, the trellis diagram and the state diagram.
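A minimal sketch of such an encoder function, assuming the common rate-1/2, constraint-length K = 3 code with generator polynomials (7, 5) in octal (an illustrative choice of ours, not a code specified in this paper):

```python
# Rate-1/2, K = 3 convolutional encoder with generators g1 = 111 (7 octal)
# and g2 = 101 (5 octal). For each input bit, two output bits are computed
# from the current bit and the two bits stored in the shift register.
def conv_encode(bits, flush=True):
    s1 = s2 = 0                      # the two shift-register stages (the state)
    out = []
    if flush:
        bits = list(bits) + [0, 0]   # zero tail drives the encoder back to state 00
    for b in bits:
        out.append(b ^ s1 ^ s2)      # g1 = 111: current bit XOR both stages
        out.append(b ^ s2)           # g2 = 101: current bit XOR last stage
        s1, s2 = b, s1               # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))
```

Note how the memory shows up directly: each pair of output bits depends on the current input bit and on the two previous input bits held in (s1, s2), which is exactly the state tracked by the state, tree and trellis diagrams discussed next.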

State Diagram Representation: The state diagram depicts the state information of a convolutional encoder, which is stored in its shift registers. The state summarizes the past input, and knowledge of the state together with the next information bit is sufficient to determine the next output.

Tree Diagram: One main disadvantage of the state diagram is that it cannot show the encoder transitions as a function of time, since the state does not store the time history. Adding this time dimension to the properties of the state diagram gives the tree diagram, whose representation shows all the possible input and encoded output sequences of the encoder. The branching rule for obtaining the code word is that if the input bit is 0, the next branch is found by moving to the next rightward branch in the upward direction, and if it is 1, by the same move in the downward direction [1].

Trellis Diagram Representation: The trellis diagram is essentially a redrawing of the state diagram. It shows all the possible state transitions at each time step.

The decoding process is done using maximum-likelihood decoders, such as the hard-decision Viterbi decoder. If the message sequences are equally likely, a maximum-likelihood decoder minimizes the probability of error. Viterbi decoders use the Viterbi algorithm, which computes a maximum-likelihood estimate of the code sequence, say y, from the received sequence, say r; that is, it maximizes the likelihood p(r | y) of the received sequence r conditioned on the estimated code sequence y.
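For a binary symmetric channel with hard decisions, maximizing p(r | y) is equivalent to minimizing the Hamming distance between r and y. That search can be sketched as follows, again assuming the illustrative rate-1/2, K = 3, (7, 5) code rather than any code named in this paper; per state, only the survivor path with the smallest distance is kept.

```python
# Hard-decision Viterbi decoding for the rate-1/2, K = 3, (7,5)-octal code.
def step(state, bit):
    """One encoder transition: returns (next_state, two output bits)."""
    s1, s2 = state >> 1, state & 1
    return ((bit << 1) | s1, [bit ^ s1 ^ s2, bit ^ s2])

def viterbi_decode(rx):
    """Minimum-Hamming-distance (maximum-likelihood) decoding, zero-tailed."""
    INF = float("inf")
    metric = {0: 0}                       # the encoder starts in state 00
    paths = {0: []}
    for i in range(0, len(rx), 2):
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for bit in (0, 1):
                nxt, out = step(state, bit)
                d = (out[0] != rx[i]) + (out[1] != rx[i + 1])
                if m + d < new_metric.get(nxt, INF):   # keep the survivor only
                    new_metric[nxt] = m + d
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[0][:-2]                  # end in state 00; drop the two tail bits

# Encode 1011 (plus a 00 flushing tail), flip one channel bit, still recover it.
tx = []
state = 0
for b in [1, 0, 1, 1, 0, 0]:
    state, out = step(state, b)
    tx += out
tx[3] ^= 1                                # single channel error
assert viterbi_decode(tx) == [1, 0, 1, 1]
```

Because this code has free distance 5, any single channel error still leaves the transmitted path strictly closer to the received sequence than every competing path, so the decoder recovers the message exactly.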

**Turbo Codes:**

Turbo codes, a newer class of error-correcting codes, more precisely a forward error correction (FEC) scheme, yield a significant improvement over conventional convolutional error-correcting coding methodologies, with performance very close to the Shannon limit. Turbo codes differ from conventional coding schemes in that they are systematic and recursive. A turbo code is constructed from two or more component codes operating on differently interleaved versions of the same information sequence [1].

Turbo codes can be effectively decoded by an iterative (turbo) MAP decoder. Since the most commonly used MAP decoder algorithm assumes an AWGN channel, nonlinearities in the channel can be expected to lower the coding gain obtained from the turbo codec.

### COMPARATIVE ANALYSIS

The main aspects to be considered in the transmission of a signal through a noisy channel come down to three characteristics, namely the rate of information bits, often called the code rate, the bit error rate, and the error detection and correction capability. A comparison between the error correcting codes based on these factors is made below.

**Code Rate:**

Code rate is defined as the ratio of the number of information bits (data bits) to the total number of encoded bits [1].

That is, if y is the number of information bits and x the total number of transmitted bits, the code rate is given by y/x.

For linear block codes and convolutional codes the code rate has the same significance, but in convolutional codes x does not refer to a block as it does in linear block codes [1].

A turbo encoder sends three sub-blocks of bits. The first sub-block is the y information bits; the second and third are two parity sub-blocks of x/2 bits each, computed for the payload data and for an interleaved permutation of it. So the code rate is given by y/(y + x/2 + x/2) = y/(y + x).
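The arithmetic above can be spelled out with made-up block sizes (the values below are purely illustrative):

```python
# Turbo code rate: y information bits plus two parity sub-blocks of x/2 bits.
y, x = 1000, 2000                 # hypothetical sub-block sizes
rate_turbo = y / (y + x / 2 + x / 2)
assert rate_turbo == y / (y + x)  # the two forms in the text agree
print(rate_turbo)
```

With equal-sized data and per-encoder parity sub-blocks (x = 2y, as here), this gives rate 1/3, the rate of Berrou's original turbo code.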

**Bit Error Rate:**

A linear block code becomes inefficient if the number of errors in the transmitted data exceeds its correction capability; in those cases linear codes have a high bit error rate. Convolutional codes have a bit error rate somewhat lower than that of linear block codes. Turbo codes have by far the lowest bit error rate of the three. The BER curve of a turbo code can be divided into two regions: the waterfall region, in which the error rate decreases rapidly at low SNR, and the error floor region, in which the BER decreases only slowly at high SNR [5][6].

**Error Detection and Correction Capability:**

We cannot decode all bits correctly, especially in a channel badly affected by noise. The capability of a code to correct errors depends on the minimum distance between its code words. In general the error correcting capability t of a code is given by t = ⌊(dmin − 1)/2⌋.
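The bound t = floor((dmin − 1)/2) is easy to evaluate for a few well-known minimum distances:

```python
# Error-correcting capability t as a function of the minimum distance.
def t(d_min):
    """Number of guaranteed-correctable errors for minimum distance d_min."""
    return (d_min - 1) // 2

assert t(3) == 1   # Hamming codes (d_min = 3) correct all single errors
assert t(7) == 3   # e.g. the (23, 12) Golay code, with d_min = 7
```

Note that an even minimum distance buys no extra correction over the next smaller odd one (t(4) = t(3) = 1); the extra distance improves detection instead.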

Comparing the three, turbo codes have the best error correcting capability, and as the number of decoding iterations increases, the error correcting capability improves further.

### CONCLUSION

In this paper we discussed various error correcting codes for structured sequences, covering the theoretical aspects of linear block codes, convolutional codes and turbo codes. A comparison based on bit error rate, error correcting capability and code rate has also been made. From this analysis we can say that turbo codes have the best performance in detecting and correcting errors. Even though the architecture of turbo codes is comparatively complex, considering their bit error rate and error correcting capability this drawback can be neglected.

### REFERENCES

- B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed.
- J. K. Wolf, A. M. Michelson, and A. H. Levesque, "On the probability of undetected error for linear block codes," IEEE Transactions on Communications, vol. COM-30, no. 2, Feb. 1982.
- C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo codes," in Proc. IEEE Int. Conf. Commun., Geneva, Switzerland, May 1993, pp. 1064-1070.
- A. Banerjee, F. Vatta, B. Scanavino, and D. J. Costello, Jr., "Non-systematic turbo codes," IEEE Transactions on Communications, vol. 53, no. 11, Nov. 2005.
- J. D. Andersen and V. V. Zyablov, "Interleaver design for turbo coding," in Proc. Int. Symp. on Turbo Codes, Brest, France, 1997.
- S. Dolinar and D. Divsalar, "Weight distributions for turbo codes using random and nonrandom permutations," TDA Progress Report 42-122, Jet Propulsion Laboratory, Pasadena, CA, Aug. 15, 1995, pp. 56-65.