In 1948 Claude E. Shannon demonstrated in his paper that data can be transmitted up to the full capacity of the channel, almost free of errors, by using certain kinds of coding schemes. Engineers then and now were surprised by Shannon's channel capacity, which has become a fundamental benchmark for communication engineers as it determines what a system can and cannot do. Shannon's paper also set a limit on the trade-off between the transmission rate and the energy required.
Many researchers have devised different coding techniques to close the gap between theoretical and practical channel capacities. These codes range from simple ones like repetition codes, to moderately complex codes like array, cyclic and Hamming codes, to more complex codes like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes. Most of these codes were well designed, yet most still fall short of Shannon's theoretical channel capacity.
A new coding technique, Low Density Parity Check (LDPC) codes, was introduced by Gallager in 1962. LDPC codes were shown to achieve performance near Shannon's limit, but the work done by Gallager was then ignored for almost thirty years, until MacKay reintroduced LDPC codes in the 1990s. LDPC codes perform remarkably close to Shannon's limit when properly decoded. The key to this near-capacity performance is the soft decoding of these codes: rather than taking a preliminary hard decision at the output of the receiver, soft-decision decoders also use information about the channel properties to decode the received bits properly.
Since the revival of LDPC codes, many research groups have devised different methods for the construction of these codes. Some of the codes have been verified to approach Shannon's limit within a tenth of a decibel. Both structured and random codes have been constructed and verified to have excellent bit error rate (BER) performance. LDPC codes have become strong rivals of Turbo codes in many practical applications.
Finite fields provide a structured method for the construction of LDPC codes [7-9]. Codes of any length can be constructed using finite fields. These codes have a low error rate and have been shown to give excellent BER performance. Different algebraic tools are used for the construction of LDPC codes and yield efficient, admirable performance. Both additive and multiplicative code constructions can be carried out using finite fields.
In this research study, finite fields are used for the construction of short-length LDPC codes. As these codes are short in length, they are easy to implement. The aim of this research study is high BER performance for these short-length codes. Diverse techniques like masking, dispersion and array dispersion are applied to the base matrix W to obtain the parity check matrix. The generator matrix is then constructed from the parity check matrix. The performance of the code depends on the construction of the parity check matrix.
To decode the received codeword bits, the Bit Flipping (BF) algorithm and the Sum Product Algorithm (SPA) are used. The difference between them is that the bit flipping algorithm takes a hard decision on the received codeword bits, while the sum product algorithm takes information about the channel properties and uses it to find the probabilities of the received bits; a hard decision is taken only at the end, so the soft information related to the received bits is fully utilized.
In this project MATLAB is used as a tool, in which a dedicated library of different functions is created. These functions include base matrix construction, a short cycle calculator, a calculator for the permitted elements of a finite field, dispersion, array dispersion and masking. These generalized functions can be used for any finite field and for any code length.
An Additive White Gaussian Noise (AWGN) channel is used as the transmission medium to simulate the constructed codes. Even at much lower SNR these short-length codes come very close to Shannon's limit, with the constructed rates performing within a small margin of it. Initially, when the simulation is done, the lower and mid-rate codes perform much better and do not show any error floor.
To reiterate, finite fields are the structured construction method used to construct LDPC codes. Short-length LDPC codes show excellent performance over the Additive White Gaussian Noise (AWGN) channel.
BACKGROUND OF THE STUDY
SHANNON'S CODING THEOREM
Claude E. Shannon discussed in his paper the limits for reliable data transmission over unreliable channels, along with techniques to achieve these limits. The paper also introduced the concept of information and established limits for the maximum amount of information that can be transmitted over such unreliable channels.
Shannon proved that for a given communication channel there exists a number such that efficient transmission is possible whenever the rate is less than this number. This number is called the capacity of the channel. Reliable communication is impossible at rates above the capacity of the channel. Shannon defined capacity in terms of information theory and also introduced the concept of codes as sets of vectors to be transmitted.
For a channel it is clear that if one input element can be received in two different ways, then there is no possibility of reliable communication when that single element is sent over the channel. In the same way, reliable communication is not possible if multiple uncorrelated elements are sent over the channel. If the data is sent in such a way that all the elements are correlated to each other, then reliable communication is possible. This gives the concept of a code, defined as a finite set of vectors over the input alphabet. The length of the vectors is called the block length of the code. Let n be the vector length and M the number of vectors, so that every vector can be described by k = log2(M) bits while n bits have been transmitted; R = k/n bits per channel use is then the code rate of this transmission.
If a codeword is sent and a word is received, how can the sent codeword be extracted from the received one, given that the channel can inject noise and corrupt the transmitted codeword? There is no general shortcut for finding, from the list of all possible codewords, which codeword was sent. One possibility is to calculate the probability of every codeword and suppose the codeword with maximum probability to be the one sent. This is the maximum likelihood decoder. It is not always practical, as it depends on the probabilities of all the codewords, takes a long time, and may still end in an error when the number of codewords is large.
Shannon also proved that as the block length of a code goes to infinity, there exist codes with rates very close to the capacity of the channel for which the error probability of the maximum likelihood decoder goes to zero. However, he gave no indication of how to find or construct these codes. From a communication point of view these capacity-approaching codes are nevertheless very good.
Consider two communication channels as examples: the binary symmetric channel (BSC) and the binary erasure channel (BEC). Both take binary inputs, and their input elements are called input bits. The output of the BEC has the binary bits plus an additional element called an erasure, denoted by e. Each bit is either correctly transmitted with probability 1 - p or erased with probability p; the capacity of this channel is 1 - p. The BSC likewise has binary output bits: each bit is either correctly transmitted with probability 1 - p or flipped with probability p. This is shown in figure (1) below,
Figure 1: BEC (erasure probability p), BSC (error probability p).
The figure suggests that the BSC is simpler than the BEC, but that is not the case. In the BSC, complexity arises because it is unknown which bits were flipped; the capacity of this channel is 1 - H(p), where H is the binary entropy function.
So the decoding complexity is higher, and the error probability is also higher than for the BEC, where it is clearly known which bits were erased for maximum likelihood decoding.
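The two capacity formulas above can be checked numerically. The following is an illustrative Python sketch (the thesis itself uses MATLAB); the function names are our own:

```python
import math

def bec_capacity(p):
    """Capacity of the binary erasure channel with erasure probability p."""
    return 1.0 - p

def binary_entropy(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

print(bec_capacity(0.1))  # 0.9
print(bsc_capacity(0.1))  # ~0.531
```

Note that for the same p the BSC capacity is strictly smaller than the BEC capacity, matching the observation above that flipped bits are harder to handle than erased ones.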
Recently the demand for reliable and efficient digital data transmission and storage systems has greatly increased. High-speed, large-scale networks for the exchange, processing and storage of digital information in government, military and commercial areas have driven this demand. For a system designer, the major task is therefore the control of errors so that the data can be reproduced reliably.
In 1948, Shannon showed in his paper that by appropriate encoding of information, errors arising from a noisy channel can be reduced to any desired level without affecting the transmission, provided the rate of information is less than the capacity of the channel. Since then, a huge effort has been devoted to finding efficient encoding and decoding techniques for error control in noisy channels.
Storage and transmission of digital information have much in common: both transfer data from a source to a sink. Figure 2 shows a typical transmission system block diagram. The source can be either a person or a machine such as a digital computer. The source encoder converts the information from the source into a series of binary digits called the information sequence. In the case of a continuous source, the analog-to-digital conversion is done by the source encoder. The source encoder is designed so that the minimum number of bits per unit time is required to represent the information, and so that the source output can be reconstructed from the information sequence. The channel encoder then converts the information sequence into an encoded discrete sequence called a codeword.
The modulator converts each channel encoder output symbol into a waveform of fixed duration which is suitable for transmission, because discrete symbols are not appropriate for transmission over a physical channel. On entering the channel this waveform is corrupted by the noise present in the channel. Different kinds of channels, such as microwave links, satellites, telephone lines and fiber optic cables, add noise of different types. For example, noise can enter a telephone line from switching equipment, and also as crosstalk and thermal noise from other lines. This noise in the transmission line corrupts the signal, and sometimes it is too strong to allow the signal to be distinguished from the noise. The demodulator converts the noisy waveform into a discrete or continuous output. This represents the encoded sequence and is called the received sequence. Figure (2) below shows a generalized communication system block diagram.
Figure 2: Communication System
The channel decoder takes the received sequence r as input and transforms it into a binary sequence u, referred to as the estimated information sequence. Decoding is done on the basis of the encoding scheme and the noise characteristics of the channel. A huge amount of effort has gone into designing channel encoders and decoders that reduce these errors and minimize the effect of channel noise on the information sequence.
2.3 Low Density Parity Check Codes
LDPC codes were discovered by Gallager in 1962 [2, 4]. These codes belong to a specific class of Shannon-capacity-approaching codes. Unfortunately Gallager's work was ignored for a period of about twenty years, until another scientist, Tanner, worked on LDPC codes in 1981 and used graphs to represent them. Later, David MacKay rediscovered and extended this work. Long LDPC codes decoded by belief propagation have been shown to come close to the theoretical Shannon limit [5, 6]. Due to this advantage LDPC codes are strong competitors of other codes used in communication systems, such as turbo codes, and they have several advantages compared to turbo codes:
LDPC codes have an error floor only at very low BER.
LDPC codes are not trellis based.
LDPC codes can be made of any arbitrary length according to the requirements.
The rate of LDPC codes can be changed by changing rows.
They have low decoding complexity.
LDPC codes are easily implementable.
LDPC codes have good block error performance.
If every cyclic shift of a codeword generates another codeword, the code is called a cyclic code. A special form of this structure gives Quasi-Cyclic (QC) codes, in which a shift of a codeword by a fixed number of positions yields another codeword. The codes generated in this project have the property that such a shift of a codeword generates another codeword, which is why we call them Quasi-Cyclic.
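The cyclic-shift property above is easy to verify exhaustively for a small codebook. A minimal Python sketch (the repetition code used here is our own toy example, not one of the thesis's constructed codes):

```python
def cyclic_shift(word, s):
    """Cyclically shift a list of bits to the right by s positions."""
    s %= len(word)
    return word[-s:] + word[:-s]

def is_cyclic(code):
    """True if every cyclic shift of every codeword is again a codeword."""
    codebook = {tuple(c) for c in code}
    return all(tuple(cyclic_shift(c, s)) in codebook
               for c in code for s in range(1, len(c)))

# The length-3 repetition code {000, 111} is trivially cyclic.
rep3 = [[0, 0, 0], [1, 1, 1]]
print(is_cyclic(rep3))  # True
```

A quasi-cyclic check would be identical except that only shifts by multiples of some fixed p are tested.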
An LDPC code is defined completely by the null space of its parity check matrix H. This matrix is binary and very sparse: it contains only a small number of 1s, and the rest of the matrix is composed of 0s. H is said to be a regular parity check matrix if it has a fixed number of 1s in every row and column, and the code constructed from such a matrix is regular. H is irregular if the number of 1s is not fixed in every row and column, and the code generated from such a parity check matrix is irregular.
Regular LDPC codes are preferable when it comes to the error floor: the performance of regular codes is better than that of irregular codes, as the latter demonstrate an error floor caused by small stopping sets when a binary erasure channel is in use.
LDPC codes were proposed for error control by Gallager, who also gave an idea for the construction of pseudo-random LDPC codes. The best codes are those generated by a computer; these codes have no proper structure, which is why their encoding and decoding are somewhat complex.
Lin, Kou and Fossorier [7, 8] first introduced the concept of constructing these codes on the basis of finite geometry. This geometric method performs excellently, because the resulting codes have a comparatively large minimum distance and their Tanner graphs have no short cycles. Different methods may be used for the decoding of such codes, depending on the required error performance and the constraints of the system. These codes are either cyclic or quasi-cyclic and can therefore be easily implemented using simple shift registers.
From the parity check matrix the generator matrix can be constructed. If the structure of the generator matrix is such that the codewords it produces have separate parity and information parts, then the generator matrix is said to be in systematic circulant form. If the parity check matrix is reduced to the form H = [A I] using Gaussian elimination, then the generator matrix can be found as G = [I A^T]. If the matrix is not sparse enough, the complexity of encoding becomes high; the matrix can be made sparser using different techniques.
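The systematic construction G = [I A^T] from H = [A I] can be sketched in a few lines. This is an illustrative Python version under the stated systematic-form assumption (the (6,3) matrix A is a hypothetical example, not a code from the thesis):

```python
def generator_from_parity(A, k):
    """Given H = [A | I] in systematic form, where A has (n - k) rows
    and k columns, return G = [I_k | A^T] so that G H^T = 0 (mod 2)."""
    m = len(A)  # n - k parity rows
    G = []
    for i in range(k):
        row = [1 if j == i else 0 for j in range(k)]  # identity part I_k
        row += [A[r][i] for r in range(m)]            # transposed part A^T
        G.append(row)
    return G

# Hypothetical (6,3) example: H = [A | I_3]
A = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
H = [A[r] + [1 if c == r else 0 for c in range(3)] for r in range(3)]
G = generator_from_parity(A, 3)

# Every row of G must satisfy all parity checks: H g^T = 0 (mod 2)
for g in G:
    for h in H:
        assert sum(gi * hi for gi, hi in zip(g, h)) % 2 == 0
print("all parity checks satisfied")
```

Each generator row is a message bit's codeword, so the check G H^T = 0 (mod 2) confirms that every encoded word lies in the null space of H.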
The decoding techniques used in this project are the sum product algorithm and the bit flipping algorithm. The sum product algorithm, also called belief propagation, works on probabilities, and no hard decision is taken at the start of the algorithm. If there are no cycles in the Tanner graph, the sum product algorithm gives the optimal solution.
The bit flipping algorithm is a form of iterative decoding. A hard decision is taken initially for each incoming bit. Messages are passed from the connected bit nodes to the check nodes in the Tanner graph of the code; the values of those bits are either 0 or 1. The check nodes then inform the bit nodes connected to them about the parity: the answer is either "satisfied" or "not satisfied".
2.4 Characteristics of a Channel
A channel is the medium between the sender and the receiver; in a communication system the channel is the medium for sending information from sender to receiver. A communication channel can take several forms:
It may be a connection between the initiating and terminating nodes of an electronic circuit.
It may be like a buffer where messages can be put and retrieved.
It may be a path which conveys electrical or electromagnetic signals.
Or it may be the part of a communication system that connects the data source to the sink.
A communication channel may be full or half duplex. In a duplex system, two connected devices exchange information. In a half duplex system each device can send or receive information, so communication is done in both directions but in only one direction at a time. In a full duplex communication system, information can be sent in both directions at the same time: sender and receiver can exchange data simultaneously. The PSTN is a good example of a full duplex communication system.
When two devices are communicating with each other, the communication may be either out-of-band or in-band. In out-of-band communication the channel between the devices is not the primary one; the communication is done through an alternative channel. In-band communication uses the primary channel as the communication channel between the communicating devices.
There are some properties and factors which describe the characteristics of a communication channel. Some of these characteristics are given below.
Quality: quality is measured in terms of BER (bit error rate). Quality is a measure of the reliability of the information conveyed across the channel; it is very important that the signal be reliable. If the quality of a communication channel is poor it will distort the information or message sent across it, whereas a channel with good quality will preserve the message's integrity. Receiving the signal correctly at the destination depends directly on channel quality. If the received message is incorrect it must be retransmitted.
Bandwidth: the bandwidth of a channel is how much information can be sent per unit time through it. Bandwidth can be expressed in bps (bits per second); the higher the bandwidth, the better the channel, and a channel with high bandwidth can serve more users.
Dedication: this property shows whether the channel is dedicated to a single user or shared by many users. A shared communication channel is like a school classroom in which each student tries to catch the teacher's attention simultaneously by raising a hand; the teacher decides which student is allowed to speak at any one time.
If the probability of correctly receiving a message across the communication channel is low, the system (channel, source and message) must include some mechanism to overcome the errors of the poor-quality channel; otherwise communication across the channel is not effective. Channels that are shared and reliable are more resource-efficient than channels which lack these characteristics.
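The quality metric mentioned above, BER, is simply the fraction of bit positions that differ between the transmitted and received sequences. A minimal Python sketch (function name is our own):

```python
def bit_error_rate(sent, received):
    """BER: fraction of bit positions that differ between two
    equal-length bit sequences."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# Two of eight bits differ, so the BER is 0.25.
print(bit_error_rate([0, 1, 1, 0, 1, 0, 0, 1],
                     [0, 1, 0, 0, 1, 0, 1, 1]))  # 0.25
```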
2.4.1 Types of Fading Channels
Additive White Gaussian Noise (AWGN) is the starting point for understanding a communication system. In an AWGN channel, independent Gaussian noise samples corrupt the transmitted data, appearing as thermal noise at the receiver and causing performance degradation and data corruption. The probability density function of this noise is Gaussian with zero mean, and its power spectral density is flat.
In a wireless communication system the transmitted signal travels over multiple paths, a phenomenon known as multipath propagation. As a result there are fluctuations in the phase and amplitude of the received signal, which is called multipath fading. This is a far more complex phenomenon, and modeling and design for this channel is more involved than for the AWGN channel.
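The AWGN model described above can be simulated in a few lines. The following Python sketch assumes BPSK modulation with unit symbol energy (the thesis's simulations are in MATLAB; this mapping and the function name are our own):

```python
import math
import random

def awgn_transmit(bits, snr_db, rng=None):
    """BPSK-modulate bits (0 -> +1, 1 -> -1) and add white Gaussian noise.
    snr_db is Es/N0 in decibels; with Es = 1 the noise variance per
    sample is sigma^2 = N0/2 = 1 / (2 * Es/N0)."""
    rng = rng or random.Random(1)
    es_n0 = 10 ** (snr_db / 10)
    sigma = math.sqrt(1.0 / (2.0 * es_n0))
    return [(1.0 - 2.0 * b) + rng.gauss(0.0, sigma) for b in bits]

received = awgn_transmit([0, 1, 1, 0], snr_db=6.0)
hard = [0 if r > 0 else 1 for r in received]  # sign gives the hard decision
print(hard)
```

At high SNR the noise standard deviation is small, so the hard decisions almost always match the transmitted bits; at low SNR sign flips become frequent, which is exactly what the decoders in the next chapter must correct.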
Codes are used to protect data against errors. A code must be simple enough for easy implementation and complex enough to correct errors. Decoder implementation is often complex, but LDPC decoders are not excessively so, and they are easily implementable. Quasi-cyclic LDPC codes can even be decoded with simple shift registers due to their structure.
LDPC codes belong to the class of linear block codes, defined as follows: if n is the length of the block code and there are 2^k codewords in total, then the code is linear if its codewords form a k-dimensional subspace of the vector space of all n-tuples over the field GF(2). These codes are defined in terms of their generator matrix and parity check matrix: a linear block code is both the row space of its generator matrix and the null space of its parity check matrix.
Assume the source information is binary and the total number of information digits is k; the total number of unique messages is then 2^k. The encoder transforms each input message into a binary codeword of length n, with n > k. In a block code with 2^k codewords there is a different codeword for each message (a one-to-one correspondence). A block code is linear if the sum of any two codewords is another codeword. Message and parity are the two parts of a codeword: the message part consists of the k information bits, and the second part consists of the n - k parity bits.
As mentioned above, LDPC codes are defined as the null space of a parity check matrix H. This matrix is sparse and binary: most of the entries in the parity check matrix are zero. H is called a regular parity check matrix if it has a fixed number of 1s in each column and row, and the code made from a regular parity check matrix is regular. H is an irregular parity check matrix if it does not have a fixed number of 1s in each column or row, and the codes derived from it are irregular.
A regular LDPC code defined as the null space of a parity check matrix H has the following properties:
Both the row weight ρ and the column weight γ are small compared to the number of columns and rows in the parity check matrix.
Each row contains ρ 1s.
Each column contains γ 1s.
The number of 1s common to any two rows is 0 or 1.
Property one says that H has a very small number of 1s, which makes H sparse. Properties two and three imply that H has fixed column and row weights, equal to γ and ρ respectively. Property four says that no two rows have a 1 in common in more than one position; this is also called the row-column (RC) constraint, and it implies that there are no short cycles of length four in the Tanner graph. The number of codeword digits is n and the total number of parity check equations is m. The code rate is given as:
R = k/n = (n - m)/n = 1 - γ/ρ (when H has full rank).
As defined above for regular LDPC codes, the total number of 1s in the parity check matrix is n·γ = m·ρ. The parity check matrix is irregular if the column or row weight is non-uniform.
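The counting relations above can be written out as a small helper. An illustrative Python sketch (the parameters n = 20, γ = 3, ρ = 4 are a made-up example):

```python
def regular_ldpc_stats(n, wc, wr):
    """For a regular parity check matrix with n columns, column weight wc
    and row weight wr, return (rows m, total 1s, design rate).
    The design rate 1 - wc/wr equals the true rate only if H has
    full rank."""
    ones = n * wc        # total number of 1s in H (n columns, wc each)
    m = ones // wr       # each row holds wr of those 1s
    rate = 1 - wc / wr   # design rate
    return m, ones, rate

m, ones, rate = regular_ldpc_stats(n=20, wc=3, wr=4)
print(m, ones, rate)  # 15 60 0.25
```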
Hamming distance is a quality measure for a code, defined as the number of bit positions in which two codewords differ. For example, two codewords that differ in two positions have a Hamming distance of two. The minimum Hamming distance d_min is the smallest distance between any pair of distinct codewords; for an even parity code d_min is 2. Since d_min is two, corrupting two bits of a codeword can produce another valid codeword; to detect or correct more corrupted bits, more parity check bits are required.
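Both the pairwise distance and the minimum distance of a small codebook can be computed directly. A Python sketch using the length-3 even parity code as a worked example:

```python
from itertools import combinations

def hamming(u, v):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming(u, v) for u, v in combinations(code, 2))

# Even parity code of length 3: all words with an even number of 1s.
even_parity = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(hamming([1, 0, 1, 1], [1, 0, 0, 1]))  # 1
print(minimum_distance(even_parity))         # 2
```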
For example, consider a codeword consisting of message bits and parity bits. There are three parity check equations, and each must sum to 0; H is the matrix representation of these parity check equations. A given vector is a codeword if and only if it satisfies the condition H v^T = 0. Here H is the parity check matrix; it contains the parity check equations from which the code can be defined. With three bits in a message, the parity bits are calculated from the message: each message produces its parity bits, and the resulting codewords are generated by the generator matrix G of this code. As we have three message bits, the total number of codewords is 2^3 = 8, and these eight codewords are all distinct from each other.
Codewords are transmitted through a channel, and in communication channels the received words often differ from the transmitted codewords: a bit can change from 0 to 1 or from 1 to 0. Suppose, for example, that a codeword is transmitted and the received word r differs from it in one bit.
Since the syndrome H r^T of the received word is not zero, r is not a codeword.
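The syndrome test H r^T = 0 (mod 2) can be sketched directly. The (6,3) parity check matrix below is a hypothetical example for illustration, not one of the thesis's constructed codes:

```python
def syndrome(H, r):
    """Syndrome s = H r^T (mod 2); r is a codeword iff s is all zeros."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

# Hypothetical (6,3) parity check matrix in the form [A | I].
H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]
c = [1, 0, 1, 1, 0, 1]  # a valid codeword: H c^T = 0
r = [1, 0, 0, 1, 0, 1]  # same word with its third bit flipped
print(syndrome(H, c))  # [0, 0, 0]
print(syndrome(H, r))  # [0, 1, 1] -> non-zero, so r is not a codeword
```

The non-zero syndrome flags an error but does not by itself say which bit was corrupted; that is the decoder's job.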
Assume that the smallest possible number of bits is in error, so that the transmitted codeword is the one closest in Hamming distance to the received word. Comparing the received word with all possible codewords yields the closest codeword. When one bit is in error, the resulting word is closer to the transmitted codeword than to any other codeword, so the error can always be corrected. Generally, if the minimum distance of a code is d_min = 2e + 1, then e bit errors can always be corrected.
We can use this method when the number of codewords is small. When the number of codewords is large it becomes expensive to search through and compare against all the codewords, so other decoding methods, suited to computer calculation, are used instead.
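The brute-force minimum-distance decoder described above is a one-liner for small codebooks. A Python sketch using the length-3 repetition code as a toy example:

```python
def nearest_codeword(r, code):
    """Brute-force minimum-distance decoding: return the codeword
    closest in Hamming distance to the received word r."""
    return min(code, key=lambda c: sum(a != b for a, b in zip(c, r)))

# Length-3 repetition code: any single flipped bit is corrected,
# because d_min = 3 allows e = 1 error to be corrected.
code = [[0, 0, 0], [1, 1, 1]]
print(nearest_codeword([1, 0, 1], code))  # [1, 1, 1]
```

The cost grows with the number of codewords 2^k, which is exactly why iterative decoders such as bit flipping and the sum product algorithm are used for LDPC codes.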
The requirement on the parity check matrix H of an LDPC code is that it be of low density: the majority of the entries of H are zero. H is regular if every code bit is contained in a fixed number of parity check equations and every parity check equation involves a fixed number of code bits. LDPC codes are regular if the parity check matrix is regular; otherwise they are irregular. The Tanner graph is another representation used for decoding LDPC codes. There are two kinds of vertices in a Tanner graph: bit nodes and check nodes, with a check vertex for every parity check equation and a bit vertex for every codeword bit. A bit vertex is connected by an edge to a check vertex if that bit participates in the corresponding parity check equation. The length of a cycle is the number of edges in the cycle, and the girth of the graph is the shortest such length.
3.2.1 Iterative decoding
Iterative decoding is represented here by the bit flipping algorithm, which is based on an initial hard decision for each received bit. In iterative decoding, messages are passed between the nodes of the Tanner graph of the code. Each bit node sends a message, either 0 or 1, to every connected check node, and each bit node receives a message from every check node to which it is connected. The bit flipping algorithm has three steps:
A hard value, 0 or 1, is assigned to every bit node depending on the bit received at the receiver. As the check nodes and bit nodes are connected, this value is received at the connected check nodes. Since the number of bits in each parity check equation is fixed, each check node receives messages from a fixed number of bit nodes.
On the basis of the information received from its connected bit nodes, each check node calculates the parity and checks its parity check equation. The algorithm terminates at this point if all parity check equations are satisfied. If not, the connected bit nodes and check nodes exchange information indicating whether each parity check is satisfied or not.
If the majority of the messages a bit receives from its check nodes are "not satisfied", the current value of the bit is flipped from 0 to 1 or from 1 to 0; otherwise the bit value remains the same. After this, each connected check node and bit node sends its new value to the other. The algorithm terminates when it reaches the allowed maximum number of iterations; otherwise it goes back to step two, where messages are sent by the bit nodes to the connected check nodes.
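The three steps above can be sketched as a compact decoder. This is an illustrative Python version (the thesis implements it in MATLAB); the flipping rule used here, flipping the bits involved in the largest number of failed checks, is one common variant of Gallager's algorithm, and the (6,3) matrix is a hypothetical example:

```python
def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit flipping decoding.
    H: parity check matrix (list of 0/1 rows), r: received hard bits.
    Each round: evaluate every parity check; flip the bits taking part
    in the largest number of failed checks; stop when all checks pass."""
    v = list(r)
    m, n = len(H), len(r)
    for _ in range(max_iter):
        # step 2: each check node evaluates its parity check equation
        checks = [sum(H[j][i] * v[i] for i in range(n)) % 2 for j in range(m)]
        if not any(checks):
            return v  # all parity checks satisfied: valid codeword
        # step 3: count the failed checks touching each bit, flip the worst
        fails = [sum(checks[j] for j in range(m) if H[j][i]) for i in range(n)]
        worst = max(fails)
        for i in range(n):
            if fails[i] == worst:
                v[i] ^= 1  # flip the most-suspect bit(s)
    return v  # give up after max_iter rounds

H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]
print(bit_flip_decode(H, [1, 0, 0, 1, 0, 1]))  # recovers [1, 0, 1, 1, 0, 1]
```

On Tanner graphs with short cycles this simple rule can oscillate, which is the drawback noted later in the example.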
Bit- flipping algorithm is illustrated in the following example,
Suppose a codeword is sent and the received word contains a single bit error. The steps required to decode this word are depicted in figure (3). The bit values are initialized in the first step, and the three steps are explained below.
Step 1, Initialization: a value is assigned to each bit, after which this information is sent to the connected check nodes. Each bit node is connected to the check nodes of the parity check equations in which it participates, and the check nodes receive the bit values sent by the bit nodes.
Step 2, Parity update
All the bit values that make up a parity check equation are available at its check node; the parity is calculated from the parity check equation, and if the equation contains an even number of 1s the parity is satisfied. Some check nodes satisfy their parity check equations while others do not, and the unsatisfied check nodes send this information back to the bits related to them. The maximum number of iterations is checked at this point in the process: the algorithm terminates if the maximum is reached, otherwise it continues. In this step each check indicates "satisfied" or "not satisfied". A bit flips its value if the majority of the received messages indicate "not satisfied", and the updated information is sent back to the check nodes.
Step 3, Bit update
Step 2 is then repeated again and again until all the parity check equations are satisfied. When all the parity check equations are satisfied, the algorithm terminates and the final value of the decoded codeword is produced. Short cycles are one of the drawbacks of the Tanner graph; in their presence it is sometimes very difficult to decode a codeword with this method.
Figure (3): Process of Bit Flipping algorithm
3.2.2 Sum-product decoding algorithm
The difference between the sum-product algorithm and the bit flipping algorithm is that in the sum-product algorithm the messages sent to each node are probabilistic values, expressed as log likelihood ratios, whereas in the bit flipping algorithm a hard decision is taken at the start and the information about the confidence of each received bit is lost. First introduced by Gallager in 1962, the sum-product algorithm is also known as belief propagation. What we receive is a sequence of positive and negative values: the sign of each value indicates the bit the sender presumably sent, and the confidence in this decision is represented by the magnitude. A positive sign represents a 0 and a negative sign represents a 1. The sum-product algorithm uses this soft information, together with the nature of the channel, to recover information about the transmitted signal.
If p is the probability that a bit of a binary signal is 1, then the probability that it is 0 is 1 - p. The log-likelihood ratio for this probability is:
L = log((1 - p) / p).
The magnitude of L is the confidence of the decision, and the positive or negative sign of L indicates whether the transmitted bit is 0 or 1. Log-likelihood ratios reduce the complexity of the decoder. The task of the sum-product algorithm is to find the a posteriori probability for every bit of the received codeword: the probability that the codeword bit is 1, conditioned on the event that all the parity check equations are satisfied. The probability obtained from this event is called the extrinsic probability, and the original probability of the bit, independent of the code constraints, is called the intrinsic probability.
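The LLR mapping just defined, and its inverse, can be written out directly. A Python sketch (function names are our own):

```python
import math

def llr(p1):
    """Log-likelihood ratio of a bit given p1 = P(bit = 1).
    Positive LLR favours 0, negative favours 1; |LLR| is the confidence."""
    return math.log((1.0 - p1) / p1)

def p1_from_llr(l):
    """Inverse mapping: recover P(bit = 1) from an LLR."""
    return 1.0 / (1.0 + math.exp(l))

print(llr(0.5))                    # 0.0 (no information either way)
print(llr(0.1) > 0, llr(0.9) < 0)  # True True
```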
One index set in the notation denotes the bits (the column locations) participating in a given parity check equation; another denotes the set of parity check equations (the row locations) that check a given code bit. Writing the extrinsic probability above in log likelihood notation gives the check node update of the algorithm.
There are three steps in the sum-product algorithm, explained below:
Step one is the communication from every bit node to every check node of the log likelihood ratio of the received signal; the properties of the communication channel are contained in this value. If the channel is Additive White Gaussian Noise (AWGN) with signal-to-noise ratio Es/N0, this message is L_i = 4 (Es/N0) r_i for each received value r_i.
The communication from a check node back to a bit node is the probability that the parity check equation is satisfied under an assumption about that bit; expressed in log likelihood form, this is the extrinsic message for that bit.
The resultant total LLR of each bit is the combination of the intrinsic LLR obtained in step one plus all the extrinsic LLRs obtained in step two.
To decide the value of each bit, a hard decision is made on its total LLR: if the value is less than or equal to zero, the bit is assumed to be 1; if the value is greater than zero, the bit is assumed to be 0. This gives the hard-decoded value for each received bit.
The algorithm terminates if the hard decisions on all the bits form a valid codeword, i.e. satisfy all the parity checks, or if the maximum number of iterations is reached.
Otherwise, each bit node sends to each connected check node a new message: the sum of its intrinsic LLR and the extrinsic LLRs from all the other check nodes, excluding the check node the message is being sent to.
After this step the messages are sent back to step two. The sum product algorithm is shown below as a flow chart in figure (4).
From bit node to check node
From check node to bit node
LLRs added and hard decision taken
Valid codeword, or max number of iterations reached?
From bit node to check node
Figure (4): Flow chart for the sum-product algorithm
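The three steps and the termination rule in the flow chart can be sketched as a compact LLR-domain decoder. This is an illustrative Python version (the thesis implements the algorithm in MATLAB); the tanh-rule check update is the standard formulation of belief propagation, and the (6,3) matrix and LLR values are hypothetical examples:

```python
import math

def sum_product_decode(H, L_channel, max_iter=50):
    """LLR-domain sum-product (belief propagation) decoding.
    H: parity check matrix, L_channel: intrinsic channel LLRs
    (for BPSK on AWGN, L_i = 4 * (Es/N0) * r_i). Positive LLR -> bit 0."""
    m, n = len(H), len(L_channel)
    edges = [(j, i) for j in range(m) for i in range(n) if H[j][i]]
    M = {e: L_channel[e[1]] for e in edges}  # step 1: bit -> check messages
    E = {e: 0.0 for e in edges}              # check -> bit messages
    for _ in range(max_iter):
        # step 2: check node update (tanh rule), excluding the target edge
        for (j, i) in edges:
            prod = 1.0
            for (jj, ii) in edges:
                if jj == j and ii != i:
                    prod *= math.tanh(M[(jj, ii)] / 2.0)
            prod = max(min(prod, 0.999999), -0.999999)  # keep atanh finite
            E[(j, i)] = 2.0 * math.atanh(prod)
        # step 3: total LLR = intrinsic + all extrinsic, then hard decision
        total = [L_channel[i] + sum(E[(j, i)] for j in range(m) if H[j][i])
                 for i in range(n)]
        z = [0 if t > 0 else 1 for t in total]
        if all(sum(H[j][i] * z[i] for i in range(n)) % 2 == 0
               for j in range(m)):
            return z  # valid codeword: terminate early
        # bit node update: extrinsic only (leave out the target check)
        for (j, i) in edges:
            M[(j, i)] = total[i] - E[(j, i)]
    return z  # max iterations reached

H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]
# Channel LLRs for codeword [1,0,1,1,0,1]; the third value has the
# wrong sign but is weak, and the decoder corrects it.
L = [-4.0, 3.5, 0.1, -3.8, 4.2, -3.9]
print(sum_product_decode(H, L))  # [1, 0, 1, 1, 0, 1]
```

The weak wrong-sign LLR is overruled by the strong extrinsic messages from its two parity checks, which is exactly the soft-information advantage over bit flipping described above.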
The presence of an exact termination rule for the sum-product algorithm (step 3) has two important benefits: first, if the algorithm fails to converge, this is always detected; and second, when a solution is determined, additional iterations are avoided.