Codes are used for the representation of data so that it can be protected against errors. A code must be simple enough for easy implementation, yet complex enough to correct errors. Decoder implementation is often complex, but in the case of cyclic codes the decoders are not very complex and are easily implementable: cyclic codes can be decoded by simple shift registers due to their unique structure.
LDPC codes belong to the class of linear block codes. A linear block code is defined as follows: if n is the length of the block code and there is a total of 2^k codewords, then the code is a linear (n, k) code if and only if its 2^k codewords form a k-dimensional subspace of the vector space of all n-tuples over the field GF(2). These codes are defined in terms of their generator matrix G and parity-check matrix H: a linear block code is either the row space of its generator matrix or the null space of its parity-check matrix.
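As a concrete sketch of these two descriptions, the snippet below uses the systematic (7,4) Hamming code, a small example of my own choosing rather than one from the text. It generates all 2^4 codewords from G and confirms that every one of them lies in the null space of H:

```python
import itertools
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I]
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # generator matrix, 4 x 7
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity-check matrix, 3 x 7

# The 2^4 = 16 codewords form the row space of G (mod 2) ...
codewords = [np.array(m) @ G % 2 for m in itertools.product([0, 1], repeat=4)]

# ... and every codeword lies in the null space of H: H c^T = 0 (mod 2)
assert all(not (H @ c % 2).any() for c in codewords)
print(len(codewords))   # 16 distinct codewords
```

The assertion passing for all sixteen words illustrates that the row space of G and the null space of H describe the same code.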
Assume the source information is binary and the total number of information digits is k. The total number of distinct messages is then 2^k. The encoder transforms each input message into a binary codeword of length n, with n >= k. In a block code there are 2^k codewords, one for each of the 2^k messages (a one-to-one correspondence). A block code is linear if the sum of any two codewords is another codeword. A codeword consists of two parts, message and parity: the message part has length k and carries the information, while the second part consists of the parity-check bits and has length n - k.
As mentioned above, LDPC codes are defined as the null space of a parity-check matrix H. This matrix is sparse and binary; most of the entries in the parity-check matrix are zero. H is called a regular parity-check matrix if it has a fixed number of 1's in each column and in each row, and the code defined by a regular parity-check matrix is a regular code. H is an irregular parity-check matrix if it does not have a fixed number of 1's in each column or row, and codes derived from an irregular parity-check matrix are irregular codes.
A regular LDPC code defined as the null space of a parity-check matrix H has the following properties:

1. Both the column weight wc and the row weight wr are small compared with the number of columns and rows in the parity-check matrix.

2. Each row contains wr 1's.

3. Each column contains wc 1's.

4. The number of 1's common to any two rows is 0 or 1.
The 1st property says H has a very small number of 1's, which makes H sparse. Properties 2 and 3 imply that H has fixed column and row weights, equal to wc and wr respectively. Property 4 says that no 1 is common to two rows in more than one position. This last property is also called the row-column (RC) constraint; it implies that there are no short cycles of length four in the Tanner graph. The number of 1's in each column and in each row are denoted by wc, the column weight, and wr, the row weight, respectively. The number of digits in a codeword is n, and the total number of parity-check equations is m. The code rate is given as:

r = k/n = (n - m)/n = 1 - wc/wr

where the last equality uses the fact that n*wc = m*wr for a regular code, and assumes the rows of H are linearly independent.
As defined above, for a regular LDPC code the total number of 1's in the parity-check matrix is n*wc = m*wr. The parity-check matrix is irregular if the column or row weight is non-uniform.
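These definitions can be checked mechanically. The sketch below, with a small (wc = 2, wr = 3) parity-check matrix of my own construction, tests regularity and the row-column constraint and computes the design rate 1 - wc/wr:

```python
import itertools
import numpy as np

def regularity(H):
    """Return (wc, wr) if H has fixed column and row weights, else None."""
    col_w, row_w = H.sum(axis=0), H.sum(axis=1)
    if len(set(col_w)) == 1 and len(set(row_w)) == 1:
        return int(col_w[0]), int(row_w[0])
    return None

def rc_constraint(H):
    """True if no two rows share a 1 in more than one column (no 4-cycles)."""
    return all((H[i] & H[j]).sum() <= 1
               for i, j in itertools.combinations(range(H.shape[0]), 2))

# A small regular matrix: every column weight 2, every row weight 3
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
wc, wr = regularity(H)
print(wc, wr, rc_constraint(H))   # 2 3 True
print(1 - wc / wr)                # design rate, about 0.333
```

Note that n*wc = 6*2 = 12 = m*wr = 4*3, as the text states for regular codes.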
Hamming distance is a quality measure for a code; it is defined as the number of bit positions in which two codewords differ. For example, two codewords that differ in exactly two positions have a Hamming distance of two. The minimum Hamming distance of a code is denoted dmin; for the single (even) parity-check code, dmin is two. Since dmin is two, if any two bits are corrupted in a codeword the result can be another valid codeword, so such an error goes undetected. To detect more than one corrupted bit, more parity-check bits are required.
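A small helper makes the definition concrete. The two example words here are mine, since the original values did not survive extraction:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Two words that differ in the first and third positions
print(hamming_distance("10110", "00010"))   # 2
```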
For example, consider a codeword c = (c1, c2, ..., cn) in which each bit ci is either 0 or 1, and suppose the code has three parity-check equations. H is the matrix representation of these parity-check equations: each row of H corresponds to one equation, with a 1 in the position of every bit that the equation involves.
A given vector c is a codeword if and only if it satisfies the following condition:

H c^T = 0 (mod 2)

Here H is the parity-check matrix; it contains the parity-check equations from which the code can be defined. The code can be written out from these equations as follows.
With three bits in a message, the parity bits are calculated from the message; that is, each message produces its parity bits, and the resulting codeword is the message followed by the parity bits. G is the generator matrix of this code. Since there are three message bits, the total number of codewords is 2^3 = 8, and these eight codewords are all distinct from one another.
Codewords are transmitted through a channel. In communication channels the received word often differs from the word that was transmitted; this means a bit can change from 0 to 1 or from 1 to 0. For example, a transmitted codeword may arrive with one of its bits flipped.
Since the result of H y^T is non-zero, the received word y is not a codeword.
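This membership test is a direct computation. The matrix and vectors below are an illustrative stand-in, since the original example did not survive extraction:

```python
import numpy as np

def is_codeword(H, c):
    """c is a codeword of the code defined by H iff H c^T = 0 (mod 2)."""
    return not (H @ c % 2).any()

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

print(is_codeword(H, np.array([1, 0, 1, 1, 0, 1])))   # True: satisfies all checks
print(is_codeword(H, np.array([1, 0, 1, 1, 0, 0])))   # False: one bit flipped
```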
Assume the smallest number of bits is in error; then the transmitted codeword is the codeword with the closest Hamming distance to the received word. Comparing the received word with all possible codewords, the closest codeword is taken as the decoded word. In this example the minimum Hamming distance of the code is three. When one bit is in error, the resulting word is closer to the transmitted codeword than to any other codeword, so a single error can always be corrected. In general, if dmin is the minimum distance of a code, then e = (dmin - 1)/2 bit errors (rounded down) can always be corrected.
This method can be used when the number of codewords is small. When the number of codewords is large, it becomes expensive to search and compare against all of them, so other methods, such as the iterative decoding algorithms described below, are used to decode these codes.
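The exhaustive comparison described above can be sketched as follows, enumerating all 2^k codewords of a small code; the (7,4) Hamming code used here is my own illustrative choice, not the text's example:

```python
import itertools
import numpy as np

# Generator matrix of the systematic (7,4) Hamming code (illustrative)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All 2^4 = 16 codewords
codewords = [np.array(m) @ G % 2 for m in itertools.product([0, 1], repeat=4)]

def decode_min_distance(y):
    """Return the codeword at minimum Hamming distance from y."""
    return min(codewords, key=lambda c: int((c != y).sum()))

sent = np.array([1, 0, 1, 0, 1, 0, 1])   # codeword for message 1010
received = sent.copy()
received[3] ^= 1                         # a single bit error
print((decode_min_distance(received) == sent).all())   # True: error corrected
```

Since dmin = 3 for this code, the flipped bit leaves the received word at distance one from the transmitted codeword and at least distance two from every other codeword, so the search recovers the transmitted word. The cost is a scan over all 2^k codewords, which is exactly why it does not scale.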
The requirement on the parity-check matrix H for LDPC codes is that H be low density; this means that the majority of the entries of H are zero. H is regular if every code bit is contained in a fixed number of parity-check equations and every parity-check equation contains a fixed number of code bits. LDPC codes are regular if the parity-check matrix is regular; otherwise they are irregular. The Tanner graph is another representation used for decoding LDPC codes. There are two types of vertices in a Tanner graph, bit nodes and check nodes: one check vertex for every parity-check equation and one bit vertex for every codeword bit. A bit vertex is connected by an edge to a check vertex whenever that bit appears in the corresponding parity-check equation. The length of a cycle is the number of edges in the cycle, and the girth of the graph is the shortest of those lengths.
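The Tanner graph can be read directly off H: check node j is joined to bit node i exactly when H[j][i] = 1. A minimal sketch, using an H matrix of my own choosing:

```python
import numpy as np

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

# Adjacency lists of the Tanner graph
check_to_bits = {j: [i for i in range(H.shape[1]) if H[j, i]]
                 for j in range(H.shape[0])}
bit_to_checks = {i: [j for j in range(H.shape[0]) if H[j, i]]
                 for i in range(H.shape[1])}

print(check_to_bits[0])   # [0, 1, 2]: bits in the first parity-check equation
print(bit_to_checks[0])   # [0, 1]: checks in which bit 0 participates
```

Because this H satisfies the row-column constraint, the resulting graph has no cycles of length four.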
3.2.1 Iterative decoding
Iterative decoding is introduced here through the bit-flipping algorithm, which is based on an initial hard decision for each received bit. In iterative decoding, messages are passed between the nodes of the Tanner graph of the code. Each bit node sends a message, either 0 or 1, to every check node it is connected to, and in turn receives a message from each of those check nodes. The bit-flipping algorithm has three steps:
A hard value, 0 or 1, is assigned to every bit node, depending on the bit received at the receiver. Since check nodes and bit nodes are connected, this value is received at the connected check nodes. As the number of bits in each parity-check equation is fixed, each check node receives messages from a fixed number of bit nodes.
On the basis of the values received from its bit nodes, each check node calculates the parity and checks whether its parity-check equation is satisfied. The algorithm terminates at this point if all parity-check equations are satisfied. If not, each connected check node and bit node exchange information indicating whether the parity check is satisfied or not.
If the majority of the messages a bit node receives from its check nodes say "not satisfied", the current value of the bit is flipped, from 0 to 1 or from 1 to 0. The bit value remains the same if the checks are satisfied. After this, each bit node sends its new value to its connected check nodes. The algorithm terminates if it reaches the allowed maximum number of iterations; otherwise it goes back to step 2, where the check nodes recompute parity from the messages sent by the bit nodes.
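The three steps above can be sketched as a decoder. This is a minimal illustration under my own choice of H and flipping rule (flipping the bits touched by the most failed checks, one common variant of bit flipping):

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit-flipping decoding of a received 0/1 word y."""
    z = y.copy()
    for _ in range(max_iter):
        syndrome = H @ z % 2            # step 2: which parity checks fail
        if not syndrome.any():
            return z                    # all parity-check equations satisfied
        # step 3: count failed checks touching each bit; flip the worst bits
        fails = syndrome @ H            # per-bit count of unsatisfied checks
        z = np.where(fails == fails.max(), z ^ 1, z)
    return z                            # give up after max_iter iterations

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
sent = np.array([1, 0, 1, 1, 0, 1])              # a codeword of H
received = sent ^ np.array([0, 0, 0, 1, 0, 0])   # one bit flipped in transit
print((bit_flip_decode(H, received) == sent).all())   # True
```

In this run the two checks containing bit 3 both fail, bit 3 collects the most "not satisfied" votes and is flipped, and the decoder terminates on the next syndrome test.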
The bit-flipping algorithm is illustrated in the following example. A codeword is sent and a corrupted version of it is received. The steps required to decode the received word are depicted in Figure 4. The bit values are initialized in the first step; the three steps are explained below.

Step 1, Initialization

A value is assigned to each bit node, after which this information is sent to the connected check nodes: each bit node sends its value to every check node it is joined to in the Tanner graph, and each check node receives the bit values sent by its connected bit nodes.
Step 2, Parity update
Each check node now holds the values of all the bits that appear in its parity-check equation. The parity is calculated from this parity-check equation, and the parity check is satisfied if the equation contains an even number of 1's. In this example some check nodes satisfy their parity-check equations while others do not, and the failing check nodes send this information back to the bit nodes connected to them. The maximum number of iterations is checked at this point in the process: the algorithm terminates if the maximum is reached, else it continues. In this step each message to a bit node indicates "satisfied" or "not satisfied". A bit flips its value if the majority of the received messages indicate "not satisfied"; for example, a bit will change its value from 1 to 0 if most of the received messages indicate "not satisfied", and the updated information is sent back to the check nodes.
Step 3, Bit update
Step 2 is then repeated until all the parity-check equations are satisfied. When all the parity-check equations are satisfied, the algorithm terminates and the final bit values give the decoded codeword. Short cycles are one of the drawbacks of the Tanner graph: in their presence it can be very difficult to decode a codeword with this method.
Figure 4: Process of Bit Flipping algorithm
3.2.2 Sum-product decoding algorithm
The difference between the sum-product decoding algorithm and the bit-flipping algorithm is that in the sum-product algorithm the messages sent between nodes are probabilistic values, expressed as log-likelihood ratios, whereas in the bit-flipping algorithm a hard decision is taken at the start and the information about the confidence in each received bit is lost. First introduced by Gallager in 1962, the sum-product algorithm is also known as belief-propagation decoding. What is received is a sequence of positive and negative values: the sign of each value indicates the bit the sender is supposed to have sent, and its magnitude represents the confidence in that decision. A positive sign represents a 0 and a negative sign represents a 1. The sum-product algorithm uses this soft information, together with knowledge of the channel, to recover information about the transmitted signal.
If p is the probability that a bit of a binary signal is 1, then the probability that it is 0 is 1 - p.
The log-likelihood ratio for this probability is:

L = log((1 - p)/p)
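Numerically, the sign and magnitude behave as described; the probabilities below are my own illustrative values:

```python
import math

def llr(p1):
    """Log-likelihood ratio L = log(P(bit=0)/P(bit=1)) for P(bit=1) = p1."""
    return math.log((1 - p1) / p1)

print(llr(0.2) > 0)    # True: bit 1 is unlikely, positive sign says "0"
print(llr(0.9) < 0)    # True: negative sign says "1"
print(abs(llr(0.5)))   # 0.0: no confidence either way
```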
The magnitude of L is the confidence of the decision, and the positive or negative sign of L represents whether the transmitted bit is 0 or 1. Log-likelihood ratios reduce the complexity of the decoder, since multiplications of probabilities become additions of LLRs. The task of the sum-product algorithm is to find, for every bit of the received word, the a posteriori probability that the bit is 1, conditioned on the event N that all the parity-check equations are satisfied. The probability obtained from the event N is called the extrinsic probability, and the original probability of the bit, independent of the code constraints, is called the intrinsic probability.
Two sets appear in the notation: one denotes the bit (column) locations that take part in a given parity-check equation, and the other denotes the (row) locations of the parity-check equations that check a given bit of the code. Putting the above expression into log-likelihood notation gives the result that follows.
There are three steps in the sum-product algorithm, explained below:
The message sent from every bit node to every check node is initially the intrinsic LLR r_i of the received signal. The properties of the communication channel are contained in this message. If the channel is additive white Gaussian noise (AWGN), a standard form of this message, for a BPSK-mapped signal received as y_i with noise variance sigma^2, is:

r_i = 2 y_i / sigma^2

(equivalently, it can be written in terms of the channel signal-to-noise ratio).
The message E_{j,i} sent from check node j to bit node i expresses the probability that parity-check equation j is satisfied for each assumed value of bit i, computed from the messages of the other bits in that equation. In log-likelihood form this is the tanh rule:

E_{j,i} = 2 tanh^-1 ( product of tanh(M_{j,i'} / 2) over all bits i' != i in check j )

where M_{j,i'} is the message sent to check node j by bit node i'.
The combination of the intrinsic LLR r_i obtained in step one and the extrinsic LLRs E_{j,i} obtained in step two gives the resulting total LLR for each bit:

L_i = r_i + sum of E_{j,i} over the check nodes j connected to bit i
To find the value of each bit, a hard decision is made on L_i: if its value is less than or equal to zero, we take the decoded bit z_i to be 1; if the value of L_i is greater than zero, z_i is taken to be 0. z_i is the decoded hard value for received bit i. In mathematical form:

z_i = 1 if L_i <= 0, and z_i = 0 if L_i > 0
This algorithm terminates if z = (z_1, ..., z_n) is a valid codeword, i.e. H z^T = 0 after the hard decision on all the bits, or if the maximum number of iterations is reached.
Otherwise, each bit node sends to each connected check node j a new message M_{j,i}, calculated with no information taken from that check node itself: it is the intrinsic LLR r_i plus the extrinsic LLRs received from all the other connected check nodes,

M_{j,i} = r_i + sum of E_{j',i} over the connected check nodes j' != j
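The three steps can be sketched end to end. This is a minimal illustration under my own choices: the H matrix and the intrinsic LLRs are invented for the demonstration (in the LLR vector, bit 3 is received weakly and with the wrong sign, and the decoder corrects it):

```python
import numpy as np

def sum_product_decode(H, r, max_iter=20):
    """Sum-product (belief-propagation) decoding.

    r: intrinsic LLRs of the received signal (positive sign means bit 0).
    Returns the hard-decision codeword estimate.
    """
    m, n = H.shape
    M = H * r                                   # step 1: bit-to-check messages
    for _ in range(max_iter):
        # step 2: check-to-bit messages via the tanh rule
        T = np.tanh(M / 2)
        E = np.zeros_like(M)
        for j in range(m):
            bits = np.flatnonzero(H[j])
            for i in bits:
                others = [b for b in bits if b != i]
                E[j, i] = 2 * np.arctanh(np.prod(T[j, others]))
        # step 3: total LLRs, hard decision, termination test
        L = r + E.sum(axis=0)
        z = (L <= 0).astype(int)                # negative LLR decodes to 1
        if not (H @ z % 2).any():
            return z                            # valid codeword found
        # bit-to-check messages exclude the target check's own contribution
        M = H * (L - E)
    return z

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
sent = np.array([1, 0, 1, 1, 0, 1])             # a codeword of H
# Intrinsic LLRs: bit 3 should be negative (a 1) but arrives weakly positive
r = np.array([-2.0, 2.0, -2.0, 0.5, 2.0, -2.0])
print((sum_product_decode(H, r) == sent).all())   # True
```

The exclusion M = H * (L - E) implements the rule above: the message sent to check node j is the total LLR minus the extrinsic term that came from check node j itself.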
After this step, the message passing returns to step two. The sum-product algorithm is shown as a flow chart in Figure 5.
The presence of an exact termination rule for the sum-product algorithm (step 3) has two important benefits: first, if the algorithm fails to converge, this is always detected; and second, when a solution is determined, additional iterations are avoided.