# LDPC Coding And Encoding Computer Science Essay



Codes are used to represent data in a way that protects it against errors. For easy implementation a code should be simple; for the correction of errors a code must be complex. Decoder implementation is often complex, but LDPC decoders are not especially complex and are easily implementable. Owing to their unique structure, LDPC codes can even be decoded by simple shift registers.

## 3.1 Encoding

LDPC codes belong to the class of linear block codes. An (N, K) linear code over GF(2) is a K-dimensional subspace of the N-dimensional vector space of binary N-tuples. A linear block code is either the row space of a generator matrix or the null space of a parity-check matrix.

Assume the source information is binary and the total number of information digits is K. The total number of unique messages is then 2^K. The encoder maps each message into a binary N-tuple, the codeword, with N > K. A block code therefore has 2^K codewords for its 2^K messages, with a different codeword for each message (a one-to-one correspondence). A block code is linear if the modulo-2 sum of any two codewords is another codeword. A codeword has two parts: the message part of length K, which carries the information, and the parity part, consisting of N − K bits.

As mentioned above, LDPC codes are defined as the null space of a parity-check matrix H. This matrix is sparse and binary: most of its entries are zero. H is called a regular parity-check matrix if it has a fixed number of 1's in each column and in each row, and the code defined by a regular parity-check matrix is a regular LDPC code. H is an irregular parity-check matrix if it does not have a fixed number of 1's in each column or row, and codes derived from an irregular parity-check matrix are irregular codes.

A regular LDPC code defined as the null space of a parity-check matrix has the following properties:

1. Both j and k are small compared to the number of columns and rows in the parity-check matrix.

2. Each row contains k 1's.

3. Each column contains j 1's.

4. The number of 1's common to any two rows is 0 or 1.

Property 1 says H contains very few 1's, which makes H sparse.

Properties 2 and 3 imply that H has fixed row and column weights, equal to k and j respectively. Property 4 says that no 1 is common to any two rows of H more than once. This last property, also called the row-column (RC) constraint, implies that there are no cycles of length 4 in the Tanner graph. The number of 1's in each column is denoted by j, the column weight, and the number in each row by k, the row weight. The number of digits in a codeword is N, and the total number of parity-check equations is M. The code rate is

R=K/N

For a regular LDPC code as defined above, the total number of 1's in the parity-check matrix is jN = kM. The parity-check matrix is an irregular LDPC matrix if the column or row weight is non-uniform.
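The RC constraint above is easy to test in code. The following sketch (NumPy, illustrative; the function name and sample matrices are assumptions) checks whether any two rows of H share more than one 1, which is equivalent to the Tanner graph containing a cycle of length 4:

```python
import numpy as np

def has_4_cycle(H):
    """Return True if two rows of H share more than one 1, i.e. the
    row-column (RC) constraint is violated and the Tanner graph of H
    contains a cycle of length 4."""
    H = np.asarray(H)
    overlap = H @ H.T               # overlap[a, b] = number of 1's shared by rows a and b
    np.fill_diagonal(overlap, 0)    # ignore each row's overlap with itself
    return bool((overlap > 1).any())

H_good = [[1, 1, 0, 1, 0, 0],
          [0, 1, 1, 0, 1, 0],
          [1, 0, 0, 0, 1, 1],
          [0, 0, 1, 1, 0, 1]]
H_bad = [[1, 1, 0, 0],
         [1, 1, 0, 1]]              # two rows share two 1's -> 4-cycle
print(has_4_cycle(H_good), has_4_cycle(H_bad))   # False True
```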

Hamming distance is a quality measure for a code: it is the number of bit positions in which two codewords differ. For example, the two codewords 1101110 and 1001010 differ in the 2nd and 5th positions, so the Hamming distance between them is two. The minimum Hamming distance of a code is denoted dmin; for an even-parity code dmin is 2. Since dmin is two, corrupting two bits of a codeword can produce another valid codeword; to detect more than one corrupted bit, more parity-check bits are required.
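The Hamming-distance computation from the example can be sketched directly (the function name is illustrative):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# The two codewords from the example differ in the 2nd and 5th positions.
print(hamming_distance("1101110", "1001010"))   # 2
```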

For example, consider a length-6 codeword c = (c1, c2, c3, c4, c5, c6) that must satisfy three parity-check equations, each summing (modulo 2) to 0:

    c1 ⊕ c2 ⊕ c4 = 0
    c1 ⊕ c3 ⊕ c5 = 0
    c2 ⊕ c3 ⊕ c6 = 0

H is the parity-check matrix representation of these equations:

    H = [1 1 0 1 0 0]
        [1 0 1 0 1 0]
        [0 1 1 0 0 1]

A vector c is a codeword if it satisfies the condition below:

    H c^T = 0

Here H is the parity-check matrix; it contains the parity-check equations, and the code can be defined from those equations.

The code can be written in systematic form: the first three bits are the message, and the parity bits are calculated from the message as c4 = c1 ⊕ c2, c5 = c1 ⊕ c3, c6 = c2 ⊕ c3. For instance, the message 010 produces parity bits c4 = 0 ⊕ 1 = 1, c5 = 0 ⊕ 0 = 0 and c6 = 1 ⊕ 0 = 1, and the resulting codeword is 010101. G, the generator matrix of this code, is

    G = [1 0 0 1 1 0]
        [0 1 0 1 0 1]
        [0 0 1 0 1 1]

As there are three message bits, the total number of codewords is 2^3 = 8, and these eight codewords are all distinct from each other.
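As a sketch, the eight codewords can be enumerated with NumPy, assuming the systematic (6,3) code whose parity bits are c4 = c1⊕c2, c5 = c1⊕c3 and c6 = c2⊕c3 (this particular G is an assumption consistent with the example above):

```python
import numpy as np

# Generator matrix G = [I | P] of a (6,3) code with parity bits
# c4 = c1+c2, c5 = c1+c3, c6 = c2+c3 (all additions modulo 2).
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

# All 2^3 = 8 messages, one distinct codeword each.
for m in range(8):
    msg = np.array([(m >> 2) & 1, (m >> 1) & 1, m & 1])
    print(msg, (msg @ G) % 2)
```

For the message 010 this reproduces the codeword 010101 worked out by hand above.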

## 3.2 Decoding

Codewords are transmitted through a channel, and in communication channels the received word often differs from the word that was transmitted: a bit can change from 1 to 0 or from 0 to 1. For example, suppose the codeword 001011 is transmitted and r = 101011 is received.

From the condition H c^T = 0, the syndrome of the received word is

    H r^T = (1, 1, 0)^T

Since the result is non-zero, r is not a codeword.

Assume the smallest possible number of bits is in error; the transmitted codeword is then the codeword at the closest Hamming distance to the received word. Comparing the received word with all possible codewords, the closest codeword is 001011, at Hamming distance 1. The minimum distance of this code is three, so when one bit is in error the resulting word is closer to the transmitted codeword than to any other codeword, and the error can always be corrected. In general, if the minimum distance of a code is 2e + 1, then e bit errors can always be corrected [18].

We can use this method when the number of codewords is small. When the number of codewords is large it becomes too expensive to search and compare against all of them, so other decoding methods are used instead, such as the iterative message-passing algorithms described below.

The requirement on the parity-check matrix H of an LDPC code is that H be low density, meaning that the majority of its entries are zero. H is regular if every code bit is contained in a fixed number of parity checks and every parity-check equation contains a fixed number of code bits. LDPC codes are regular if the parity-check matrix is regular; otherwise they are irregular codes. The Tanner graph is another representation used for decoding LDPC codes. A Tanner graph has two types of vertices: n bit nodes and m check nodes, with one check vertex for every parity-check equation and one bit vertex for each codeword bit. A bit vertex is connected by an edge to a check vertex if that bit appears in the corresponding parity-check equation. The length of a cycle is the number of edges in the cycle, and the girth of the graph is the shortest such length.

## 3.2.1 Iterative Decoding

Iterative decoding is illustrated here by the bit-flipping algorithm, which is based on an initial hard decision for each received bit. In iterative decoding, messages are passed along the edges of the Tanner graph of the code. Each bit node sends a message, either 0 or 1, to every check node it is connected to, and each bit node receives a message from each check node to which it is connected. The bit-flipping algorithm has three steps [18]:

Step 1. Initialization: Each bit node is assigned the bit value received from the channel, and sends messages to the check nodes to which it is connected indicating this value.

Step 2. Parity update: Using the messages from the bit nodes, each check node calculates whether or not its parity-check equation is satisfied. If all parity-check equations are satisfied the algorithm terminates; otherwise each check node sends messages to the bit nodes to which it is connected, indicating whether or not the parity-check equation is satisfied.

Step 3. Bit update: If the majority of the messages received by a bit node are "not satisfied", the bit node flips its current value; otherwise the value is retained. If the maximum number of allowed iterations is reached the algorithm terminates and a failure to converge is reported; otherwise each bit node sends new messages to the check nodes to which it is connected, indicating its value, and the algorithm returns to Step 2.

The bit-flipping algorithm is illustrated in the following example, with

    H = [1 1 0 1 0 0]
        [0 1 1 0 1 0]
        [1 0 0 0 1 1]
        [0 0 1 1 0 1]

The sent codeword is 001011 and the received word is 101011. The steps required to decode it are depicted in figure 4. In the first step the bit values are initialized: each bit node is assigned its received value, after which this information is sent to the connected check nodes. Bit nodes 1, 2 and 4 are connected to check node 1; bit nodes 2, 3 and 5 are connected to check node 2; bit nodes 1, 5 and 6 are connected to check node 3; and finally bit nodes 3, 4 and 6 are connected to check node 4. The check nodes receive the bit values sent by the bit nodes.

In step 2, each check node holds the values of all the bits that make up its parity equation; the parity is calculated from this parity-check equation, and the check is satisfied if the bits involved contain an even number of 1's. Check nodes 2 and 4 satisfy their parity-check equations, but check nodes 1 and 3 do not. The unsatisfied check nodes now send this information back to the bits connected to them. The maximum number of iterations is checked at this point in the process: the algorithm terminates if the maximum is reached, otherwise it continues. In this step each check indicates to its bits "satisfied" or "not satisfied". A bit flips its value if the majority of the messages it receives indicate "not satisfied": here bit 1 changes its value from 1 to 0, and the updated information is sent to the check nodes. Step 2 is then repeated until all the parity-check equations are satisfied. When all the parity-check equations are satisfied the algorithm terminates, and the final decoded codeword is 001011. Short cycles are one of the demerits of the Tanner graph: in their presence this method can find it very difficult to decode a codeword.

Figure 4: Process of Bit Flipping algorithm
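The three steps above can be sketched in NumPy for hard-decision decoding (a minimal illustration, not an optimized decoder; the majority rule here flips a bit when strictly more than half of its checks fail):

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping decoding of an LDPC code.

    H: (m, n) binary parity-check matrix; r: received hard bits.
    Returns (decoded word, success flag).
    """
    z = np.array(r, dtype=int) % 2
    H = np.asarray(H, dtype=int)
    for _ in range(max_iter):
        syndrome = H @ z % 2              # 1 where a parity check fails
        if not syndrome.any():
            return z, True                # all checks satisfied
        # count, for each bit, how many of its connected checks fail
        fails = syndrome @ H
        degree = H.sum(axis=0)            # number of checks per bit
        # flip bits where a strict majority of connected checks fail
        z = (z + (2 * fails > degree).astype(int)) % 2
    return z, False

# Parity-check matrix of the example: checks 1-4 involve bit nodes
# {1,2,4}, {2,3,5}, {1,5,6} and {3,4,6} respectively.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
decoded, ok = bit_flip_decode(H, [1, 0, 1, 0, 1, 1])
print(decoded, ok)   # -> [0 0 1 0 1 1] True
```

On the received word 101011, only bit 1 sees both of its checks fail, so it flips and the decoder returns the sent codeword 001011 after one iteration.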

## 3.2.2 Sum-product decoding algorithm

The difference between the sum-product algorithm and the bit-flipping algorithm is that in the sum-product algorithm the messages sent between nodes are probabilistic values expressed as log-likelihood ratios, whereas in the bit-flipping algorithm a hard decision is taken at the start, so the information about the confidence in each received bit is lost. First introduced by Gallager in 1962 [3], the sum-product algorithm is also known as belief-propagation decoding. What the receiver sees is a sequence of positive and negative real values: the sign of each value indicates the bit the sender is assumed to have transmitted, and the magnitude represents the confidence in this decision. A positive sign represents a 0 and a negative sign represents a 1. The sum-product algorithm uses this soft information, together with the nature of the channel, to recover information about the transmitted signal.

If p is the probability that a bit is 1, then the probability that it is 0 is 1 − p.

The log-likelihood ratio (LLR) for this probability is

    L = log((1 − p) / p)

The magnitude of L is the confidence of the decision, and the positive or negative sign of L indicates whether the transmitted bit is 0 or 1. Log-likelihood ratios reduce the complexity of the decoder. The task of the sum-product algorithm is to find the a posteriori probability for every bit of the received word: the probability that the codeword bit is 1, given the condition that all the parity-check equations are satisfied. The probability obtained from the parity-check (code-constraint) events is called the extrinsic probability, and the original probability of the bit, free of the code constraints, is called the intrinsic probability.

Let B_j denote the set of bits (the column locations of the 1's) in the j-th parity-check equation, and let A_i denote the set of parity-check equations (the row locations of the 1's) that check bit i. Writing the extrinsic probability in log-likelihood notation gives the check-to-bit message

    E_{j,i} = log( (1 + ∏_{i'∈B_j, i'≠i} tanh(L_{i',j}/2)) / (1 − ∏_{i'∈B_j, i'≠i} tanh(L_{i',j}/2)) )

where L_{i',j} is the message from bit node i' to check node j.

The sum-product algorithm is explained step by step as follows [12]:

Initialization: The initial message from each bit node i to every check node j it is connected to is the LLR of the received signal yi, given knowledge of the channel properties. For an AWGN channel with signal-to-noise ratio Eb/N0 this message is

    L_{i,j} = R_i = 4 y_i Eb/N0

Check-to-bit: The message E_{j,i} from check node j to bit node i is the LLR of the probability that parity-check equation j is satisfied if bit i is assumed to be a 1, given by the equation above.

Codeword test: The combined LLR for bit i is the sum of the channel LLR calculated in step 1 and the extrinsic LLRs calculated in step 2:

    L_i = Σ_{j∈A_i} E_{j,i} + R_i

Now at this step a hard decision is taken on the value of L_i. If the value of L_i comes out to be greater than zero then z_i is set to 0, and if it comes out to be less than or equal to zero then z_i is set to 1, where z_i is the decoded hard value for the received bit i. In mathematical form:

    z_i = 1 if L_i ≤ 0,   z_i = 0 if L_i > 0

If z = [z1, z2, . . . , zn] is a valid codeword after taking the hard decision on all bits, i.e. Hz^T = 0, or if the maximum number of allowed iterations has been completed, the algorithm terminates.

Bit-to-check: The message from bit node i to check node j is the LLR calculated from all information available to bit i except the message from check node j itself:

    L_{i,j} = Σ_{j'∈A_i, j'≠j} E_{j',i} + R_i

After this step, control returns to step 2, where messages are sent from the check nodes to the bit nodes. The process of the sum-product algorithm is shown in the form of a flow chart in figure 4.
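The four steps can be sketched as follows (an illustrative NumPy implementation; the H matrix, received values and Eb/N0 are assumptions chosen for demonstration, reusing the small code from the bit-flipping example):

```python
import numpy as np

def sum_product_decode(H, y, ebno=1.25, max_iter=50):
    """Sum-product (belief-propagation) decoding sketch for an AWGN channel.

    H: (m, n) binary parity-check matrix; y: received real values
    (positive -> bit 0, negative -> bit 1). Returns (bits, success).
    """
    H = np.asarray(H, dtype=int)
    m, n = H.shape
    R = 4.0 * ebno * np.asarray(y, dtype=float)   # channel LLRs R_i
    M = H * R                                     # bit-to-check messages, init
    for _ in range(max_iter):
        T = np.tanh(M / 2.0)
        E = np.zeros((m, n))
        for j in range(m):                        # check-to-bit messages E[j, i]
            idx = np.flatnonzero(H[j])
            for i in idx:
                prod = np.prod([T[j, k] for k in idx if k != i])
                prod = np.clip(prod, -0.999999, 0.999999)  # numerical guard
                E[j, i] = np.log((1 + prod) / (1 - prod))
        L = R + E.sum(axis=0)                     # combined LLR per bit
        z = (L <= 0).astype(int)                  # hard decision
        if not (H @ z % 2).any():                 # codeword test: Hz^T = 0
            return z, True
        M = H * (R + E.sum(axis=0)) - E           # exclude own check's message
    return z, False

# Toy example: codeword 001011 sent as +1/-1 symbols over AWGN; the
# first sample has been corrupted by noise toward the wrong sign.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
y = [-0.1, 1.0, -1.0, 1.0, -1.0, -1.0]
z, ok = sum_product_decode(H, y)
print(z, ok)   # -> [0 0 1 0 1 1] True
```

Unlike the bit-flipping decoder, the low confidence of the corrupted first sample (|−0.1| is small) lets the extrinsic LLRs from its two checks override it in a single iteration.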

## 4 Construction of LDPC Codes

LDPC codes can be constructed by two methods.

Random methods

Random LDPC codes are computer generated. They perform well in the waterfall region, i.e. close to the Shannon limit. As the code length increases, the minimum distance and the minimum size of their stopping sets increase. However, the encoding of randomly generated LDPC codes is complex because of their unstructured form. Some popular types of random LDPC codes are:

Gallager construction [3]

Mackay construction [5]

Computer generated

Structured methods

In terms of hardware implementation, structured codes have an advantage over random codes in both encoding and decoding: structured codes such as quasi-cyclic LDPC codes can be decoded using simple shift registers with low complexity. These codes can be constructed by algebraic methods, and a structured code is specified by a generator (G) matrix. Below are some widely used structured methods.

Construction based on superposition [20]

Array codes method (Vandermonde matrix) [17]

Balanced incomplete block designs (BIBD) [17, 19]

Finite geometry [7, 9]

Construction based on graphs [22]

Construction based on RS codes [21]

Depending upon the requirements of the system, any of these methods can be used to construct an LDPC code, and the performance of the constructed code depends upon the method of construction; different methods are used by different research groups. In this research work, short-length LDPC codes are constructed using finite geometry. Here q is a power of a prime, and by changing the value of q a code of any arbitrary length can be constructed. Two values of q have been used in this work, q = 17 and q = 37. Mostly GF(17) is used, and both the multiplicative and the additive method are used to construct the LDPC codes.

## 4.2 Construction of the Base Matrix

By definition, an LDPC code is the null space of a parity-check matrix H, and from the H matrix the generator matrix G can be constructed. We start from the base matrix W: the parity-check matrix H is derived from this mother matrix (base matrix). The first step in the construction of an LDPC code is therefore to define the base matrix; the length of the resulting LDPC code depends on it. The base matrix is turned into the parity-check matrix using different methods of construction and after different operations. The parity-check matrix is mainly responsible for the performance of the LDPC code.

A regular LDPC code is defined as the null space of a parity-check matrix H which has no cycle of length 4 and the following properties:

1. Each row has k 1's.

2. Each column has j 1's.

3. The common number of 1's in any two rows is 0 or 1.

4. The number of rows and columns in the H matrix is large compared to both j and k.

From the first two properties it is clear that the row and column weights of the parity-check matrix are fixed, at k and j respectively. From the third property it is clear that no two rows of H have more than one 1 in common. This is also called the row-column (RC) constraint; it ensures that there is no short cycle of length 4 in the corresponding Tanner graph. The fourth property means the density of 1's in the parity-check matrix is small.

This is the reason why H is called a low-density parity-check matrix, and the code it defines an LDPC code. The ratio of the number of 1's to the total number of entries of H is called the density of H. This low-density parity-check matrix is also called a sparse matrix. Since this parity-check matrix is regular, the codes defined by it are regular LDPC codes.

Besides the minimum distance, the performance of iteratively decoded LDPC codes depends on properties of the code structure. One such property is the girth of the code: the length of the shortest cycle in the Tanner graph. The cycles that most affect the performance of the code are cycles of length 4. That kind of cycle must be prevented when constructing the code, because such cycles introduce correlation between the messages exchanged in iterative decoding, and this correlation has bad consequences for the decoding.

## 4.2.1 Finite Fields

A set G on which a binary multiplication is defined is called a group if it satisfies the following four conditions:

1. G is closed under multiplication: the product of any two elements of G is again in G.

2. Multiplication is associative: for any three elements a, b, c in G, a · (b · c) = (a · b) · c.

3. There is an identity element 1 in G such that for any element a in G, a · 1 = 1 · a = a.

4. For any element a in G there is a unique element a⁻¹ in G such that the product a · a⁻¹ = a⁻¹ · a = 1.

A group is commutative if, for any two elements a and b in G, it satisfies the condition a · b = b · a.

A field is a set of distinct elements; in the case of numbers, it is a collection of numbers on which addition, subtraction, multiplication and division are defined. The commutative laws with respect to addition and multiplication, and the distributive law of multiplication over addition, must be satisfied. A field can be defined as follows: let F be a set in which addition and multiplication are possible; then F is a field if it satisfies the following conditions.

1. F is commutative with respect to addition; the additive identity is the element that, when added to any element, gives that element back. 0 is the additive identity of F.

2. F is commutative with respect to multiplication; the multiplicative identity of F is 1.

3. For any three elements a, b, c in F, multiplication is distributive over addition: a · (b + c) = a · b + a · c.

−a is the additive inverse of a, and a⁻¹ is the multiplicative inverse of a if a is a nonzero element. In a field, subtraction of one element from another is equivalent to multiplying the second element by −1 and then adding the two: a − b = a + (−b). In the same way, dividing a by b is equivalent to multiplying a by the multiplicative inverse b⁻¹, provided b is a nonzero element.

A field has the following properties:

1. For any element a in the field, a · 0 = 0 · a = 0.

2. For any two nonzero elements a and b of a field, the product a · b is nonzero.

3. For any two elements, a · b = 0 implies that a = 0 or b = 0.

4. For a nonzero element a in a field, a · b = a · c implies b = c (cancellation).

5. If a and b are any two elements, then −(a · b) = (−a) · b = a · (−b).

A Galois field, also called a finite field (named after Évariste Galois), is a field with only finitely many elements. The number of elements is the order of the field. For a prime number p and positive integer m, the size of the finite field is q = p^m.

Let p be a prime number and m a positive integer; then there is a Galois field with q = p^m elements, denoted here by GF(q). Two fields with the same number of elements are called isomorphic fields. The prime p is the characteristic of the Galois field GF(p^m). Codes can be made using symbols from any Galois field GF(q); however, in digital transmission, symbols from the binary field GF(2) are used.

Modulo-2 addition and multiplication are used in binary arithmetic; Table 1 and Table 2 define modulo-2 addition and multiplication.

Table 1: Modulo-2 addition

| + | 0 | 1 |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 0 |

Table 2: Modulo-2 multiplication

| × | 0 | 1 |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |

A function f of one variable x is called a polynomial if it can be written as

    f(x) = a_0 + a_1 x + a_2 x^2 + … + a_n x^n

where the coefficients a_i are constants. The largest power of x with a nonzero coefficient is called the degree of the polynomial. Over GF(2) there are two polynomials of degree 1, namely x and x + 1; there are four polynomials of degree 2; and in general there are 2^n polynomials of degree n over GF(2).

A polynomial over GF(2) is divisible by x + 1 if its number of terms is even. If a polynomial is not divisible by any polynomial of lower degree (other than 1), it is irreducible over the field GF(2). For example, x^2 + 1 is not irreducible, since x + 1 is a factor of it (x^2 + 1 = (x + 1)^2 over GF(2)), but the polynomial x^2 + x + 1 is irreducible. A primitive polynomial is an irreducible polynomial of degree m for which the least positive integer n such that the polynomial divides x^n + 1 is n = 2^m − 1. Primitive polynomials are not easy to find.

The table below lists some primitive polynomials.

Table 3: List of primitive polynomials
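A brute-force irreducibility test over GF(2) can be sketched by encoding polynomials as integers, with bit k holding the coefficient of x^k (illustrative only: trial division is exponential in the degree, so this is practical only for small polynomials):

```python
def gf2_mod(a, b):
    """Remainder of polynomial a modulo polynomial b over GF(2).
    Polynomials are encoded as integers: bit k is the coefficient of x^k."""
    while b and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def irreducible(p):
    """True if p has no divisor of degree between 1 and deg(p) - 1."""
    return all(gf2_mod(p, d) != 0 for d in range(2, 1 << (p.bit_length() - 1)))

# x^2 + x + 1 (0b111) is irreducible; x^2 + 1 (0b101) = (x + 1)^2 is not.
print(irreducible(0b111), irreducible(0b101))   # True False
```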

## 4.2.2 Additive Groups of Prime Fields

This section presents the location vectors used for binary QC-LDPC codes with respect to the additive group of a prime field.

Let GF(q) be a prime field, where q is a prime number; its elements are the integers {0, 1, …, q − 1}, and it is a prime field because it satisfies modulo-q addition and multiplication. Every element of this set of integers has a location vector with respect to the additive group of the prime field. For an element e in GF(q), the location vector z(e) is the q-tuple whose components correspond to the field elements 0, 1, …, q − 1: the component in the position of e is 1, and all the other components are zero, so the location vector contains a single 1. Here the notation indicates that the location vector is taken with respect to the additive group.

The location vector of e + 1 in GF(q) is the right cyclic shift of the location vector of e. Stacking the location vectors of e, e + 1, …, e + q − 1 therefore gives a q × q matrix that is a circulant permutation matrix over GF(2).

In the field GF(q), multiplication is carried out modulo q. Starting from a row of field elements, adding each element of GF(q) in turn to the row and replacing the entries accordingly vertically expands the row into a matrix over GF(q); modulo-q addition and multiplication are used for every entry. It is worth mentioning that the entries in each column are distinct. In this way we obtain the base matrix W over GF(q).
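The location vectors and their circulant stacking can be sketched as follows (q = 7 is an arbitrary small prime chosen only for illustration; the function names are assumptions):

```python
import numpy as np

q = 7  # a small prime field GF(7), chosen only for illustration

def location_vector(e, q):
    """Additive-group location vector of e: a q-tuple with a single 1
    in the position corresponding to the field element e."""
    v = np.zeros(q, dtype=int)
    v[e % q] = 1
    return v

def additive_dispersion(e, q):
    """Stack the location vectors of e, e+1, ..., e+q-1: each row is the
    right cyclic shift of the one above, a circulant permutation matrix."""
    return np.array([location_vector(e + k, q) for k in range(q)])

A = additive_dispersion(3, q)
print(A.sum(axis=0), A.sum(axis=1))   # every row and column holds exactly one 1
```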

## 4.2.3 Multiplicative Groups of Prime Fields

In section 4.2.1 prime fields were defined; let q be a power of a prime and consider the finite field GF(q). Take a primitive element α of this field. Under the multiplication operation, the q − 1 nonzero elements of GF(q), namely α^0 = 1, α, α^2, …, α^(q−2), form the multiplicative group of GF(q). For a nonzero element α^i, form the (q − 1)-tuple whose components correspond to the nonzero elements α^0, α^1, …, α^(q−2) of GF(q): the component in the position of α^i is equal to 1, and all the other components are 0. This tuple has a single 1 and is called the location vector of α^i with respect to the multiplicative group of GF(q).

For any nonzero element δ, the location vector of α·δ in GF(q) is the right cyclic shift of the location vector of δ. Stacking the location vectors of δ, αδ, α²δ, … therefore forms a (q − 1) × (q − 1) matrix over GF(2). This matrix is a circulant permutation matrix: as mentioned above, each row is the right cyclic shift of the row above it, and the first row is the right cyclic shift of the last row. The matrix so formed is the (q − 1)-fold dispersion of the field element δ. The dispersion of matrix entries and the location-vector representation in finite fields are the backbone of our constructed code.

Our construction of LDPC codes begins with a finite field GF(q), where q is a power of a prime; q is taken to be 17 in this report for simplicity. Let α be a primitive element of the finite field GF(q). A matrix W is then formed over GF(q) with the following properties:

1. Any two different rows of W differ in all but at most one position; that is, compared component-wise, there is at most one position where both rows contain the same symbol.

2. When a row of W is multiplied by two different nonzero elements of GF(q), the two resulting tuples likewise agree in at most one position.

The first property implies that any two rows of W agree in at most one position, and the second property implies the same for the α-multiplied rows. These are called the multiplied row constraints, and they ensure that the dispersed matrix satisfies the RC constraint.

## 4.3 Array Dispersion of the Base Matrix

In section 4.2 we learnt how to construct the base matrix by additive and multiplicative methods.

In this section the parity-check matrix is built from the base matrix. LDPC codes depend on the parity-check matrix, so special care has to be taken in its construction; different techniques are used to construct the parity-check matrix H from the base matrix.

Array dispersion of the base matrix is the method used in this research work for the construction of the parity-check matrix. In this method the base matrix is spread widely to get the maximum output. Array dispersion for the multiplicative group is performed as shown in the figure below.

(Figure: the base matrix is split into two halves; each half is then split into an upper (u) part and a lower (l) part.)

The base matrix is divided into two halves, and for further operation the upper half of the base matrix is considered. This rectangular matrix is used to reduce the code rate to one half. It is then divided into two square matrices, and each of these is in turn divided into a lower part and an upper part. The lower matrix has all the entries above the diagonal equal to zero, with the entries on and below the diagonal unchanged. The upper matrix has all the entries on and below the diagonal equal to zero, with the entries above the diagonal unchanged. This division of the matrices is called dispersion.

This dispersion produces four matrices, two lower and two upper. They are array-dispersed matrices: if a lower part and its corresponding upper part are added, the result is the original matrix from which they were constructed.
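The upper/lower split described above can be sketched with NumPy's triangular helpers (the 4×4 matrix here is a toy stand-in for one square half of the base matrix):

```python
import numpy as np

W = np.arange(1, 17).reshape(4, 4)   # toy stand-in for one square half

# "Dispersion": split into an upper part (strictly above the diagonal)
# and a lower part (diagonal and below); everything else is zeroed.
W_u = np.triu(W, k=1)
W_l = np.tril(W, k=0)

# Adding the two array-dispersed parts recovers the original matrix.
print((W_u + W_l == W).all())   # True
```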

There are now four array-dispersed matrices of equal order, and they are arranged in different combinations from the simulation point of view. There are some simple combinations followed by some complex combinations; the four array-dispersed matrices are tested in those combinations, and the combination showing the best results is considered for further implementation and simulation. A simple arrangement is considered first.

In the first arrangement the four matrices are placed so that the zero entries of the resultant matrix occur in bursts; a code with this structure is used for a channel where errors arrive as bursts. There is a special pattern in this arrangement: a burst of zeros is followed by a run of nonzero entries, and the numbers of zero and nonzero entries in the matrix are the same. The code constructed using this arrangement is good for a channel in which errors come as a burst and then for some time there is no error.

This arrangement does not, however, fulfil the requirement of a sparse matrix, since the numbers of zero and nonzero entries are equal. So a second arrangement is used, in which a null matrix is included. The resulting matrix is sparser than the first one, as it has longer runs of zeros. The following figures show that the code constructed with the second arrangement is better than the code constructed from the first; the comparison is made on the length of the runs of zeros in each row.

(Figures: zero/nonzero patterns of the two array-dispersion arrangements; in each row, runs of nonzero entries follow runs of zero entries.)

## 4.4 Description of Masking

Masking is another method used in the construction of LDPC codes. By masking, the density of nonzero entries in the parity-check matrix can be reduced: some of the nonzero entries of the matrix are replaced with zeros. Masking is a regular method; the entries to be masked in the desired matrix are chosen in a sequence.

Suppose a matrix has to be masked and certain rows are selected for masking. The following matrix illustrates the masking phenomenon; the coloured entries are linked with the rows that are masked.

Masking of matrix

Masking starts from some row, say row i, and some column j. The next step is row i + 1 and column j + 1, and in this way successive rows and columns are masked; when the last row is reached, masking continues from the next column and the first row.

There are two ways of masking: the specified row numbers may be deleted, or they may be preserved, and this must be noted when specifying a row number. Two special functions were made for this purpose. In the first, the specified rows are conserved and all the other rows are replaced with zeros; the second works the opposite way, masking the specified rows and conserving the rest of the elements.

A number of factors of a code change with masking, including the following:

Rate of the code

The length of short cycles in the Tanner graph

Sparsity

Unnecessary rows or columns

For masking, the matrix is separated into two square matrices, and these two sub-matrices are masked. After masking, the two sub-matrices are combined again; the resultant matrix is sparser, as depicted in the following figure.
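Masking can be sketched as an element-wise product of the base matrix with a binary mask matrix (the diagonal mask pattern below is hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(1, 17, size=(6, 6))    # toy non-binary base matrix, no zeros

# Build a binary mask Z that zeroes one diagonal of entries, one per row.
Z = np.ones_like(W)
n = W.shape[1]
for i in range(W.shape[0]):
    Z[i, (i + 1) % n] = 0               # hypothetical masking pattern

masked = Z * W                          # masked base matrix
print((masked == 0).sum() > (W == 0).sum())   # True: masking only adds zeros
```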

## Figure 9: Plots showing the effect of masking on nonzero elements (a) before masking (b) after masking


## 4.5 Dispersion

In section 4.4 masking was carried out, but at that stage the matrix is not yet in binary form, so it must be converted to binary. The method of dispersion is used for this conversion: when converting to binary, a circulant permutation matrix (CPM) replaces every element of the masked matrix.

Every element of the masked matrix is replaced by a square CPM, with a separate CPM for every entry; the position of the 1 in the first row of the CPM is determined by the element being dispersed. If there is a zero in the matrix, it is replaced by a null matrix of the same order. The resulting matrix meets the RC constraint if the matrix from which it is derived meets the corresponding row constraints. The dimensions of the final parity-check matrix grow accordingly, as every element is replaced by a matrix.

## 4.6 Encoding of LDPC

As discussed earlier, the parity-check matrix H specifies an LDPC code. For a QC-LDPC code, the parity-check matrix is an array of sparse circulants. A circulant is a square matrix in which each row is the preceding row rotated one element to the right, i.e. each row is the right cyclic shift of the row above it, and the first row is the right cyclic shift of the last row. For such a matrix, every column is likewise the downward cyclic shift of the column on its left, and the first column is the cyclic shift of the last column.

In a circulant the row weight and the column weight are the same. If this weight is 1, the circulant is a permutation matrix. A circulant is completely determined by its first row (or column), which is called the generator of the circulant.

For a circulant over GF(2), the rows of the matrix are linearly independent if the matrix is full rank. If the rank is smaller than the order of the matrix, then some rows (or columns) are linearly dependent and the remaining ones are independent. This follows from the circulant structure of the matrix.

Consider the parity check matrix H of a QC-LDPC code: H is an array of circulants over GF(2), arranged in rows and columns of circulant blocks.

This matrix has two structural properties:

1. The size of each circulant is large compared to its weight, so each circulant is sparse.
2. No two rows (and no two columns) of the matrix have more than one position in common where both contain a 1.

The first property implies that, since each circulant is sparse, the parity check matrix is sparse as well. The second property, called the row-column (RC) constraint, implies that the girth of the corresponding Tanner graph is at least 6, as there are no cycles of length 4. If all circulants in H have the same weight, then the row weight and column weight of H are constant.

The code generated by such a matrix is a regular code. If the row and column weights of H are not constant, then H is an irregular parity check matrix and the code it generates is an irregular code.
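The regular/irregular distinction can be checked directly from the row and column weights. A minimal sketch in plain Python, using two small hypothetical matrices (not from the text):

```python
def is_regular(H):
    """A parity check matrix is regular iff every row has the same
    weight and every column has the same weight."""
    row_weights = {sum(row) for row in H}
    col_weights = {sum(col) for col in zip(*H)}
    return len(row_weights) == 1 and len(col_weights) == 1

# Hypothetical examples: H_reg has constant row and column weight 2;
# H_irr breaks the pattern in its second row.
H_reg = [[1, 1, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1],
         [1, 0, 0, 1]]
H_irr = [[1, 1, 0, 0],
         [0, 1, 1, 1],
         [0, 0, 1, 1],
         [1, 0, 0, 1]]

print(is_regular(H_reg), is_regular(H_irr))  # True False
```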

## 4.6.1 Generator Matrix in Circulant Form

As mentioned above, LDPC codes are defined by a parity check matrix, and from this parity check matrix a generator matrix can be constructed. Two ways of constructing the generator matrix from the parity check matrix are given in this section.

## First case

Let the rank of the parity check matrix H be equal to the number of its rows, i.e. H is full rank, and suppose there is a square sub-array of circulants of H with the same rank. Then both H and this sub-array are full-rank matrices. The sub-array is denoted by D and has the form given below.

The first columns are assumed to correspond to the parity bits. For this code the generator matrix G has the form given below:

Here O denotes a null matrix and I an identity matrix, and each remaining element of the array is a circulant. The matrix G consists of two parts. The first part is an array of circulants and forms the parity part of G. The second part is an identity matrix. If the information bits and parity bits occupy separate positions in a generator matrix, that generator matrix is called systematic; the matrix above fulfils this definition, so this generator matrix is systematic.

G can only be a generator matrix of the code if GHᵀ = O, where O is a null matrix.
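This condition can be checked numerically. The sketch below uses the well-known (7,4) Hamming code in systematic form as a stand-in pair G, H (a standard textbook construction, not the matrices of this text) and verifies GHᵀ = O over GF(2):

```python
def mat_mul_gf2(A, B):
    """Matrix product with modulo-2 (GF(2)) arithmetic."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

# (7,4) Hamming code in systematic form: G = [P | I], H = [I | P^T].
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

Z = mat_mul_gf2(G, transpose(H))
all_zero = all(x == 0 for row in Z for x in row)
print(all_zero)  # True: every codeword of G satisfies every check of H
```

The same check applies unchanged to the circulant-structured G and H of this section, only with larger matrices.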

Suppose the generator of each circulant in the parity part of G is known. If these generators are known, then all the circulants of the generator matrix can be formed and the construction of G is possible. Also, if u is the tuple that has a 1 at the first position and zeros elsewhere, the contents of the top row of each row block of G are:

At the identity position comes u, the tuple defined above; therefore, if G is the generator matrix, the condition GHᵀ = O is satisfied. Let D be the array formed by the last sections of the columns of circulants of the parity check matrix H.

From the equality GHᵀ = O the following equations are obtained.

Since D is a square, full-rank matrix, it is non-singular and its inverse D⁻¹ exists; with it, the above equation takes the form:

Solving the above equations yields the generators of the circulants; the circulants of the generator matrix are obtained from these generators, and G can then be assembled from them [23].

## Second case

The first case was relatively simple: a sub-array D existed whose rank was equal to the rank of H, and both matrices were full rank. In the present case the rank of the parity check matrix is less than the number of its rows, so no such sub-array D exists. First, the least number of columns of circulants of H whose sub-array has rank equal to the rank of H must be found; those columns of circulants form a sub-array with that rank. A new array is then formed by permuting the columns of circulants of H; the array formed is given below.

The desired generator matrix has the form given below:

It consists of two sub-matrices. The first is an array in which O is a null matrix and I an identity matrix; each remaining entry is a circulant, obtained in the same manner as in the first case, so this part is in systematic form.

The second sub-matrix is Q, a matrix with linearly independent rows. If G is to be a generator matrix, then Q has to satisfy the condition QHᵀ = O, where O is a zero matrix. The procedure for obtaining Q is as follows: counting the linearly dependent columns in H, Q has the following form.

Here the matrices are over the Galois field GF(2), one of them a zero matrix. Each element of Q is a partial circulant, meaning that each row is obtained by cyclically shifting the row above it one position to the right, and the first row is the right cyclic shift of the last row. Since H is in circulant form, this sub-matrix is circulant as well. Let the top row of each row block of Q be chosen such that its first part is zero and the remaining part corresponds to the linearly dependent columns of H; the following equations are then derived [23].

The top row is found by solving this equation. It is then separated into sections, among which there may be repeated components. Using each section as a top row and cyclically shifting it one position to the right, the partial circulants are obtained; substituting these into the sub-matrix forms Q. With both sub-matrices available, G can be formed by combining them, in such a manner that the last row of the first sub-matrix is followed by the top row of Q.