# Low Density Parity Check Codes


The aim of any communications system is to transmit information from a source to a receiver through air, cable, optical fibre or any other means possible. The information needs to be encoded and modulated at the source, and then demodulated and decoded at the receiver's end. In addition, noise is added from different sources while the information is being transmitted through any of the mentioned media. Noise can interfere with and ultimately distort the information being sent. Shannon stated through his work in information theory that reliable data transmission can be achieved if the channel capacity is larger than the data rate [1]. The aim of many researchers over the years has been to approach the limit stated by Shannon using different methods of error correction and error detection. Error detection and error correction are achieved by adding redundant bits to the original message. Although the addition of the redundant bits means a higher cost, it is still cheaper and faster than retransmitting the data when an error has been detected. Another method, used in special cases, is to use a stronger signal for the transmission of data in order to overcome the noise. The negative aspect of this method is that it increases the power consumption of the system in general [2]. This method is mainly used in satellite and deep space communications.

Figure 1.1 shows a schematic of a general communications block diagram. The data is first encoded at the source using a specific algorithm-based encoder (this is the stage at which redundant bits are added to the original stream of bits); it is then modulated and sent through a channel. This is where noise is added to the signal, which will most probably affect it by changing one or more bits. Once the data reaches the receiver side (the signal is now made up of three components: the original set of bits, the redundant bits and noise) it is demodulated and decoded using an algorithm related to the encoding algorithm.


Figure 1.1 - Basic Schematic of a Communications System

Over the years, different error correction and detection schemes have been developed in order to encode transmitted data and decode received data. They all do the job to a certain extent but differ in complexity and performance. Some of these methods are convolutional [3], Bose-Chaudhuri-Hocquenghem (BCH) [2], Reed-Solomon [2], turbo [4] and low-density parity-check codes [5]. All these codes are also discussed in detail in [6].

## 1.1.2-Linear Block Codes

All error correcting codes, without exception, add extra (redundant) bits to the message. LDPC codes are a class of linear block codes. The difference between block codes and other error correcting codes is that in block codes the redundant bits are added either at the beginning or at the end of the message. This is where block codes got their name: the result of the method mentioned above is a 'block' of message bits transmitted together with a 'block' of redundant bits. Linear block codes are a special case of block codes in which every codeword bit is a linear combination of message bits. In information theory and coding terms, the string of bits produced by the encoder is referred to as a block of length N. Before the message is passed through the encoder it is a block of length K bits; in other words, N = K + redundant bits. The rate of any code is given by the ratio of the message bits to the total codeword bits, R = K/N. It should be clear to the reader that there are 2^K possible messages, so a lookup encoder would have to store all 2^K entries to produce the associated N-bit codewords. This is only feasible for a small K. When K is large, a generator matrix is used to produce the codeword.

Figure - Codeword structure: message bits followed by redundant bits

A code is linear only if the result of modulo-2 addition of two codewords is another codeword. This property of the code is used to design the generator matrix G. In simple terms, the generator matrix is made up of N columns of size K. By using the generator matrix, the encoder's space complexity is reduced to K·N bits instead of 2^K·N. Very simply, the codeword is produced as the product of the message bits and the generator matrix, c = uG, G being the generator matrix and u being the message bits, or input vector of bits.

The received codeword y can then be checked simply by computing s = yH^T, which is called the syndrome; an all-zeros syndrome means y is a valid codeword. The relationship between the generator matrix, the parity-check matrix and the input vector u is uGH^T = 0.
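As a sketch of the encoding and syndrome-checking operations just described, the short program below uses the (7,4) Hamming code as an illustrative assumption (the essay does not fix a particular G and H at this point); its G = [I | P] and H = [P^T | I] satisfy GH^T = 0.

```python
# Illustrative (7,4) Hamming code matrices (an assumption, not from the essay).
# Generator matrix G = [I_4 | P]:
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
# Matching parity-check matrix H = [P^T | I_3], so that G H^T = 0:
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(u):
    """Codeword c = uG (mod 2)."""
    cols = list(zip(*G))
    return [sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in cols]

def syndrome(y):
    """Syndrome s = yH^T (mod 2); all zeros means y is a codeword."""
    return [sum(hi * yi for hi, yi in zip(row, y)) % 2 for row in H]

c = encode([1, 0, 1, 1])
assert syndrome(c) == [0, 0, 0]      # valid codeword
c[2] ^= 1                            # flip one bit: noise on the channel
assert syndrome(c) != [0, 0, 0]      # the syndrome detects the error
```

A flipped bit makes the syndrome non-zero, which is exactly how the decoder later in the essay decides whether error correction is needed.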

## 1.2-History of LDPC Codes

Like turbo codes, LDPC codes perform very close to the Shannon limit. Although LDPC codes were discovered first, by Gallager [5] in 1963, they were largely ignored because of the computational complexity their implementation required. Turbo codes were developed by Berrou et al. in [4] using the method of iteratively estimating the probabilities of the received bits, and once their success became clear this led to the rediscovery of LDPC codes by Neal and MacKay [7].

LDPC codes also use the method mentioned above (iteratively estimating the probabilities), but in a slightly different manner. MacKay and Neal [7] produced a method of continually updating bit probabilities based on the belief propagation algorithm. This was later proven to be successful by Richardson et al. in [8]. When LDPC codes are compared to turbo codes they have similar decoding performance [9]; nevertheless, LDPC codes have several advantages over turbo codes: they require less computational effort and their decoding can be parallelised.

## 1.3 Representation of LDPC Codes

There are two ways by which LDPC codes can be described. One is matrix representation, which can be used to represent all linear codes. The second way is a graphical representation called the Tanner graph. An example of a low density parity check matrix is given in Figure 2. The dimensions of any matrix in general are given by m x n, where 'm' is the number of rows and 'n' is the number of columns.

Figure 2 - Low Density Parity Check Matrix (columns: coded bits; rows: parity checks)

The matrix H in figure 2 is a regular parity check matrix, since the number of ones in every row and in every column is the same. The weights of the rows and columns are given by wr and wc respectively, where in this example wr = 4 and wc = 2. Further, to be called a low density matrix it should satisfy two conditions: the first is that wc << m and the second that wr << n [10]. Thus the name Low-Density Parity-Check codes.

Another way of representing LDPC codes is the Tanner graph, introduced by Tanner in [11]. A Tanner graph is a bipartite graph, which means that its vertices are divided into two disjoint sets of nodes; edges can only connect two nodes from different sets, never two nodes of the same set. The two types of node used in Tanner graphs are variable nodes (v-nodes) and check nodes (c-nodes). Figure 3 shows the Tanner graph corresponding to the parity check matrix above.

Figure 3 - Tanner Graph (check nodes Z0-Z3, variable nodes C0-C7)

Z0 = c1 ⊕ c3 ⊕ c4 ⊕ c7

Z1 = c0 ⊕ c1 ⊕ c2 ⊕ c5

Z2 = c2 ⊕ c5 ⊕ c6 ⊕ c7

Z3 = c0 ⊕ c3 ⊕ c4 ⊕ c6

Equations Z0, Z1, Z2 and Z3 are constructed from the parity check matrix above: each row of the parity check matrix H corresponds to one parity-check equation. The bold-lined path is an example of a short cycle; it is one of the cycles of length 4, the girth [5] of this graph. Short cycles should be avoided since they degrade the decoding power of LDPC codes.
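The parity-check equations Z0 to Z3 above fully determine the rows of the example matrix H. As a sketch, the snippet below rebuilds H from those equations and verifies that it is regular with the stated weights wr = 4 and wc = 2:

```python
# Rows of H read off from the parity-check equations Z0..Z3 above:
# each list holds the variable-node indices taking part in that check.
checks = [
    [1, 3, 4, 7],  # Z0 = c1 + c3 + c4 + c7
    [0, 1, 2, 5],  # Z1 = c0 + c1 + c2 + c5
    [2, 5, 6, 7],  # Z2 = c2 + c5 + c6 + c7
    [0, 3, 4, 6],  # Z3 = c0 + c3 + c4 + c6
]
n = 8
H = [[1 if j in row else 0 for j in range(n)] for row in checks]

# The matrix is regular: every row weight is 4, every column weight is 2.
row_weights = [sum(row) for row in H]
col_weights = [sum(H[i][j] for i in range(len(H))) for j in range(n)]
assert row_weights == [4] * 4   # wr = 4
assert col_weights == [2] * n   # wc = 2

def is_codeword(c):
    """True when c satisfies all four parity checks (mod-2 sums are zero)."""
    return all(sum(c[j] for j in row) % 2 == 0 for row in checks)

assert is_codeword([0] * 8)     # the all-zeros word is always a codeword
```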

## 1.4-Thesis overview

## Chapter 2- Literature Review

## 2.1- Generator Matrix

If the parity check matrix of an LDPC code is given or can be designed, the generator matrix of that code can be constructed. The parity check matrix should be in systematic form; that is, a parity check matrix of dimensions m × n, where m = n − k, should be in the form H = [P^T | I_m]. If this is the case, then the generator matrix is G = [I_k | P]. Hence the first step is to convert the parity check matrix into an identity matrix (usually on the right) and a matrix P^T on the left.


Figure 4 - Irregular Parity Check Matrix H1

Now let's take an irregular parity check matrix as an example, H1 in figure 4. We first have to reconstruct H1 so that it is in row-echelon form [6]. Then H is put into reduced row-echelon form using elementary row operations [7].

Step1- The first and second rows have their ones in the right position. The first row is then added to the fourth row so that the first '1' in the fourth row is in the right position.

Step2- The third row is replaced with the fifth row so that there is a '1' placed in the diagonal which is in the third column.

Step3-Fourth row is added to the fifth row

We continue so that every column has only a single '1' and zeros everywhere else.

Step4- First column is already correct, second column is corrected by adding the second row to the first

Step5- The third row is added to the second row to fix the third column.

Step6-Fourth row is added to the first row so that the fourth column has its '1' in the right position

Step7-The fifth row is added to the first, second and fourth rows to give H11, or more precisely the systematic matrix from which the generator matrix is obtained.

Figure - Generator Matrix
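The row operations described in steps 1-7 amount to Gaussian elimination over GF(2): row swaps plus modulo-2 row additions. A minimal sketch is given below; the 3 × 6 matrix is an illustrative assumption, not the H1 of figure 4 (which is not reproduced in the text).

```python
def to_systematic(H):
    """Row-reduce H (m x n) over GF(2) so its rightmost m columns become I_m.
    Assumes the rightmost m x m block has full rank; a complete implementation
    would also permute columns when it does not."""
    H = [row[:] for row in H]                 # work on a copy
    m, n = len(H), len(H[0])
    for i in range(m):
        col = n - m + i
        # find a pivot row holding a '1' in this column
        pivot = next(r for r in range(i, m) if H[r][col] == 1)
        H[i], H[pivot] = H[pivot], H[i]       # row swap
        for r in range(m):                    # clear the rest of the column
            if r != i and H[r][col] == 1:
                H[r] = [(a + b) % 2 for a, b in zip(H[r], H[i])]
    return H

# Illustrative 3 x 6 parity-check matrix (an assumption for the sketch).
H = [
    [1, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 1],
]
Hs = to_systematic(H)
m, n = len(Hs), len(Hs[0])
# after elimination the right m x m block is the identity
assert all(Hs[i][n - m + j] == (1 if i == j else 0)
           for i in range(m) for j in range(m))
```

Because elementary row operations preserve the row space of H, the systematic matrix describes the same code as the original.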

## 2.2 Construction of LDPC Codes

## 2.2.1-Gallager's Construction

Gallager proposed that the matrix H should have a fixed column weight 'j'. H is then divided into j sub-matrices of equal size, each containing only a single '1' in each column. The first sub-matrix is constructed in a predetermined manner, whilst the remaining sub-matrices are simply column permutations of the first.

Figure -Low density matrix (N=20,j=3,k=4)[5]

## 2.2.2- Mackay's Construction

MacKay in [12] made it clear that his primary target was to keep the girth of the Tanner graph as large as possible. The first method keeps a constant column weight and limits the overlap between any two columns to at most 1. The second method is the same as the first, except that m/2 of the columns have weight 2; these columns are formed so that together they make up identity matrices of size m/2 x m/2. Another method is to delete carefully chosen columns of the above matrices so that the bipartite graph does not have a girth less than a certain value X. [13]

## 2.3-Cyclic Codes

Cyclic codes are a sub-class of linear block codes. They gained a lot of attention because the encoding of cyclic codes is even less complex than that of general block codes, and different decoding algorithms are easier to create. Cyclic codes are also very good at correcting errors. In addition to these advantages, they retain all the advantages of linear block codes. The definition of cyclic codes states that if a codeword belongs to a linear code C, then every cyclic shift of that codeword is another codeword of the same linear code. This means that shifting any codeword cyclically either to the left or to the right produces another codeword; in other words, all codewords can be produced from one codeword by cyclic shifts and additions. Figure 5 gives an example of a shift register circuit for a (7,3) cyclic code. The message is 101, so m(x) = x^2 + 1, and the bits of the message are passed through one by one in descending order of degree. The shift register is clocked once for each coefficient of m(x); after the last coefficient has been passed through, the codeword is produced. [6]

Figure -Shift register Circuit of (7, 3) codeword
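The cyclic-shift property can be checked directly in software. As a sketch, the (7,4) cyclic Hamming code with generator polynomial g(x) = x^3 + x + 1 is used below as an assumed stand-in, since the generator polynomial of the (7,3) code in the figure is not given in the text:

```python
# Generator polynomial g(x) = 1 + x + x^3, coefficients low degree first.
# (An assumed (7,4) cyclic Hamming code, not the (7,3) code of the figure.)
g = [1, 1, 0, 1]

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def encode(msg):
    """Non-systematic cyclic encoding: c(x) = m(x) g(x), padded to length 7."""
    c = poly_mul_gf2(msg, g)
    return c + [0] * (7 - len(c))

# all 16 codewords of the code
codewords = {tuple(encode([b0, b1, b2, b3]))
             for b0 in (0, 1) for b1 in (0, 1)
             for b2 in (0, 1) for b3 in (0, 1)}

# the cyclic shift of every codeword is again a codeword
for c in codewords:
    shifted = (c[-1],) + c[:-1]
    assert shifted in codewords
```

The closure holds because g(x) divides x^7 + 1, so multiplying a codeword polynomial by x modulo x^7 + 1 (a cyclic shift) yields another multiple of g(x).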

## 2.4 Circulant Matrix

A circulant matrix, put very simply, is a square matrix whose first row is a vector c and whose every other row is the cyclic shift of the row above it; the first row is in turn the cyclic shift of the last row.
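This definition translates into a one-line construction; a minimal sketch:

```python
def circulant(c):
    """Square matrix whose i-th row is the first row c cyclically
    shifted i positions to the right."""
    n = len(c)
    return [[c[(j - i) % n] for j in range(n)] for i in range(n)]

C = circulant([1, 0, 1, 0])
assert C == [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
# the first row is the cyclic shift of the last row
assert C[0] == [C[-1][-1]] + C[-1][:-1]
```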

## 2.5 Quasi-Cyclic Codes

Quasi-cyclic codes are also linear block codes, but they are only partially rather than fully cyclic. With quasi-cyclic codes, cyclically shifting a codeword by n0 positions (or a multiple of n0) to the left produces another codeword, where n0 is said to be the shifting length [14].

Different LDPC codes have different performance and implementation complexity. The patterns in the check and variable node interconnections affect the encoding and decoding complexity. In order to define a certain quasi-cyclic code, the interconnections between rows and columns need to be described by several or many patterns. These patterns can be viewed as identity sub-matrices, and so knowing the location and structure of one of the identity sub-matrices, the rest can be deduced. Hence, the more patterns used to define the code, the more data the decoder has to store and manage. The rows and columns of a quasi-cyclic code have cyclic connections, and because of this property they can be encoded efficiently using a shift register circuit similar to the one mentioned above [15]. In addition, the decoder needs only a simple method for producing addresses, so less storage is needed, and address generation can be done offline since the memory access is localised [16]. In [17] it was proved that quasi-cyclic LDPC codes described in such a way, even with constrained row and column interconnections, still achieve performance close to the capacity limit.

## 2.5.1-Quasi-Cyclic LDPC Codes

Quasi-cyclic LDPC codes can be constructed from cyclically shifted identity sub-matrices. These codes have a general block structure whose sub-matrices are circulant matrices, designed for example by finite geometry methods [18]. Another method is the one shown in figure 6, where I is a shifted identity sub-matrix of dimensions l × l and O is an all-zeros sub-matrix of the same dimensions. Each I is cyclically shifted, either to the left or to the right, by some amount less than l, l being the maximum amount I can be shifted; there can therefore only be l distinct (shifted) identity matrices before some are repeated. The particular example in figure 6 is for a code of rate 1/4. In (a) the code is made up entirely of shifted identity sub-matrices, whereas in (b) the code is made up of identity sub-matrices and all-zero sub-matrices. In (a) the row weight (4) is equal to the number of sub-matrix columns and the column weight (3) is equal to the number of sub-matrix rows; these types of code have a proven maximum girth of twelve [19]. In (b) the number of identity sub-matrices in every row and every column is less than the row and column weights of (a); this is usually the structure of an irregular code, and it has been shown to be more effective than the one in figure 6(a) [20]. In general, a code with more sub-matrix positions than its row and column weights produces better results than a code in which the number of sub-matrices equals the row and column weights.

Figure 6 - (a) QC arrangement of shifted identity sub-matrices. (b) QC arrangement of identity sub-matrices and all-zero sub-matrices

By now it should be clear to the reader that a quasi-cyclic code structure depends on two main things: the positions of the sub-matrices and their values. Random shifting of the sub-matrices affects the performance of the codes, and some structural methods have been developed to overcome this. A few examples are finite geometries [18], combinatorial designs [21], circulant permutation matrices [19], the Kirkman triple system [22] and algebraic construction [23].

## 2.6 Shift Register Adder Accumulator

The shift register adder accumulator (SRAA) architecture is used because it reduces both the encoding complexity and the complexity of the hardware operations [24]. As mentioned earlier, the operations used on the matrices are addition and multiplication, implemented with XOR gates and AND gates respectively. The SRAA can be viewed as a more complex version of the shift register circuit shown previously in figure 5. This section will attempt to give the reader a theoretical explanation of how a quasi-cyclic LDPC code is passed through the SRAA along with the message (the useful information to be transmitted), and how the final codeword is produced.

Firstly, the quasi-cyclic LDPC code's generator matrix is divided into row blocks and column blocks, and memory is used to store the first row of each row block. At this point the number of shift register circuits has to be decided; the larger the code, the more circuits are needed. This is done in order to minimise the time it takes to produce the codeword. The number of ROMs also has to be decided; usually one ROM is dedicated to one SRAA circuit, and each ROM stores the first line of its block matrix. This means that the registers of the SRAA circuits do not have to wait for each bit to be loaded from a single ROM, but can read data in parallel, which increases the encoding speed. Figure 7 shows a schematic of the SRAA circuit used for a QC LDPC code (7493, 6096) with 48 row blocks, 11 column blocks, a 127-bit wide, 48-deep memory and 11 SRAA circuits. Register A stores the first line of elements of the generator matrix. On every clock pulse, the values in the register are multiplied by the input data (message) bit and the result is added to the value saved in register B. This continues until all the message bits have gone through, and the result is the codeword. [24]

Figure 7 - SRAA Circuit [24]
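A simplified software model of a single SRAA circuit can make the register behaviour concrete. This is an assumption-level sketch rather than the hardware of [24]: register A holds a row of one circulant generator block and register B accumulates the parity bits; each "clock", A is ANDed with the current message bit, XORed into B, and then cyclically shifted.

```python
def sraa(first_row, message):
    """Bit-serial multiply of a message by a circulant block, SRAA-style."""
    A = list(first_row)          # register A: current row of the circulant
    B = [0] * len(first_row)     # register B: parity accumulator
    for bit in message:
        if bit:                  # AND gates: row times the message bit
            B = [a ^ b for a, b in zip(A, B)]   # XOR gates: accumulate
        A = [A[-1]] + A[:-1]     # cyclic shift of register A
    return B

# parity segment = message x circulant block (mod 2), computed bit by bit
p = sraa([1, 0, 1, 0, 0], [1, 1, 0, 1, 0])
```

After one pass of the message, B holds the same result as multiplying the message vector by the full circulant matrix, which is exactly why the hardware never needs to store more than the first row.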

## 2.7 Sum Product Algorithm

A popular method for decoding LDPC codes is the sum-product algorithm (SPA), an iterative soft-input soft-output decoding algorithm which has proved to be very effective [25-27]. This section will explain the principle of the SPA decoder and briefly introduce the reader to the formulas used.

The explanation will be based upon a low density parity check matrix H of dimension m × n. The design rate of the code is r = 1 − m/n. The actual code rate can be higher than the design rate if the rows of H are not linearly independent. As mentioned earlier, from H a Tanner graph can be produced, with m check nodes and n variable nodes. H can also be irregular, which means that it will not have a constant row weight or column weight; in that case the Tanner graph is characterised by a degree assignment: one degree distribution for the variable nodes and one for the check nodes. [28]

The Tanner graph in figure 8, like any other Tanner graph, shows that the decoder is iterative and that each node acts as a soft-input soft-output processor. In this example a binary-input additive white Gaussian noise channel will be used, so the received sample is

y_i = x_i + w_i

where x_i = 1 − 2c_i is the bipolar representation of the transmitted code bit c_i, and w_i is additive white Gaussian noise (AWGN) distributed as N(0, σ²), σ² being the noise variance. The channel bit log-likelihood ratios (LLRs) are given by

L(c_i) = 2y_i / σ²

In one iteration, the variable nodes process the input LLRs and send their output messages to the appropriate check nodes. The check nodes then produce their own output messages from their input LLRs and send them to the appropriate variable nodes. The message from variable node i to check node j is given by:

L(q_ij) = L(c_i) + Σ_{j' ∈ C_i \ j} L(r_j'i)

where L(r_j'i) is the message from check node j' to variable node i and the summation is over all check node neighbours of i except node j. The message from check node j to variable node i is given by:

L(r_ji) = 2 tanh⁻¹( ∏_{i' ∈ V_j \ i} tanh( L(q_i'j) / 2 ) )

where L(q_i'j) is the message from variable node i' to check node j and the product is over all variable node neighbours of j except variable node i. [28]

Figure 8 - Tanner Graph [28]

## Chapter 3 Technical Section

This chapter will be divided into two sections. The first section will present an example of the sum-product algorithm and an example for the SRAA encoder. The second section will include pictures of the code and an explanation of what the code is doing in order to achieve its task.

## 3.1 Examples

## 3.1.1 SRAA

The SRAA example will be based on the explanation given above on page 11; the difference is that in this section actual numbers will be used to give the reader the practical side of the SRAA encoder. As mentioned earlier, the operations are simplified so that they are achieved only by AND and XOR gates. For the quasi-cyclic code (7493, 6096), of rate 0.8, there are 1397 (7493 − 6096) redundant bits, and since it was mentioned before that the memory used is 127 bits wide, 11 SRAA circuits are needed because 11 × 127 = 1397. The use of several ROMs is necessary because, if one ROM were used, 11 clock cycles would be needed for the ROM to pass the information over to register A (figure 9). In that case, 11 clock cycles × 48 row blocks, that is 528 clock cycles, would be needed just for loading. As shown in figure 9, 127-bit registers are used to load the first line of the generator matrix, which is multiplied by the message and added to the content of register B; simultaneously the registers in A are shifted, which means that after 127 pulses the calculations are finished. So 48 (row blocks) × 127 clock cycles are needed to produce the codeword and store it in register B. [24]

In the example in [24] the code created was for Digital Terrestrial Multimedia Broadcast. Three encoding rates were used: 0.4, 0.6 and 0.8. Different generator matrices were designed according to the rate being used.

Table - SRAA

| Rate | Code | Number of circuits | Check Bits | Clock Cycles |
| --- | --- | --- | --- | --- |
| 0.6 | (7493, 4572) | 23 | 2921 | 36*127 |
| 0.4 | (7493, 3048) | 35 | 4445 | 36*127 |
| 0.8 | (7493, 6096) | 11 | 1397 | 24*127 |

These three different rates were used to achieve the required transmission goals. Since the rates would not be used simultaneously, the circuits were re-used instead of wasting resources; in other words, instead of having 69 circuits (11 + 23 + 35 = 69), the job can be done with 35. The figure below shows a schematic of the whole encoder, made up of the control module, which acts like the brain and controls the rest of the parts, the 35 SRAA circuits and RAM to store the input.

Figure - SRAA System Schematic

## 3.1.2 SPA

The decoding structure of this example will be divided into four steps, initial step, horizontal step, vertical step and termination. The example will use the following criteria.

AWGN channel, y = x + n, where x is the modulated message and n is the noise, distributed as N(0, σ²)

BPSK modulation for the transmission of codeword

Systematic parity-check matrix,

H =

Generator Matrix,

G =

Information to be sent,

Codeword (C)

After BPSK modulation,

## Initial step

Assume the received codeword is,

Firstly, the probabilities at the variable nodes have to be computed and sent to the check nodes using

q_ij(1) = P(c_i = 1 | y_i)

and

q_ij(0) = 1 − q_ij(1)

For AWGN,

P(c_i = 1 | y_i) = 1 / (1 + e^(2y_i/σ²))

Taking the fifth value as an example and assuming,

So the message from variable-node five to check node 3

Check node three receives another message from variable node 1

## Horizontal Step

Check node 3's target is to make sure that the third parity-check equation is satisfied, using the messages sent to it by every connected variable node except the one it is about to reply to. The probability that a vector of independent bits contains an even number of 1's is:

P(even) = 1/2 + 1/2 ∏_i (1 − 2p_i)

where p_i is the probability that the bit at position 'i' is a 1, equal to the q_ij(1) received from the variable nodes. For the message going to variable node 1,

Similarly for message

Using the same method as above, in order to get the next message from v-node 1 to c-node 3 we need to find,

Using the above results,
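The even-parity probability formula used in the horizontal step can be sanity-checked by brute-force enumeration; the probabilities below are arbitrary illustrative values:

```python
from itertools import product

def p_even_formula(ps):
    """P(even number of 1s) = 1/2 + 1/2 * prod(1 - 2 p_i)."""
    acc = 1.0
    for p in ps:
        acc *= 1 - 2 * p
    return 0.5 + 0.5 * acc

def p_even_enumerate(ps):
    """Same probability by summing over all bit patterns with even weight."""
    total = 0.0
    for bits in product([0, 1], repeat=len(ps)):
        if sum(bits) % 2 == 0:
            prob = 1.0
            for b, p in zip(bits, ps):
                prob *= p if b else (1 - p)
            total += prob
    return total

ps = [0.2, 0.9, 0.4]
assert abs(p_even_formula(ps) - p_even_enumerate(ps)) < 1e-12
```

This identity is exactly why a check node can combine its incoming bit probabilities with a single product rather than an exponential-size sum.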

## Vertical Step

Using the information acquired, each variable node is able to calculate the probability of its respective bit:

Q_i(0) = k_i · P(c_i = 0 | y_i) · ∏_{j ∈ C_i} r_ji(0)

Q_i(1) = k_i · P(c_i = 1 | y_i) · ∏_{j ∈ C_i} r_ji(1)

where k_i is a constant chosen in such a way that Q_i(0) + Q_i(1) = 1.

For v-node 5,

For

Clearly we can see that since

Now we have to calculate the messages from the v-nodes to the c-nodes using

q_ij(b) = k_ij · P(c_i = b | y_i) · ∏_{j' ∈ C_i \ j} r_j'i(b)

where another constant k_ij is chosen so that q_ij(0) + q_ij(1) = 1.

Using the equation,

And,

## Termination

The process is continued until a valid codeword is constructed or until the maximum number of iterations is reached.

## 3.2 MATLAB

This section will present the code that was used in this project. The code is divided into different parts: the SPA decoder, the conversion of a cyclic parity-check matrix into a generator matrix of the form G = [I | P], and the SRAA encoder. Each part consists of its respective piece of code and an explanation of what is happening in it. In the final section the full MATLAB code will be presented, with an explanation of how the parts are interconnected so that they work together to produce the final program.

## 3.2.1 SPA Decoder

The SPA decoder will be discussed first and in great detail in comparison with the other parts of the code in order to give the reader a greater understanding of the sum-product algorithm after it has been introduced in the previous chapter.

Figure -SPA 1

The SPA code follows the theory of the sum-product algorithm in the sense that it is also divided into four parts: initialisation, horizontal step, vertical step and the final hard decision. We start by declaring the different variables: the parity check matrix H dimensions, which are 8 × 16, the code rate 1/2, the message, which for simplicity here is a vector of 16 zeros, and finally the number of frames (L) and the dummy variable 'number', which is initially set to zero.

The first loop is over the SNR (signal to noise ratio), declared to increase from 0 to 15 in this example; inside it is the loop over the number of frames, i.e. how many times the message will go through. Changing the zeros to plus one and the ones to minus one simulates BPSK modulation. The noise variance is then defined with the formula seen in line 25 of the code, and the noise with the formula in line 29; finally the received word y is constructed, made up of the BPSK-modulated message and the noise.

Figure -SPA 2

The next step is to make a hard decision on y with the following instruction: if an element of y is smaller than zero then change it to a one, and if it is greater than zero then change it to a zero. This is needed since the introduction of noise changes the message from being made up solely of zeros and ones to arbitrary real values. The syndrome, which was introduced on page 9, is used here to determine whether an error is present. We then implement the initialisation formula

L(c_i) = 2y_i / σ²

Figure -SPA 3

Under the condition that an error is present, and thus the syndrome is not equal to zero, the next stage is the horizontal step, using the check-to-variable formula

L(r_ji) = 2 tanh⁻¹( ∏_{i' ∈ V_j \ i} tanh( L(q_i'j) / 2 ) )

This is implemented in MATLAB by firstly placing the first value of q (the vector produced after the noise variance is calculated) wherever there is a one in the first column of H, the second value of q wherever there is a one in the second column of H, and so on until a matrix Q is produced. What remains is to apply the formula above, producing a matrix R by multiplying the non-zero values of each row together and placing the results in the respective positions of R.

Figure -SPA 4

Next is the vertical step, which uses the formula

L(q_ij) = L(c_i) + Σ_{j' ∈ C_i \ j} L(r_j'i)

Firstly, the columns of R are summed in order to return from the matrix R to a vector q; then each message is formed by adding the channel value to the elements of R which are in the same column but not the same row.

Figure -SPA 5

The fourth and final step is the hard decision, taken after summing the elements of vector q with the elements of R of the same columns using the formula

L(Q_i) = L(c_i) + Σ_{j ∈ C_i} L(r_ji)

with the decision that bit i is a 1 if L(Q_i) < 0 and a 0 otherwise. For example, the second element in the code is the summation of q2 and the elements of R in the second column.
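The four steps described above can be collected into a compact Python sketch, written in the log-likelihood-ratio domain as an assumed equivalent of the MATLAB code (the matrix H and channel values used in the usage notes are illustrative, not those of the project):

```python
import math

def spa_decode(H, llr, max_iter=50):
    """Sum-product decoding: initialisation, horizontal step, vertical
    step and hard decision with a syndrome-based stopping rule."""
    m, n = len(H), len(H[0])
    # initialisation: variable->check messages start as the channel LLRs
    Q = [[llr[j] if H[i][j] else 0.0 for j in range(n)] for i in range(m)]
    for _ in range(max_iter):
        # horizontal step: check->variable messages (tanh rule)
        R = [[0.0] * n for _ in range(m)]
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    prod = 1.0
                    for j2 in range(n):
                        if H[i][j2] and j2 != j:
                            prod *= math.tanh(Q[i][j2] / 2.0)
                    prod = max(min(prod, 0.999999), -0.999999)  # keep atanh finite
                    R[i][j] = 2.0 * math.atanh(prod)
        # vertical step: variable->check messages
        for j in range(n):
            col_sum = llr[j] + sum(R[i][j] for i in range(m) if H[i][j])
            for i in range(m):
                if H[i][j]:
                    Q[i][j] = col_sum - R[i][j]
        # hard decision and syndrome check (termination)
        totals = [llr[j] + sum(R[i][j] for i in range(m) if H[i][j])
                  for j in range(n)]
        c = [1 if t < 0 else 0 for t in totals]
        if all(sum(c[j] for j in range(n) if H[i][j]) % 2 == 0
               for i in range(m)):
            return c
    return c
```

For instance, feeding it a small parity-check matrix and LLRs whose signs match a valid codeword returns that codeword after the first syndrome check.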

## 3. 2.2 Formation of Parity Check Matrix using Cycle Shifts

This part of the code is fairly straightforward. The idea behind it is to create a parity-check matrix knowing only the size of the sub-matrices, which let's say is four for the sake of our example, given an identity matrix (4 × 4) and a base matrix made up of numbers representing the number of cyclic shifts (to the right) of the identity matrix. The result is a parity-check matrix made up of identity sub-matrices, shifted or not according to the specified base matrix.

Figure -Cyclic

In MATLAB, the first thing done is to create the base matrix made up of the numbers of shifts: 1 is one shift to the right, 2 is two shifts, and so on. Then the final dimensions of the parity check matrix are declared, along with the dimensions of the identity matrix. Briefly, the code then checks each row of the base matrix separately for the number of cyclic shifts. Two variables are declared, hr and hc, for rows and columns respectively; these are multiplied by the dimension of the identity matrix and then put together to form the final parity check matrix.
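The expansion step described above can be sketched as follows: each entry s of the base matrix is replaced by a z × z identity matrix cyclically shifted s positions to the right (the base matrix and z below are illustrative assumptions, not the project's values):

```python
def expand(base, z):
    """Expand a base matrix of shift values into a parity-check matrix
    built from right-shifted z x z identity sub-matrices."""
    rows = len(base) * z
    cols = len(base[0]) * z
    H = [[0] * cols for _ in range(rows)]
    for bi, brow in enumerate(base):
        for bj, s in enumerate(brow):
            for r in range(z):
                # row r of an identity shifted s places right has its
                # '1' in column (r + s) mod z
                H[bi * z + r][bj * z + (r + s) % z] = 1
    return H

# a 2 x 2 base matrix of shifts expanded with 3 x 3 identities -> 6 x 6 H
H = expand([[0, 1], [2, 0]], 3)
```

Note that only the base matrix and z need to be stored; the full H is reproducible on demand, which is the memory saving discussed later in this paper.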

## 3.2.3 Transforming Parity-Check matrix to Generator Matrix

This part of the code transforms any given parity check matrix into a generator matrix of the form G = [I | P] by Gaussian elimination.

Figure -H

After the parity-check matrix is inserted, the code will firstly determine its dimensions in terms of m and n, and then produce a generator matrix in the form G = [I | P].

## 4 Results

This chapter will present and demonstrate the outcome of each of the above code sections. This will be done by stating what is used as input; the output will be shown as screenshots or graphs, depending on which part of the code is being demonstrated.

## 4.1 Formation of Parity Check Matrix using Cycle Shifts

Here the code presented above is used to create a parity check matrix. For simplicity, a small identity matrix and a small base matrix are used to allow the reader to easily spot the shifts. The base matrix is 3 x 3 and the identity matrix is 3 x 3. The output of the code is a parity check matrix of dimensions 9 x 9, with identity matrices shifted according to the numbers in the base matrix.

Figure -Cycle Shifts

The 3 x 3 matrix in the red square is one of the 9 shifted identity matrices. This one was shifted by 2 cycles according to the base matrix, which is also easy to visualise.

## 4.2 Transforming a parity check matrix into a Generator Matrix

Given the parity check matrix H, the code will output a matrix where the right side is an identity matrix and the left side is 'P', and then produce another matrix which moves the identity matrix to the other side and transposes 'P', giving the generator matrix.

Figure -Parity Generator

## 4.3 SPA

The parity check matrix used in this specific run of the SPA decoder was 12 x 32, n was 360, and the SNR value was increased from 0 to 15.

Figure -SPA

## 5 Discussion

Most methods of constructing LDPC codes are based on random techniques. Although these techniques have proven to perform better than structured LDPC codes, such as quasi-cyclic LDPC codes, they have several disadvantages: accessing a large parity-check matrix is costly when encoding, and storing it is also a problem. Randomly constructed LDPC codes, such as those built with bit flipping (BF) based designs or the progressive-edge-growth (PEG) algorithm, can be irregular and can maximise performance, rate and girth; they can also easily be designed to satisfy the girth, rate and length needed to achieve the desired results. A further disadvantage of these codes is that they require complex and expensive hardware implementations. Structured codes do not have this problem, because they are designed so that the connections between the rows and columns of the parity-check matrix are in some way predefined or known. The main objective when designing such a code is to avoid cycles of length four. As mentioned before in this paper, a number of methods have been developed over the years, such as combinatorial, finite geometry and algebraic constructions, that achieve good-performance codes which can be designed to avoid four-cycles. This is achieved by observing row-column constraints. [29]

## 5.1 Results

The memory problem of unstructured LDPC codes can be solved by the use of the quasi-cyclic LDPC codes studied in this paper. The memory required for storing the parity-check matrix is reduced by a factor of 1/T when T × T cyclically shifted sub-matrices are used. This is because the only things that need to be stored in memory are the base matrix and the matrix which is to be shifted.

Another massive drawback of unstructured LDPC codes is the number of calculations needed to compute a codeword. Although the parity-check matrix H is sparse, which means it does not contain many ones, the generator matrix G will most probably not be sparse. As the reader should know by now, the codeword is c = uG, where 'u' is a string of information bits. This matrix multiplication has a complexity of the order of n² operations (more precisely, proportional to n² and dependent on the code rate R). Since for LDPC codes n is large (around a hundred thousand bits), this makes the encoder very complex [10]. A solution is to not use a generator matrix G at all, and instead encode with the parity-check matrix H, using back substitution to manipulate H. [30]