FPGA Implementation of Reed-Solomon Error Correcting Code


Reed-Solomon is an error-correcting code capable of correcting multiple errors, and burst errors in particular. It is a non-binary subclass of BCH (Bose, Chaudhuri and Hocquenghem) codes and can correct up to t = (n-k)/2 symbol errors. Programmable-hardware devices are a strong choice for Reed-Solomon codec implementation, because they contain an abundance of the registers that the hardware implementation requires, and the governing equations map easily onto the look-up table (LUT) architecture of FPGAs.

I. Introduction

Reed-Solomon error correcting codes (RS codes) are widely used in communication systems and data storage to recover data from errors that occur during transmission and from disc errors, respectively [3]. One typical application of RS codes is Forward Error Correction (FEC) [1].

Before transmission, the encoder attaches parity symbols to the data using a predetermined algorithm. At the receiving side, the decoder detects and corrects a limited, predetermined number of errors that occurred during transmission. Transmitting the extra parity symbols requires extra bandwidth compared to transmitting the pure data. However, transmitting the additional symbols introduced by FEC is preferable to retransmitting an entire packet whenever the receiver detects an error.

For a typical channel, the addition of RS coding allows the system to operate within approximately 4 dB of the Shannon capacity. The resulting benefit translates into higher data rates, lower bit-error rates (BERs), greater transmission distance, and greater immunity to interference effects [2].

RS codecs can be implemented on a number of platforms, traditionally Application Specific Integrated Circuits (ASICs), and more recently DSPs and Programmable Logic Devices (PLDs) [3]. FPGAs offer easy design reuse and shorter design time, especially for prototyping [4]. Their reprogrammability also allows the most suitable FEC for a given system to be selected. In fact, programmable-hardware devices are well suited to Reed-Solomon codec implementation because of the abundance of registers that the hardware implementation requires. Equations can be realized in parallel to meet timing constraints, and equations involving multipliers and power/inversion circuits map readily onto the look-up table (LUT) architecture of FPGAs. The available block RAMs further reduce hardware resources [5].

II. Reed-Solomon Theory

A Reed-Solomon code is a block code specified as RS(n, k), as shown in Fig. 1. The variable n is the codeword length in symbols, k is the number of data symbols, and 2t is the number of parity symbols; each symbol contains s bits.

Fig. 1 The structure of an RS codeword: k data symbols followed by 2t parity symbols, n symbols in total

For a symbol size s, the maximum codeword length (n) for a Reed-Solomon code is

n = 2^s - 1

The RS code can correct up to t symbol errors, where t is given by

t = (n - k) / 2

A popular Reed-Solomon code is RS(255, 223) with 8-bit symbols. Each codeword contains 255 bytes, of which 223 are data and 32 are parity. For this code:

n = 255, k = 223, s = 8

2t = 32, t = 16

A. Galois Field

RS codes are based on finite fields, called Galois fields. Rather than looking at individual numbers and equations, the modern mathematical approach is to consider all the numbers that can be obtained from some initial collection using the operations of addition, subtraction, multiplication and division; the resulting collection is called a field. Some fields, like the set of integers, are infinite, which causes problems when they are represented on a computer with a fixed-length word. Galois fields have the useful property that any operation on elements of the field always results in another element of the field. Every non-zero element can be expressed as a power of a primitive element, α, of the field, and the non-zero field elements form a cyclic group defined by a binary primitive polynomial. Field element values are shown in Table 1.

B. Arithmetic Operations

i. Addition

Addition is defined as the bitwise exclusive-OR (XOR) of two numbers in the field.

e.g. 6 + 2 = 4, 1 + 2 = 3, 1 + 3 = 2, 3 + 6 = 5

With this definition of addition, subtraction gives the same result as addition. Adding any number in the field to itself (using bitwise XOR) gives 0, since all 1 bits cancel and the 0 bits remain unchanged.

Thus every number is its own additive inverse, and all subtraction operations can be replaced by addition operations.
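
As a minimal illustration (not part of the original design), the one-function Python sketch below reproduces the example sums above; in hardware this is simply s parallel XOR gates per symbol.

def gf_add(a, b):
    # Galois-field addition (and subtraction) is the bitwise XOR of the symbols.
    return a ^ b

print(gf_add(6, 2), gf_add(1, 2), gf_add(1, 3), gf_add(3, 6))   # 4 3 2 5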

Exponential form | Polynomial form                  | Binary (bits) | Decimal value
0                | 0                                | 0 0 0         | 0
1 = α^0          | 1                                | 0 0 1         | 1
α^1              | x                                | 0 1 0         | 2
α^2              | x^2                              | 1 0 0         | 4
α^3              | x^3 = 1 + x                      | 0 1 1         | 3
α^4              | x^4 = x·x^3 = x + x^2            | 1 1 0         | 6
α^5              | x^5 = x^2 + x^3 = x^2 + x + 1    | 1 1 1         | 7
α^6              | x^6 = x^3 + x^2 + x = x^2 + 1    | 1 0 1         | 5
α^7              | x^7 = x^3 + x = 1 + x + x = 1    | 0 0 1         | 1

Table 1 Field element value assignments for GF(2^3), generated by the primitive polynomial x^3 + x + 1

ii. Multiplication

Multiplication is performed by adding the exponents of α and converting the result back from exponential (index) form.

Factors of α^n (with n = 2^s - 1) can be removed from the product, since α^n = 1, the multiplicative identity, and thus has no effect. Using the Galois field of Table 1, some examples are:

1 * x = α^0 * α^y = α^y = x (where x is any field element)

2 * 3 = α^1 * α^3 = α^4 = 6

3 * 7 = α^3 * α^5 = α^8 = α^1 * α^7 = α^1 * 1 = α^1 = 2
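
As an illustration, the Python sketch below builds log/antilog tables for the GF(2^3) field of Table 1 (primitive polynomial x^3 + x + 1) and reproduces the example products; this is a sketch of the general table-based technique, not the essay's specific multiplier circuit.

PRIM = 0b1011                   # x^3 + x + 1, the primitive polynomial of Table 1

exp = [0] * 15                  # antilog table, doubled so exponent sums need no modulo
log = [0] * 8                   # log table; log[0] is unused
x = 1
for i in range(7):
    exp[i], log[x] = x, i
    x <<= 1                     # multiply by α (i.e. by x)
    if x & 0b1000:
        x ^= PRIM               # reduce modulo the primitive polynomial
for i in range(7, 15):
    exp[i] = exp[i - 7]         # α^7 = 1, so the table repeats

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]] # add exponents of α, convert back to a value

print(gf_mul(2, 3))             # α^1 · α^3 = α^4 = 6
print(gf_mul(3, 7))             # α^3 · α^5 = α^8 = α^1 = 2

The same table lookup is what allows GF multiplication to be mapped onto FPGA LUTs or block RAMs, as noted in the introduction.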

iii. Differentiation

Let U(x) be any polynomial of degree v:

U(x) = U_v x^v + U_(v-1) x^(v-1) + ... + U_2 x^2 + U_1 x + U_0

Its formal derivative is then defined as:

U'(x) = v·U_v x^(v-1) + (v-1)·U_(v-1) x^(v-2) + ... + U_5 x^4 + U_3 x^2 + U_1

where, in a field of characteristic 2, j·U_j = 0 when j is even and j·U_j = U_j when j is odd. For example, for

U(x) = α x^4 + α^2 x^3 + α x^2 + x + α^5

the derivative is

U'(x) = α^2 x^2 + 1
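
As an illustration of this rule, the short Python sketch below computes the formal derivative of a coefficient list; the lowest-degree-first ordering and the use of the GF(2^3) values from Table 1 in the example are assumptions made for the sketch.

def gf_poly_derivative(u):
    # u[j] holds the coefficient of x^j (lowest degree first); over GF(2^m)
    # only the odd-degree terms survive, each dropped by one degree.
    return [u[j] if j % 2 == 1 else 0 for j in range(1, len(u))]

# U(x) = α^5 + x + α·x^2 + α^2·x^3 + α·x^4 with the Table 1 values
# α = 2, α^2 = 4, α^5 = 7:
print(gf_poly_derivative([7, 1, 2, 4, 2]))   # [1, 0, 4, 0], i.e. 1 + α^2·x^2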

C. Codeword

Reed-Solomon codes are a form of linear cyclic polynomial code: codewords are represented by polynomials whose coefficients are members of a Galois field GF(2^m).

The output of the encoding process is a codeword, consisting of the information with parity bytes appended. The RS(n, k) notation indicates that the information polynomial is k symbols long, 2t parity symbols are added, and the total codeword length (information plus parity) is n symbols.

n is given by the formula: n = 2^m - 1

n is also given by: n = k + 2t

D. Generator Polynomial

Error correcting codes require a systematic method of determining the parity symbols to be added to a particular stream of information. This is provided by a generator polynomial, which is used in both the encoding and decoding processes.

The generator polynomial is given by:

g(x) = ∏ (x + α^i),  i = 0, 1, ..., 2t-1

where α is a primitive element. Expanding the product,

g(x) = (x + α^0)(x + α^1)(x + α^2) ... (x + α^(2t-2))(x + α^(2t-1))

g(x) = x^2t + g_(2t-1) x^(2t-1) + ... + g_3 x^3 + g_2 x^2 + g_1 x + g_0

The coefficients g_0, ..., g_(2t-1) are elements of GF(2^s).
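
As a rough illustration (not the essay's hardware), the Python sketch below builds these coefficients by multiplying out the factors (x + α^i) one at a time. The choice of GF(2^8) with primitive polynomial 0x11d (x^8 + x^4 + x^3 + x^2 + 1) is an assumption, since the essay does not fix a field polynomial; coefficients are stored lowest degree first.

# GF(2^8) log/antilog tables for the assumed primitive polynomial 0x11d.
exp, log = [0] * 512, [0] * 256
x = 1
for i in range(255):
    exp[i], log[x] = x, i
    x = (x << 1) ^ (0x11d if x & 0x80 else 0)   # multiply by α and reduce
for i in range(255, 512):
    exp[i] = exp[i - 255]                       # doubled table avoids a modulo

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def rs_generator_poly(two_t):
    """Return g(x) = (x + α^0)(x + α^1)...(x + α^(2t-1)), lowest degree first."""
    g = [1]
    for i in range(two_t):
        root = exp[i]                           # α^i
        new_g = [0] * (len(g) + 1)
        for j, coeff in enumerate(g):
            new_g[j] ^= gf_mul(coeff, root)     # α^i · g(x) contribution
            new_g[j + 1] ^= coeff               # x · g(x) contribution
        g = new_g
    return g

# For RS(255, 223): g = rs_generator_poly(32), a degree-32 monic polynomial.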

III. Reed Solomon Encoder

The codewords c(x) of a Reed-Solomon code can be obtained using the formula:

c(x) = i(x) · g(x)

where g(x) is the generator polynomial and i(x) is the information (data) polynomial.

Fig. 2 Architecture of RS encoder

A faster way to obtain the 2t parity symbols is to use the alternative, systematic encoding formula:

p(x) = i(x) · x^(n-k) mod g(x)

where p(x) is the parity polynomial of length 2t symbols.

p(x) is calculated modulo the generator polynomial g(x). This is equivalent to shifting the information by 2t places and taking the remainder of the division by the generator polynomial. Each data symbol is fed into the shift register, multiplied by the coefficients of the generator polynomial, added to the contents of the previous register stage, and stored back into the register. All storage registers are initialized to 0 at the start of the encoding algorithm.
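
A small Python sketch of this shift-register computation is given below; it reuses the GF(2^8) tables and gf_mul helper from the generator-polynomial sketch above. The field polynomial 0x11d and the highest-degree-first symbol ordering remain assumptions made for the sketch.

def rs_encode_parity(data, gen):
    """Compute the 2t parity symbols p(x) = i(x)·x^(2t) mod g(x).

    data: k information symbols, highest-degree (first transmitted) symbol first.
    gen:  generator coefficients, lowest degree first, including the leading 1.
    """
    two_t = len(gen) - 1
    reg = [0] * two_t                      # storage registers, initialized to 0
    for d in data:
        feedback = d ^ reg[-1]             # incoming symbol + last register stage
        for j in range(two_t - 1, 0, -1):  # shift and add feedback·g_j per stage
            reg[j] = reg[j - 1] ^ gf_mul(feedback, gen[j])
        reg[0] = gf_mul(feedback, gen[0])
    return reg[::-1]                       # parity symbols, highest degree first

# Example (hypothetical data): codeword = list(data) + rs_encode_parity(data, g)

In hardware, each register stage corresponds to one symbol-wide storage register, and the constant multiplications by gen[j] are the fixed GF multipliers that map well onto FPGA LUTs.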

IV. Reed Solomon Decoder

The high-level architecture of the decoding data path is shown in Fig. 3. Decoding RS codes involves extracting two pieces of information from the received word: the set of partial syndromes and the error locator polynomial. From these, the error locations and error magnitudes are computed and applied directly to the received word to recover the original, corrected message. The decoder first calculates the syndromes of the received codeword to detect any errors that occurred during transmission. If the syndrome polynomial S(x) is non-zero, the received codeword is erroneous and will be corrected, provided the number of erroneous symbols does not exceed t.

Fig. 3 Block Diagram of RS Decoder

i. Syndrome Calculation

The received polynomial is denoted by r(x); it is the transmitted codeword c(x) with added noise n(x):

r(x) = c(x) + n(x)

Each syndrome is obtained by evaluating r(x) at a root of the generator polynomial, S_i = r(α^i) for i = 0, 1, ..., 2t-1.

Fig. 4 Architecture of syndrome calculator (received data multiplied by α^i and accumulated in a register to form syndrome S_i)

Syndrome calculation can be done by an iterative process, such that the answer (2t syndrome symbols) is available as soon as the last parity symbol has been read in. The syndromes represent information about the noise, independent of the data that was sent.
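
For illustration, the Python sketch below computes the 2t syndromes with this iterative (Horner) recurrence, reusing the GF(2^8) helpers (exp, log, gf_mul) from the generator-polynomial sketch; the generator roots α^0 ... α^(2t-1) and the highest-degree-first ordering of the received symbols are the same assumptions as before.

def rs_syndromes(received, two_t):
    """received: the n received symbols, highest-degree coefficient first.
    Returns [S_0, ..., S_(2t-1)] with S_i = r(α^i)."""
    syndromes = []
    for i in range(two_t):
        s = 0
        for symbol in received:
            s = gf_mul(s, exp[i]) ^ symbol   # Horner's rule: s = s·α^i + symbol
        syndromes.append(s)
    return syndromes

# All-zero syndromes mean the received word is assumed error-free.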

ii. Error Polynomial

The second step is to find the error locator polynomial Λ(x). The 2t syndromes form a set of simultaneous equations whose unknowns are the error locations. The error locator polynomial is a polynomial whose roots directly define the error locations in the received word. The Berlekamp-Massey algorithm, a shift-register synthesis algorithm, is used to find it: it takes the n-k partial syndromes as input and outputs the error locator polynomial.
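
The sketch below is a textbook, errors-only formulation of the Berlekamp-Massey recursion in Python, reusing exp, log and gf_mul from the earlier GF(2^8) sketches; it illustrates the algorithm rather than the essay's specific hardware architecture. Coefficient lists are lowest degree first.

def gf_inv(a):
    return exp[255 - log[a]]               # multiplicative inverse: α^(255 - log a)

def berlekamp_massey(syndromes):
    """Return the error locator polynomial Λ(x), lowest degree first."""
    C, B = [1], [1]                        # current / previous connection polynomials
    L, m, b = 0, 1, 1
    for n, s in enumerate(syndromes):
        # discrepancy between the next syndrome and the value predicted by C(x)
        d = s
        for i in range(1, L + 1):
            d ^= gf_mul(C[i], syndromes[n - i])
        if d == 0:
            m += 1
            continue
        # C(x) <- C(x) - (d/b)·x^m·B(x)   (subtraction is XOR in characteristic 2)
        coef = gf_mul(d, gf_inv(b))
        new_C = C + [0] * max(0, len(B) + m - len(C))
        for i, Bi in enumerate(B):
            new_C[i + m] ^= gf_mul(coef, Bi)
        if 2 * L <= n:                     # the register length must grow
            L, B, b, m = n + 1 - L, C, d, 1
        else:
            m += 1
        C = new_C
    return C[:L + 1]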

iii. Error Location

After the error locator polynomial has been computed, its roots must be found in order to determine the error locations. This is done with the Chien search, which evaluates Λ(x) at every element of GF(2^m) for the given Reed-Solomon (n, k) code; the elements at which Λ(x) evaluates to zero identify the positions of the erroneous symbols in the codeword.
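
A direct Python sketch of this search is shown below, reusing exp and gf_mul from the earlier sketches. It simply evaluates Λ(x) at the inverse locator of every position, whereas the hardware form updates each coefficient with a constant multiplier every cycle; the convention that the first received symbol is the highest-degree coefficient is an assumption carried over from the previous sketches.

def chien_search(lam, n):
    """lam: error locator polynomial, lowest degree first; n: codeword length.
    Returns the indices of the erroneous symbols in the received word."""
    error_positions = []
    for p in range(n):
        degree = n - 1 - p                 # power of α associated with position p
        x_inv = exp[255 - degree]          # X_j^(-1) = α^(-degree)
        val = 0
        for coeff in reversed(lam):        # evaluate Λ(x_inv) by Horner's rule
            val = gf_mul(val, x_inv) ^ coeff
        if val == 0:
            error_positions.append(p)      # a root of Λ(x) marks an error here
    return error_positions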

iv. Error Magnitude

To evaluate the magnitudes of the errors at the particular error locations, the error magnitude (error evaluator) polynomial Ω(x) must be determined. It is defined as

Ω(x) = S(x) Λ(x) mod x^2t

The Forney algorithm is then used to compute the error magnitude W_j corresponding to each error location, using the equation

W_j = - X_j · Ω(X_j^-1) / Λ'(X_j^-1)

where X_j is the error location (in index form), X_j^-1 is its inverse, and Λ'(x) is the formal derivative of the error locator polynomial.
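
The Python sketch below illustrates this step, reusing exp, log, gf_mul, gf_inv and gf_poly_derivative from the earlier sketches: it forms Ω(x) = S(x)Λ(x) mod x^2t and applies the formula above at each error locator. The minus sign is dropped because negation has no effect in a field of characteristic 2, and the locator degree n-1-p for an error at received index p is the same convention as in the Chien sketch.

def gf_poly_mul(a, b):
    """Multiply two polynomials over GF(2^8), coefficients lowest degree first."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] ^= gf_mul(ai, bj)
    return result

def gf_poly_eval(p, x):
    val = 0
    for coeff in reversed(p):              # Horner's rule
        val = gf_mul(val, x) ^ coeff
    return val

def forney(syndromes, lam, error_degrees):
    """error_degrees: n-1-p for each error index p found by the Chien search.
    Returns the error magnitude W_j for each error location X_j = α^degree."""
    two_t = len(syndromes)
    omega = gf_poly_mul(syndromes, lam)[:two_t]    # Ω(x) = S(x)Λ(x) mod x^2t
    lam_deriv = gf_poly_derivative(lam)            # formal derivative Λ'(x)
    magnitudes = []
    for degree in error_degrees:
        x_j = exp[degree]                          # X_j
        x_inv = gf_inv(x_j)                        # X_j^(-1)
        num = gf_mul(x_j, gf_poly_eval(omega, x_inv))
        magnitudes.append(gf_mul(num, gf_inv(gf_poly_eval(lam_deriv, x_inv))))
    return magnitudes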

v. Error Corrector

The error corrector block takes the received codeword and adds the computed error magnitude at each corresponding error location to recover the original message stream.
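
As a final minimal sketch, consistent with the hypothetical variable names used in the previous sketches, the correction itself is a single XOR per erroneous symbol, since addition and subtraction coincide in GF(2^m).

def correct_errors(received, error_positions, error_magnitudes):
    """XOR each computed error magnitude back into the received word."""
    corrected = list(received)
    for pos, mag in zip(error_positions, error_magnitudes):
        corrected[pos] ^= mag              # adding the error again cancels it
    return corrected

Chained together, the syndrome, Berlekamp-Massey, Chien, Forney and correction sketches could serve as a software reference model when verifying the FPGA design.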

V. Advantages and Disadvantages

1. Advantages

i. Burst error correcting code

A well-known example of a Reed-Solomon code is RS(255, 223) with 8-bit symbols. Each codeword has 255 bytes in total, with 223 bytes of data and 32 bytes of parity, so the decoder can correct up to 16 symbol errors anywhere in the codeword. Because errors are counted per symbol rather than per bit, a long burst of bit errors confined to 16 symbols is corrected just as easily as 16 scattered symbol errors, which is what makes RS codes well suited to burst errors.

ii. Power Savings Resulting from "Coding Gain"

RS coding allows transmission at lower power levels for the same error performance. This power saving is called "coding gain," and it permits the substitution of lower-cost, lower-power parts in the transmitter electronics.

2. Disadvantages

Optical communication requires very fast error-correcting decoders operating at low signal-to-noise ratios. RS codes are therefore less suitable for optical links because of their comparatively slow decoding, and product codes outperform RS(255,223) in optical communication at low SNR.

VI. RESULTS

i. Encoder output

Fig. 5 Encoder output waveform

ii. Decoder output

Fig. 6 Decoder output waveform

VII. Conclusion

Use of error-control codes reduces interference effects, and FEC in general eliminates the need for retransmission of data streams. The performance of Reed-Solomon codes on bursty-noise channels makes them a very good choice for FEC. With the availability of LUTs, RAMs and registers, FPGAs are a suitable platform for RS implementations.

VIII. References

Richard E. Blahut, "Theory and Practice of Error Control Codes", Addison-Wesley, 1983.

Richard E. Blahut, "Algebraic Codes for Data Transmission", Cambridge University Press, 2003.

S. B. Wicker, V. K. Bhargava, "Reed-Solomon Codes and Their Applications", IEEE Press, 1994.

W. J. Ebel, W. H. Tranter, "The Performance of Reed-Solomon Codes on a Bursty-Noise Channel", IEEE Transactions on Communications, Vol. 43, No. 2/3/4, February/March/April 1995.

S. S. Shah, "Self-correcting codes conquer noise, Part II: Reed-Solomon codecs", EDN (Electronic Design News), March 2001.

H. Chang, C. Shung, C. Lee, "A Reed-Solomon Product-Code (RS-PC) Decoder Chip for DVD Applications", IEEE Journal of Solid-State Circuits, Vol. 36, No. 2, February 2001.

S. B. Wicker, "Error Control Systems for Digital Communication and Storage", Prentice Hall, 1995.

R. K. Awalt, "Making the ASIC/FPGA Decision", Integrated System Design Magazine, July 1999.

G. Ahlquist, B. Nelson, M. Rice, "Synthesis of Small and Fast Finite Field Multipliers for Field Programmable Gate Arrays", Proceedings of the 5th Annual Military and Aerospace Programmable Logic Devices (MAPLD) International Conference, September 2002.

Frederic Rivollon, "Achieving Breakthrough Performance in Virtex-4 FPGAs", May 2005.

Hanho Lee, "A High-Speed, Low-Complexity Reed-Solomon Decoder for Optical Communications", IEEE Transactions on Circuits and Systems II, 2005.

B. Thomson, "Turbo Product-Coding Hardware Makes Forward-Error Correction Possible with Significant Performance Advantages over Traditional Reed-Solomon and Viterbi Coding Approaches", Wireless Systems Design, June 2000.

T. A. Mehta, "Programmable Logic Devices: Viable Solutions for Implementing Error-Control Coding Functions", Integrated System Design Magazine.

E. J. Weldon, "Error Correcting Codes and Trellis Coded Modulation", University of Hawaii, 1996.
