
Encoding and Decoding in Digital Communication

Channel coding is a significant element of digital communication systems. Channel coding techniques improve communication performance by enabling the transmitted signals to better withstand the effects of various channel impairments such as noise, interference, and fading. Channel coding is important for ensuring error-free data transmission with efficient utilization of bandwidth. Our project evaluates the performance of convolutional codes and, simultaneously, of the Viterbi decoder. Convolutional codes are among the most useful error-correcting codes that have been developed, and we have done extensive simulation work to evaluate the performance of convolutional coding and Viterbi decoding.

CHAPTER 1: DIGITAL COMMUNICATION SYSTEM

Early communication was based on the implicit assumption that the message signal is a continuously varying waveform of time. Such continuous-time signals are referred to as analog signals, and their corresponding information sources are called analog sources. Analog signals are transmitted using carrier modulation over a communication channel and demodulated accordingly at the receiver; such systems are called analog communication systems. In digital transmission, the analog source output is converted to digital form. The message can then be transmitted using digital modulation and demodulated as a digital signal at the receiver.

The basic feature of a digital communication system is that during a finite interval of time it sends a waveform selected from a finite set of possible waveforms. An important measure of system performance in digital communication systems is the probability of error.

1.2 WHY DIGITAL COMMUNICATION

Digital communication is preferred over analog communication because digital circuits have a lower probability of distortion and interference than analog circuits. Digital circuits are more reliable than analog ones and cost less. Digital hardware is also more flexible to implement than analog hardware. For digital signals, time-division multiplexing is simpler than the FDM used with analog signals.

In a digital communication system, the functional operations performed at the transmitter and receiver must be expanded to include message signal discretization at the transmitter and message signal synthesis or interpolation at the receiver. Additional functions include redundancy removal and channel encoding and decoding.

Figure 1.2: Digital Communication Block Diagram

1.3.1 Source Nature

Information sources can be analog or digital. Information is knowledge about something, and it can be obtained via listening, watching, or other means of observation. An information source produces an output that is of interest to the receiver of the information, who does not know that output in advance. The basic role of the communication system is to make sure that this information is transmitted to the receiver properly. Since the output of the information source is a time-varying function, it can be modeled as a random process.

Information sources are thus random processes, and the properties of the random process depend upon the nature of the information source. For example, when modeling speech signals, the resulting random process possesses almost all of its power in a frequency band of approximately 300-4000 Hz. Consequently, the power spectral density of the speech signal also occupies this band of frequencies.

1.3.2 Analog to Digital Conversion

The purpose of the A/D converter is to transform the analog signal into a discrete signal with minimum distortion. The Nyquist theorem allows us to accomplish this conversion by sampling the analog signal every T seconds, and it also allows us to reconstruct the original analog signal by interpolating the sampled signal. However, the analog signal must not contain any spectral content at frequencies higher than 1/(2T) hertz, or there will be aliasing in the reconstructed signal. Before sampling, we should therefore eliminate the higher frequencies with a low-pass filter (LPF), namely an anti-aliasing filter.

Figure 1.3.2: Analog Sampled Signal

The digital-to-analog process is done by interpolation. There are many schemes for interpolation, such as ZOH (zero-order hold), which holds each sample constant until the next one, and FOH (first-order hold), which connects successive samples with straight lines. A sketch of both schemes appears after the figure below.

Figure 1.3.3: ZOH and FOH interpolation
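Below is a minimal MATLAB sketch of the two reconstruction schemes just described; the 50 Hz test signal, sampling rate, and plotting grid are illustrative choices of ours, not values from the text.

% Minimal sketch: sample a 50 Hz sinusoid, then reconstruct it with
% zero-order hold (staircase) and first-order hold (linear) interpolation.
fs = 1000;                % sampling rate in Hz, well above Nyquist for 50 Hz
T  = 1/fs;
t  = 0:T:0.05;            % 50 ms worth of sample instants
x  = sin(2*pi*50*t);      % stand-in for the band-limited analog signal

tf = 0:T/20:0.05;         % fine grid standing in for continuous time
x_zoh = interp1(t, x, tf, 'previous');   % ZOH: hold the last sample
x_foh = interp1(t, x, tf, 'linear');     % FOH: connect samples linearly

plot(tf, x_zoh, tf, x_foh); legend('ZOH', 'FOH');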

1.3.3 Source Encoder/Decoder

The source encoder converts the analog signal into a bit stream. Its objective is to generate a bit stream that carries the maximum information and allows reconstruction of the original signal with minimum distortion. Well-designed source encoders produce bit streams that compress the data; efficient source coding reduces the bandwidth or storage resources required to convey the source data. The information content of an information source is known as the entropy of the source: the average amount of information per source output.

H(X) = E{I(x_i)} = - Σ_{i=1}^{N} p_i log p_i

where p_i is the probability of the i-th source output x_i.
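As a quick illustration, the following MATLAB fragment computes the entropy of a small discrete source; the probability vector is an arbitrary example of ours, and base-2 logarithms give the answer in bits per symbol.

% Minimal sketch: entropy H(X) of a discrete source with probabilities p.
p = [0.5 0.25 0.125 0.125];        % example distribution; must sum to 1
H = -sum(p .* log2(p));            % entropy in bits per symbol
fprintf('H(X) = %.3f bits/symbol\n', H);   % prints 1.750 for this p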

1.3.4 Quantization

Quantization is the process of mapping a continuous range of amplitudes of a signal into a finite set of discrete amplitudes. Quantizers are devices that remove the irrelevant information in the signal, and their operation cannot be reversed. Sampling introduces no distortion, while quantization does. Amplitude quantization is a required step in any speech coding process; it determines the overall distortion as well as the bit rate necessary to represent the speech waveform.


Figure 1.3.4: Example of 8-level quantization scheme
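The scheme in the figure can be sketched in MATLAB as a uniform 8-level quantizer; the input range [-1, 1], the mid-rise reconstruction levels, and the test signal are our own illustrative assumptions.

% Minimal sketch: 8-level uniform quantizer over the range [-1, 1].
L     = 8;                       % number of quantization levels
delta = 2/L;                     % step size for the range [-1, 1]
x     = sin(2*pi*(0:0.01:1));    % example input signal
idx   = floor((x + 1)/delta);    % which level each sample falls into
idx   = min(max(idx, 0), L-1);   % clamp indices to the valid range
xq    = -1 + delta*(idx + 0.5);  % reconstruct at the cell midpoint
noise = mean((x - xq).^2);       % quantization noise power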

1.3.5 Modulator/Demodulator

This component changes the source symbol stream into an analog signal. Its main task is to produce an analog signal suited to the channel's characteristics. The modulator takes the message, manipulates it, and outputs the best analog signal possible to represent the message and thus convey the required information. The demodulator does the opposite: it takes the analog signal and converts it back into a message, recovering the original information at the receiver side. It is the duty of the modulator and demodulator to transmit, receive, and extract the information with minimal BER (bit error rate).

Figure 1.3.5: Modulation Schemes

The term noise refers to unwanted electrical signals that are always present in electrical systems. The presence of noise superimposed on a signal tends to obscure or mask the signal. It limits the receiver's ability to make correct symbol decisions, and therefore limits the rate of information transmission. Noise arises from a variety of sources, both man-made and natural.

Good engineering design can minimize many noise effects through methods such as filtering, the choice of modulation, and the selection of a good receiver site. Thermal noise, caused by the thermal motion of electrons in all dissipative resistors, cannot be eliminated in this way; it is modeled as a zero-mean Gaussian random process.

CHAPTER 2: CHANNEL CODING

2.1 INTRODUCTION

Channel coding is a type of signal transformation designed to improve communication performance by enabling the transmitted signal to better withstand the effects of various channel impairments such as interference, noise, and fading. These methods of signal processing are tools for accomplishing desirable system tradeoffs. The use of large-scale integrated circuits and high-speed digital processing methods has made it possible to provide as much as 10 dB of performance improvement at much lower cost.

Shannon showed that by adding redundant bits to the source information we can make the channel error rate as small as desired without sacrificing the information transmission rate, provided that the information rate is less than the channel capacity. The speech codec reduces the average number of information bits per unit time, so that a minimum number of information bits has to be transmitted over the channel; the output of the speech codec is the input of the channel encoder. In mobile communication, radio link performance is improved by channel coding through the addition of redundant bits to the source information. At the transmitter, the channel code maps the digital information produced by a data source into a form that can be decoded by the receiver with minimum errors.

Figure 2.1: Architecture of channel encoder

The channel coding mechanism adds redundancy in a controlled manner, appending extra bits so that the receiver can detect and correct errors introduced by a noisy channel.

The resulting channel codes are classified as block codes and convolutional codes.

The minimum Hamming distance d_min of a code is used as a criterion for determining its error correction ability. The Hamming distance between two codewords is defined as the number of bits in which they differ, and the minimum Hamming distance is the smallest such value over all pairs of distinct codewords. If the minimum Hamming distance is d_min, then (d_min - 1) bit errors can be detected and floor((d_min - 1)/2) bit errors can be corrected. The raw data transmission rate is reduced by the additional coded bits. A small sketch of these computations follows.
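Below is a minimal MATLAB sketch that computes the pairwise Hamming distances of a code and the resulting detection/correction capability; the toy codeword list is an arbitrary example of ours, not a code from this text.

% Minimal sketch: minimum Hamming distance of a toy block code.
C = [0 0 0 0 0;      % each row is one codeword
     1 1 1 0 0;
     0 0 1 1 1;
     1 1 0 1 1];
dmin = inf;
for i = 1:size(C,1)
    for j = i+1:size(C,1)
        d = sum(C(i,:) ~= C(j,:));   % Hamming distance = differing bit count
        dmin = min(dmin, d);
    end
end
detectable  = dmin - 1;              % guaranteed detectable bit errors
correctable = floor((dmin-1)/2);     % guaranteed correctable bit errors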

Error correction codes are very useful. Without implementing these codes in a communication system, the delivered data would be very noisy and corrupted. Below is a graph showing the comparison between uncoded and coded data error performance.

Figure: Comparison of typical coded versus uncoded error performance.

2.3 Types of Channel codes

2.3.1 Block Codes

In coding theory, a block code is a type of channel coding. It adds redundancy to a message so that, at the receiver, one can decode with minimal (theoretically zero) errors, provided that the information rate (the amount of transported information in bits per second) does not exceed the channel capacity.

The main characterisation of a block code is that it is a fixed-length channel code (unlike source coding schemes such as Huffman coding, and unlike channel coding methods such as convolutional encoding). Typically, a block code takes a k-digit information word and transforms it into an n-digit codeword.

Reed-Solomon codes are block-based error correcting codes with a wide range of applications in digital communications and storage. Reed-Solomon codes are used to correct errors in many systems including:

1) Storage devices (including tape, Compact Disc, DVD, barcodes, etc.)

2) Wireless or mobile communications (including cellular telephones, microwave links, etc.)

3) Satellite communications

4) Digital television / DVB

5) High-speed modems such as ADSL, xDSL, etc.

A typical system is shown here:

Figure: Reed-Solomon Block Diagram

The Reed-Solomon encoder takes a block of digital data and adds extra "redundant" bits. Errors occur during transmission or storage for a number of reasons (for example noise or interference, scratches on a CD, etc). The Reed-Solomon decoder processes each block and attempts to correct errors and recover the original data. The number and type of errors that can be corrected depends on the characteristics of the Reed-Solomon code.

CHAPTER 3: CONVOLUTIONAL CODING

Convolutional codes are used in voiceband modems such as V.32, V.17, and V.34, in GSM mobile phones, and in satellite and military communication devices.

The idea is to make every codeword symbol the weighted sum of various input message symbols. This resembles the convolution used in linear time-invariant systems, where the output of the system is found from the input and the impulse response.

So in a convolutional encoder we generally find the output of the system by convolving the input bits with the states of the convolutional encoder's registers. Fundamentally, convolutional codes do not reduce noise much more than an equivalent block code; in most cases, they offer greater simplicity of implementation than a block code of the same power. The encoder is a simple circuit that contains state memory and some combining logic, normally XOR gates, while the decoder can be implemented in software or firmware.

The Viterbi algorithm is the most favoured algorithm used to decode convolutional codes. These codes are found to give good results in lower-noise environments.

Convolutional codes represent one method within the general class of channel codes. Channel codes, which are also called error-correction codes, permit reliable communication of an information sequence over a channel that adds noise, introduces bit errors, or otherwise distorts the transmitted signal. These codes have many applications, including deep-space communication and voiceband modems. Convolutional codes are commonly specified by the following three parameters, (n, k, m), with the constraint length L as a derived quantity:

n = number of output bits

k = number of input bits

m = number of memory registers

L = constraint length

The quantity k/n, called the code rate, is a measure of the efficiency of the code. Usually k and n range from 1 to 8, m ranges from 2 to 10, and the code rate ranges from 1/8 to 7/8, except for deep-space applications, where code rates as low as 1/100 or even lower have been employed.

Often the manufacturers of convolutional code chips specify the codes by the parameters (n, k, L). The quantity L is the constraint length of the code and is defined by L = k(m - 1).

The constraint length L stands for the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also often denoted by the letter K.

3.2.1 CONVOLUTIONAL ENCODING

Convolutional codes protect information by adding redundant bits, as any binary code does. A rate k/n convolutional encoder processes the input sequence of k-bit information symbols through one or more binary shift registers. The convolutional encoder computes each n-bit symbol (n > k) of the output sequence from linear operations on the present input symbol and the contents of the shift register(s).

Therefore, a rate k/n convolutional encoder processes a k-bit input symbol and computes an n-bit output symbol with every shift update. The figure below shows a nonrecursive convolutional encoder with rate 1/2.

Figure 3.1 Convolutional encoder with rate 1/2 and K = 3

For the encoder above, Table 3.1 shows the state transitions and the resulting output codeword sequence U for the message sequence m = 1 1 0 1 1 (two zeros are appended to flush the encoder).

Solution

Table 3.1

Input     Register    State at    State at     Branch word at
bit m_i   contents    time t_i    time t_i+1   time t_i (u1 u2)
-         0 0 0       0 0         0 0          - -
1         1 0 0       0 0         1 0          1 1
1         1 1 0       1 0         1 1          0 1
0         0 1 1       1 1         0 1          0 1
1         1 0 1       0 1         1 0          0 0
1         1 1 0       1 0         1 1          0 1
0         0 1 1       1 1         0 1          0 1
0         0 0 1       0 1         0 0          1 1

U = 11 01 01 00 01 01 11
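The following MATLAB fragment reproduces this table's output sequence with a two-stage shift register; it is a sketch of the Figure 3.1 encoder (g1 = 111, g2 = 101), with variable names of our own choosing.

% Minimal sketch: rate-1/2, K = 3 convolutional encoder of Figure 3.1.
m   = [1 1 0 1 1 0 0];   % message plus two flush zeros
reg = [0 0];             % register contents [m(i-1) m(i-2)]
U   = [];
for bit = m
    u1  = mod(bit + reg(1) + reg(2), 2);  % upper adder: taps 1, X, X^2
    u2  = mod(bit + reg(2), 2);           % lower adder: taps 1, X^2
    U   = [U u1 u2];                      % append this branch word
    reg = [bit reg(1)];                   % shift the new bit in
end
disp(reshape(U, 2, []).')   % rows are the branch words 11 01 01 00 01 01 11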

Sometimes the encoder connections are characterized by generator polynomials. We can represent a convolutional encoder with a set of n generator polynomials, one for each of the n modulo-2 adders. Each polynomial is of degree K-1 or less and describes the connection of the encoding shift register to that modulo-2 adder, in much the same way a connection vector does. The coefficient of each term in the (K-1)-degree polynomial is either 1 or 0, depending on whether a connection exists between the shift register stage and the modulo-2 adder. For the encoder example of Figure 3.1, we can write the generator polynomial g1(X) for the upper connections and g2(X) for the lower connections as follows.

g1(X) = 1 + X + X^2

g2(X) = 1 + X^2

The output sequence is found as follows:

U(X) = m(X)g1(X) interlaced with m(X)g2(X)

Let the message vector be m = 1 0 1; as a polynomial it is represented as m(X) = 1 + X^2.

Then the output polynomial U(X), or the output sequence U, of the Figure 3.1 encoder can be found for the input message m as follows:

m(X)g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4

m(X)g2(X) = (1 + X^2)(1 + X^2) = 1 + X^4

m(X)g1(X) = 1 + X + 0X^2 + X^3 + X^4

m(X)g2(X) = 1 + 0X + 0X^2 + 0X^3 + X^4

U(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4

U = 11 10 00 10 11
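The same interlacing can be sketched in MATLAB as modulo-2 polynomial multiplication, since multiplying polynomials is equivalent to convolving their coefficient vectors; the variable names are ours.

% Minimal sketch: encoding m = 101 by generator polynomial multiplication.
m  = [1 0 1];               % m(X) = 1 + X^2, lowest-order coefficient first
g1 = [1 1 1];               % g1(X) = 1 + X + X^2
g2 = [1 0 1];               % g2(X) = 1 + X^2
u1 = mod(conv(m, g1), 2);   % m(X)g1(X) -> coefficients 1 1 0 1 1
u2 = mod(conv(m, g2), 2);   % m(X)g2(X) -> coefficients 1 0 0 0 1
U  = reshape([u1; u2], 1, []);   % interlace the two streams: 11 10 00 10 11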

We have represented the encoder with polynomial generators, as is also done for describing cyclic codes.

Graphically, there are three ways in which we can look at the encoder to gain a better understanding of its operation. These are:

(a) State diagram

(b) Tree diagram

(c) Trellis diagram

3.2.2 STATE DIAGRAM

Convolutional encoders are finite-state machines, so the state diagram offers significant insight into their behavior. The states shown in the boxes of the diagram represent the possible contents of the rightmost K-1 stages of the register, and the paths between the states represent the output branch words resulting from such state transitions. The states of the register are designated a = 00, b = 10, c = 01, and d = 11.

There are only two transitions originating from each state, corresponding to the two possible input bits. Next to each path between states is written the output branch word associated with the state transition. In drawing the paths, we use the convention that a solid line denotes a path associated with an input bit zero, and a dashed line denotes a path associated with an input bit one. Notice that it is not possible to move from a given state to an arbitrary state in a single transition.

Figure 3.2 Encoder state diagram (rate 1/2, K = 3)

3.2.3 THE TREE DIAGRAM

The state diagram completely characterizes the encoder, but one cannot easily use it for tracking the encoder transitions as a function of time, since the diagram does not represent history. The tree diagram adds the dimension of time to the state diagram. The encoding procedure can be described by traversing the diagram from left to right at each successive input bit time, each tree branch describing an output branch word. The branching rule for finding a codeword sequence is as follows: if the input bit is a zero, its associated branch word is found by moving to the next rightmost branch in the upward direction; if the input bit is a one, its branch word is found by moving to the next rightmost branch in the downward direction. Assuming that the initial contents of the encoder are all zeros, the diagram shows that if the first input bit is a zero, the output branch word is 00, and if the first input bit is a one, the output branch word is 11. Similarly, if the first input bit is a one and the second input bit is a zero, the second output branch word is 10. Following this procedure, we see that the input sequence 1 1 0 1 1 traces the heavy line drawn on the tree. This path corresponds to the output codeword sequence 11 01 01 00 01.

3.3 PROPERTIES OF CONVOLUTIONAL CODES

3.3.1 FREE DISTANCE

The ultimate measure of a convolutional code's performance is the bit error rate (BER) or block error rate (BLER) of the code as a function of signal-to-noise ratio (SNR). However, free distance gives a good indication of convolutional code performance. The free distance of a convolutional code is the minimum distance (either Hamming or Euclidean) between two distinct valid output sequences. Unlike algebraic block codes, which are designed to have distance properties, good convolutional codes are identified by an exhaustive search over encoders with a given number of memory elements. Often free distance is the metric used in these searches. For simplicity, we will restrict the discussion of free distance to free Hamming distance. For BPSK, this restriction imposes no loss of generality since the Hamming and squared Euclidean free distances are related by a constant.

The set of distances from a codeword to each of its neighbors is the same for all codewords. Hence, the free distance is the distance from the all-zeros output sequence to its nearest-neighboring codeword. A Viterbi decoding operation with a special restriction efficiently performs this computation. Viterbi decoding is performed on the undistorted all-zeros received sequence, but the first trellis branch associated with the correct path is disallowed. Thus prevented from decoding the correct sequence, the Viterbi algorithm identifies the nearest-neighbor sequence. Since the received sequence is noiseless, the path metric associated with the decoded sequence is the distance between that sequence and the all-zeros sequence, which is the free distance.

3.3.2 CATASTROPHIC ENCODERS

A convolutional encoder is catastrophic if a finite number of errors in the output sequence can cause an infinite number of errors in the input sequence. With such an encoder, the consequences of a decoding error can be truly catastrophic. Catastrophic encoders are certainly undesirable, and they are never used in practice. In fact, they are easily avoided because they are not minimal encoders: a catastrophic encoder uses more memory elements than a minimal equivalent encoder, which is not catastrophic.

An encoder is catastrophic if and only if its state diagram has a loop with zero output weight and nonzero input weight. Catastrophic encoders still have a free distance as defined by the limit of the column distance, but this free distance is seldom equal to the minimum survivor path metric to the zero state. Usually, some of the survivor path metrics for nonzero states never rise above the minimum survivor path metric to the zero state. An additional stopping rule for the Viterbi decoding computation of free distance resolves this problem: If the column distance does not change for a number of updates equal to the number of states, the free distance is equal to that column distance.

Noncatastrophic encoders may also require this additional stopping rule if they have a nontrivial zero-output-weight loop. Such a loop does not force catastrophic behavior if it is also a zero-input-weight loop. This situation only occurs with feedback encoders, since feedforward encoders do not have loops with zero output weight and zero input weight except the trivial zero-state self-loop. In cases where this stopping criterion is required, the analytic decision depth of Anderson and Balachandran is not well defined.

However, a practical place to start simulating decision depths is the path length at which the Viterbi computation of free distance terminates. Because nontrivial zero-output-weight loops indicate a nonminimal encoder, their free distance is not often computed. However, there are circumstances where computation of the free distance is still interesting.

CHAPTER 4: VITERBI DECODING

4.1 VITERBI CONVOLUTIONAL DECODING ALGORITHM

The Viterbi decoding algorithm was discovered by Viterbi in 1967. The Viterbi algorithm performs maximum likelihood decoding, and by taking advantage of the special structure of the code trellis it also reduces the computational load. A benefit of Viterbi decoding is that the complexity of a Viterbi decoder is not a function of the number of symbols in the codeword sequence. The algorithm involves calculating a measure of distance, or similarity, between the received signal at time t_i and all the trellis paths entering each state at time t_i. Those trellis paths that could not possibly be candidates for the maximum likelihood choice are removed from consideration: when two paths enter the same state, the one having the best metric is selected, and that path is called the surviving path. This selection of surviving paths is performed for all the states, and the complexity of the decoder is reduced by this early rejection of unlikely paths. The decoder continues in this way, advancing deeper into the trellis and making decisions by eliminating the least likely paths. In fact, in 1969 Omura demonstrated that the Viterbi algorithm is maximum likelihood. The objective of selecting the optimum path can be expressed either as choosing the codeword with the maximum likelihood metric or as choosing the codeword with the minimum distance metric. A complete decoder along these lines is sketched below.
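As a concrete illustration, here is a minimal hard-decision Viterbi decoder in MATLAB for the rate-1/2, K = 3 code used throughout this report (g1 = 111, g2 = 101); the function name and table-driven structure are our own choices, not part of any toolbox.

% Minimal sketch: hard-decision Viterbi decoding with Hamming branch
% metrics and full traceback. Z is the received bit stream as a row
% vector (2 bits per branch); m_hat includes any flush bits.
function m_hat = viterbi_k3(Z)
    nBranch = numel(Z)/2;
    % State s encodes the register pair [m(i-1) m(i-2)] as 2*m(i-1)+m(i-2)+1.
    nxt = zeros(4,2); out = zeros(4,2,2);
    for s = 1:4
        b1 = floor((s-1)/2); b2 = mod(s-1,2);      % register contents
        for b = 0:1                                % candidate input bit
            nxt(s,b+1)   = 2*b + b1 + 1;           % next state holds [b b1]
            out(s,b+1,:) = [mod(b+b1+b2,2), mod(b+b2,2)];  % branch word
        end
    end
    pm  = [0 inf inf inf];                 % encoder starts in state 00
    pre = zeros(4,nBranch); inb = zeros(4,nBranch);   % traceback tables
    for t = 1:nBranch
        z = Z(2*t-1:2*t);
        newpm = inf(1,4);
        for s = 1:4
            if isinf(pm(s)), continue; end
            for b = 0:1
                d  = sum(z ~= squeeze(out(s,b+1,:)).');  % Hamming metric
                ns = nxt(s,b+1);
                if pm(s) + d < newpm(ns)             % add-compare-select
                    newpm(ns) = pm(s) + d;
                    pre(ns,t) = s; inb(ns,t) = b;
                end
            end
        end
        pm = newpm;
    end
    [~, s] = min(pm);                      % best final state
    m_hat = zeros(1,nBranch);
    for t = nBranch:-1:1                   % trace the surviving path back
        m_hat(t) = inb(s,t); s = pre(s,t);
    end
end

For the corrupted sequence of the next section, viterbi_k3([1 1 0 1 0 1 1 0 0 1]) should return the message 1 1 0 1 1, matching the worked example below despite the single bit error.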

4.2 EXAMPLE OF VITERBI CONVOLUTIONAL DECODING

A binary symmetric channel is assumed for simplicity; thus Hamming distance is a suitable distance measure. The encoder for this example is shown in Figure 4.1, and a similar trellis can be used to represent the decoder, as shown in Figure 4.2. We start at time t1 in the 00 state (flushing the encoder between messages provides the decoder with starting-state knowledge). Because in this example there are only two possible transitions leaving any state, not all branches need be shown at first; the full trellis structure starts after time t3. The basic idea behind the decoding procedure can best be demonstrated by examining the Figure 4.1 encoder trellis together with the Figure 4.2 decoder trellis. At each time interval, it is convenient for the decoder to label each trellis branch with the Hamming distance between the received code symbols and the branch word corresponding to the same branch in the encoder trellis. The example in Figure 4.2 shows a message sequence m, the corresponding codeword sequence U, and a noise-corrupted received sequence Z = 11 01 01 10 01 .... The branch words seen on the encoder trellis branches characterize the encoder and are known a priori to both the encoder and the decoder; these encoder branch words are the code symbols that would be expected to come from the encoder output as a result of each of the state transitions.

Figure 4.1 Convolutional encoder (rate 1/2, K = 3)

Figure 4.2 Decoder trellis diagram (rate 1/2, K=3)

The labels on the decoder trellis branches are accumulated by the decoder on the fly. That is, as the code symbols are received, each branch of the decoder trellis is labeled with a metric of similarity (Hamming distance) between the received code symbols and each of the branch words for that time interval. From the received sequence Z, we see that the code symbols received at time t1 are 11, as shown in Figure 4.2. In order to label the decoder branches at time t1 with the appropriate Hamming distance metric, we look at the encoder trellis. There we see that a state 00 -> 00 transition yields an output branch word of 00, but we received 11. Consequently, on the decoder trellis we label the 00 -> 00 transition with the Hamming distance between them, namely 2. Looking at the encoder trellis again, we see that a state 00 -> 10 transition yields an output branch word of 11, which corresponds exactly with the code symbols we received at t1. Therefore, on the decoder trellis, we label the state 00 -> 10 transition with a Hamming distance of 0.

So the metric entered on a decoder trellis branch describes the distance between what was received and what "should have been" received had the branch word associated with that branch been transmitted. In effect, these metrics describe a correlation-like measure between each candidate branch word and the received branch word. We continue labeling the decoder trellis branches in this way as the symbols are received at each time t_i. The decoding algorithm uses these Hamming distance metrics to find the most likely (minimum distance) path through the trellis. The foundation of Viterbi decoding is the following observation: if any two paths in the trellis merge to a single state, one of them can always be discarded in the search for an optimum path. For example, Figure 4.3 below shows two paths merging at time t5 to state 00. Let us define the cumulative Hamming path metric of a given path at time t_i as the sum of the branch Hamming distance metrics along that path up to time t_i. In Figure 4.3 the upper path has metric 4 and the lower has metric 1. The lower path, which enters the same state, has a lower metric, so the upper path cannot be a portion of the optimum path. This observation holds because of the Markov nature of the encoder state: the present state summarizes the encoder history in the sense that previous states cannot affect future output branches.

Figure 4.3 Path metrics for two merging paths

4.3 Decoder Implementation

In the decoding context, the transitions during any time interval can be grouped into 2^(v-1) disjoint cells, each cell depicting four of the possible transitions, where v is the encoder memory.

4.3.1 Add-Compare-Select Computation

Starting with the K = 3, two-cell example, Figure 4.4 below shows the logic unit that corresponds to cell 1. The logic executes the special-purpose computation called add-compare-select (ACS). The new state metric for state a is calculated by adding the previous-time state metric of state a to its branch metric and the previous-time state metric of state c to its branch metric; this results in two possible path metrics as candidates for the new state metric. These two candidates are compared in the logic unit of Figure 4.4, and the most likely (smallest distance) of the two path metrics is saved as the new state metric for state a. Also shown in Figure 4.4 is the cell-1 add-compare-select logic that outputs the new state metric and the new path history. This ACS process is also performed for the paths in the other cells. The oldest bit on the path with the smallest state metric forms the decoder output. A few-line sketch of this operation appears after the figure.

Figure 4.4 Logic units that implement the add-compare-select function corresponding to cell 1
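The ACS operation for a single state can be sketched in MATLAB as follows; the metric values are taken from the trellis example of the next subsection, and the variable names are ours.

% Minimal sketch: one add-compare-select (ACS) step for state a, whose
% trellis predecessors are states a and c.
Ta = 3; Tc = 2;        % previous-time state metrics of states a and c
da = 1; dc = 1;        % branch metrics into state a for this interval
[Ta_new, survivor] = min([Ta + da, Tc + dc]);   % add, then compare/select
% Ta_new = 3 becomes the new state metric; survivor = 2 says the path
% through state c wins, so its path history is extended by this decision.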

4.3.2 Add-Compare-Select as Seen on the Trellis

Consider the same example for describing Viterbi decoding: the codeword sequence was U = 1101010001, the message sequence was m = 11011, and the received sequence was Z = 1101011001. Figure 4.5 gives a picture of the decoding trellis diagram. Each branch is labeled with the Hamming distance between the received code symbols and the corresponding branch word from the encoder trellis. Additionally, the Figure 4.5 trellis indicates a value at each state x for each time from t1 to t6, which is the state metric T_x. We perform the add-compare-select (ACS) operation when there are two transitions entering a state, as there are for times t4 and later. For example, at time t4, the value of the state metric for state a is obtained by incrementing the state metric T_a = 3 at time t3 with the branch metric (equal to 1), yielding a candidate value of 4. Simultaneously, the state metric T_c = 2 at time t3 is incremented with the branch metric (equal to 1), yielding a candidate value of 3. The select operation of the ACS process selects the most likely (minimum distance) path metric as the new state metric; hence for state a at time t4, the new state metric is T_a' = 3. The winning path is shown with a heavy line, and the path that has been dropped is shown with a lighter line. On the trellis of Figure 4.5, observe the state metrics from left to right and verify that at each time, the value of each state metric is obtained by incrementing the connected state metric from the previous time along the winning path (heavy line) with the branch metric between them. At some point in the trellis (after a time interval of 4 to 5 times the constraint length), the oldest bits can be decoded. Looking at time t6 in Figure 4.5, for example, we see that the minimum-distance state metric has a value of 1. From this state d, the winning path can be traced back to time t1, and one can verify that the decoded message is the same as the original message, using the convention that dashed and solid lines represent binary ones and zeros respectively.

Figure 4.5 Add-compare-select computations in Viterbi decoding

4.4 Soft-Decision Viterbi Decoding

For a binary convolutional code system, the demodulator delivers two code symbols at a time to the decoder. For hard-decision (2-level) decoding, each pair of received code symbols can be depicted on a plane as one of the corners of a square, as shown in Figure 4.6a. The corners are labeled with the binary numbers (0, 0), (0, 1), (1, 0), and (1, 1), representing the four possible hard-decision values that the two code symbols might have. For 8-level soft-decision decoding, each pair of code symbols can similarly be represented on an equally spaced 8-level by 8-level plane, as a point from the set of 64 points shown in Figure 4.6b. In this soft-decision case, the demodulator no longer delivers firm decisions; it delivers quantized noisy signals (soft decisions).

The main difference between hard-decision and soft-decision Viterbi decoding is that the soft-decision algorithm cannot use a Hamming distance metric, because of that metric's limited resolution. A distance metric with the needed resolution is the Euclidean distance, and to facilitate its use, the binary numbers 1 and 0 are transformed to the octal numbers 7 and 0, respectively. This can be seen in Figure 4.6c, where the corners of the square have been relabeled accordingly; this allows us to use a pair of integers, each in the range 0 to 7, to describe any point in the 64-point set. Also shown in Figure 4.6c is the point (5, 4), representing an example of a pair of noisy code-symbol values that might stem from a demodulator. Imagine that the square in Figure 4.6c has coordinates x and y. Then, what is the Euclidean distance between the noisy point (5, 4) and the noiseless point (0, 0)? It is

((5 - 0)^2 + (4 - 0)^2)^(1/2) = (41)^(1/2)

Similarly, the Euclidean distance between the noisy point (5, 4) and the noiseless point (7, 7) is

((5 - 7)^2 + (4 - 7)^2)^(1/2) = (13)^(1/2)

Soft-decision Viterbi decoding largely proceeds in the same way as hard-decision decoding; the only difference is that Hamming distances are replaced by Euclidean distances. Figure 4.6d shows the first section of an encoding trellis, with the branch words transformed from binary to octal. Suppose that a pair of soft-decision code symbols with value (5, 4) arrives at the decoder during the first transition interval. Figure 4.6e shows the first section of the decoding trellis: the Euclidean distances computed above, (41)^(1/2) to the (0, 0) branch word and (13)^(1/2) to the (7, 7) branch word, are placed on the corresponding branches. The rest of the task, pruning the trellis in search of a common stem, proceeds in the same way as hard-decision decoding. Note that in a real convolutional decoding chip, the Euclidean distance is not actually used as the soft-decision metric; instead, a monotonic metric that has similar properties and is easier to implement is used. An example of such a metric is the squared Euclidean distance, in which case the square-root operation shown above is eliminated.
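The branch metric computation just described can be sketched in a few lines of MATLAB; the received pair (5, 4) and the octal branch words come from the example above, while the use of squared distances follows the remark about practical decoders.

% Minimal sketch: soft-decision branch metrics for one received pair.
r = [5 4];                      % noisy soft-decision pair from the demodulator
d2_00 = sum((r - [0 0]).^2);    % squared distance to branch word (0,0): 41
d2_77 = sum((r - [7 7]).^2);    % squared distance to branch word (7,7): 13
% The smaller metric (13) marks the (7,7) branch as the more likely one;
% squaring preserves the ordering, so no square root is needed.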

Figure 4.6 (a) Hard-decision plane (b) 8-level by 8-level soft-decision plane (c) Example of soft code symbols (d) Encoding trellis section (e) Decoding trellis section

CHAPTER 5: INTRODUCTION TO FIELD PROGRAMMABLE GATE ARRAYS

5.1 INTRODUCTION

Before programmable logic was invented, conventional logic circuits were built at the board level using standard components, or at the gate level in expensive application-specific integrated circuits. An FPGA is an integrated circuit that contains from about 64 to over 10,000 identical logic cells that can be viewed as standard components. Each logic cell can independently take on any one of a restricted set of personalities. A matrix of wires and programmable switches interconnects the individual cells. A user's design is implemented by specifying a simple logic function for each cell and selectively closing the switches in the interconnect matrix. The array of logic cells and interconnect forms a fabric of basic building blocks for logic circuits, and complex designs are created by combining these basic blocks.

5.2 FUNCTION OF LOGIC CELL

The logic cell architecture varies among different device families. Generally, each logic cell combines a small number of binary inputs into one or two outputs according to a Boolean logic function specified in the user program. In most families, the user is also given the choice of registering the combinatorial output of the cell, so that clocked logic can be implemented easily. The cell's combinatorial logic may be physically implemented as a small look-up table (LUT) memory or as a set of multiplexers and gates. LUT devices are a bit more flexible and provide more inputs per cell than multiplexer cells.

5.3 FIELD PROGRAMMABLE

Field programmable means that the FPGA's function is defined by the user's program rather than by the manufacturer. A typical integrated circuit performs a particular function defined at the time of manufacture. In contrast, the function of an FPGA is defined by a program written by someone other than the device manufacturer. The program is either "burned in" permanently or semi-permanently as part of a board assembly process, or is loaded from an external memory each time the device is powered up. This user programmability gives the user access to complex integrated designs without the high engineering costs associated with application-specific integrated circuits.

5.4 FPGA PROGRAMS

Defining all the switch connections and cell logic functions individually would be a daunting task. Fortunately, special software handles this job. The software translates the user's schematic diagrams or textual hardware description language code, then places and routes the translated design. Most software packages have hooks that let the user influence implementation, placement, and routing to obtain better performance and utilization of the device. Libraries of more complex function macros further simplify the design process by providing common circuits that are already optimized for speed or area.

CHAPTER 6: SIMULATION METHODOLOGY

6.1 MATLAB SIMULATION

6.1.1 CONVOLUTIONAL ENCODER & VITERBI DECODER

We have implemented the convolutional encoder and the Viterbi decoder as MATLAB source code. Our MATLAB code also compares our Viterbi decoder output with the built-in decoder output by comparing bit error rates. The first step in our project was writing MATLAB code to generate different codewords for different symbols using convolutional codes and then decode them, with injected errors, using the Viterbi decoder.

Figure 6.1: Block diagram of the MATLAB code

We take input from the user, which is coded by the convolutional encoder; here we have generated random bits. The coded data is then decoded by the Viterbi decoder. On the decoder side we have corrupted different bits by simply inverting them manually, in order to check what the bit error rate would be when different bits are corrupted. We then compared the efficiency of our decoder code against the built-in decoder function.

On the receiver side we used the Viterbi decoding algorithm to decode the transmitted signal. After these two steps (encoding and decoding) the original data is obtained, which contains errors if a low SNR is used. A sketch of this flow is given below.
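The comparison described above can be sketched as follows, assuming the Communications Toolbox functions poly2trellis, convenc, and vitdec are available; the message length, error position, and traceback depth are illustrative choices of ours.

% Minimal sketch: encode random bits, inject an error, decode, measure BER.
trellis = poly2trellis(3, [7 5]);   % K = 3; g1 = 111 (octal 7), g2 = 101 (octal 5)
msg  = randi([0 1], 1, 100);        % random information bits
code = convenc(msg, trellis);       % convolutional encoding
code(10) = ~code(10);               % manually invert one bit to corrupt it
dec  = vitdec(code, trellis, 15, 'trunc', 'hard');   % Viterbi decoding
ber  = mean(dec ~= msg);            % bit error rate after decoding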

6.2 VHDL SIMULATION

Our second step in this project was to write synthesizable VHDL code for the encoder and decoder. For this we used ModelSim. Here we implemented the same logic as in the MATLAB model.

6.3 FPGA

In the end we burned our code into the field programmable gate array.

CHAPTER 7: FUTURE ENHANCEMENTS

There are some limitations to the error correction capability of Viterbi decoding compared with newer techniques, so we would like to implement newer codes on the FPGA that have greater correction capabilities than the Viterbi algorithm.
