Transmission Methods In Industrial Telecommunication Systems Computer Science Essay


The purpose of this assignment report is to investigate error control coding, a method of coding used to check the correctness of a transmitted bit stream. A bit stream can be represented by the codeword of a symbol; this codeword is transmitted through a noisy channel, so errors can occur during transmission and damage the received data. There are several different error control mechanisms, and the linear block code is one of them. The first part of this report outlines how data are encoded, gives some important definitions (Hamming distance, hard and soft decoding), works through an example of error detection and correction, and creates a (5,2) Hamming code. The second part contains investigative and design work in Matlab for the (5,2) Hamming code. The example considered in this report shows that this method gives very good error detection and correction performance for digital transmission.


Introduction:

Error control coding enables efficient and reliable data transmission over noisy channels. Fig. 1 shows a block diagram of the communication process with error control. The encoder takes the message stream from the source and adds some redundant information. During transmission through the channel, noise may introduce errors into the data. When the signal is received, the decoder uses the added information to attempt to detect and correct any errors that occurred. Thus, the original message may be recovered free of transmission errors [1]. Shannon established theoretical bounds on the performance of such error control codes, and since that time coding theorists have striven, with success, to realise codes that approach these limits of performance. These capacity-achieving coding schemes, however, suffer from complexity in encoding and decoding.

Figure 1: block diagram of error control coding (source → encoder → channel with noise → decoder → destination)

The purpose of this report is to investigate error control coding, a method of coding used to check the correctness of the transmitted bit stream. The report is divided into two parts: the first part contains five questions, and the second part contains investigative and design work using Matlab.

Part 1:

Q1: Show a table to encode 3-bit words by even parity. Give the expression for respective word error rate probability.

The most common way to achieve single-bit error detection is by means of a parity bit. To encode 3-bit words by even parity, one extra bit (the parity bit, 1 or 0) is appended to the binary word. The parity bit is a redundant bit added to the message so that the total number of 1's transmitted is even. Table 1 below shows the 3-bit datawords and the corresponding even-parity codewords [2].

Dataword | Even-parity bit | Codeword
000 | 0 | 0000
001 | 1 | 0011
010 | 1 | 0101
011 | 0 | 0110
100 | 1 | 1001
101 | 0 | 1010
110 | 0 | 1100
111 | 1 | 1111

Table 1: Encoding 3-bit words by even parity

The message with the even parity bit is transmitted over the channel to the receiver; if the parity check at the receiver end records any change in the message that was sent, it means at least one bit has been changed [2].

Each dataword is represented by a codeword, as can be seen in table 1. During transmission errors occur randomly, and the probability of exactly i errors in an n-bit block is given by the binomial function [3]:

P(i, n) = C(n, i) · p^i · (1 − p)^(n − i)

where p is the channel bit error probability and C(n, i) = n!/(i!(n − i)!) is the binomial coefficient. For the 4-bit codewords, with i the number of bit errors, the expression for the word error rate probability P is:

P(i, 4) = C(4, i) · p^i · (1 − p)^(4 − i)
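As a check on table 1 and on the expression above, the following short Matlab sketch (base Matlab only; the bit error probability p = 0.01 is an assumed example value) lists the even-parity codewords and evaluates the probability of i errors in a 4-bit codeword:

% Even-parity encoding of all 3-bit datawords
datawords = dec2bin(0:7) - '0';                 % all 3-bit words as rows of a matrix
parity    = mod(sum(datawords, 2), 2);          % even-parity bit for each dataword
codewords = [datawords parity]                  % the 4-bit codewords of table 1

% Probability of exactly i errors in an n-bit codeword (binomial)
n = 4; p = 0.01;                                % p: assumed channel bit error probability
i = 0:n;
P = arrayfun(@(x) nchoosek(n, x), i) .* p.^i .* (1 - p).^(n - i)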

Q2: Explain by your own words

a) Hamming distance:

The Hamming distance is the number of bit positions in which two strings of the same length differ. In other words, it measures the minimum number of substitutions required to change one string into the other, or the number of errors that transformed one string into the other, and it is denoted by d(x, y).

For example, take the two words x = 000 and y = 011, each of length 3.

To calculate the Hamming distance between x and y, XOR the two strings and count the 1's in the result:

000 ⊕ 011 = 011

Hamming distance = 0+1+1 = 2
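A one-line Matlab check of this calculation (a minimal sketch using the same two words):

x = [0 0 0];
y = [0 1 1];
d = sum(xor(x, y))        % Hamming distance = 2, as above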

b) Hard decoding:

When binary coding is used, the modulator has only binary inputs. If binary demodulator output quantization is used, the decoder also has only binary inputs. In this case, the demodulator is said to make hard decisions, and decoding based on the hard decisions made by the demodulator is called "hard-decision decoding".

Hard-decision decoding is much easier to implement than soft-decision decoding. However, soft-decision decoding offers much better performance.

Figure 2: hard decision decoding

Hard decision follows the polarity of each received sample. For example, with a rate-1/3 repetition code (every bit becomes three bits, i.e. 0 becomes 000 and 1 becomes 111), suppose 000 is transmitted as the voltages −1, −1, −1 and 111 as +1, +1, +1. If the received voltages are −0.1, −0.1, +1.5, a hard-decision decoder slices each sample by polarity and then takes a majority vote, so it decides a 0 because the number of negative voltages is greater than the number of positive ones.
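The following minimal Matlab sketch reproduces this hard-decision example for the rate-1/3 repetition code (the received voltages are those assumed above):

r = [-0.1 -0.1 1.5];                % received voltages for one repeated bit
hard_bits = r > 0;                  % slice each sample by polarity -> [0 0 1]
decoded_hard = sum(hard_bits) > 1   % majority vote -> 0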

c) Soft-decision decoding:

If the output of the demodulator consists of more than two quantization levels or is left unquantized, the demodulator is said to make soft decisions. Decoding based on the soft decisions made by the demodulator is called soft-decision decoding.

Figure 3: soft decision decoding

A soft-decision decoder instead sums the received voltages: the overall voltage is +1.3, which is closer to the +3 volts of 111 than to the −3 volts of 000, so it decides a 1. Both approaches use maximum-likelihood decision making, but in hard decision the decoder treats every bit (of the three repeated bits) as an independent quantity, without taking any information from the demodulator about the other two bits; that is not the case in soft decision.

Soft decision is of course more sophisticated and provides better performance, but it also costs more to implement, so there is a trade-off between cost and performance.

The difference between hard-decision and soft-decision decoding is as follows:

In hard-decision decoding, the received codeword is compared with all possible codewords and the codeword which gives the minimum Hamming distance is selected.

In soft-decision decoding, the received codeword is compared with all possible codewords and the codeword which gives the minimum Euclidean distance is selected. Thus soft-decision decoding improves the decision-making process by supplying additional reliability information (the calculated Euclidean distance or the calculated log-likelihood ratio).
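As a sketch of the soft-decision (minimum Euclidean distance) rule applied to the repetition-code example above, the received vector is compared with the two candidate transmitted waveforms; note that it decides a 1, whereas the hard decision above decided a 0:

r  = [-0.1 -0.1 1.5];              % received voltages
s0 = [-1 -1 -1];                   % waveform for codeword 000
s1 = [ 1  1  1];                   % waveform for codeword 111
d0 = norm(r - s0);                 % Euclidean distance to 000 (about 2.81)
d1 = norm(r - s1);                 % Euclidean distance to 111 (about 1.63)
decoded_soft = d1 < d0             % 1: the received vector is closer to 111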

Q3. A block code consists of the following codes: 10011, 11101, 01110, and 00000.

a) How many errors can be detected/corrected by this code?

b) Is this a linear code?

Based on the Hamming distance rule we can calculate the error detection and error correction capabilities of the given codewords. As long as the received word is not equal to a valid codeword we can detect errors, and the ability to detect errors is guaranteed as long as the number of errors is less than the minimum Hamming distance. The number of errors that can be detected for a minimum Hamming distance d_min is given by [4]:

detectable errors = d_min − 1

To calculate how many errors can be detected in this block code (10011, 11101, 01110, and 00000), first find the Hamming distance between each pair of codewords as follows:

d(10011, 11101) = 3, d(10011, 01110) = 4, d(10011, 00000) = 3,

d(11101, 01110) = 3, d(11101, 00000) = 4, d(01110, 00000) = 3.

Two errors can therefore be detected in this block code:

minimum Hamming distance d_min = 3

detectable errors = d_min − 1 = 3 − 1 = 2

The number of errors that can be corrected in this block code is given by the following equation:

t = floor((d_min − 1)/2)

t = floor((3 − 1)/2) = 1

For a block code to be a linear block code it must satisfy the following two conditions:

In a linear block code, XOR-ing any two valid codewords creates another existing codeword.

The minimum Hamming distance of a linear block code is equal to the weight of the minimum-weight non-zero codeword.

Both conditions hold here: the exclusive OR (XOR) of any two of the codewords is another codeword (for example, the XOR of the second and third codewords, 11101 ⊕ 01110 = 10011, is the first codeword), and the minimum weight of a non-zero codeword is 3, which equals the minimum Hamming distance. Therefore this block code is a linear block code [5].
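These two checks can also be made in Matlab; the sketch below verifies that the set of codewords is closed under XOR and computes the minimum Hamming distance of the code in Q3:

C = [1 0 0 1 1; 1 1 1 0 1; 0 1 1 1 0; 0 0 0 0 0];   % the given codewords
linear = true;
dmin = size(C, 2);
for i = 1:size(C, 1)
    for j = 1:size(C, 1)
        s = mod(C(i,:) + C(j,:), 2);                 % XOR of two codewords
        linear = linear && ismember(s, C, 'rows');   % must still be a codeword
        if i ~= j
            dmin = min(dmin, sum(xor(C(i,:), C(j,:))));
        end
    end
end
linear   % 1: the code is closed under XOR, so it is linear
dmin     % 3: hence 2 errors detectable and 1 error correctable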

Q4. What kind of channels interleaving should be used? Why?

Interleaving is used in multipath fading channels because of the bursty errors caused by such channels: the time-variant nature of the channel makes the level of the received signal fall below the noise level, and a single fade can last for the duration of several transmitted symbols [6]. Interleaving is an effective way to transform the bursty channel into a channel with random errors. The interleaver reorders the codeword symbols before they are transmitted through the bursty channel; at the receiver the symbols are put back in their original order by the deinterleaver and then passed to the decoder. In this way interleaving and deinterleaving make the errors appear random within each codeword [6].
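A minimal Matlab sketch of a block interleaver (written by rows, read by columns; the 3-by-4 size and the data are arbitrary example values) shows how deinterleaving restores the original order while a burst of adjacent channel errors is spread apart:

data = 1:12;                                   % e.g. 12 coded symbols
rows = 3; cols = 4;                            % interleaver dimensions
interleaved   = reshape(reshape(data, cols, rows).', 1, []);
deinterleaved = reshape(reshape(interleaved, rows, cols).', 1, []);
isequal(deinterleaved, data)                   % 1: original order restored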

Q5. Create (5, 2) Hamming code, determine its code words and determine based on them its Hamming distance thus proving it is indeed a Hamming code.

For a linear block code, the encoding can be represented in terms of matrix multiplication: the dataword x is encoded into the codeword c by using c = x · G, where G is the generator matrix [7].

A code is called systematic if each codeword consists of the k original information bits plus n − k check bits. The generator matrix of a systematic code can be expressed as:

G = [ I P ]

where I is a k × k identity matrix and P is a k × (n − k) parity submatrix.

Figure 4: the generator equation, showing the message vector m multiplied by the generator matrix G to give the code vector c

Having defined the basic parameters for encoding with matrices, the (5,2) Hamming code can be created by means of a generator matrix. From the information given in the question:

dataword length k = 2, codeword length n = 5, and therefore n − k = 3 additional (parity) bits.

The parity submatrix used here (its elements are chosen so that each parity bit checks a different subset of the information bits [8]) is:

P = [ 1 1 0
      0 1 1 ]

so the generator matrix G = [ I P ] becomes:

G = [ 1 0 1 1 0
      0 1 0 1 1 ]

Here, the first two columns (the identity submatrix) result in the dataword appearing as the first two bits of the codeword, while the last three columns form the parity bits, each obtained by XOR-ing a different subset of the information bits [9].

Now multiply the message word x (the dataword) by the generator matrix G to get the code vector c (the codeword). The code vector takes the form:

c = x · G (mod 2)

Applying this equation to the dataword x = 00 gives the codeword c = 00000, and similarly for the other datawords.

The table below shows the (5,2) Hamming code.

Dataword x (message) | Codeword c (code vector)
00 | 00000
01 | 01011
10 | 10110
11 | 11101

Table 2: Encoder for the (5,2) Hamming code

The Hamming distances between the codewords are as follows:

d(00000, 01011) = 3, d(00000, 10110) = 3, d(00000, 11101) = 4,
d(01011, 10110) = 4, d(01011, 11101) = 3, d(10110, 11101) = 3.

The minimum Hamming distance is therefore d_min = 3.

It is indeed a Hamming code because the minimum Hamming distance is 3 (so single errors can be corrected) and the code contains the all-zero vector.
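The minimum distance can be confirmed with a short Matlab sketch over the codewords of table 2 (for a linear code it equals the smallest weight of a non-zero codeword):

C = [0 0 0 0 0; 0 1 0 1 1; 1 0 1 1 0; 1 1 1 0 1];   % codewords of table 2
w = sum(C(2:end, :), 2)                              % weights of the non-zero codewords: 3 3 4
dmin = min(w)                                        % minimum Hamming distance = 3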

Part 2

Matlab assignments

Investigate functions available in Matlab for GF (2) calculus and repeat problem Q.5 by using Matlab.

The linear space over the field of two elements GF(2), consisting of all n-tuples of elements from GF(2), is denoted by H(n, 2) and is known as the binary Hamming space of length n. The simplest prime field uses modulo-2 arithmetic and is defined by the following addition and multiplication tables:

+ | 0 1        · | 0 1
0 | 0 1        0 | 0 0
1 | 1 0        1 | 0 1

where 0 and 1 are the additive and multiplicative identity elements respectively; these elements must satisfy the field postulates [10].
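These two tables can be generated in Matlab with ordinary modulo-2 arithmetic, as in the sketch below; the Communications Toolbox also provides a gf() type for Galois-field arrays, but base Matlab is sufficient here:

[a, b] = meshgrid(0:1, 0:1);        % all pairs of GF(2) elements
gf2_add = mod(a + b, 2)             % addition table (XOR): 1 + 1 = 0
gf2_mul = mod(a .* b, 2)            % multiplication table (AND): only 1 * 1 = 1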

Matlab code

>> clear

n = 5 %# codeword bits per block

k = 2 %# message bits per block

A = [1 1 0; 0 1 1]; %Parity submatrix

G = [ eye(k) A ] %Generator matrix

H = [ A' eye(n-k) ] %Parity-check matrix

% ENCODER%

msg = [0 0; 0 1;1 0;1 1] %Message block vector

code = mod(msg*G,2) %Encode message

The output of Matlab code

n =

5

k =

2

G =

1 0 1 1 0

0 1 0 1 1

H =

1 0 1 0 0

1 1 0 1 0

0 1 0 0 1

msg =

0 0

0 1

1 0

1 1

code =

0 0 0 0 0

0 1 0 1 1

1 0 1 1 0

1 1 1 0 1

Design syndrome decoder for Q.5 by using Matlab.

The syndrome is an (n − k)-tuple that has a one-to-one correspondence with the correctable error patterns. The syndrome depends only on the error pattern and is independent of the transmitted codeword. Most codes do not use all of the redundancy that has been added for error correction; the only two families known to do this are the Hamming and Golay (23, 12) codes, which are called perfect codes [7].

For a linear block code with generator matrix G = [ I P ], the parity-check matrix is H = [ P^T I ], where P^T is the transpose of the parity submatrix and I is an (n − k) × (n − k) identity matrix [7].

P^T is formed by interchanging rows with columns: the first row becomes the first column and the first column becomes the first row, the second row becomes the second column and the second column becomes the second row [9].

The parity-check matrix H is now formed by appending the identity matrix to P^T as follows:

H = [ 1 0 1 0 0
      1 1 0 1 0
      0 1 0 0 1 ]

The syndrome is the received sequence multiplied by the transposed parity-check matrix:

s = r · H^T

The received codeword can be represented by the transmitted codeword plus a possible error vector, with modulo-2 addition:

r = c ⊕ e

Substituting this expression for r into the equation for s gives:

s = (c ⊕ e) · H^T = c · H^T ⊕ e · H^T

The product of any valid transmitted codeword with the transposed parity-check matrix is zero, c · H^T = 0, therefore:

s = e · H^T

If we take the codeword [0 1 0 1 1]:

s = [0 1 0 1 1] · H^T = [0 0 0]

The syndrome for this codeword is 0 0 0, which means that no error occurred and it is a valid codeword. Now assume that an error occurred in the first bit, so that [1 1 0 1 1] is received instead of [0 1 0 1 1]:

s = [1 1 0 1 1] · H^T = [1 1 0]

Having computed the syndrome of the received codeword, the look-up table for the (5,2) Hamming code (table 3) gives the corresponding error pattern e, and the error in the received codeword can then easily be corrected as follows:

c = r ⊕ e = [1 1 0 1 1] ⊕ [1 0 0 0 0] = [0 1 0 1 1]

where c is the original codeword, r is the received codeword and e is the error pattern. The result is a valid (transmitted) codeword. Computing the syndrome of each correctable error pattern in the same way sets up the syndrome decoder in the table below [5]:

Syndrome | Error Pattern
0 0 0 | 0 0 0 0 0
1 1 0 | 1 0 0 0 0
0 1 1 | 0 1 0 0 0
1 0 0 | 0 0 1 0 0
0 1 0 | 0 0 0 1 0
0 0 1 | 0 0 0 0 1

Table 3: Syndrome decoder for the (5,2) Hamming code
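Table 3 can also be generated programmatically; the following Matlab sketch multiplies each correctable error pattern (no error, or a single-bit error) by the transposed parity-check matrix to obtain its syndrome:

n = 5; k = 2;
A = [1 1 0; 0 1 1];                 % parity submatrix
H = [A' eye(n-k)];                  % parity-check matrix
E = [zeros(1, n); eye(n)];          % correctable error patterns: none, then single-bit
S = mod(E * H', 2)                  % one syndrome row per error pattern (table 3)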

Matlab code:

>> clear

n = 5 %# codeword bits per block

k = 2 %# message bits per block

A = [ 1 1 0;0 1 1]; %Parity submatrix

G = [ eye(k) A ] %Generator matrix

H = [ A' eye(n-k) ] %Parity-check matrix

% ENCODER%

msg = [0 1] %Message block vector

code = mod(msg*G,2) %Encode message

% CHANNEL ERROR (add one error to code)%

code(1)= ~code(1);

% code(2)= ~code(2);

%code(3)= ~code(3);

%code(4)= ~code(4);

recd = code %Received codeword with error

% DECODER%

syndrome = mod(recd * H',2)

M=[0 0 0]

%if mo == syndrome

% disp(['received message is correct', syndrome]);

%end

%Find position of the error in codeword (index)

find = 0;

for ii = 1:n

if ~find

errvect = zeros(1,n);

errvect(ii) = 1;

search = mod(errvect * H',2);

if search == syndrome

find = 1;

index = ii;

end

end

end

if M == syndrome

disp('received message is correct');

else

disp(['Position of error in codeword=',num2str(index)]);

correctedcode = recd;

correctedcode(index) = mod(recd(index)+1,2) %Corrected codeword

%Strip off parity bits

msg_decoded=correctedcode

msg_decoded=msg_decoded(1:k)

end

The output of Matlab code

n =

5

k =

2

G =

1 0 1 1 0

0 1 0 1 1

H =

1 0 1 0 0

1 1 0 1 0

0 1 0 0 1

msg =

0 1

code =

0 1 0 1 1

recd =

1 1 0 1 1

syndrome =

1 1 0

M =

0 0 0

Position of error in codeword=1

correctedcode =

0 1 0 1 1

msg_decoded =

0 1 0 1 1

msg_decoded =

0 1


Conclusion:

The purpose of this assignment report was to investigate a method of coding used to check the correctness of the transmitted bit stream: adding extra bits to the data so that errors occurring during transmission through a noisy channel can be detected and corrected. The Hamming code is an error control method that allows correction of single-bit errors. In the (5,2) Hamming code, a 2-bit message is encoded into 5 bits, so there are three additional (error control) bits per codeword; the example in this report showed the ability to detect 2 errors but to correct only 1 error. Error control coding gives very good performance in industrial communication, and it is an important part of digital communication, since errors are caused by interference and noise.
