Building Blocks Of Communication System Computer Science Essay

The objective of this technical report is to analyse modern digital communication. In the first part of the report a communication system model is discussed. Data compression is one of the most challenging problems in today's communication systems: algorithms and techniques are being developed to compress data to a much smaller size so that more information can be carried over the network at higher speed and with less bandwidth. Different types of algorithms are used for data and voice compression. This report discusses one such data compression technique, Lempel Ziv Welch (LZW) encoding. LZW encoding is primarily used for data compression; this report describes the LZW technique itself and then considers how it can be applied to the compression of voice. My objective in this technical report is to present an idea for voice compression using the LZW technique.

Communication System Model

Introduction:-


In this part the communication system model will be discussed. First there will be a quick overview of the complete system, and then the different blocks of the communication system are discussed one by one. Before I begin to describe the communication system and its basic building blocks, it is important to understand why the wireless personal communication industry is moving from traditional analog FM technology to digital technology. There are many advantages to migrating towards digital technologies, including quality of service, increased capacity, and privacy. Digital systems provide a higher quality of service than traditional analog systems. A digital communication system transmits its information in discrete units, namely 1 and -1 (or 1 and 0), so the source information must be converted to discrete units as well. In a voice communication system the source information is human speech; speech is inherently analog and must be converted to digital form before it can be transmitted via the digital system. The coding of the source information, or source coding, can therefore use speech compression to reduce the number of bits necessary to represent the speech. The IS-95 CDMA system, for example, uses a vocoder (voice coder) to convert human speech into a 9.6 kbps digital stream, which increases the capacity of the system and is another reason for migrating towards digital technology. Privacy is a further issue in any communication system: a digital communication system provides a ready platform on which encryption techniques may be used to safeguard the information transmitted over the air. For all these reasons there is a need to migrate towards digital systems.

Overview of Communication System:-

The information source, such as human speech, which is inherently analog, is first converted to digital form by an A/D converter. The channel encode function then encodes the digital information for the purpose of combating the various degrading effects of the channel. The modulate function converts the baseband signal to a passband signal, or waveform, that can be sent by the transmitter. The AWGN block adds additive white Gaussian noise to the passband waveform, and the Rayleigh block models the multipath fading that introduces random and burst errors into the communication system. On the receiving side the passband waveform is intercepted by the receiver. The signal is first demodulated from RF to baseband. The channel decode block then attempts to correct the errors that have been introduced by the channel, and the D/A or source decoding block converts the baseband information back to analog speech.

Fig 1. Principal Components of a Digital Communication System.
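To make the role of the AWGN block concrete, the following short Python sketch adds white Gaussian noise to a baseband BPSK waveform at a chosen Eb/No. The bit values, symbol mapping and Eb/No figure are assumptions chosen purely for illustration; they are not parameters of any particular system described in this report.

```python
import numpy as np

def add_awgn(signal, ebno_db, bits_per_symbol=1):
    """Add white Gaussian noise to real-valued symbols at a given Eb/No (dB)."""
    ebno = 10 ** (ebno_db / 10.0)
    es = np.mean(signal ** 2)           # measured energy per symbol
    eb = es / bits_per_symbol           # energy per bit
    n0 = eb / ebno                      # noise power spectral density
    sigma = np.sqrt(n0 / 2.0)           # noise std. dev. per real dimension
    return signal + sigma * np.random.randn(len(signal))

# Illustrative BPSK symbols (+1/-1) passed through the AWGN block.
bits = np.random.randint(0, 2, 20)
tx = 1.0 - 2.0 * bits                   # map 0 -> +1, 1 -> -1
rx = add_awgn(tx, ebno_db=6.0)
decisions = (rx < 0).astype(int)        # hard-decision demodulation
print("bit errors:", int(np.sum(decisions != bits)))
```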

Building Blocks of Communication System:-

In this section the main focus will be on the separate blocks of the communication system. Each block in the diagram carries out a specific function in the overall scheme of moving information from the transmitter to the receiver.

Source Coding:-

Source coding schemes for speech use what is called "waveform coding", where the goal is to replicate the waveform of the source information. The coding of source information, or waveform coding, can use speech compression to reduce the number of bits necessary to represent the speech. PCM is not feasible in wireless applications because only limited bandwidth is available: transmitting 64 kbps of information over the air demands too much bandwidth. Therefore an attractive coding technique is needed to represent the source information (human speech) using less bandwidth. A vocoder offers an attractive solution: it exploits the characteristics of human speech and uses fewer bits to represent and replicate it.
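The bandwidth argument can be checked with a line of arithmetic. Standard PCM telephony samples speech at 8 kHz with 8 bits per sample, which gives the 64 kbps figure quoted above, while the IS-95 vocoder mentioned earlier produces 9.6 kbps; the snippet below simply reproduces that comparison.

```python
sample_rate_hz = 8000                 # standard telephony sampling rate
bits_per_sample = 8                   # 8-bit PCM
pcm_rate_bps = sample_rate_hz * bits_per_sample   # 64,000 bps

vocoder_rate_bps = 9600               # IS-95 full-rate vocoder output

print(f"PCM:     {pcm_rate_bps / 1000:.1f} kbps")
print(f"Vocoder: {vocoder_rate_bps / 1000:.1f} kbps")
print(f"The vocoder needs about {pcm_rate_bps / vocoder_rate_bps:.1f} times fewer bits")
```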

Channel Coding:-


After the source information is coded into digital form, redundancy needs to be added to the digital baseband signal. Redundancy is included to create noise immunity in the signal. This is done to improve the performance of the communication system by enabling the signal to better withstand the effects of channel impairments such as noise and fading. The aim of channel coding is, for a given desired probability of error, to reduce the required ratio of energy per bit to noise density (Eb/No), or alternatively, for an achievable Eb/No, to reduce the probability of bit error. The cost of this goal is more bandwidth, or more redundant bits that the system has to transmit. In this section I deal specifically with error correcting codes, which, when applied to channel coding, improve the performance of the system. The purpose is to add extra bits to the information bits so that errors may be found and corrected at the receiver. For example, the simplest error correcting code is to repeat the information bits. One can simply repeat each bit three times (i.e. 111 for 1). In this way we improve the chance that the receiver correctly receives a 1 in case any one of the transmitted bits is flipped to 0 during the transmission process. In this case the receiver uses majority decoding: namely, the receiver will only decide a 1 if a majority of the three received bits are 1s. This code is known as a (3,1) code.

A code is sometimes described by its rate. The rate R of a code is defined as R = k/n, where k is the number of information bits and n is the number of coded bits; the (3,1) repetition code above therefore has rate 1/3.
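As a minimal sketch of the (3,1) repetition code and majority decoding just described, the fragment below repeats each information bit three times and decodes each block by majority vote. It illustrates the idea only; practical systems use far stronger codes.

```python
def repetition_encode(bits, n=3):
    """(n,1) repetition code with rate R = k/n = 1/n."""
    return [b for b in bits for _ in range(n)]

def repetition_decode(coded, n=3):
    """Majority decoding: decide 1 only if most of the n received copies are 1."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

info = [1, 0, 1]
tx = repetition_encode(info)       # [1, 1, 1, 0, 0, 0, 1, 1, 1]
tx[1] = 0                          # one bit of the first block is flipped in the channel
print(repetition_decode(tx))       # [1, 0, 1] -- the single error is corrected
```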

There are two major classes of error correcting codes: block codes and convolutional codes. Block codes, as the name implies, code an information sequence one block at a time. Convolutional codes, on the other hand, have memory; the memory depends on the constraint length K of the convolutional code.

Linear Block Codes:-

Linear block codes are a class of parity check codes that can be characterized by the (n, k) notation, where k is the length of the information sequence and n is the length of the coded sequence. The encoder transforms a block of k message digits (a message vector) into a longer block of n code word digits (a code vector) constructed from a given alphabet of elements. When the alphabet consists of two elements (0 and 1), the code is a binary code comprising binary digits (bits).

The k-bit messages form 2^k distinct message sequences, referred to as k-tuples (sequences of k digits). The n-bit blocks can form as many as 2^n distinct sequences, referred to as n-tuples. The encoding procedure assigns to each of the 2^k message k-tuples one of the 2^n n-tuples. A block code represents a one-to-one assignment, whereby the 2^k message k-tuples are uniquely mapped into a new set of 2^k code word n-tuples; the mapping can be accomplished via a look-up table. For linear codes, the mapping transformation is, of course, linear.
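The mapping from 2^k message k-tuples to 2^k code word n-tuples can be made concrete with a small example. The sketch below uses a systematic (7,4) Hamming-style generator matrix, chosen only for illustration (it is not specified in this report), and builds the look-up table by multiplying each message vector by G modulo 2.

```python
import itertools

# Systematic generator matrix G = [I | P] for an illustrative (7,4) binary code.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(message):
    """Map a k-tuple to an n-tuple: code word = message x G (modulo 2)."""
    n = len(G[0])
    return tuple(sum(m * G[row][col] for row, m in enumerate(message)) % 2
                 for col in range(n))

# Look-up table: 2^k = 16 messages mapped to 16 code words out of 2^n = 128 n-tuples.
lookup = {msg: encode(msg) for msg in itertools.product([0, 1], repeat=4)}
print(lookup[(1, 0, 1, 1)])        # (1, 0, 1, 1, 0, 1, 0)
```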

Convolutional Codes:-

A convolutional code is a type of error-correcting code in which (a) each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m), and (b) the transformation is a function of the last k information symbols, where k is the constraint length of the code. Convolutional codes are often used to improve the performance of digital radio, mobile phones, satellite links, and Bluetooth implementations.

To convolutionally encode data, start with k memory registers, each holding 1 input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder has n modulo-2 adders and n generator polynomials, one for each adder (see figure below). An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits. Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m-1) and wait for the next input bit. If there are no remaining input bits, the encoder continues to output bits until all registers have returned to the zero state. The figure below is a rate 1/3 (m/n) encoder with constraint length (k) of 3. The generator polynomials are G1 = (1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, the output bits are calculated (modulo 2) as follows (a code sketch of this encoder is given after the figure):


n1 = m1 + m0 + m-1

n2 = m0 + m-1

n3 = m1 + m-1.

Convolutional Encoder.
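The following is a minimal sketch of the rate-1/3, constraint-length-3 encoder described above, using the stated generator polynomials G1 = (1,1,1), G2 = (0,1,1) and G3 = (1,0,1). The register names follow the equations for n1, n2 and n3; feeding two trailing zeros to return the registers to the all-zero state is my reading of the flushing step.

```python
def conv_encode(bits):
    """Rate 1/3, K = 3 convolutional encoder with G1=(1,1,1), G2=(0,1,1), G3=(1,0,1)."""
    m0, m_1 = 0, 0                      # memory registers, initially zero
    out = []
    for m1 in list(bits) + [0, 0]:      # two trailing zeros flush the registers
        n1 = (m1 + m0 + m_1) % 2        # adder for G1 = (1,1,1)
        n2 = (m0 + m_1) % 2             # adder for G2 = (0,1,1)
        n3 = (m1 + m_1) % 2             # adder for G3 = (1,0,1)
        out += [n1, n2, n3]
        m_1, m0 = m0, m1                # shift register contents to the right
    return out

print(conv_encode([1, 0, 1, 1]))        # 3 output bits per input bit, plus the flush
```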

Interleaving:-

A channel that has memory is one that exhibits mutually dependent signal transmission impairments. A channel that exhibits multipath fading, where the signal arrives at the receiver over two or more paths of different lengths, is an example of a channel with memory. The effect is that the signals can arrive out of phase with each other, and the cumulative received signal is distorted. Under the assumption that the channel has memory, the errors can no longer be characterized as single randomly distributed bit errors whose occurrences are independent from bit to bit. Most block and convolutional codes are designed to combat random independent errors. Coding techniques for channels with memory have been proposed. One technique, which only requires knowledge of the duration or span of the channel memory, not its exact statistical characterization, is the use of time diversity, or interleaving.

Interleaving the coded message before transmission and deinterleaving it after reception causes bursts of channel errors to be spread out in time and thus to be handled by the decoder as if they were random errors. The interleaver shuffles the code symbols over the span of several block lengths (for block codes) or several constraint lengths (for convolutional codes). An example of an interleaver is given below; a short code sketch of such a block interleaver follows the example.

Original Message

"ALI ARE YOU SURE YOU ARE GOING WITH US".

Interleaver Matrix

A L I A R E

Y O U S U R

E Y O U A R

E G O I N G

W I T H U S

Interleaved Message

"AYEEWLOYGIIUOOTASUIHRUANUERRGS"

Interleaved Message with Burst Errors

"AYEEWLOYGIIUOOTASUIHRUANUERRGS".

Reconstructed Message with Random Errors.

"ALI ARE YOU SURE YOU ARE GOING WITH US".

Modulation:-

In most communication media, only a fixed range of frequencies is available for transmission. One way to communicate a message signal whose frequency spectrum does not fall within that fixed frequency range, or one that is otherwise unsuitable for the channel, is to alter a transmittable signal according to the information in the message signal. This alteration is called modulation, and it is the modulated signal that is transmitted. The receiver then recovers the original signal through a process called demodulation. The digital bit stream therefore has to be modulated onto an RF carrier in order for it to be transmitted. There is a difference between analog and digital modulation techniques. In analog modulation the information is contained in the continuous waveform shape of the signal. Digital modulation schemes, on the other hand, are used to transmit discrete units of information called symbols, and the information may be contained in the amplitude (e.g. amplitude shift keying) or in the amplitude and phase (e.g. quadrature amplitude modulation) of the signal.
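As a small illustration of a digital modulation scheme, the sketch below maps bit pairs onto Gray-coded QPSK symbols, so that the information is carried in the phase of the carrier. QPSK is chosen here purely as an example and is not claimed to be the modulation used by any particular system in this report.

```python
import numpy as np

# Gray-coded QPSK: each pair of bits selects one of four carrier phases.
QPSK_MAP = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to complex baseband QPSK symbols."""
    assert len(bits) % 2 == 0
    return [QPSK_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def qpsk_demodulate(symbols):
    """Nearest-neighbour (minimum-distance) decision back to bit pairs."""
    bits = []
    for s in symbols:
        pair = min(QPSK_MAP, key=lambda p: abs(s - QPSK_MAP[p]))
        bits.extend(pair)
    return bits

tx_bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = qpsk_modulate(tx_bits)
print(qpsk_demodulate(symbols) == tx_bits)   # True: noiseless round trip
```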

The receiver performs the reverse of these steps after receiving the signal: it first demodulates the signal, then de-interleaves the symbols with a convolutional de-interleaver and removes the errors using Viterbi decoding.

Types of Compression:-

Lossy data compression: "lossy" compression is a data encoding method which discards (loses) some of the data in order to achieve its goal, with the result that decompressing the data yields content that is different from the original, though similar enough to be useful in some way. Lossy compression is most commonly used to compress multimedia data (audio, video, still images), especially in applications such as streaming media and internet telephony.

Lossless data compression: Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows an approximation of the original data to be reconstructed, in exchange for better compression rates.

Lempel Ziv Welch (LZW):-

LZW compression is the compression of a file into a smaller file using a table-based lookup algorithm invented by Abraham Lempel, Jacob Ziv, and Terry Welch. Two commonly used file formats in which LZW compression is used are the GIF image format served from web sites and the TIFF image format. LZW compression is also suitable for compressing text files.

Lempel Ziv Welch (LZW) is basically a universal lossless data compression algorithm which uses a dynamic dictionary to achieve flexibility and significant compression.

Description of the Algorithm:-

The compressor algorithm builds a string translation table from the text being compressed. The string translation table maps fixed-length codes to strings. The string table is initialized with all single-character strings (256 entries in the case of 8-bit characters). As the compressor serially examines the characters of the text, it stores every unique two-character string in the table as a code/character concatenation, with the code mapping to the corresponding first character. As each two-character string is stored, the first character is output. Whenever a previously encountered string is read from the input, the longest such previously encountered string is determined, and the code for that string concatenated with the extension character (the next character in the input) is stored in the table. The code for this longest previously encountered string is output and the extension character is used as the beginning of the next string.

The decompressor algorithm only requires the compressed text as input, since it can build an identical string table from the compressed text as it recreates the original text.
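A minimal sketch of the compressor and decompressor described above is given below. It works on byte strings with the table initialised to the 256 single-byte codes; real implementations, such as those used in GIF and TIFF, add details like variable-width codes and dictionary resets that are omitted here.

```python
def lzw_compress(data: bytes):
    """LZW compression: output the code of the longest match, then store match + next byte."""
    table = {bytes([i]): i for i in range(256)}   # all single-byte strings
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # keep extending the match
        else:
            out.append(table[w])                  # emit code for the longest match
            table[wc] = next_code                 # store match + extension byte
            next_code += 1
            w = bytes([byte])                     # extension byte starts the next string
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the identical string table from the codes alone."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]   # the one special case
        out.append(entry)
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)

text = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(text)
print(codes)
print(lzw_decompress(codes) == text)   # True
```

Note that the decompressor needs one special case: a received code can refer to the entry being built in the current step, in which case the entry is the previous string extended by its own first byte.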

Example of encoding text using the LZW algorithm:-

Consider the character set consisting of all the capital letters, the space character, and the punctuation characters comma and period.

The compression of the message "ALL ALAN WANTS IS A PIE" then proceeds as follows.

Begin by initializing the dictionary with all possible single characters, as shown in figure 2.

Since the longest match is the single character A, and the match is followed by the character L, transmit "1,12" and add the string AL to the bottom of the dictionary, as shown in figure 2.

The first two characters in the message have now been encoded. Starting with the third character in the message, the longest match is the single character L followed by the space character, so transmit "12,27" and add the string "L " (L followed by a space) to the bottom of the dictionary, as shown in figure 2.

The fifth character in the message is now the first unencoded character. The longest match in the dictionary is the string AL followed by an A, so transmit "30,1" and add the string ALA to the bottom of the dictionary, as shown in figure 2.

Continue compressing in this way until the end of the message is reached (a code sketch reproducing these steps is given after figure 2).

Fig 2. Example of LZW
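The worked example above transmits (code, next-character) pairs from a dictionary initialised with 29 symbols: A-Z numbered 1 to 26, space as 27, comma as 28 and period as 29, with new entries starting at 30 (the comma and period codes are my assumption from the stated character set). The short sketch below reproduces those steps under that numbering; note that it follows the example's pair-output convention rather than the single-code output used in the previous sketch.

```python
# Alphabet numbering assumed from the example: A-Z = 1..26, space = 27,
# comma = 28, period = 29; new dictionary entries start at 30.
ALPHABET = [chr(ord('A') + i) for i in range(26)] + [' ', ',', '.']

def lzw_pairs(message):
    table = {ch: i + 1 for i, ch in enumerate(ALPHABET)}
    next_code = 30
    pairs = []
    pos = 0
    while pos < len(message):
        # Find the longest dictionary string starting at pos.
        w = message[pos]
        while pos + len(w) < len(message) and w + message[pos + len(w)] in table:
            w += message[pos + len(w)]
        nxt = pos + len(w)
        if nxt < len(message):
            c = message[nxt]
            pairs.append((table[w], table[c]))    # transmit (match code, next char code)
            table[w + c] = next_code              # add match + next char to the dictionary
            next_code += 1
            pos = nxt + 1
        else:
            pairs.append((table[w],))             # last match has no following character
            pos = nxt
    return pairs

print(lzw_pairs("ALL ALAN WANTS IS A PIE")[:3])   # [(1, 12), (12, 27), (30, 1)]
```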

Problem with LZW Compression:-

LZW compression, with its growing dictionary, provides guaranteed progress and is an extremely effective compression technique. One problem with LZW compression is that, even when using a tree representation, after a while the dictionary fills all available memory. At this point the designer has two options. The simplest option, which is to just stop adding new strings, has the disadvantage of reducing the flexibility, adaptability and efficiency of the code; the other common option is to discard the full dictionary and start building a new one.

Implementation of LZW on voice:-

As noted above, LZW is essentially a text compression algorithm, and text is nothing but ASCII values. The main difference between text and voice is that voice contains negative quantized values, whereas textual data does not contain any negative values. The other difference is that voice contains a huge range of quantized values, while text is drawn from a small, fixed alphabet. Since LZW uses a dynamic dictionary, if we add logic to the code that maps the negative quantized values to positive ones, and reduces the size of the dictionary by dropping the last two digits from every quantized value (a coarser quantization), then the LZW algorithm can work efficiently for voice compression. By mapping the negative quantized values to positive values and reducing the size of the dictionary in this way, it becomes possible to compress voice with LZW. The block diagram of the LZW implementation on voice is shown in the following figure, and a short code sketch of this pre-processing is given after it.

Figure 3-1. Implementation of LZW on Voice.
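A minimal sketch of the pre-processing described above is given below, under the following assumptions: the speech is already available as signed 16-bit quantized samples, the negative values are removed by adding a fixed offset, and "dropping the last two digits" is read as integer division by 100, i.e. a coarser quantization that shrinks the symbol set the LZW dictionary has to cover. The resulting small non-negative symbols can then be fed to an LZW-style compressor such as the one sketched earlier. This is an illustration of the idea only, not the exact implementation behind Figure 3-1.

```python
def preprocess_for_lzw(samples, offset=32768, coarseness=100):
    """Map signed quantized speech samples to a small set of non-negative symbols.

    offset     -- added to every sample to remove negative values (16-bit assumption).
    coarseness -- integer division that "drops the last two digits", shrinking
                  the dictionary the LZW stage has to build.
    """
    return [(s + offset) // coarseness for s in samples]

def postprocess_from_lzw(symbols, offset=32768, coarseness=100):
    """Approximate inverse of the mapping; the fine detail is lost."""
    return [sym * coarseness + coarseness // 2 - offset for sym in symbols]

# Toy example with a handful of quantized speech samples.
speech = [-1200, -350, 0, 475, 2210, -15000, 8000]
symbols = preprocess_for_lzw(speech)
print(symbols)                        # [315, 324, 327, 332, 349, 177, 407]
print(postprocess_from_lzw(symbols))  # roughly reconstructs the original samples
```

Because the coarse quantization discards information, the overall voice scheme is lossy even though the LZW stage itself remains lossless.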