Different types of spreading codes will give different results for linear detectors. This is due to the cross-correlation property of the codes. For example, Gold codes are not orthogonal, but have low cross correlation at arbitrary delay.
When channel resources are shared using spread spectrum techniques, all users are permitted to transmit simultaneously using the same band of frequencies. Each user is assigned a different spreading code so that the users can be separated in the despreading process. Therefore the system design for multiple access should find a set of spreading codes such that as many users as possible can share a band of frequencies with as little mutual interference as possible.
The amount of mutual interference between two users is determined by the cross-correlation of their two spreading codes.
Gold codes, introduced by Robert Gold in 1967, are used specifically for multiple-access applications in spread spectrum.
Gold code sets have a three-valued cross-correlation spectrum. The normalized values are:
-(1/N) t(n), -1/N, (1/N)(t(n) - 2)
where t(n) = 1 + 2^((n+1)/2) for n odd
and t(n) = 1 + 2^((n+2)/2) for n even.
Consider an m-sequence represented by a binary vector b of length N, and a second sequence b' obtained by sampling every qth bit of b. The second sequence is said to be a decimation of the first, and the notation
b' = b[q] is used to indicate that b' is obtained by sampling every qth symbol of b.
Decimation of an m-sequence may or may not yield another m-sequence.
Consider g1(D) = 1 + D + D^3.
By decimation we obtain g2(D) = 1 + D^2 + D^3.
By calculating the cross-correlation values, we found that they matched the values predicted by the formula above.
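This check can be sketched in a few lines: generate the m-sequence of g1(D), decimate it by 3 to obtain the second sequence of the preferred pair, and compute the periodic cross-correlation over all shifts. For n = 3, N = 7, the formula predicts the unnormalized values -t(3) = -5, -1, and t(3) - 2 = 3. The LFSR helper below is illustrative, not from any particular library.

```python
# Sketch: three-valued cross-correlation of the preferred pair
# g1(D) = 1 + D + D^3 and its decimation by q = 3 (n = 3, N = 7).

def m_sequence(taps, degree, length):
    """Fibonacci LFSR for g(D) = 1 + sum of D^t over the taps."""
    state = [1] * degree              # any nonzero initial fill
    seq = []
    for _ in range(length):
        seq.append(state[-1])         # output from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]        # modulo-2 feedback
        state = [fb] + state[:-1]     # shift right
    return seq

N = 7
b1 = m_sequence([1, 3], 3, N)                 # g1(D) = 1 + D + D^3
b2 = [b1[(3 * i) % N] for i in range(N)]      # decimation b' = b[3]

s1 = [1 - 2 * b for b in b1]                  # map {0,1} -> {+1,-1}
s2 = [1 - 2 * b for b in b2]

# periodic cross-correlation over all relative shifts
xcorr = [sum(s1[i] * s2[(i + tau) % N] for i in range(N)) for tau in range(N)]
print(sorted(set(xcorr)))                     # [-5, -1, 3]
```

The three values match -t(n), -1, and t(n) - 2 with t(3) = 5.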
1 1 1 0 1 0 0
0 0 1 0 1 1 1
1 1 0 0 0 1 1
1 1 1 1 1 1 0
1 0 0 0 1 0 0
0 1 1 0 0 0 0
0 0 0 1 0 1 0
0 1 0 1 1 0 1
Kasami codes:
These are binary sequences of period N = 2^m - 1, where m is an even integer. Kasami sequences have optimal cross-correlation values, touching the Welch lower bound.
The steps to be followed to generate a kasami sequence are:
Take an m-sequence named X.
Decimate the sequence X to get a sequence Y, where Y = X[S(m)],
where S(m) = 2^(m/2) + 1 and N = 2^m - 1. N is the period of sequence X and m is the (even) degree of g(D) used to generate the m-sequence X.
The period of sequence Y is 2^(m/2) - 1.
Kasami sequence is given by:
S = [X, X xor Y, X xor DY, ..., X xor D^(2^(m/2) - 2) Y], where D^k Y denotes Y cyclically shifted by k positions.
The number of sequences in this set is 2^(m/2).
For example, take G(D) = 1 + D + D^4, so m = 4.
Then S(m) = 5 and the period of Y is 3.
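The steps above can be sketched directly for this example; the LFSR helper below is illustrative, not from any particular library.

```python
# Sketch of small-set Kasami generation for G(D) = 1 + D + D^4 (m = 4).

def m_sequence(taps, degree, length):
    """Fibonacci LFSR for g(D) = 1 + sum of D^t over the taps."""
    state = [1] * degree              # any nonzero initial fill
    seq = []
    for _ in range(length):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

m = 4
N = 2 ** m - 1                        # period of X = 15
X = m_sequence([1, 4], m, N)          # G(D) = 1 + D + D^4
S_m = 2 ** (m // 2) + 1               # decimation factor S(m) = 5
Y = [X[(i * S_m) % N] for i in range(N)]   # period 2^(m/2) - 1 = 3

# S = [X, X xor Y, X xor DY, ...]: X plus X xor'ed with cyclic shifts of Y
kasami = [X] + [[x ^ Y[(i + k) % N] for i, x in enumerate(X)]
                for k in range(2 ** (m // 2) - 1)]
print(len(kasami))                    # 4 sequences = 2^(m/2)
```

Each of the 2^(m/2) = 4 sequences has period 15, as expected.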
A Pseudo-random Noise (PN) sequence is a sequence of binary numbers which appears to be random but is in fact perfectly deterministic. The sequence appears random in the sense that the binary values, and runs of the same binary value, occur in the same proportions they would if the sequence were generated by a fair coin-tossing experiment, with each head producing one binary value and each tail the other. A software or hardware device designed to produce a PN sequence is called a PN generator.
Flip-flop circuits used in this way are called a shift register, since each clock pulse applied to the flip-flops causes the contents of each flip-flop to be shifted to the right; the feedback connections provide the input to the left-most flip-flop. With N binary stages, the largest number of different states the shift register can assume is 2^N. The all-zeros state must be excluded, since it regenerates itself and produces only zeros. The all-ones state does not cause a similar problem of endlessly repeated ones, provided the number of flip-flops feeding the modulo-2 adder is even. The period of the PN sequence is therefore 2^N - 1, but IS-95 inserts an extra binary zero to achieve a period of 2^N, where N equals 15.
Starting with the register in state 001 as shown, the next 7 states are 100, 010, 101, 110, 111, 011, and then 001 again and the states continue to repeat. The output taken from the right-most flip-flop is 1001011 and then repeats.
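The 3-stage example above can be reproduced with a minimal sketch, assuming the feedback is the modulo-2 sum of the last two stages (i.e., g(D) = 1 + D^2 + D^3), which matches the state sequence given:

```python
# Minimal 3-stage shift-register PN generator, starting in state 001.

state = [0, 0, 1]                 # starting state 001
states, output = [], []
for _ in range(7):                # period is 2^3 - 1 = 7
    states.append("".join(map(str, state)))
    output.append(state[-1])      # output taken from the right-most flip-flop
    fb = state[1] ^ state[2]      # modulo-2 feedback into the left-most flip-flop
    state = [fb] + state[:-1]     # shift right

print(states)                     # ['001', '100', '010', '101', '110', '111', '011']
print("".join(map(str, output)))  # 1001011
```

After 7 clock pulses the register returns to 001 and the states repeat, as described above.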
Multirate systems and filter banks play a very important role in source coding and compression for communication applications, and many of the key design issues in such applications have been extensively explored. The central problem of communication theory is to transmit signals reliably over unreliable channels. Here we consider recent developments on the role of multirate filter banks and wavelets in channel coding and modulation for some important classes of channels, and describe some emerging potential applications.
The communications problem is sub-divided into two parts, namely:
Source Coding: Data compression or source coding is the process of encoding information using fewer bits than an unencoded representation would use, through use of specific encoding schemes.
Compression is handy because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the receiver side, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications.
The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced, and the computational resources required to compress and uncompress the data.
Channel Coding: The aim of channel coding theory is to find codes which transmit quickly, contain many valid code words, and can correct or at least detect errors. These goals are not mutually exclusive, but performance in these areas involves a trade-off, so different codes are optimal for different applications. The required properties of a code depend mainly on the probability of errors occurring during transmission. On a typical CD, the impairment is mainly dust or scratches, so codes are used in an interleaved manner and the data is spread out over the disk.

Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we examine the three repetitions bit by bit and take a majority vote. The twist is that we do not merely send the bits in order: we interleave them. The block of data bits is first divided into 4 smaller blocks; we then cycle through the blocks, sending one bit from the first, then one from the second, and so on. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code this may not appear effective, but more powerful codes are known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used.
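A simplified variant of the interleaved repeat-3 scheme described above can be sketched as follows: the three copies of the block are sent back-to-back, so the repetitions of any single bit are spread far apart and a burst error hits each bit at most once.

```python
# Interleaved repeat-3 code: a burst of errors shorter than one block length
# corrupts each data bit at most once, so a majority vote still recovers it.

data = [1, 0, 1, 1, 0, 0, 1, 0]
n = len(data)
tx = data * 3                     # three copies, interleaved at block level

rx = tx[:]
for i in range(5, 10):            # a burst of 5 consecutive bit errors
    rx[i] ^= 1

# majority vote across the three copies of each bit
decoded = [1 if rx[i] + rx[i + n] + rx[i + 2 * n] >= 2 else 0 for i in range(n)]
print(decoded == data)            # True: the burst is corrected
```

Had the three copies of each bit been sent adjacently instead, the same 5-bit burst could flip two of the three copies of a bit and defeat the vote; spreading the copies out is what makes the scheme burst-tolerant.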
Algebraic coding theory is basically divided into two major types:
Linear block codes
Convolutional codes
Our main aim is the error-free transmission of the information-bearing signal from source (transmitter) to destination (receiver) through an unreliable channel. To achieve this, we employ three function blocks, namely the Interpolator, Filter bank and Decimator.
The interpolator up-samples the input signal by inserting L - 1 zeros between successive input samples, where L is the interpolation factor determined by the order of the filter bank used.
The function of the decimator is to down-sample the received signal by the same factor L used in the interpolator, retaining one of every L samples and discarding the rest.
The basic block diagram of the transmitter-receiver system is as shown below:
The FIR Interpolation block resamples the discrete-time input at a rate L times faster than the input sample rate, where the integer L is specified by the Interpolation factor parameter. This process consists of two steps:
The block upsamples the input to a higher rate by inserting L-1 zeros between samples.
The block filters the upsampled data with a direct-form FIR filter.
The FIR Interpolation block implements the above upsampling and FIR filtering steps together using a polyphase filter structure, which is more efficient than straightforward upsample-then-filter algorithms. See N.J. Fliege, Multirate Digital Signal Processing: Multirate Systems, Filter Banks, Wavelets for more information.
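The equivalence between the polyphase structure and the straightforward upsample-then-filter route can be sketched with plain-Python convolution (the values below are illustrative only): each of the L subfilters h[k::L] runs at the low rate, and interleaving their outputs reproduces the high-rate result exactly.

```python
# Polyphase FIR interpolation vs. direct upsample-then-filter.

def conv(a, b):
    """Full linear convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

x = [1, 2, 3, 4]                       # low-rate input
L = 3                                  # interpolation factor
h = [1, 2, 3, 4, 5, 6, 7, 8, 9]        # FIR coefficients (length a multiple of L)

# direct route: insert L-1 zeros between samples, then filter at the high rate
u = [0] * (len(x) * L)
u[::L] = x
y_direct = conv(u, h)

# polyphase route: run the L subfilters h[k::L] at the low rate, then interleave
subs = [conv(x, h[k::L]) for k in range(L)]
y_poly = [subs[k][q] for q in range(len(subs[0])) for k in range(L)]

print(y_direct[:len(y_poly)] == y_poly)   # True: identical outputs
```

The polyphase route never multiplies by the inserted zeros, which is where its efficiency advantage comes from.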
The FIR filter coefficients parameter specifies the numerator coefficients of the FIR filter transfer function H(z).
The coefficient vector, [b(1) b(2) ... b(m)], can be generated by one of the Signal Processing Toolbox™ filter design functions (such as fir1), and should have a length greater than the interpolation factor (m > L). The filter should be lowpass with normalized cutoff frequency no greater than 1/L. All filter states are internally initialized to zero.
The FIR Interpolation block supports real and complex floating-point and fixed-point inputs except for complex unsigned fixed-point inputs. This block supports triggered subsystems when you select Maintain input frame rate for the Framing parameter.
Polyphase FIR interpolator-
The FIRInterpolator object upsamples an input by the integer upsampling factor, L, followed by an FIR anti-imaging filter. The filter coefficients are scaled by the interpolation factor. A polyphase interpolation structure implements the filter. The resulting discrete-time signal has a sampling rate L times the original sampling rate.
Specify the integer factor, L, by which to increase the sampling rate of the input signal. The polyphase implementation uses L polyphase subfilters to compute convolutions at the lower sample rate. The FIR interpolator delays and interleaves these lower-rate convolutions to obtain the higher-rate output. The property value defaults to 3.
FIR filter coefficients
Specify the numerator coefficients of the FIR anti-imaging filter as the coefficients of a polynomial in z^-1. Indexing from zero, the filter coefficients b(0), b(1), ..., b(M-1) define the transfer function H(z) = b(0) + b(1)z^-1 + ... + b(M-1)z^-(M-1).
To act as an effective anti-imaging filter, the coefficients must correspond to a lowpass filter with a normalized cutoff frequency no greater than the reciprocal of the Interpolation Factor. The filter coefficients are scaled by the value of the Interpolation Factor property before filtering the signal. To form the L polyphase subfilters, Numerator is appended with zeros if necessary. The default is the output of fir1(15,0.25).
The FIR Decimation block resamples the discrete-time input at a rate K times slower than the input sample rate, where the integer K is specified by the Decimation factor parameter. This process consists of two steps:
The block filters the input data using a direct-form FIR filter.
The block downsamples the filtered data to a lower rate by discarding K-1 consecutive samples following every sample retained.
The FIR Decimation block implements the above FIR filtering and downsampling steps together using a polyphase filter structure, which is more efficient than straightforward filter-then-decimate algorithms. See Fliege  for more information.
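The saving can be sketched with plain-Python convolution (illustrative values only): since K - 1 of every K filtered samples are discarded, the efficient implementation evaluates only the retained outputs y[q] = sum over j of h[j] x[qK - j], which is the computation a polyphase decimator organizes.

```python
# Filter-then-discard vs. computing only the retained decimator outputs.

def conv(a, b):
    """Full linear convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
h = [1, 2, 3, 4]
K = 2                              # decimation factor

# wasteful: filter everything, then throw away K-1 of every K outputs
y_ref = conv(x, h)[::K]

# efficient: evaluate only the outputs that survive decimation
y_dec = []
for q in range((len(x) + len(h) - 1 + K - 1) // K):
    m = q * K                      # index of the retained high-rate sample
    y_dec.append(sum(h[j] * x[m - j] for j in range(len(h))
                     if 0 <= m - j < len(x)))

print(y_dec == y_ref)              # True: identical retained samples
```

Only 1/K of the multiply-accumulate work of the direct route is performed.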
The FIR filter coefficients parameter specifies the numerator coefficients of the FIR filter transfer function H(z).
The length-m coefficient vector, [b(1) b(2) ... b(m)], can be generated by one of the filter design functions in Signal Processing Toolbox™ software, such as the fir1 function used in Example 1 below. The filter should be lowpass with normalized cutoff frequency no greater than 1/K. All filter states are internally initialized to zero.
The FIR Decimation block supports real and complex floating-point and fixed-point inputs, except for complex unsigned fixed-point inputs. This block supports triggered subsystems when you select Maintain input frame rate for the Framing parameter.
FIR polyphase decimator-
The FIR Decimator object resamples vector or matrix inputs along the first dimension. The object resamples at a rate M times slower than the input sampling rate, where M is the integer-valued downsampling factor. The decimation combines an FIR anti-aliasing filter with downsampling. The FIR decimator object uses a polyphase implementation of the FIR filter.
Specify the downsampling factor as a positive integer. The FIR decimator reduces the sampling rate of the input by this factor. The size of the input along the first dimension must be a multiple of the decimation factor. The default is 2.
FIR filter coefficients
Specify the numerator coefficients of the FIR filter in powers of z^-1. For a filter of length L, the coefficients b(0), b(1), ..., b(L-1) define the system function H(z) = b(0) + b(1)z^-1 + ... + b(L-1)z^-(L-1).
To prevent aliasing as a result of downsampling, the filter transfer function should have a normalized cutoff frequency no greater than 1/DecimationFactor. You can specify the filter coefficients as a vector in the supported data types. The FIR decimator does not support dfilt or mfilt objects as sources of the filter coefficients. The default is fir1(35,0.4).
FIR FILTER DESIGN
FIR filters are filters whose transfer function is a polynomial in z^-1. An FIR filter is an all-zero filter in the sense that the zeros in the z-plane determine the frequency-response magnitude characteristic. The z-transform of an N-point FIR filter is given by
H(z) = h(0) + h(1)z^-1 + ... + h(N-1)z^-(N-1).
FIR filters are particularly useful for applications where exact linear phase response is required.
The FIR filter is generally implemented in a non-recursive way which guarantees a stable filter.
FIR filter design essentially consists of two parts
(i) approximation problem
(ii) realization problem
The approximation stage takes the specification and gives a transfer function through four
steps. They are as follows:
(i) A desired or ideal response is chosen, usually in the frequency domain.
(ii) An allowed class of filters is chosen (e.g. the length N for a FIR filters).
(iii) A measure of the quality of approximation is chosen.
(iv) A method or algorithm is selected to find the best filter transfer function.
The realization part deals with choosing the structure to implement the transfer function which may be in the form of circuit diagram or in the form of a program.
There are essentially three well-known methods for FIR filter design, namely:
(1) The window method
(2) The frequency sampling technique
(3) Optimal (equiripple) design methods
The Window Method
In the Window Design Method, one designs an ideal IIR filter, then applies a window function to it - in the time domain, multiplying the infinite impulse by the window function. This results in the frequency response of the IIR being convolved with the frequency response of the window function - thus the imperfections of the FIR filter (compared to the ideal IIR filter) can be understood in terms of the frequency response of the window function.
Some window examples are as follows:
Thus the unit sample response of the FIR filter becomes
h(n) = hd(n) w(n) ; 0 <= n <= M-1
     = 0 ; otherwise
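A minimal sketch of the window method, assuming an ideal lowpass response and a Hamming window (M = 31 taps and cutoff 0.4*pi are example values, not from the text):

```python
# Window-method FIR design: ideal lowpass impulse response h_d(n), delayed for
# linear phase, truncated and shaped by a Hamming window w(n).
import math

M = 31
wc = 0.4 * math.pi                 # normalized cutoff frequency
alpha = (M - 1) / 2                # delay that centers the response (linear phase)

h = []
for n in range(M):
    t = n - alpha
    # ideal lowpass: h_d(n) = sin(wc*t)/(pi*t), with the t = 0 limit wc/pi
    hd = wc / math.pi if t == 0 else math.sin(wc * t) / (math.pi * t)
    w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (M - 1))   # Hamming window
    h.append(hd * w)

# symmetric coefficients confirm exact linear phase
assert all(abs(h[n] - h[M - 1 - n]) < 1e-12 for n in range(M))
print(sum(h))                      # DC gain, close to 1
```

The imperfections relative to the ideal response are exactly the convolution of the ideal response with the window's frequency response, as described above.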
The Frequency Sampling Technique
In this method, the desired frequency response is specified as in the previous method. The given frequency response is then sampled at a set of N equally spaced frequencies to obtain N samples. Thus, sampling the continuous frequency response Hd(w) at N points gives the N-point DFT samples
H(k) = Hd(2*pi*k/N) ; k = 0, 1, ..., N-1.
By using the IDFT formula, the filter coefficients can then be calculated as
h(n) = (1/N) * sum_{k=0}^{N-1} H(k) e^(j*2*pi*k*n/N) ; n = 0, 1, ..., N-1.
Using the above N-point filter response, the continuous frequency response is calculated as an interpolation of the sampled frequency response. The approximation error is then exactly zero at the sampling frequencies and finite at frequencies between them. The smoother the frequency response being approximated, the smaller the error of interpolation between the sample points.
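The zero-error property at the sample frequencies can be sketched directly: sample a desired lowpass response at N = 16 equally spaced frequencies (illustrative values, chosen conjugate-symmetric so the coefficients come out real), take the IDFT, and re-evaluate the DFT of the result.

```python
# Frequency-sampling design: IDFT of the sampled desired response, then a
# check that the designed filter matches the samples exactly.
import cmath

N = 16
# desired samples H(k) = Hd(2*pi*k/N): passband at k = 0, 1, N-1
Hd = [1 if k in (0, 1, N - 1) else 0 for k in range(N)]

# IDFT gives the filter coefficients h(n); real-valued by symmetry of Hd
h = [sum(Hd[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
     for n in range(N)]

# the approximation error is exactly zero at the sampling frequencies
for k in range(N):
    Hk = sum(h[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    assert abs(Hk - Hd[k]) < 1e-9
print("zero error at all", N, "sample frequencies")
```

Between the sample frequencies the response is the DFT interpolation of these values, where the (finite) approximation error lives.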
For N (order of the filter) = 4: downsampling factor = upsampling factor = 6 (interpolator-decimator system).
For N (order of the filter) = 1: downsampling factor = upsampling factor = 3.
Design of filter banks, interpolation, decimation and estimation of PSD for the transmission of Gold, Kasami and Walsh-Hadamard codes.
Theory of wavelets.
Multirate signal processing has played a significant role in channel and source coding, and multirate systems and filter banks have vast applications in communication systems. The examples demonstrated in this project suggest a tremendous potential in the field of communication systems.
The various applications include:
Decomposition of a signal into M components containing various frequency bands.
Communication over ISI and Colored Noise Channels
Implementation of high performance filtering operations, where a very narrow transition band is required.