# Adaptive Signal Processing Filters Computer Science Essay


Signal processing is one of the most important tools used in modern engineering applications. In most applications, such as long-distance communication and echo cancellation, the characteristics of the signal or system are either unknown or change with time. Conventional signal processing techniques are not ideal for these applications, as the signal processing algorithm needs to adapt to changes in the signal or system.

## 1.1 Adaptive Filters

Filtering is a very important signal processing technique used in communication systems. A filter is a system used to extract valuable data from a noisy, degraded signal. Conventional filters are designed for a particular channel (in communication systems) with predefined parameters, so they work fine if the characteristics of the channel do not change significantly. In many cases, however, the characteristics of the environment change significantly with time and the performance of the filter becomes inadequate. Adaptive filters are a different class of filters that are immune to this shortcoming of conventional filters, as they constantly redesign themselves to adapt to the continuously changing environment.

The processing power of digital signal processing systems has increased dramatically in the past two decades as a result of improvements in large-scale integration and the use of parallel processing, so adaptive filters have become practically implementable: modern signal processing equipment can handle the demanding processing requirements of these algorithms. Adaptive filters are now routinely used in devices such as mobile phones and other communication devices, and also in some medical monitoring equipment. Adaptive filters are based on recursive algorithms which work adequately in a statistically uncertain or unknown environment. These recursive algorithms start with initial parameters set according to prior information about the statistical nature of the environment and then adapt themselves with every iteration.

A number of adaptive algorithms are in use nowadays in adaptive filtering applications. Adaptive filtering algorithms differ from each other on several fronts, so their selection for a particular application is an important task and needs careful analysis of the application's requirements. The main factors in choosing an adaptive algorithm for a particular application are as follows:

Rate of convergence

Processing requirements

Structure

Tracking

Misadjustment

Robustness

Numerical properties[r]

## 1.1.1.1 Rate of convergence

The rate of convergence is the number of iterations required for the algorithm to reduce the error enough that the output is close to the optimum Wiener solution in the mean-square-error sense. A rapid convergence rate ensures that the algorithm adjusts swiftly to a stationary environment of unknown statistics.

## 1.1.1.2 Misadjustment

For an algorithm under consideration, misadjustment is a quantitative measure of the amount by which the terminal value of the mean-square error, averaged over an ensemble of adaptive filters, strays from the optimum minimum mean-square error value associated with the Wiener filter.
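Although the text does not give it explicitly, the standard formula behind this definition (in the usual Wiener-theory notation) is:

```latex
% Misadjustment: excess steady-state MSE relative to the Wiener minimum
\mathcal{M} = \frac{J(\infty) - J_{\min}}{J_{\min}}
```

where J(∞) is the steady-state mean-square error of the adaptive filter and J_min is the minimum mean-square error of the optimum Wiener filter.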

## 1.1.1.3 Tracking

When an adaptive filtering algorithm operates in a statistically nonstationary environment, the algorithm is required to track the constantly changing statistical characteristics of the environment. The tracking performance of the algorithm depends upon two main factors: (a) rate of convergence, and (b) steady-state fluctuation due to algorithm noise.

## 1.1.1.4 Robustness

Robustness of an adaptive filter refers to the ability of the algorithm to tolerate disturbances: a robust algorithm responds to disturbances in a proportional manner. The disturbances may arise from a variety of factors, inherent to the algorithm's structure or external to it.

## 1.1.1.5 Computational requirements

The computational requirements of an algorithm are determined by three factors: (a) the number of mathematical operations required to complete one iteration of the algorithm, (b) the amount of memory required to store the data and the program, and (c) the effort required to code the algorithm on a computer.

## 1.1.1.6 Structure

The structure of an algorithm describes the flow of information through the algorithm; it is the chief factor in determining how the algorithm is implemented in hardware.

## 1.1.1.7 Numerical Properties

There are two main numerical properties of an adaptive filter: (a) numerical stability and (b) numerical accuracy. Numerical stability is an inherent characteristic of an adaptive filtering algorithm, whereas numerical accuracy is determined by the number of bits used in the numerical representation of data.

## Types of Adaptive Signal Processing Algorithms

There are two main types of adaptive signal processing algorithms:

Blind Algorithms

Non Blind Algorithms

## Blind Algorithms

Adaptive signal processing algorithms that do not require a desired signal in order to operate are known as blind algorithms. The need for blind algorithms, or unsupervised adaptive filtering, is very evident in applications such as communication channel equalization and system identification. Blind or unsupervised adaptive filtering algorithms are further classified into three categories:

Higher Order Statistics (HOS) based algorithms

Cyclostationary Statistics based algorithm

Information-theoretic algorithms [r]

Higher order statistics based algorithms can be divided into two subcategories: implicit HOS-based algorithms and explicit HOS-based algorithms. Implicit HOS-based algorithms exploit higher-order statistics of the input signal in an implicit sense; examples include the constant modulus algorithm (CMA). Explicit HOS-based algorithms use higher-order statistics or their discrete-time Fourier transforms, known as polyspectra. Traditionally, HOS-based algorithms are computationally demanding, with few exceptions. Cyclostationary statistics-based algorithms exploit the second-order cyclostationary statistics of the input signal. The property of cyclostationarity is known to arise in a modulated signal that results from varying the amplitude, phase or frequency of a sinusoidal carrier, which is the basis of the electronic communication process. Information-theoretic algorithms use the likelihood function or the Kullback-Leibler divergence[r].

Examples of unsupervised adaptive filtering algorithms include the Godard algorithm, also known as the constant modulus algorithm (CMA), the Sato algorithm, and multiple signal classification (MUSIC).

## Non Blind Algorithms

Figure 1 Non Blind Adaptive Algorithm

A non-blind algorithm works on the principle of supervised learning, i.e., it requires a desired response for its training; this is why the desired response is sometimes also known as the training sequence. The algorithm computes the output after every iteration using the weight vector and compares the result with the desired response. The error representing the difference between the computed output and the desired output is then calculated, and the weight update is performed according to this error.

## 1.3 Adaptive Filtering Applications:

The ability of an adaptive filter to operate satisfactorily in an unknown environment and track time variations of input statistics makes the adaptive filter a powerful device for signal processing and control applications. Indeed, adaptive filters have been successfully applied in such diverse fields as communications, radar, sonar, seismology, and biomedical engineering. Although these applications are quite different in nature, nevertheless, they have one basic feature in common: An input vector and a desired response are used to compute an estimation error, which is in turn used to control the values of a set of adjustable filter coefficients. The adjustable coefficients may take the form of tap weights, reflection coefficients, or rotation parameters, depending on the filter structure employed. However, the essential differences between the various applications of adaptive filtering arise in the manner in which the desired response is extracted. Main applications of adaptive filtering are as follows:

System Identification

Layered Earth Modeling

Communication Channel Equalization

Predictive Coding

Power Spectrum Analysis

Acoustic noise cancellation

Beamforming

## 1.3.1 System Identification

In system identification the adaptive filter is presented with an unknown dynamical system; the purpose of the adaptive filter is to adjust its own coefficients to provide an approximation to the behavior of the given system.

## 1.3.2 Layered Earth Modeling

In the study of earth's crust geologists make a detailed and layered model of the earth to unravel the complexities of the earth's surface and its deeper layers.

## 1.3.3 Communication Channel Equalization

In telecommunication systems the impulse response of a channel is usually unknown or constantly varying so an adaptive equalizer is made to operate on the channel output such that the cascade connection of the channel and the equalizer provides an approximation to an ideal transmission medium.

## 1.3.4 Predictive Coding

The adaptive prediction is used to develop a model of a signal of interest; rather than encode the signal directly, in predictive coding the prediction error is encoded for transmission or storage. Typically, the prediction error has a smaller variance than the original signal, hence the basis for improved encoding.

## 1.3.5 Power Spectrum Analysis

In this application, predictive modeling is used to estimate the power spectrum of a signal of interest.

## 1.3.6 Noise Cancellation

The purpose of an adaptive noise canceller is to improve the signal-to-noise ratio by removing unwanted noise from the signal using an adaptive filter. Echo cancellation, implemented in telephone circuits, is also a form of adaptive noise cancellation. Noise cancellation is also used in medical applications such as diagnostic tests.

## Adaptive Beamforming

Adaptive beamforming is a technique in which an array of sensors is used to achieve spatial filtering, i.e., maximum reception in a specified direction. The spatial filtering is achieved by estimating the signal arriving from a known direction in the presence of noise, regardless of its relative power level, while signals of the same frequency from other directions are rejected. This is achieved by updating the weights of the adaptive algorithm, which uses the input from each of the sensors in the array. The basic concept of spatial filtering is that although the signals transmitted from different sources occupy the same frequency channel, they still arrive from different directions. This spatial characteristic of the signal is exploited to separate the desired signal from the interfering signals. In adaptive beamforming the optimum weights of the adaptive filter are iteratively computed using different algorithms, and the updating procedure is stopped when a satisfactory mean squared error value is achieved.
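The idea of spatial filtering with weighted sensor outputs can be sketched in a few lines of Python (the simulations in this document use MATLAB®; NumPy is substituted here, and the half-wavelength element spacing and the example angles are illustrative assumptions):

```python
import numpy as np

def steering_vector(theta_deg, n_elements, spacing=0.5):
    """Steering vector of a uniform linear array: the phase pattern a plane
    wave arriving from direction theta_deg (measured from broadside) produces
    across the elements. spacing is in wavelengths (0.5 = half wavelength)."""
    theta = np.deg2rad(theta_deg)
    n = np.arange(n_elements)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(theta))

# Fixed (non-adaptive) beamformer steered toward 45 degrees
w = steering_vector(45, 8) / 8

# The array response is large in the steered direction and smaller elsewhere,
# even though both signals would occupy the same frequency channel
gain_desired = abs(np.vdot(w, steering_vector(45, 8)))   # close to 1
gain_other = abs(np.vdot(w, steering_vector(-20, 8)))    # well below 1
print(gain_desired, gain_other)
```

An adaptive beamformer replaces the fixed weights `w` with weights updated iteratively by an algorithm such as LMS or RLS.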

## Applications of Adaptive Beamforming

Following are the main applications of adaptive beamforming:

## RADAR:

Figure 2 Phased Array RADAR

## SONAR:

Figure 3 SONAR

## Smart antenna systems:

Figure 4 Smart Antenna System

## Noise Cancelation:

Figure 5 Noise Cancelation in Cocktail Party Scenario

## Astronomy applications:

Figure 6 Owens Valley Radio Observatory, 6 dish sub millimeter interferometer

## Medical applications:

Figure 7 Beamforming in Tumor Detection

## Wireless Internet:

Figure 8 WiFi Router with Beamforming

## GSM systems:

Figure 9 Beam Forming in GSM

## Military Applications:

Figure 10 Military Applications

## Home HD Systems:

Figure 11 Beamforming in Home HD system

## GPS:

Figure 12 GPS Receiver With Beamforming

## Conclusion:

In this chapter the basics of adaptive signal processing were discussed. The importance of filtering was explained, along with the difference between adaptive and conventional filtering techniques. The main parameters of adaptive filters were discussed next, including computational efficiency, robustness, rate of convergence and others. The main applications of adaptive signal processing were then covered, including adaptive beamforming, and different adaptive beamforming applications were illustrated.

## Chapter Number Two

## Standard Algorithms

Adaptive signal processing is a broad field with wide-ranging applications, each with its own constraints and requirements. There are a number of adaptive algorithms and their variants in use today. This diversity of applications demands a wide array of algorithms, but some algorithms can be used in almost all adaptive signal processing applications. Two of the most commonly used algorithms, which will be covered in this document, are:

Least Mean Square Algorithm

Recursive Least Square Algorithm

These two algorithms are very different from each other and have their own pros and cons.

## 2.1 Least Mean Square Algorithm

Figure 13 LMS Algorithm

The LMS algorithm is a linear adaptive algorithm and it consists of two main steps:

The filtering process

The adaptation process

## 2.1.1 The filtering process:

The filtering process is divided into two main steps:

Compute the output of the filter against a given input vector using the weight vector

Calculating the error by comparing the filter output with the desired response

The second step determines the convergence of the algorithm, i.e., an error value below the threshold level indicates that the algorithm has converged.

## 2.1.2 The Adaptation Process:

This step involves the adaptive adjustment of the filter weights in accordance with the estimation error calculated in the filtering process.

## 2.1.3 Mathematical Formulation:

For the mathematical derivation of the LMS algorithm we will define some parameters first.

u(n)=current input vector

y(n)=current output vector

d(n)=desired response

w(n)=current weight vector

The output y(n) of the filter at any time n is given by:

y(n) = w^H(n)u(n)

where w^H(n) denotes the Hermitian transpose of the weight vector.


The weight update equation of the LMS algorithm is given by:

w(n + 1) = w(n) + (1/2)μ[−∇J(n)]    (a)

∇J(n) is the instantaneous gradient vector. The symbol μ represents the step-size parameter that controls the convergence rate. The value assigned to this step-size parameter is very important, as a very small value will result in slow convergence but a good eventual approximation. There is a constraint placed on the value of the step-size parameter, given by:

0 < μ < 2/λmax

where λmax is the largest eigenvalue of the correlation matrix R.

An exact calculation of the instantaneous gradient vector ∇J(n) is not possible, as prior knowledge of the covariance matrix R and the cross-correlation vector p would be needed, so an instantaneous estimate of the gradient vector is used:

∇J(n) = −2p(n) + 2R(n)w(n)

The instantaneous estimates of the covariance matrix R and the cross-correlation vector p are defined as:

R(n) = u(n)u^H(n)

p(n) = d*(n)u(n)

By substituting the values of the gradient vector ∇J(n), the covariance matrix R and the cross-correlation vector p into (a), the weight vector is found to be:

w(n + 1) = w(n) + μ[p(n) − R(n)w(n)]

= w(n) + μu(n)[d*(n) − u^H(n)w(n)]

= w(n) + μu(n)e*(n)

So the three equations governing LMS algorithm are as follows:

y(n) = w^H(n)u(n)

e(n) = d(n) − y(n)

w(n + 1) = w(n) + μu(n)e*(n)

## 2.1.4 Summary of LMS Algorithm:

The LMS algorithm can be summarized as follows:

Parameters:

M = Number of taps (i.e., filter length)

μ = Step-size parameter

Initialization:

w(0) = 0

Data:

Given:

u(n) = M-by-1 tap-input vector at time n

u(n) = [u(n), u(n − 1), …, u(n − M + 1)]^T

To be computed:

w(n + 1) = estimate of tap-weight vector at time n + 1

Computation:

For n = 0, 1, 2, … compute:

e(n) = d(n) − w^H(n)u(n)

w(n + 1) = w(n) + μu(n)e*(n)
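For real-valued signals (where the Hermitian transpose reduces to an ordinary transpose and e*(n) = e(n)), the summary above maps almost line for line onto code. A minimal NumPy sketch using LMS to identify an unknown FIR system; the 4-tap system, step size, and signal length are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                                  # number of taps (filter length)
mu = 0.05                              # step-size parameter
h = np.array([0.8, -0.4, 0.2, 0.1])    # "unknown" system to be identified

N = 2000
u = rng.standard_normal(N)             # input signal
d = np.convolve(u, h)[:N]              # desired response = unknown system's output

w = np.zeros(M)                        # initialization: w(0) = 0
for n in range(M - 1, N):
    u_vec = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n - M + 1)]
    e = d[n] - w @ u_vec               # e(n) = d(n) - w^H(n) u(n)
    w = w + mu * u_vec * e             # w(n+1) = w(n) + mu u(n) e*(n)

print(w)   # approaches h as the filter converges
```

Because the desired response here is noiseless, the weights settle onto the true system; with observation noise they would fluctuate around it by an amount governed by the misadjustment.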

## 2.2 Recursive Least Square Algorithm:

The RLS algorithm is one of the best-known adaptive filtering algorithms, mainly because of its rapid rate of convergence. This rapid convergence is achieved at the cost of computational complexity: RLS is complex and somewhat difficult to implement compared to other standard adaptive algorithms such as LMS.
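The text does not reproduce the RLS recursion itself, so the sketch below uses the standard exponentially weighted RLS equations (gain vector k(n) and inverse-correlation matrix P(n), as in Haykin's formulation) on the same kind of system identification task; the forgetting factor, regularization constant, and test system are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 4                                  # filter length
lam = 0.999                            # forgetting factor
delta = 100.0                          # P(0) = delta * I (regularization)
h = np.array([0.8, -0.4, 0.2, 0.1])    # "unknown" system to be identified

N = 500
u = rng.standard_normal(N)
d = np.convolve(u, h)[:N]              # desired response

w = np.zeros(M)
P = delta * np.eye(M)                  # running estimate of the inverse correlation matrix
for n in range(M - 1, N):
    u_vec = u[n - M + 1:n + 1][::-1]
    Pu = P @ u_vec
    k = Pu / (lam + u_vec @ Pu)        # gain vector k(n)
    e = d[n] - w @ u_vec               # a priori estimation error
    w = w + k * e                      # weight update
    P = (P - np.outer(k, u_vec @ P)) / lam   # recursive update of P(n)

print(w)   # converges to h in far fewer samples than LMS needs
```

The per-iteration cost is dominated by the O(M²) update of P(n), which is the computational price paid for the fast convergence.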

## Chapter Number Three

## Anti-Jamming

## Jamming

In this modern age we rely very heavily on wireless electronic communication. Modern wireless communication systems bring instant communication resources to the masses, but they are also a very important tool in modern warfare. Militaries around the world have relied upon RF communication for years for command and control message transmission. Due to the importance of command and control messages, RF communication systems are of huge interest to attackers and security officials. There are two main forms of communication sabotage:

Interception of critical information

Denial of successful transmission of information

The act of denying any data transmission over an RF network is known as jamming. Jamming techniques are commonly employed in today's warfare by modern armies and naturally there are anti-jamming techniques employed to prevent the denial of information transport.

Figure 14 RADAR Jamming

Following are some commonly employed jamming techniques:

Noise jamming

Tone jamming

Swept jamming

Pulse jamming

Smart jamming

Barrage jamming

## Noise Jamming:

Noise jammers work on the simple principle of communication that a signal with lower power will be treated as noise by the communication system. These jammers basically learn and replicate. The jammer is equipped with a frequency scanner which identifies the operating frequencies of the target system. The jammer then emits signals with characteristics similar to the signal transmitted by the target system, so the receiver being noise jammed receives two similar signals, one from the original source and the other from the jammer. This complicates the decoding process and hampers the receiver's ability to distinguish the true signal from the false one.

## Tone jamming:

Tone jamming, also known as spot jamming, is a kind of noise jamming in which the jamming system concentrates its power at a single frequency. This hampers the ability of the target system to communicate using that frequency, due to the high power transmitted by the jammer there. This technique is commonly employed to jam RADAR signals; frequency-agile RADARs are usually used to counter it. The main drawback of this technique is that one jammer can only jam one frequency, so jamming a whole range of frequencies would require an impractical amount of resources.

## Swept jamming:

In swept jamming a jammer concentrates all its power at a single frequency, just as in tone jamming, rendering that particular frequency useless for data transmission. The swept jammer differs from the tone jammer in one aspect: it shifts its transmitted power to different frequencies in quick succession. The swept jammer can still jam only a single frequency at a time, but due to this shifting capability it is more effective than a tone jammer.

## 4.1.4 Smart jamming:

All of the jammers discussed above are called "dumb" jammers, i.e., the jamming system knows the spectral width of the signal but has no idea about the exact location of the signal on the spectrum (in the case of frequency hopping spread spectrum) at a particular time. Smart jammers monitor the target signal from the side lobe of the transmitting antenna, and they have very high processing capabilities, so they are able to quickly concentrate their power on the instantaneous bandwidth of the target signal.

## Pulse jamming:

In pulse jamming the jammer does not transmit a constant jamming signal but instead transmits high power pulses at different frequencies. The transmitted pulses interfere with the pulses from a valid source and hence degrade the RADAR performance.

## Barrage jamming:

The barrage jammer transmits at multiple frequencies at the same time. Its main drawback is that its power is distributed over the whole band, so it is not as effective as a tone jammer at individual frequencies. The power transmitted at each frequency depends upon the number of frequencies being jammed.

## Anti-jamming:

Anti-jamming signals and systems are designed to make it difficult for the jamming system to successfully perform its function. There are three main types of anti-jamming signals and systems:

Low probability of detection systems

Low probability of intercept systems

Low probability of exploitation systems

## 4.2.1 Low probability of Detection Systems:

Low probability of detection (LPD) systems are designed to hide the transmitted signal from undesired receivers, i.e., to make it difficult for a jamming system to ascertain the presence of the signal. The purpose is either to communicate in secrecy or to make it difficult for the jamming system to ascertain the spectrum of the signal. Direct sequence spread spectrum is an example of such a system.

## 4.2.2 Low Probability of Intercept systems:

If a signal is unable to achieve low probability of detection, then it is accessible to all undesired systems. The signal can still be protected by techniques employed by so-called LPI systems: all of the undesired listeners can receive the transmitted signal, but they cannot decode it. Frequency hopping spread spectrum is an example of an LPI system.

## 4.2.3 Low Probability of Exploitation systems:

In low probability of exploitation systems no attempt is made to hide the signal from unwanted receivers or to make it difficult for the receiver to get the data. In an LPE system the data itself is made unavailable to an unwanted receiver. Encryption is an example of an LPE technique.

## 4.3 Conventional solution:

The main techniques used to avoid jamming of a RF system are:

Direct Sequence Spread Spectrum

Frequency Hopping Spread Spectrum

Time Hopping

Hybrids

The first two techniques, DSSS and FHSS, are widely used; the third technique, time hopping, is also available although not used very often.

## 4.3.1 Direct Sequence Spread Spectrum:

The direct sequence spread spectrum technique is an LPD system. In DSSS we spread the signal over a very wide bandwidth. Because of the tradeoff between transmitted power density and bandwidth, using a wide bandwidth makes the power transmitted at each individual frequency very small. DSSS systems use the entire bandwidth instantaneously, i.e., the signal is transmitted over the entire band simultaneously, unlike in FHSS, discussed later. The power transmitted at each individual frequency is so low that it is comparable to the thermal noise present in the system, so a listening system treats the transmitted signal as noise and is unable to detect it. Due to the low transmitted power of the signal, special signal processing techniques are required to extract it.
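The spreading and despreading operations can be sketched as follows; the 8-chip ±1 code below is an arbitrary illustrative sequence (not a standardized PN code), and the noise level is likewise an assumption for demonstration:

```python
import numpy as np

# Illustrative +/-1 spreading code: 8 chips per data bit
pn = np.array([1, -1, 1, 1, -1, 1, -1, -1])

def spread(bits, code):
    """Map bits {0,1} to +/-1 symbols and multiply each by the chip code."""
    symbols = 2 * np.asarray(bits) - 1
    return np.concatenate([s * code for s in symbols])

def despread(chips, code):
    """Correlate each block of chips with the code; the sign of the
    correlation recovers the bit (correlation gain = code length)."""
    blocks = chips.reshape(-1, len(code))
    return ((blocks @ code) > 0).astype(int)

bits = [1, 0, 1, 1, 0]
tx = spread(bits, pn)   # wideband chip sequence: each bit's energy spread over 8 chips
rx = tx + 0.4 * np.random.default_rng(2).standard_normal(tx.size)  # noisy channel
print(despread(rx, pn))
```

The correlation at the receiver concentrates the bit energy back into a single decision variable, which is why the per-chip power can sit near the noise floor yet the data still comes through.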

Figure 15 DSSS Channelization

## 4.3.2 Frequency Hopping Spread Spectrum:

In frequency hopping spread spectrum (FHSS) the signal occupies a very narrow band at any single instant. FHSS is an LPI technique, i.e., it is easily detectable but extracting the data is not simple. FHSS is usually employed in the VHF band, where the single channel bandwidth is limited to 25 kHz. In the VHF band there are 2400 channels, but only a subset of these channels is used for FHSS. The number of channels used is usually a power of two and is called the "hop set".

There are two main types of FHSS:

Fast FHSS

Slow FHSS

The two types differ only in the number of bits per hop. If there are multiple bits per hop the hopping is termed slow FHSS, and if there are multiple hops per bit the hopping is called fast FHSS. FHSS systems have an inherent advantage of frequency diversity: the fading characteristics of the channel are not the same at different frequencies, so as FHSS uses different frequency channels there is less probability of signal fading.

The most common modulation technique used by FHSS systems is frequency shift keying (FSK), specifically binary frequency shift keying (BFSK) with incoherent detection. In BFSK a data bit is sent over one of two tones, offset by some amount above or below a carrier frequency that is constantly changing location in the frequency spectrum.

Figure 16 FHSS

## 4.3.3 Time Hopping:

Time hopping is a technique used to prevent an unwanted receiver from correctly deciding on a transmitted bit. The instrument used to detect a frequency hopping signal is called a radiometer. The radiometer monitors the bandwidth for a certain period of time, measures the power level transmitted, and then decides whether a mark or a space was sent by the transmitter. The time for which a radiometer monitors the bandwidth before making a decision is called the integration time. In time hopping the time of transmission is moved randomly, so that a radiometer detects noise most of the time.

Modern ultra-wideband (UWB) systems use time hopping to achieve anti-jamming characteristics. UWB systems allow multiple users to occupy the same spectrum. UWB systems spread the signal over a very large bandwidth, which can interfere with other signals in the same spectrum.

## 4.3.4 Hybrid:

The techniques stated above are sometimes combined to gain their respective advantages. The most commonly used hybrid technique is the combination of DSSS with FHSS. The DSSS-FHSS system exploits the stealthy nature of the DSSS system and the frequency diversity of the FHSS technique. The baseband signal is first converted to a wideband signal using DSSS and then frequency hopped, so the resulting signal is harder to detect and more reliable than either of the two techniques alone.

Figure 17 DSSS-FHSS Hybrid

## 4.4 Adaptive Solution:

The adaptive solution for anti-jamming applications is beamforming. The adaptive algorithms are provided with the spatial information of the source and the main interferers (jammers), i.e., their angles from the receiver. The beamforming algorithms form a beam towards the desired source and place nulls in the directions of the known interferers.

As the beamforming receiver avoids receiving interference coming from undesired directions, the underlying circuitry does not need to filter out the bulk of the interference later. The anti-jamming characteristics of three main adaptive algorithms are studied and compared; these three algorithms are:

Least Mean Square Algorithm

Recursive Least Square Algorithm

Least Mean Square Algorithm with Optimum Step Size

The LMS algorithm and the RLS are discussed in detail in earlier chapters and the LMS-OSS algorithm is described later.

## 4.4.1 Least Mean Square Algorithm with Optimum Step Size:

The optimal robust adaptive LMS algorithm without an adaptation step size utilizes the input data and the error to find the optimum step size, whereas the conventional LMS algorithm uses a predetermined step size. The computational complexity of the proposed algorithm remains the same as that of the conventional LMS. The weight update equation of the conventional LMS algorithm is

w(n + 1) = w(n) + 2μu(n)e*(n)    (a)

w(n) is the weight vector, u(n) is the input data vector, and e(n) is the error. The cost function to be minimized is the squared error:

J_LMS = arg min |e|^2

E[|e(n)|^2] = E[|d(n) − y(n)|^2]

= E{[d(n) − y(n)][d(n) − y(n)]*}    (1)

For simplicity, eliminating the time index and the expectation notation:

|e|^2 = |d − y|^2

= (d − y)(d − y)*    (2)

Putting y = w^H u in (2):

J_LMS = (d − w^H u)(d − w^H u)*    (3)

Substituting (a) in (3) gives

J_LMS = |d|^2 − du^H(w + 2μue*) − (w + 2μue*)^H ud*

+ (w + 2μue*)^H uu^H(w + 2μue*)    (4)

To determine the optimum step size we differentiate equation (4) with respect to μ and equate the result to zero.

∂J_LMS/∂μ = −2u^H ude* − 2u^H ud*e + 2w^H uu^H ue* + 2eu^H uu^H w + 8μu^H uu^H uee*

∂J_LMS/∂μ = −4ℜ(u^H ude*) + 4ℜ(u^H uye*) + 8μu^H uu^H uee*    (5)

where ℜ(X) denotes the real part of X. Setting ∂J_LMS/∂μ = 0 in (5), and noting that ℜ(de*) − ℜ(ye*) = ℜ((d − y)e*) = ee*, gives

μ_opt = 1/(2u^H u)

Substituting the value of μ_opt into equation (a) gives

w(n + 1) = w(n) + u(n)e*(n)/(u^H(n)u(n))    (6)

Simplifying equation (6) we get

w(n + 1) = w(n) + e*(n)/u^H(n)

A small positive value ε is added to the denominator to avoid the situation in which the weight vector becomes infinite when the instantaneous sample of the input vector goes to zero:

w(n + 1) = w(n) + e*(n)/(u^H(n) + ε)

Empirical results have shown that a step-size parameter is still needed for the algorithm to converge, so the weight update equation can be written as

w(n + 1) = w(n) + μe*(n)/(u^H(n) + ε)
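Reading the denominator as the scalar u^H(n)u(n) from equation (6), this final update has the same form as the normalized LMS (NLMS) recursion. A real-valued NumPy sketch under that interpretation; the test system and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 4
mu = 0.5                               # step size, still needed per the text
eps = 1e-8                             # small positive value in the denominator
h = np.array([0.8, -0.4, 0.2, 0.1])    # "unknown" system to be identified

N = 500
u = rng.standard_normal(N)
d = np.convolve(u, h)[:N]

w = np.zeros(M)
for n in range(M - 1, N):
    u_vec = u[n - M + 1:n + 1][::-1]
    e = d[n] - w @ u_vec
    # Step size normalized by the instantaneous input energy u^H(n)u(n)
    w = w + mu * u_vec * e / (u_vec @ u_vec + eps)

print(w)   # approaches h
```

Because the effective step scales inversely with the input energy, the update is insensitive to the input power level, which is the practical appeal of this variant over fixed-step LMS.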

## 4.4.2 Simulations and Results:

A comparative analysis of the anti-jamming characteristics of the three adaptive algorithms mentioned above was performed. MATLAB® was used to simulate different scenarios to analyze the anti-jamming characteristics of these algorithms. An antenna array of eight elements was simulated with the following scenario:

Source at an angle of 45° from the array

Interfering source at an angle of 60°

Signal-to-noise ratio of 15 dB

Signal-to-interference ratio of 3 dB

Step-size parameter for the LMS algorithm: 0.55

Step-size parameter for the optimized LMS: 0.55

Forgetting factor for the RLS algorithm: 0.999
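The listed scenario can be reproduced in outline as follows. The document's simulations use MATLAB®; the NumPy sketch below runs only the LMS beamformer on an 8-element half-wavelength array with the source at 45° and the jammer at 60°. The signal models and the reduced step size (needed for stability with this unnormalized signal model, unlike the 0.55 quoted above) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def steering(theta_deg, m=8):
    """Steering vector of an m-element uniform linear array with
    half-wavelength spacing; angle measured from broadside."""
    n = np.arange(m)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

a_src = steering(45.0)                    # desired source at 45 degrees
a_jam = steering(60.0)                    # interfering source at 60 degrees

N = 500                                   # iterations, as in the scenarios
s = np.sign(rng.standard_normal(N)).astype(complex)    # unit-power desired signal
q = 10 ** (-3 / 20) * np.sign(rng.standard_normal(N))  # jammer, SIR = 3 dB
sigma = 10 ** (-15 / 20)                               # per-element noise, SNR = 15 dB
noise = sigma * (rng.standard_normal((N, 8)) + 1j * rng.standard_normal((N, 8))) / np.sqrt(2)

X = np.outer(s, a_src) + np.outer(q, a_jam) + noise    # one array snapshot per row

mu = 0.01
w = np.zeros(8, dtype=complex)
for n in range(N):
    y = np.vdot(w, X[n])                  # y(n) = w^H u(n)
    e = s[n] - y                          # training error against the desired signal
    w = w + mu * X[n] * np.conj(e)        # complex LMS update

g_src = abs(np.vdot(w, a_src))            # response toward the source
g_jam = abs(np.vdot(w, a_jam))            # response toward the jammer (a null)
print(g_src, g_jam)
```

After adaptation the response toward the jammer sits well below the response toward the source, which is the null-placement behavior examined in the scenarios below.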

Figure 18 Convergence plot

The above figure shows the convergence characteristics of the three algorithms under consideration. It is evident that RLS is the first to converge, achieving convergence in around three iterations, whereas the optimum LMS is the second, taking 75 iterations to converge; the LMS algorithm exhibits Brownian motion around the minimum value.

To analyze the beamforming and anti-jamming abilities of the three algorithms, three different scenarios were examined.

## 4.4.2.1 Scenario 1:

In the first scenario all the parameters of the algorithms are set as in the previous simulation, but with one modification: the interference is at 40°. The number of iterations for each algorithm is 500.

Figure 19 Polar and rectangular plot with interference at 40°

It is evident from the above figure that RLS has placed the deepest null in the direction of the interfering signal (−50 dB). The optimum LMS algorithm has placed a −30 dB null in the direction of the interfering signal, whereas the LMS algorithm has placed a −15 dB null at a slightly different angle than the interfering signal. The RLS algorithm outperforms both the LMS and optimum LMS algorithms in this scenario.

## 4.4.2.2 Scenario 2:

In the second scenario all the parameters of the algorithms are again set as in the first simulation, but with one modification: the interference is at 60°, and again the number of iterations for each algorithm is 500.

Figure 20 Polar and rectangular plot with interference at 60°

In the above figure we can observe that RLS has placed the deepest null in the direction of the interfering signal (−50 dB). The optimum LMS algorithm has failed to form a beam or place a null in the direction of the interfering signal, whereas the LMS algorithm has placed a −20 dB null at the angle of the interfering signal. The RLS algorithm outperforms both the LMS and optimum LMS algorithms in this scenario, and the performance of the optimum LMS algorithm is very poor, as it failed to form any beam or place any considerable null in the direction of the interfering signal.

## 4.4.2.3 Scenario 3:

In the third scenario the interfering signal is moved to an angle of 90°, so that there is now a difference of 45° between the desired source and the interfering source. Simulations were performed for 500 iterations for each algorithm.

In this scenario the RLS algorithm shows the best beamforming characteristics, followed by the LMS algorithm; the performance of the optimum LMS algorithm is not satisfactory in comparison with the other two.

## Chapter Number Five

## Improved Gain Vector Recursive Least Square Algorithm

In this chapter we describe the improvement proposed to enhance the convergence and beamforming capabilities of the recursive least square (RLS) adaptive algorithm. The improved algorithm described in this chapter is:

Improved gain vector recursive least square algorithm

The proposed algorithm is an improved version of a standard algorithm used for adaptive signal processing in general and adaptive beamforming in particular. Because the improvement is made to an algorithm that is not confined to adaptive beamforming, it is applicable to other adaptive signal processing applications as well, but we will confine our discussion mainly to adaptive beamforming.

The main consideration of our research work was to ensure that the achieved performance improvements do not put a heavy computational burden on the system, i.e. the proposed algorithm should not become difficult or computationally demanding to implement. The proposed algorithm was also thoroughly analyzed in different scenarios, such as different numbers of array elements (in beamforming) and different SNR conditions, so that any potential shortcomings of the algorithm are exposed and highlighted.

The proposed algorithm was simulated in MATLAB® and its convergence plots were compared with those of the standard algorithm under different conditions. The results were analyzed, and the achieved improvements in convergence speed were noted and presented.

The RLS (recursive least square) algorithm is one of the most effective adaptive algorithms in terms of performance; its convergence speed surpasses that of other standard algorithms such as LMS and N-LMS. Although RLS outperforms other algorithms in many ways, under some circumstances it takes longer to converge than is desirable, and many schemes have been proposed to achieve enhanced convergence performance from it. Such performance, however, comes at a cost: the main drawback of RLS is its high computational requirement, which not only puts an enormous strain on the system but also limits the possible improvements to the algorithm. Modified RLS algorithms that add to the already high computational requirements are impractical, as they would not be cost effective to implement.

The proposed modified RLS algorithm differs from the standard algorithm in one respect: the way the RLS gain vector k(n) is implemented. The standard RLS algorithm uses the gain vector:

k(n) = P(n − 1)u(n) / (λ + u^H(n)P(n − 1)u(n))

The proposed algorithm instead uses a gain vector that is conditional on the inverse of the magnitude of the error vector.

In simple terms, the proposed gain vector treats the gain vector of the standard algorithm as a minimum gain vector and scales it when needed to accelerate the convergence process.

## 5.1.1 Simulations and Results:

For analysis purposes, simulations were performed for 150 iterations for both the proposed and the standard RLS algorithm in the same scenario. The effects of the signal to noise ratio (SNR) and the forgetting factor parameter (λ) were observed by simulating the algorithms for different values of these parameters.

The three SNR values considered are:

SNR 18dB

SNR 10dB

SNR 0dB

For each SNR value the algorithms are simulated for three forgetting factor parameter values:

λ = 0.99

λ = 0.98

λ = 0.97

## For SNR 18dB:

Figure 21 Convergence Plot at λ=0.99

Figure 22 Convergence Plot at λ=0.98

Figure 23 Convergence Plot at λ=0.97

The plots clearly show that the proposed algorithm outperforms the standard algorithm in good SNR conditions for all three values of the forgetting factor parameter. If we consider a 5% error tolerable by the system, then for a forgetting factor of 0.99 the proposed algorithm converges in 100 iterations, whereas the standard algorithm takes 150 iterations to do so.

## For SNR 10dB

Figure 24 Convergence Plot at λ=0.99

Figure 25 Convergence Plot at λ=0.98

Figure 26 Convergence Plot at λ=0.97

For an SNR of 10 dB the proposed algorithm still performs better than the standard RLS algorithm. If we again consider the 5% error level tolerable, the proposed algorithm converges in 75 iterations, whereas the standard algorithm takes 140 iterations to achieve the same error level, so the proposed algorithm also performs better in relatively poor SNR conditions.

## For SNR 0dB

Figure 27 Convergence Plot at λ=0.99

In poor SNR conditions, i.e. 0 dB, the proposed algorithm converges faster than the standard algorithm, as shown by the graph above. Furthermore, for different values of the forgetting factor parameter the proposed algorithm continues to perform adequately.

Figure 28 Convergence Plot at λ=0.98

Figure 29 Convergence Plot at λ=0.97

The summary of the proposed algorithm is:

The algorithm is initialized by setting

w(0) = 0

P(0) = δ^−1 I

where δ is a small positive constant for high SNR and a large positive constant for low SNR.

For each instant of time n = 1, 2, 3, … compute

π(n) = P(n − 1)u(n)

ξ(n) = d(n) − w^H(n − 1)u(n)

w(n) = w(n − 1) + k(n)ξ*(n)

and

P(n) = λ^−1 P(n − 1) − λ^−1 k(n)u^H(n)P(n − 1)
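The recursion summarized above can be sketched in a few lines of NumPy. The sketch below implements only the conventional RLS recursion with the standard gain vector (not the proposed conditional gain, whose exact scaling rule is not reproduced here), and the filter length, forgetting factor, and test channel are arbitrary illustrative choices rather than values from the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                        # number of taps
lam = 0.99                   # forgetting factor lambda
delta = 0.01                 # small delta: high-SNR initialization
w_true = np.array([0.5, -0.3, 0.2, 0.1])  # unknown system to identify

w = np.zeros(M)              # w(0) = 0
P = np.eye(M) / delta        # P(0) = delta^-1 I

x = rng.standard_normal(500)
for n in range(M, 500):
    u = x[n - M + 1:n + 1][::-1]                     # tap-input vector u(n)
    d = w_true @ u + 0.001 * rng.standard_normal()   # desired response d(n)
    pi = P @ u                                       # pi(n) = P(n-1) u(n)
    k = pi / (lam + u @ pi)                          # standard gain vector k(n)
    xi = d - w @ u                                   # a priori error xi(n)
    w = w + k * xi                                   # weight update
    P = (P - np.outer(k, u @ P)) / lam               # inverse-correlation update

print(np.round(w, 3))
```

After a few hundred iterations the weight vector settles close to the test channel, reflecting the fast convergence the chapter attributes to RLS.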

The proposed algorithm adds only minimal computational complexity to the standard RLS algorithm and outperforms it in all of the simulated conditions. Its performance is also considerably better in low SNR conditions, so the proposed algorithm is a good choice for use in poor SNR environments.

## Chapter Number Six

## Robust and Efficient Least Mean Square algorithm

The Least Mean Square (LMS) algorithm is the most commonly employed adaptive signal processing algorithm. It is popular due to its relatively low computational requirements and good performance, but it has one drawback: its stability. The LMS algorithm is prone to divergence under some conditions. The proposed Robust and Efficient Least Mean Square (RELMS) algorithm seeks to remove this drawback of the LMS algorithm.

The LMS algorithm is described in some detail in earlier chapters, and the RELMS algorithm inherits most of its structure from it; the only difference between the two algorithms is the way the weight update is performed. The weight update equation of the standard LMS algorithm is given by:

w(n + 1) = w(n) + μu(n)e*(n)

w(n + 1) is the new weight vector

w(n) is the previous weight vector

u(n) is the current input vector

e*(n) is the complex conjugate of the error signal e(n) = d(n) − y(n)

μ is the step size parameter
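As a minimal sketch of the standard LMS weight update defined above, the following NumPy fragment identifies an assumed 4-tap channel; the step size, filter length, and channel coefficients are illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 4           # filter length
mu = 0.05       # step-size parameter mu
w_true = np.array([0.6, -0.2, 0.3, 0.1])  # channel to identify

w = np.zeros(M)                    # w(0) = 0
x = rng.standard_normal(2000)
for n in range(M, 2000):
    u = x[n - M + 1:n + 1][::-1]   # current input vector u(n)
    d = w_true @ u                 # desired response d(n)
    e = d - w @ u                  # error e(n) = d(n) - w^H(n) u(n)
    w = w + mu * u * e             # w(n+1) = w(n) + mu u(n) e*(n)

print(np.round(w, 3))
```

With a white input and a small step size the weights drift toward the channel coefficients, the behavior the LMS derivation predicts.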

It is evident from the LMS weight update equation that the conventional algorithm uses only one error vector for the computation of the new weight vector. The RELMS algorithm uses the two previous error vectors as well as the current error vector in the weight update process. The previous errors are given by:

e(n − 1) = d(n − 1) − y(n − 1)

e(n − 2) = d(n − 2) − y(n − 2)

The input vectors of the two previous iterations are also involved, so the weight update equation of the RELMS algorithm becomes:

w(n + 1) = w(n) + μ[γ1u(n)e*(n) + γ2u(n − 1)e*(n − 1) + γ3u(n − 2)e*(n − 2)]

In the above equation three new parameters are introduced, namely γ1, γ2 and γ3. These parameters are known as ratio parameters, and they determine the contribution of the previous error and input vectors to the weight update process. The product vectors of the input and error vectors are the main quantities on which the performance of the RELMS algorithm depends; for simplicity we introduce a notation for them:

ρ1 = u(n)e*(n)

ρ2 = u(n − 1)e*(n − 1)

ρ3 = u(n − 2)e*(n − 2)

Using this notation, the weight update equation of the RELMS algorithm can be written as:

w(n + 1) = w(n) + μ[γ1ρ1 + γ2ρ2 + γ3ρ3]

There are some constraints placed on the selection of the ratio parameters:

0 < γ1 < 1

0 < γ2 < 1

0 < γ3 < 1

These constraints ensure that no product vector is given a negative multiplier. One additional constraint must also be satisfied:

γ1 + γ2 + γ3 = 1

Now if we set γ2 = γ3 = 0, so that the sum constraint forces γ1 = 1, the RELMS weight update equation reduces to:

w(n + 1) = w(n) + μρ1

Substituting the value of ρ1 in the above equation, we get:

w(n + 1) = w(n) + μu(n)e*(n)

This is the weight update equation of the conventional LMS algorithm, so the conventional LMS algorithm can be considered a special case of the RELMS algorithm.
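This special case is easy to verify numerically. The short sketch below applies one RELMS update with γ1 = 1, γ2 = γ3 = 0 and one conventional LMS update to the same made-up data (the vectors and step size are arbitrary illustrative values) and confirms that the two results coincide:

```python
import numpy as np

mu = 0.1                          # step-size parameter
u = np.array([0.4, -1.2, 0.7])    # arbitrary input vector u(n)
w = np.array([0.1, 0.0, -0.2])    # current weight vector w(n)
d = 0.5                           # arbitrary desired sample d(n)
e = d - w @ u                     # current error e(n)

rho1 = u * e                      # product vector rho_1 = u(n) e*(n)
rho2 = np.zeros(3)                # previous product vectors set to zero
rho3 = np.zeros(3)

g1, g2, g3 = 1.0, 0.0, 0.0        # gamma_1 = 1, gamma_2 = gamma_3 = 0
w_relms = w + mu * (g1 * rho1 + g2 * rho2 + g3 * rho3)  # RELMS update
w_lms = w + mu * u * e                                  # conventional LMS update

print(np.allclose(w_relms, w_lms))
```

Both updates produce the same new weight vector, consistent with the reduction derived above.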

Figure 30 Simplified Block Diagram of RELMS

The RELMS algorithm can be summarized as follows:

## Parameters:

M = Number of taps (i.e., filter length)

μ = Step-size parameter

γ1 = First ratio parameter

γ2 = Second ratio parameter

γ3 = Third ratio parameter

ρ1 = First product vector

ρ2 = Second product vector

ρ3 = Third product vector

## Initialization:

w(0) = 0

ρ1 = ρ2 = ρ3 = 0

## Data:

Given:

u(n) = M-by-1 tap-input vector at time n:

u(n) = [u(n), u(n − 1), …, u(n − M + 1)]^T

To be computed:

w(n + 1) = estimate of tap-weight vector at time n + 1

Computation:

For n = 0, 1, 2, … compute

ρ3 = ρ2

ρ2 = ρ1

e(n) = d(n) − w^H(n)u(n)

ρ1 = u(n)e*(n)

w(n + 1) = w(n) + μ[γ1ρ1 + γ2ρ2 + γ3ρ3]
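The summarized RELMS recursion can be sketched as follows in NumPy. The scenario here is a simple noise-free system-identification run rather than the array-processing setup used in the thesis, and the step size, ratio parameters, and test channel are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

M = 4                             # number of taps
mu = 0.04                         # step-size parameter mu
g1, g2, g3 = 0.8, 0.1, 0.1        # ratio parameters, g1 + g2 + g3 = 1
w_true = np.array([0.5, -0.4, 0.2, 0.1])  # channel to identify

w = np.zeros(M)                   # w(0) = 0
rho1 = np.zeros(M)                # product vectors initialized to zero
rho2 = np.zeros(M)
rho3 = np.zeros(M)

x = rng.standard_normal(3000)
for n in range(M, 3000):
    rho3 = rho2                   # shift product vectors: oldest first
    rho2 = rho1
    u = x[n - M + 1:n + 1][::-1]  # tap-input vector u(n)
    d = w_true @ u                # desired response d(n)
    e = d - w @ u                 # e(n) = d(n) - w^H(n) u(n)
    rho1 = u * e                  # rho_1 = u(n) e*(n)
    w = w + mu * (g1 * rho1 + g2 * rho2 + g3 * rho3)  # RELMS update

print(np.round(w, 3))
```

Note that ρ3 must be shifted before ρ2 so that the two-sample lag is preserved between iterations.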

## 6.1.1 Simulations and Results:

Simulations were performed to gauge the comparative performance of the RELMS algorithm against the conventional algorithm. The scenario for the simulations is as follows:

A linear array consisting of 10 isotropic elements

A sinusoidal source signal arriving at 90°

An Additive White Gaussian Noise (AWGN) channel with a sinusoidal interference signal arriving at 45°

All weight vectors are initially set to zero

Signal to Noise Ratio (SNR) of 15dB

Signal to Interference Ratio (SIR) of 3dB

Figure 31 Convergence Plot

In the above plot the step size parameter of both the LMS and RELMS algorithms is set to 0.12, and the three ratio parameters of the RELMS algorithm have a value of 0.33 each. It is clear from the plot that the LMS algorithm has failed to converge in the given scenario, but the RELMS algorithm has remained stable.

An important thing to note in the above plot is that the error value for the RELMS algorithm dips below zero. This is because the error is plotted on a logarithmic scale, so the error itself is not going below zero but is merely between zero and one.

Figure 32 Convergence Plot

In the above plot the step size parameter of both the LMS and RELMS algorithms is set to 0.11, and the three ratio parameters of the RELMS algorithm have a value of 0.33 each. The error for the LMS algorithm rises continuously, which is an indicator of divergence, but the RELMS algorithm has remained stable.

Figure 33 Enhanced Convergence Rate of RELMS

The convergence plots of the LMS and RELMS algorithms for a third scenario are shown in the above figure. The step size parameter for both algorithms in this scenario is 0.08, and the three ratio parameters of the RELMS algorithm are set to 0.33. It is clear from the plot that the RELMS algorithm not only converges swiftly but also exhibits less Brownian motion around the optimum Wiener solution than the LMS algorithm.

Figure 34 Convergence Plot with non Uniform Ratio Parameters

Up to this point we have not assigned different values to the ratio parameters of the RELMS algorithm, but the convergence plot shown above is different. The step size parameters of both the LMS and RELMS algorithms are set to 0.08. The first ratio parameter of the RELMS algorithm is assigned a value of 0.8 and the remaining ratio parameters are assigned a value of 0.1 each, i.e. 80% of the contribution to the weight update comes from the current sample.

The plot clearly shows the rapid convergence of the RELMS algorithm and its lack of Brownian motion.

To analyze the effect of SNR on the performance of the RELMS algorithm, the SNR was reduced to 8dB and the ratio parameters were given uniform values; the resulting performance plot is shown below:

Figure 35 Performance in Reduced SNR

The RELMS algorithm remains superior in convergence performance, and exhibits less Brownian motion, even at the reduced SNR.

Figure 36 Rectangular and Polar Plot of Array Factor

The above figure shows the beamforming performance of the LMS and RELMS algorithms in comparison. The two algorithms are fairly similar in terms of beamforming performance, but the RELMS algorithm has placed relatively deeper nulls than the LMS algorithm.