# A Performance Evaluation Of Speech Enhancement Computer Science Essay



This thesis aims to ensure the quality of service (QoS) of speech enhancement under adverse noise environments, which requires that speech recognition performance in noisy environments be investigated. The approach taken here is to estimate speech enhancement performance with different noise reduction algorithms built on adaptive filters such as LMS and RLS. An acoustic noise cancellation (ANC) algorithm recovers the difference between noisy speech and its original clean version under adverse noise. In this work, two different noise cancellation algorithms are analysed and their performance is estimated. Other researchers have previously estimated performance using the signal-to-noise ratio (SNR), and various methods have been proposed, but their estimation accuracy has not been verified for the real-time case. Therefore, in this thesis the effectiveness of these noise reduction algorithms is evaluated by experiments using the MATLAB 7.9 software tool: Simulink models are developed for the different noise reduction algorithms and their performance is analysed to determine which noise cancellation algorithm best suits adverse noisy conditions.

This dissertation is about testing the performance of speech recognition in adverse environments, evaluated using different acoustic noise cancellation algorithms. The purpose of this thesis is to study various adaptive filter algorithms and their application to noise cancellation problems. Two of the most frequently applied algorithms for noise cancellation are taken for evaluation: least mean squares (LMS) and recursive least squares (RLS). These two algorithms are simulated in MATLAB/Simulink to analyse which of them performs better under noisy conditions and to evaluate their performance.

Speech recognition technology has grown widely in telecommunications. There is plenty of speech enhancement software on the market that improves recognition performance, but speech recognisers still suffer from problems such as noise distortion and noise interference, which degrade the speech signal in a communication system. This degradation depends on the background noise environment, so various types of noise reduction algorithm are used in speech recognition technology to minimise it; the noise reduction algorithm largely decides the quality of the speech signal in a communication system.

Human beings can hear sound up to a frequency of about 20 kHz, and exposure above certain levels can damage the ears. The most common problem is noise interference, which degrades the quality of the speech signal by masking it. Adverse noise is experienced in our day-to-day life: traffic, crowds, reverberation and echoes, and electronic sources such as so-called thermal noise. Noise levels are expressed in decibels using the signal-to-noise ratio (SNR), the difference between the power of the signal and the power of the masking noise; if the SNR falls below 0 dB, the noise is stronger than the speech.
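As an illustration of the SNR definition above, a short NumPy sketch (the 440 Hz tone, the 8 kHz sampling rate and the noise level are arbitrary assumptions, not values from the thesis):

```python
import numpy as np

# SNR in dB is the difference between signal power and noise power on a
# logarithmic scale; below 0 dB the noise power exceeds the signal power.
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 8000)           # 1 s at an assumed 8 kHz sampling rate
speech = np.sin(2 * np.pi * 440 * t)    # stand-in for a speech signal
noise = 0.5 * rng.standard_normal(t.size)

p_signal = np.mean(speech ** 2)         # average signal power
p_noise = np.mean(noise ** 2)           # average noise power
snr_db = 10 * np.log10(p_signal / p_noise)
```

With these assumed levels the tone is about twice as powerful as the noise, so the SNR is roughly 3 dB.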

Speech enhancement comprises several functions: echo cancellation, noise suppression and noise cancellation, each with different properties for improving the speech signal. Echo cancellation removes the reverberation that arises in an empty room or an auditorium, and even in mobile phones; the algorithms used for this are called echo cancellation algorithms. Noise suppression does not cancel the noise but reduces its level (in dB) in the speech signal; such techniques are less suited to digital signals but helpful in real time. Noise cancellation is used widely in speech enhancement; there are several noise cancellation techniques, and the type of algorithm involved plays a vital role, since algorithms differ in effectiveness according to the type of filter they use. One such filter type, the adaptive filter, is used in many fields, including telecommunications, geophysical signal processing and biomedical signal processing, in applications such as beamforming, noise cancellation and blind equalisation. These applications help speech enhancement, but the speech signal still needs to be clear, and special care should be taken for hearing-impaired listeners, who require more audio clarity than normal listeners; for them the system should have a greater SNR. Most of us use mobile phones, where speech communication still suffers from masking problems, so a good quality of service must be guaranteed for speech recognition in a satisfactory way. This means speech enhancement performance should be investigated in the target background noise environment.

This paper investigates and analyses noise cancellation algorithms through an experiment aimed at evaluating the quality of different noise cancellation algorithms on different real-time samples. An experiment is designed and implemented with the help of the MATLAB tool to find the best noise cancellation methodology. This report therefore compares different noise cancellation methods used in adaptive filters and, with the help of the simulated output, estimates the advantages and disadvantages of the noise cancellation algorithms in an adverse noisy environment.

Research Question

Will the investigation of noise cancellation algorithms help improve the performance of speech recognition in adverse environments?

Aims & Objectives

Analysis of the different types of acoustic noise in a signal

- To study the basics of speech signals

- To study the different types of noise affecting speech enhancement

Different types of adaptive filtering algorithms

- To study adaptive algorithms

- To study the LMS algorithm

- To study the RLS algorithm

Proposed Solution

- To implement and design the LMS algorithm in MATLAB

- To implement and design the RLS algorithm in MATLAB

- To study the formulation for calculating the percentage of noise cancellation

Research problems

Interference of background noise in the speech signal.

To improve the speech enhancement in adverse noisy environment.

Much software has been developed to minimise the noise in speech signals, but the effectiveness of the noise cancellation algorithm used in that software makes the difference, so the performance of the noise cancellation algorithm needs to be tested.

To improve the quality of speech signals.

Research methods:

Adaptive filtering techniques are used to study the different types of noise cancellation algorithm, which are designed and implemented in MATLAB. Mean square error (MSE) based adaptive algorithms, least mean squares (LMS) and recursive least squares (RLS), are used for noise cancellation in a communication system. The outputs of these algorithms are tested for performance by comparing their signal-to-noise ratios (SNR), and from this comparison the best noise cancellation algorithm is estimated.

Chapter Preview

This chapter gives a brief introduction to the project and to speech enhancement using noise reduction algorithms: the main features, problems faced, existing solutions, the proposed solution, the research question, the aims and objectives, and the approach and methodology.

## Chapter 2: Analysis of the different types of noise in a signal

## 2.1 What is a noise signal?

Noise is defined as unwanted sound heard by a human being, or in other words a redundant signal that interferes with a communication system. Noise is found in almost all environments; in a mobile communication system, for example, numerous mixtures of noise corrupt the quality of the signal. The general noise that affects speech signals can arise for various reasons, but in digital signals it can be separated into three main types:

1. White noise,

2. Coloured noise,

3. Impulsive noise.

## 2.1.1 White noise:

White noise is a signal containing all audible frequencies combined with equal intensity over time. It spans roughly the 20–20,000 Hz range and is therefore also known as white sound; it can be compared, for example, to rainfall or ocean waves, which are pleasant to hear. White noise is used to mask other noises and sounds because it contains everything from low tones to high pitches at equal intensity; its phase is uniformly uncertain between 0 and 2π. When noise signals are generated from two different sources, the resultant noise power is equal to the sum of the component powers.

Figure 1: Illustration of (a) white noise, (b) its autocorrelation, and (c) its power spectrum.

When white noise N(t) has zero mean and variance σ², its autocorrelation is nonzero only at zero lag, so the equation is given as

E[N(t)N(t + τ)] = σ² δ(τ)

The power spectrum of white noise is flat, i.e. constant over all frequencies:

P_NN(f) = σ²
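The delta-like autocorrelation can be checked numerically; a minimal sketch assuming unit-variance Gaussian white noise:

```python
import numpy as np

# Zero-mean, unit-variance white noise: the sample autocorrelation is
# approximately the variance at lag 0 and approximately zero at any other lag,
# matching E[N(t)N(t + tau)] = sigma^2 * delta(tau).
rng = np.random.default_rng(1)
n = rng.standard_normal(100_000)

r0 = np.mean(n * n)              # lag-0 autocorrelation (the variance)
r5 = np.mean(n[:-5] * n[5:])     # lag-5 autocorrelation, near zero
```

The lag-0 value comes out close to 1 and the lag-5 value close to 0, as the delta-function autocorrelation predicts.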

## 2.1.2 Coloured noise:

Coloured noise is the opposite of white noise: the covariance Rx(L) between its sample values at different time indices is not zero for lags L > 0, and Rx(L) decreases as L increases.

We can generate coloured noise from white noise by passing the white noise signal through a low-pass filter; this type of filter is known as a shaping filter.
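A minimal sketch of such a shaping filter, assuming a simple one-pole low-pass recursion (the coefficient 0.9 is an illustrative choice):

```python
import numpy as np

# Passing white noise through the one-pole low-pass filter
# y[k] = a*y[k-1] + (1-a)*x[k] produces coloured noise whose neighbouring
# samples are correlated, i.e. nonzero covariance at lags L > 0.
rng = np.random.default_rng(2)
white = rng.standard_normal(50_000)

a = 0.9
coloured = np.empty_like(white)
acc = 0.0
for k, x in enumerate(white):
    acc = a * acc + (1 - a) * x
    coloured[k] = acc

def autocorr(sig, lag):
    sig = sig - sig.mean()
    return np.mean(sig[:-lag] * sig[lag:]) / np.mean(sig ** 2)

rho_white = autocorr(white, 1)       # ~0 for white noise
rho_coloured = autocorr(coloured, 1) # clearly positive after shaping
```

The white input shows essentially no lag-1 correlation, while the filtered (coloured) output is strongly correlated between neighbouring samples.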

There are different types of coloured noise, such as brown noise, pink noise and orange noise, depending on the power spectral density (PSD) of the noise.

Noise such as traffic noise, noise from computers and background crowd noise has a low-frequency spectrum. White noise is said to become coloured noise when the shape of its spectrum is changed. The two most widely used coloured noises are pink noise and brown noise, shown in figures 2 and 3.

Figure 2: (a) A pink noise signal and (b) its magnitude spectrum.

Figure 3: (a) A brown noise signal and (b) its magnitude spectrum.

## 2.1.3 Impulsive noise:

Impulsive noise is defined as a sudden burst of noise with high amplitude; for example, the noise caused on a circuit by a voltage spike in a device is impulsive noise. It generates a click sound in the signal and is of short duration, around 1/100 of a second. Impulsive noise has very little effect on a voice signal but can cause errors in a communication system. It is also known as contaminated Gaussian noise.
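A sketch of impulsive noise as rare, high-amplitude spikes added to a clean signal (the spike count, amplitude and signal shape are illustrative assumptions):

```python
import numpy as np

# Impulsive noise: a small number of short, high-amplitude bursts placed at
# random positions in an otherwise clean signal.
rng = np.random.default_rng(3)
n_samples = 8000
clean = np.sin(2 * np.pi * 5 * np.arange(n_samples) / n_samples)

impulses = np.zeros(n_samples)
positions = rng.choice(n_samples, size=10, replace=False)  # ~0.1% of samples
impulses[positions] = rng.choice([-1, 1], size=10) * 10.0  # high amplitude

noisy = clean + impulses   # clicks superimposed on the clean waveform
```

Only a handful of samples are affected, but those samples dominate the peak amplitude of the signal, which is the characteristic "click" signature.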

Figure 4: Time and frequency sketches of: (a) an ideal impulse, (b) and (c) short duration pulses.

Figure 5: Sketch of the variation of the impulse response of a non-linear system with increasing impulse amplitude.

Impulsive noise enters the speech signal in time and space through the received signal and contaminates it; if the channel drastically changes the impulse's duration and shape, the result is called the impulse response. The communication channel may be linear or non-linear, and may be time-variant; a playback system is a time-invariant communication channel. Figure 5 above shows impulse variation in a channel that generates a decaying transient impulse.

## 2.1.4 Thermal Noise:

Thermal noise arises from the random thermal motion of particles: for example, gas molecules moving randomly in a container or, analogously, electrons moving in a conductor. This random movement causes fluctuations, which are random above a specific level of pressure. Thermal noise increases with temperature, which makes the kinetically energised molecules fluctuate more.

Like gases, electrical conductors contain a large number of free electrons; randomly moving ions disturb the equilibrium of the moving electrons, and this free movement of electrons causes thermal noise, which averages to zero. When the temperature of the conductor increases, the random component of the current flow increases, and resistors are used to control the flow. The mean-square thermal noise voltage is given as

E[v²] = 4kTRB

where k is Boltzmann's constant, T the absolute temperature, R the resistance and B the bandwidth.
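The formula can be evaluated for a concrete case; the resistor value, temperature and bandwidth below are illustrative assumptions:

```python
# Mean-square thermal noise voltage E[v^2] = 4kTRB for a 1 kOhm resistor at
# room temperature over a 10 kHz bandwidth (illustrative values).
k = 1.380649e-23   # Boltzmann constant, J/K
T = 290.0          # temperature, K
R = 1_000.0        # resistance, ohms
B = 10_000.0       # bandwidth, Hz

v_squared = 4 * k * T * R * B   # mean-square noise voltage, V^2
v_rms = v_squared ** 0.5        # RMS noise voltage, ~0.4 microvolts here
```

Even over a 10 kHz bandwidth the RMS thermal noise of a 1 kΩ resistor is well under a microvolt, which is why thermal noise matters mostly in very low-level or wide-bandwidth circuits.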

## 2.1.5 Acoustic noise:

Acoustic noise is generated by sources such as vibration and collisions. It is the most common type of noise in the day-to-day environment: noise generated by traffic, electronic noise, fan noise, background noise, and natural noise such as wind and rain.

## 2.2 DISCRETE TIME SIGNALS:

Speech signals are in general analog, continuous signals. The audio signals heard by human ears are continuous waveforms of fluctuating air pressure at various frequencies, which we perceive as sound. In present communication systems, however, these sound signals are represented as discrete numeric sequences, each element of which takes the value of the continuous signal at one sampling instant; the spacing between instants, Ts, is known as the sampling period.

That is, if x(t) is a continuous waveform, we convert it to a discrete-time vector in order to process it digitally. Each element of the vector takes the value of the waveform at an integer multiple of the sampling period; the value of x(t) at n times the sampling period is represented as x(n):

x(n) = x(nTs)
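A small sketch of this sampling relation, assuming a 50 Hz sine wave sampled at 1 kHz (arbitrary illustrative values):

```python
import numpy as np

# Sampling a continuous waveform x(t) = sin(2*pi*f*t) with period Ts gives
# the discrete sequence x(n) = x(n*Ts).
f = 50.0          # waveform frequency, Hz
Ts = 1 / 1000.0   # sampling period (1 kHz sampling rate)

n = np.arange(100)                     # sample indices
x_n = np.sin(2 * np.pi * f * n * Ts)   # x(n) = x(nTs)
```

At n = 5 the sampled value is sin(2π · 50 · 0.005) = sin(π/2) = 1, the peak of the sine wave.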

## 2.3 RANDOM SIGNALS:

Random signals are expressed as a random variable function X(t). Such waveforms have no precise description; random processes are instead described by statistical or probabilistic models, and random signals are unpredictable. If each realisation is denoted by n, the random signal is represented by the two-variable function X(t, n).

The other main characteristic of a random signal is its expectation. The mean value of the random variable X(t) is referred to as E[X(t)]. In acoustic noise cancellation a single reference input process is used, so the number of input processes is taken as 1. The expectation E[x(n)] is used to derive the various adaptive filter algorithms.

## 2.4 SPEECH SIGNALS:

In general, speech sounds are divided into three types: voiced, fricative and plosive. Voiced sounds are formed by the vocal tract with a periodic pulse waveform. The second type, fricative sounds, are caused by constricting the vocal tract in the airflow, which creates a noise-like impression due to turbulence. The third type, plosive sounds, are created by closing the vocal tract, building up air pressure behind the closure and releasing it suddenly, as for the letter p.

Figure: Speech signal representation.

The figure represents a discrete-time speech signal. The signal is non-stationary, since it varies randomly with time, and no exact mathematical model can be given for the random process. Across all three types of sound, the speech signal is treated as a linear composite that remains approximately stationary over intervals of 30 to 40 ms; adaptive filter input signals are usually assumed stationary over such intervals because speech is not stationary overall.
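The 30–40 ms stationarity assumption is why speech processing works on short frames; a sketch assuming an 8 kHz sampling rate (the rate and frame length are illustrative choices):

```python
import numpy as np

# Speech is treated as stationary only over short intervals, so processing is
# done frame by frame; at an assumed 8 kHz rate a 30 ms frame is 240 samples.
fs = 8000                           # assumed sampling rate, Hz
frame_ms = 30
frame_len = fs * frame_ms // 1000   # samples per frame

signal = np.zeros(8000)             # 1 s of (placeholder) speech samples
n_frames = signal.size // frame_len
frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
```

One second of audio splits into 33 complete 240-sample frames, each of which can then be processed under the local-stationarity assumption.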

## Chapter 3: Adaptive filtering algorithms

## 3.1 Different types of adaptive filtering algorithms

Adaptive filters are used in environments requiring adaptation with low processing delay. They are most commonly applied to noise reduction, channel equalisation, acoustic echo cancellation, interference in speech and delayed speech signals. An adaptive filter works from the difference e(n) between the desired signal and the adaptive filter output. The error signal drives the algorithm to reduce a function of that difference, known as the cost function. In the case of acoustic noise cancellation, the best possible output of the adaptive filter has the same value as the redundant echo signal.

This chapter examines various adaptive filter algorithms. This thesis handles two of them: least mean squares (LMS) and recursive least squares (RLS).

## 3.2 ADAPTIVE FILTERS

Adaptive filters use the mean squared error (MSE) criterion. There are different MSE-based algorithms: the least mean squares algorithm, the normalised least mean squares algorithm and the recursive least squares algorithm. Of these, the LMS and RLS algorithms are used in this thesis to test their performance at filtering the noise signal. These algorithms are explained in the following sections.

## 3.2.1 LEAST MEAN SQUARE (LMS) ALGORITHM:

The least mean squares algorithm is one of the most commonly used adaptive filter algorithms. The LMS algorithm is also known as a stochastic gradient based algorithm because it uses the gradient vector of the filter taps to converge towards the optimal Wiener solution. The algorithm is popular for its computational simplicity, which makes it efficient. At each iteration of the LMS algorithm, the filter tap weights of the adaptive filter are updated by the following formula:

w(n + 1) = w(n) + 2μe(n)x(n)

where the input vector x(n) holds delayed input values, x(n) = [x(n) x(n-1) x(n-2) … x(n-N+1)]T, and the coefficients of the adaptive filter are w(n) = [w0(n) w1(n) w2(n) … wN-1(n)]T, with n the time index of the filter tap weight vector. The step size parameter μ is a small positive constant. Selecting a suitable value for μ, which controls the update factor, is vital for the performance of the LMS algorithm: if μ is very small, the adaptive filter takes a long time to converge to the optimal solution; on the other hand, if μ is very large, the adaptive filter becomes unstable and its output diverges.

## 3.2.2 Derivation of LMS algorithm:

The derivation starts from the Wiener theory of optimal filter tap weights. The steepest-descent formula that updates the adaptive filter coefficients, with cost function ξ(n), is

w(n + 1) = w(n) − μ∇ξ(n)

where ξ(n) = E[e²(n)].

Each recursion shifts the value of the filter coefficients closer toward their optimum value, which corresponds to the minimum achievable value of the cost function ξ(n). This derivation follows Diniz 1997, pp. 71-3 and Farhang-Boroujeny 1999, pp. 139-41. The expectation of the squared error signal is unknown, so its instantaneous value is taken as an estimate, and the descent equation becomes

w(n + 1) = w(n) − μ∇ξ(n), with ξ(n) = e²(n)

The gradient ∇ξ(n) can then be expanded as follows:

∇ξ(n) = ∂e²(n)/∂w(n)

= 2e(n) ∂e(n)/∂w(n)

= 2e(n) ∂[d(n) − wT(n)x(n)]/∂w(n)

= −2e(n)x(n)

Substituting this value into the filter tap weight formula, we obtain the recursion of the LMS adaptive algorithm:

w(n + 1) = w(n) + 2μe(n)x(n)
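This recursion can be sketched as an adaptive noise canceller in NumPy; the filter length, step size, signal and noise-path model below are all illustrative assumptions, not values from the thesis:

```python
import numpy as np

# LMS adaptive noise canceller: d(n) is speech + noise, x(n) is a correlated
# noise reference; the filter learns to predict the noise, so the error e(n)
# converges towards the clean speech.
rng = np.random.default_rng(4)
N = 8            # number of taps (assumed)
mu = 0.01        # step size (assumed)
n_samples = 20_000

speech = np.sin(2 * np.pi * 0.01 * np.arange(n_samples))
ref = rng.standard_normal(n_samples)                       # noise reference
noise = 0.8 * ref + 0.3 * np.concatenate(([0.0], ref[:-1]))  # noise path
d = speech + noise                                         # primary input

w = np.zeros(N)
e = np.zeros(n_samples)
for k in range(N, n_samples):
    x = ref[k - N + 1:k + 1][::-1]   # tap-delay-line input vector x(n)
    y = w @ x                        # filter output (noise estimate)
    e[k] = d[k] - y                  # error = enhanced speech estimate
    w = w + 2 * mu * e[k] * x        # w(n+1) = w(n) + 2*mu*e(n)*x(n)

# residual noise power after convergence (distance of e from the clean speech)
mse_tail = np.mean((e[-5000:] - speech[-5000:]) ** 2)
```

The residual noise power in the tail is far below the original noise power (about 0.73 here), showing the update converging to the noise path.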

## 3.2.3 NORMALISED LEAST MEAN SQUARE (NLMS) ALGORITHM

The normalised least mean squares (NLMS) algorithm is an extension of LMS. LMS has the disadvantage that its step size parameter cannot change between iterations, while many factors, such as input signal power and amplitude, affect performance; NLMS therefore uses a time-varying step size, represented as μ(n).

"This step size is proportional to the inverse of the total expected energy of the instantaneous values of the coefficients of the input vector x(n). This sum of the expected energies of the input samples is also equivalent to the dot product of the input vector with itself, and the trace of input vectors auto-correlation matrix, R" (Farhang-Boroujeny 1999, p.172)

tr[R] = Σ_{i=0}^{N-1} E[x²(n − i)] = E[xT(n)x(n)]

The NLMS algorithm is then formulated by the following equation:

w(n + 1) = w(n) + [μ / (xT(n)x(n))] e(n)x(n)
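A sketch of the normalised update in a system-identification setting; the filter length, step size, regularisation term ε (added to guard against division by a near-zero norm) and the test system are illustrative assumptions:

```python
import numpy as np

# NLMS: the LMS step is divided by the input energy x(n)^T x(n), making the
# effective step size independent of the input power. Here the input is
# deliberately high-power; NLMS still converges cleanly to the unknown system.
rng = np.random.default_rng(5)
N, mu, eps = 8, 0.5, 1e-8
n_samples = 5000

x_sig = 10.0 * rng.standard_normal(n_samples)  # high-power input
h = np.array([0.5, -0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # unknown system
d = np.convolve(x_sig, h)[:n_samples]          # desired response

w = np.zeros(N)
for k in range(N, n_samples):
    x = x_sig[k - N + 1:k + 1][::-1]
    e = d[k] - w @ x
    w = w + (mu / (eps + x @ x)) * e * x       # normalised update

err = np.max(np.abs(w - h))                    # distance from the true system
```

Without the normalising denominator, a plain LMS update with a fixed μ sized for unit-power input would diverge on this 100×-power signal; the NLMS weights instead converge to h.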

## 3.2.4 The Problem with NLMS:

Even though NLMS is an improvement on LMS, it has drawbacks when near-end speech exists: the performance of the NLMS algorithm degrades during double-talk, because of the change in the ratio between the noise signal and the remote signal. The update

w(k + 1) = w(k) + [μ / (xT(k)x(k) + ε)] e(k)x(k)

becomes unstable when near-end speech is present: the weights diverge when the error signal grows due to the near-end signal, and the coefficients also go to an unstable state when the input is nearly zero, as happens during a temporary halt in double-talk. These tap-weight divergence problems are not seen in the LMS algorithm, so the NLMS algorithm is not used in this thesis because of its poorer performance with speech signals.

## 3.2.5 RECURSIVE LEAST SQUARES (RLS) ADAPTIVE FILTERS.

The second type of adaptive filter used in this thesis is the recursive least squares (RLS) algorithm. Its main advantage is that it minimises a weighted least-squares cost over all past samples. With time index k = 1, …, n and a forgetting factor λ slightly less than 1, the cost is calculated as (Farhang-Boroujeny, p. 419)

ζ(n) = Σ_{k=1}^{n} λ^{n−k} e²(k)

The RLS filter tap weights are updated using the equation

w(n) = w(n − 1) + k(n)e_{n−1}(n)

The gain vector used to compute the weights is

k(n) = u(n) / (λ + xT(n)u(n)), where u(n) = Ψ⁻¹(n − 1)x(n)

The forgetting factor λ is a small positive constant very close to, but less than, one.

In RLS the previous output sample is calculated from the tap weights and the input; compared with the LMS derivatives, the error value in the RLS algorithm is calculated directly at each step. Because of this, the RLS algorithm performs well in randomly time-varying environments, but it has some problems of high computational cost and stability.
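The RLS recursion can be sketched in NumPy for a system-identification task; the filter length, forgetting factor, initial inverse matrix and test system are illustrative assumptions:

```python
import numpy as np

# Standard RLS recursion: gain vector k(n), a priori error, weight update,
# and a recursive update of the inverse correlation matrix P = Psi^-1.
rng = np.random.default_rng(6)
N, lam = 4, 0.99
n_samples = 2000

x_sig = rng.standard_normal(n_samples)
h = np.array([0.7, -0.3, 0.2, 0.1])   # unknown system to identify
d = np.convolve(x_sig, h)[:n_samples]

w = np.zeros(N)
P = np.eye(N) * 100.0                 # initial inverse correlation matrix
for k in range(N, n_samples):
    x = x_sig[k - N + 1:k + 1][::-1]
    u = P @ x                         # u(n) = P(n-1) x(n)
    g = u / (lam + x @ u)             # gain vector k(n)
    e = d[k] - w @ x                  # a priori error e_{n-1}(n)
    w = w + g * e                     # tap-weight update
    P = (P - np.outer(g, u)) / lam    # inverse matrix update

err = np.max(np.abs(w - h))           # distance from the true system
```

On this noise-free problem the weights lock onto h within a few dozen samples, illustrating the fast convergence that RLS buys at the cost of the O(N²) matrix update per iteration.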

## 3.2.6 Derivation of the RLS algorithm.

As time progresses the RLS algorithm must account for all previous error values, but memory capacity is limited, so a recursive formulation is used to make the algorithm run faster. In this derivation it is assumed that only a finite number of data values are processed; N is taken as the length of the FIR filter corresponding to the RLS algorithm.

The recursive least squares derivation below is summarised from Farhang-Boroujeny, pages 419 to 425.

Here yn(k) is the output of the FIR filter for the input vector at previous time k, and en(k) is the error value, the deviation between the desired output d(k) at time k and the FIR filter output yn(k). These and other appropriate definitions are expressed, for k = 1, …, n, as

d(n) = [d(1), d(2), …, d(n)]T

y(n) = [yn(1), yn(2), …, yn(n)]T

e(n) = [en(1), en(2), …, en(n)]T

e(n) = d(n) − y(n)

X(n) is defined as the matrix whose columns are the input vectors up to time n, so the FIR filter output vector is defined by the following equations:

X(n) = [x(1), x(2), …, x(n)]

y(n) = XT(n)w(n)

The cost function is computed in matrix-vector form using the weighting matrix Λ(n) = diag(λ^{n−1}, λ^{n−2}, …, 1):

ζ(n) = eT(n)Λ(n)e(n)

By substituting e(n) = d(n) − XT(n)w(n), the cost function expands to

ζ(n) = dT(n)Λ(n)d(n) − 2θλT(n)w(n) + wT(n)Ψλ(n)w(n)

where

Ψλ(n) = X(n)Λ(n)XT(n)

θλ(n) = X(n)Λ(n)d(n)

Setting the gradient of the cost function with respect to the tap weights to zero, the optimal filter coefficients satisfy

Ψλ(n)w(n) = θλ(n)

w(n) = Ψλ⁻¹(n)θλ(n)

where w(n) minimises the cost function.

To put this in recursive form, the matrix Ψλ(n) is expanded and restructured using the matrix inversion lemma; with the inverse matrix recursion we can estimate the filter tap weight vector, and the gain vector k(n) is introduced to reduce the order of the calculation:

Ψλ(n) = λΨλ(n − 1) + x(n)xT(n)

Ψλ⁻¹(n) = λ⁻¹[Ψλ⁻¹(n − 1) − k(n)xT(n)Ψλ⁻¹(n − 1)]

where

k(n) = Ψλ⁻¹(n − 1)x(n) / (λ + xT(n)Ψλ⁻¹(n − 1)x(n))

The vector θλ(n) obeys the recursion θλ(n) = λθλ(n − 1) + x(n)d(n). Substituting this together with Ψλ⁻¹(n) into w(n) = Ψλ⁻¹(n)θλ(n), the filter weight update of the RLS algorithm is found to be

w(n) = w(n − 1) + k(n)e_{n−1}(n)

where e_{n−1}(n) = d(n) − wT(n − 1)x(n)

## 3.3 Applications of Adaptive Filters:

The adaptive filter is a very popular concept in many fields. Adaptive filters are effective because they keep updating the filter weights after each iteration. They are used in many applications, such as telecommunication systems, signal engineering and military systems, because they can be deployed even in unknown environments.

## 3.3.1 Signal enhancement:

The most important application of the adaptive filter is signal enhancement for speech recognition. In the figure below the system has two inputs: one is x(n) + w1(n), a corrupted signal in which x(n) denotes the desired signal and w1(n) the additive noise, and the other is the reference input w2(n). The reference w2(n) is passed through the adaptive filter, and its output is subtracted from the corrupted signal, the difference being minimised in the mean-square sense to give the enhanced output y(n). This type of adaptive filter is used in signal enhancement applications such as hearing aids and noise cancellation in headphones.

Figure : Conceptual adaptive system for signal enhancement

## 3.3.2 Signal prediction:

Adaptive filters are used in signal prediction applications: in a communication system, some signals at the receiver end arrive delayed or unknown, and these signals are reconstructed by prediction.

The filter is built with a delayed input x(n), and it estimates the current input signal from past signal values. This technique is used in speech coding to predict the coding parameters.

Figure : Conceptual adaptive system for signal prediction

## 3.3.3 Inverse modelling:

Inverse-modelling adaptive filters are most commonly used for noisy systems. The output converges towards the input signal x(n) when the transfer function of the adaptive filter approximates the reciprocal of the channel's transfer function. This type of adaptive filter is used in noise reduction; the figure below shows the filter design.

Figure : Conceptual adaptive system for inverse modelling

## 3.3.4 System for identification:

System identification is used in applications such as acoustic echo cancellation. The figure shows the adaptive filter h(n) in parallel with an unknown system H(n); the filter is built with a linear model, and both systems receive more or less the same input. The unknown system's output y(n) is used as the desired response, which allows a good error estimate e(n) to be formed at each iteration.

Figure 3.6: Conceptual adaptive system for identification

## Chapter 4: Implementation and design

## 4.1 Implementation and design:

This chapter discusses the design of the algorithms in MATLAB and how they are implemented with adaptive filters to decrease the external noise in the primary source signal. In this thesis the RLS algorithm was also built in MATLAB Simulink and the results are shown; that simulation is generated using noise interference with a wideband signal as input, whereas in the other experiments Gaussian noise is used to test the filtered output. First we discuss the design and implementation of the least mean squares (LMS) algorithm.

## 4.2 Implementation of the LMS algorithm:

The least mean squares algorithm is implemented in three steps: first the FIR filter output is estimated, second the error value is calculated, and third the tap weights are updated for the next iteration.

First, the FIR filter output y(n) is calculated by the equation

y(n) = wT(n)x(n)

In the second step, the error signal is estimated by the equation

e(n) = d(n) − y(n)

In the third step, the filter weights are updated for the subsequent iteration:

w(n + 1) = w(n) + 2µe(n)x(n)

Summary of the LMS algorithm

Input:

- tap-weight vector, w(n)

- input vector, x(n)

- desired output, d(n)

Output:

- filter output, y(n)

- tap-weight vector update, w(n + 1)

The LMS algorithm is one of the most common adaptive algorithms because of its computational simplicity, which makes it easy to implement as an adaptive filter. Each iteration of the LMS algorithm requires 2N + 1 multiplications: N to estimate the output y(n), one to form 2μe(n), and N for the scalar-by-vector multiplication in the weight update.

## 4.2.1 Flow chart of the algorithm:

1. Start

2. Set the filter parameters

3. Add the signal frequency and noise frequency

4. Initialise

5. Generate the source signal and the noisy input (noise + signal)

6. Shift the generated input into a new input array and add random noise

7. Calculate the filtered output with the LMS algorithm by computing the error and updating the tap weights

8. Plot the noise and the filtered output

9. End process

## 4.2.2 Design of Least mean square algorithm:

The least mean squares algorithm is designed with the help of MATLAB. First the algorithm sets the filter parameters: the number of taps is given as M = 20, the step size parameter as mu = 0.05, and the number of epochs as a maximum of 500. The signal frequency is taken as Fs = 0.02 and the noise frequency as Fn = 0.05. The next step is signal generation, where the desired signal is generated with the noise signal added to it; this is done by running a for loop with an input sine wave and a random noise function. The LMS algorithm takes the inputs and the generated desired signal, estimates the error, and updates the tap weights at each iteration inside the for loop using w(n) = w(n) + mu*u(n)*e, with u(1) as the LMS output sequence and e as the error output. The graphs are then plotted for the input signal.

## 4.2.3 Output estimation:

## 4.2.3.1 Phase 1:

The figure shows the sine waveform of the given discrete input sine wave, represented as the primary signal at the bottom of the figure. The signal is iterated up to the maximum number of epochs, E-max = 500, with an amplitude range of 1 to -1; the graph is plotted as signal amplitude against time.

Figure - Input signal and noise added signal

The top graph, on the other hand, shows the noise mixed into the primary signal: random Gaussian noise is added to the primary source signal, distorting it as a real-time signal is distorted by background noise, so that it is more or less similar to an adverse noise condition. A 2 Hz signal frequency and a 5 Hz noise frequency are used in this generation.

## 4.2.3.2 Phase 2:

In phase 2 the LMS output signal is generated; we can see that the noise distortion is reduced to an extent. The graph is plotted as signal amplitude versus time. The bottom graph represents the LMS error, where the error rate decreases gradually; it is plotted from the calculated SNR value over the time sequence.

## 4.3 Implementation of the RLS algorithm:

The recursive least squares algorithm has a finite number of values equivalent to the tap-weight filter vector. In principle the RLS algorithm requires a matrix inversion, but when applying the algorithm recursively no explicit inversion needs to be calculated, which saves a great deal of computation. Compared to the LMS algorithm, each iteration's variables are computed from the previous iteration's variables.

To implement the RLS algorithm, the following steps are executed in order.

1. From the tap-weight vector of the prior iteration and the current input vector, the output of the filter is estimated:

y(n) = W(n-1)' u(n)

2. The intermediate vector and the gain are calculated using the equations

U(n) = P(n-1) u(n)

K(n) = U(n) / (λ + u(n)' U(n))

3. The a priori error value is derived using the following equation:

e(n) = d(n) - y(n)

4. The tap-weight vector is updated with the calculated gain:

W(n) = W(n-1) + K(n) e(n)

5. The inverse correlation matrix is updated using the equation

P(n) = (1/λ) [ P(n-1) - K(n) u(n)' P(n-1) ]
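The five steps above can be sketched directly. Again the thesis's implementation is MATLAB; this is a hedged NumPy sketch, with the forgetting factor λ, the initialization constant δ, and the test-signal parameters chosen as illustrative assumptions:

```python
import numpy as np

def rls_filter(d, x, order=8, lam=0.99, delta=100.0):
    """Recursive least squares adaptive filter.
    d : desired (noisy) signal, x : reference noise input, lam : forgetting factor.
    P, the inverse correlation matrix, is updated recursively, so no
    explicit matrix inversion is ever performed."""
    n = len(d)
    w = np.zeros(order)
    P = delta * np.eye(order)                # P(0) = delta * I
    e = np.zeros(n)
    for i in range(order - 1, n):
        u = x[i - order + 1:i + 1][::-1]
        U = P @ u                            # step 2a: intermediate vector U(n)
        k = U / (lam + u @ U)                # step 2b: gain vector K(n)
        e[i] = d[i] - w @ u                  # steps 1 and 3: a priori error
        w = w + k * e[i]                     # step 4: tap-weight update
        P = (P - np.outer(k, u @ P)) / lam   # step 5: inverse-matrix update
    return e, w

# Example: sine corrupted by Gaussian noise, then RLS-filtered
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 2 * t)
noise = 0.5 * rng.standard_normal(500)
e, w = rls_filter(clean + noise, noise, order=8, lam=0.99)
```

Note that the error signal `e` is the cleaned output, and that `P` replaces the matrix inversion that the derivation would otherwise require.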

## 4.3.1 Design of the recursive least square algorithm (RLS):

The recursive least squares algorithm is designed in MATLAB version 7.9. The code starts with the generation of a sine-wave input signal of length N = 2048. Additive Gaussian noise with random values is added to the signal. The forgetting factor λ is taken as 1; its inverse, 1/λ, is what appears in the derived update formula. Zero padding is included so that the filter starts from zero. A for loop is then created in which extra random noise is added to the signal and the derived RLS formula is applied to calculate the output error signal.

w(n+1,:) = w(n,:) + k' * e(n);   % tap-weight update driven by the a priori error e(n)

Graphs are then plotted for the primary signal, the filtered output of RLS, and the mean-squared error. The noise at the early iterations is calculated for the output, where it may be very large, and the SNR is measured for the RLS-filtered output.

The SNR is calculated in two ways, a pre-SNR and a post-SNR: the pre-SNR is calculated for the noisy input primary signal and the post-SNR for the filtered output of RLS, so the difference between the two measures the improvement delivered by the filter. In this experiment the measured post-SNR value is 9.0375 and the pre-SNR value is 0.2097, a large change in signal-to-noise ratio: roughly 90% of the noise is removed in the filtered output.
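The thesis does not reproduce its exact SNR formula, but one common definition, consistent with a pre/post comparison against a clean reference, is 10·log10 of the signal-to-noise power ratio. A minimal sketch under that assumption:

```python
import numpy as np

def snr_db(clean, observed):
    """SNR in dB of `observed` relative to the clean reference signal.
    The 'noise' is whatever remains after subtracting the clean signal."""
    residual = observed - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2))

# Deterministic check: unit signal plus a constant 0.1 offset
# gives a power ratio of 100, i.e. exactly 20 dB.
clean = np.ones(100)
noisy = clean + 0.1
print(round(snr_db(clean, noisy), 4))  # → 20.0
```

With this function, the pre-SNR is `snr_db(clean, noisy)` and the post-SNR is `snr_db(clean, filtered)`; the filter's improvement is simply their difference in dB.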

The simulation proceeds in the following steps:

1. The input primary source signal is generated.

2. Noise is added to the signal.

3. The desired signal and the noisy signal are combined.

4. The filtered output signal is calculated.

5. The mean square error is calculated.

Output estimation:

The figure shows three graphs. The top graph plots the input primary signal with noise, showing the signal corrupted by additive Gaussian noise over the full iteration length. The middle graph represents the filtered output of the RLS algorithm; here the error in the signal is reduced and the input sine wave appears clearly at the output. The bottom graph represents the mean-squared error of the RLS algorithm, which falls below 0.1, indicating the good filtering capacity of the RLS algorithm.

## Comparison of least mean square (LMS) algorithm and recursive least square (RLS):

The simulation results are obtained using a real-time speech input signal in the MATLAB environment. They show that both the LMS and RLS algorithms give very good noise-cancellation results and complete the task of noise reduction. The LMS filter's results are relatively good: the required filter length is relatively short, its structure is simple, its computational load is small, and it is easy to realize in hardware. Its shortcoming is a slower convergence rate, and there is a trade-off between convergence speed and the noise in the weight vector: accelerating convergence also increases that noise. The convergence of the adaptive filter is very sensitive to the choice of the gain constant μ. When the noise power is large compared with the signal power, the LMS filter output is not satisfactory, but this can be improved by adjusting the step-size factor and the length of the filter. The RLS algorithm converges faster than the LMS algorithm, its convergence is unrelated to the spectrum of the input signal, and its filtering performance is superior to the least mean squares algorithm; however, each iteration involves much more computation than LMS, the required storage capacity is large, timely operation is harder to achieve, and the hardware is also relatively difficult to realize for the noise-reduction task.

## Chapter 5: Critical Appraisal, Recommendations and Suggestions

## Advantages and disadvantages of the LMS algorithm:

(1) Simplicity of implementation.

(2) Stable and robust performance against different signal conditions.

(3) Slow convergence (due to eigenvalue spread).

Due to its relative computational simplicity and desirable numerical qualities, the LMS algorithm has received a great deal of attention in the field of noise cancellation.

Its convergence rate has since been surpassed by several techniques. The implementation of an LMS filter is very simple, but it cannot be pipelined because of the recursive loop in its weight-update formula, which prevents its use in systems with higher data rates. A modified version of the LMS algorithm, the delayed LMS algorithm, inserts certain delays into the error feedback loop and hence can be pipelined.

## Advantages and disadvantages of the RLS algorithm:

While the RLS algorithm has the advantages of a fast convergence rate and low sensitivity to input eigenvalue spread, it has the disadvantage of being computationally intensive: the matrix-by-vector multiplications in (E.6 - E.8) require O(n²) operations per iteration. It also suffers from possible numerical instability and from bad tracking performance in non-stationary environments when compared to LMS. LMS is intrinsically slow because it does not decorrelate its inputs prior to adaptive filtering, whereas RLS preprocesses the inputs by an estimate of the inverse input autocorrelation matrix, and this leads to the computational problems cited above. One proposed compromise consists of preprocessing the inputs to the LMS filter with a fixed transformation that does not depend on the actual input data. The decorrelation is then only approximate, but the computational cost remains O(n) and the robustness and tracking ability of LMS are not affected.

Although the RLS algorithm has its advantages, such as implementing real-time identification systems, which makes it a key element in adaptive control, it has some disadvantages or limitations in practical implementations:

1. If the system is unstable or if the testing signal is not persistently exciting enough, the estimated parameters will diverge and the estimation will be invalid.

2. The accuracy of the estimated model is subject to the initial model selection and the input exciting signal.

3. The sampling frequency can affect the estimated model.

4. Drift, trend, offset, and seasonal variations cause harmful influences.

## Conclusion:

Adaptive filtering is an important basis of signal processing and in recent years has developed rapidly, with a wide range of applications in various fields. Beyond noise cancellation, the application space of adaptive filters is very extensive: well-known areas include system identification, adaptive equalization, linear prediction, and adaptive antenna arrays, among many others. Adaptive signal processing as a new subject is developing rapidly and in depth, especially blind adaptive signal processing and the use of neural networks for non-linear signal processing.

## Future work:

The application can be extended to noise reduction in speech for hearing aids in noisy environments such as crowd noise, car noise, cockpit noise, and aircraft noise. With modified RLS and LMS algorithms the convergence speed can be increased, and fast algorithms can be developed to meet real-time requirements.


1. In the LMS algorithm, the correction that is applied in updating the old estimate of the coefficient vector is based on the instantaneous sample value of the tap-input vector and the error signal. On the other hand, in the RLS algorithm the computation of this correction utilizes all the past available information.

2. In the LMS algorithm, the correction applied to the previous estimate consists of the product of three factors: the (scalar) step-size parameter μ, the error signal e(n-1), and the tap-input vector u(n-1). On the other hand, in the RLS algorithm this correction consists of the product of two factors: the true estimation error h(n) and the gain vector k(n). The gain vector itself consists of F⁻¹(n), the inverse of the deterministic correlation matrix, multiplied by the tap-input vector u(n). The major difference between the LMS and RLS algorithms is therefore the presence of F⁻¹(n) in the correction term of the RLS algorithm, which has the effect of decorrelating the successive tap inputs, thereby making the RLS algorithm self-orthogonalizing. Because of this property, the RLS algorithm is essentially independent of the eigenvalue spread of the correlation matrix of the filter input.

3. The LMS algorithm requires approximately 20M iterations to converge in mean square, where M is the number of tap coefficients contained in the tapped-delay-line filter. On the other hand, the RLS algorithm converges in mean square within less than 2M iterations. The rate of convergence of the RLS algorithm is therefore, in general, faster than that of the LMS algorithm by an order of magnitude.

4. Unlike the LMS algorithm, there are no approximations made in the derivation of the RLS algorithm. Accordingly, as the number of iterations approaches infinity, the least-squares estimate of the coefficient vector approaches the optimum Wiener value, and correspondingly, the mean-square error approaches the minimum value possible. In other words, the RLS algorithm, in theory, exhibits zero misadjustment. On the other hand, the LMS algorithm always exhibits a nonzero misadjustment; however, this misadjustment may be made arbitrarily small by using a sufficiently small step-size parameter μ.

5. The superior performance of the RLS algorithm compared to the LMS algorithm, however, is attained at the expense of a large increase in computational complexity. The complexity of an adaptive algorithm for real-time operation is determined by two principal factors: (1) the number of multiplications (with divisions counted as multiplications) per iteration, and (2) the precision required to perform arithmetic operations. The RLS algorithm requires a total of 3M(3 + M )/2 multiplications, which increases as the square of M, the number of filter coefficients. On the other hand, the LMS algorithm requires 2M + 1 multiplications, increasing linearly with M. For example, for M = 31 the RLS algorithm requires 1581 multiplications, whereas the LMS algorithm requires only 63.
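The per-iteration multiplication counts quoted above can be checked directly. A small sketch that evaluates both formulas, reproducing the M = 31 figures from the text:

```python
def rls_mults(M):
    """RLS: 3M(3 + M)/2 multiplications per iteration (quadratic in M)."""
    return 3 * M * (3 + M) // 2

def lms_mults(M):
    """LMS: 2M + 1 multiplications per iteration (linear in M)."""
    return 2 * M + 1

for M in (8, 31, 64):
    print(M, rls_mults(M), lms_mults(M))
# For M = 31 this reproduces the figures in the text: RLS 1581, LMS 63.
```

The gap widens quickly with M, which is exactly why RLS, despite its faster convergence, is harder to realize for real-time or hardware implementations than LMS.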