# Linear Prediction and Adaptive Filtering for Speech Enhancement


Linear prediction is a mathematical operation in which future values of a discrete-time signal are estimated as a linear function of previous samples. In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory. In system analysis, a subfield of mathematics, linear prediction can be viewed as part of mathematical modeling or optimization. Linear prediction is one of the most powerful tools in use for the case where a signal y is the output of a system driven by an unknown signal x according to the relation

y(n) = -Σ_{k=1}^{p} a_k y(n-k) + G Σ_{l=0}^{q} b_l x(n-l),   b_0 = 1   (4.1)

where G, the gain, is the parameter of the hypothesized system.

In the above equation the output is a linear combination of the past outputs and of the past and present inputs. The name linear prediction comes from this formula, which shows that a signal y(n) can be predicted from linear combinations of past outputs and of past and present inputs. The equation can also be written in the frequency domain by taking the z-transform of both sides. If H(z) is the transfer function of the system, then we have


H(z) = Y(z) / X(z) = G (1 + Σ_{l=1}^{q} b_l z^{-l}) / (1 + Σ_{k=1}^{p} a_k z^{-k})   (4.2)

where Y(z) and X(z) are the z-transforms of the output and input signals.

The roots of the numerator and denominator polynomials of H(z) are the zeros and poles of the model, respectively. There are two special cases of the model:

All-zero model: a_k = 0, 1 ≤ k ≤ p, known as the Moving Average (MA) model

All-pole model: b_l = 0, 1 ≤ l ≤ q, known as the Auto-Regressive (AR) model
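As a brief illustration of the two special cases, the following NumPy sketch generates an MA and an AR signal from the same white-noise input; the coefficient values are illustrative only, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)      # white-noise input

# All-zero (MA) model: y(n) = sum_l b_l x(n-l), with b_0 = 1
b = [1.0, 0.5, 0.25]               # illustrative MA coefficients
y_ma = np.convolve(x, b)[:len(x)]

# All-pole (AR) model: y(n) = -sum_k a_k y(n-k) + x(n)
a = [0.5, -0.25]                   # illustrative (stable) AR coefficients
y_ar = np.zeros_like(x)
for n in range(len(x)):
    past = sum(a[k] * y_ar[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
    y_ar[n] = x[n] - past
```

The MA output depends only on a finite window of inputs, while the AR output feeds back on its own past values, which is why the AR loop cannot be written as a single convolution of the input.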

The estimation of the model parameters can be carried out in the time domain or in the frequency domain. In general we do not know the input signal, so we have to predict the signal as a linear weighted combination of its past samples,

ŷ(n) = -Σ_{k=1}^{p} a_k y(n-k)   (4.3)

where the a_k are the predictor coefficients, p is the model order, and the minus sign is for convenience. The prediction error of this method is

e(n) = y(n) - ŷ(n) = Σ_{k=0}^{p} a_k y(n-k)   (4.4)

where y(n) is the original signal, a_0 = 1, and e(n) is called the residual.

The idea is to make the error as small (as close to zero) as possible; this measures the quality of the predictor. If we denote the total squared error by E, where

E = Σ_n e(n)^2 = Σ_n ( y(n) + Σ_{k=1}^{p} a_k y(n-k) )^2   (4.5)

then, to minimize E, we set ∂E/∂a_i = 0 for i = 1, 2, …, p, which yields the normal equations

Σ_{k=1}^{p} a_k R(i-k) = -R(i),   i = 1, 2, …, p,   with R(i) = Σ_n y(n) y(n-i)

This method is called the method of least squares: the parameters a_k are calculated by minimizing the mean or total squared error with respect to each of the parameters (the autocorrelation criterion).
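The autocorrelation criterion above can be sketched in a few lines of NumPy: build the Toeplitz autocorrelation matrix and solve the normal equations. The function name `lpc` and the AR(2) test signal are illustrative assumptions, not part of the original text.

```python
import numpy as np

def lpc(y, p):
    """Estimate predictor coefficients a_1..a_p by the autocorrelation method."""
    n = len(y)
    r = np.array([np.dot(y[:n - k], y[k:]) for k in range(p + 1)])  # lags 0..p
    # Toeplitz autocorrelation matrix R(i-k) for i, k = 1..p
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, -r[1:])       # normal equations: R a = -r

# synthetic AR(2) test: y(n) = 0.75 y(n-1) - 0.5 y(n-2) + u(n)
rng = np.random.default_rng(1)
u = rng.standard_normal(5000)
y = np.zeros(5000)
for n in range(2, 5000):
    y[n] = 0.75 * y[n - 1] - 0.5 * y[n - 2] + u[n]

a = lpc(y, 2)   # with the sign convention of (4.3)-(4.4), a ≈ [-0.75, 0.5]
```

Note the sign convention: because the residual is e(n) = y(n) + Σ a_k y(n-k), the recovered coefficients are the negatives of the AR recursion coefficients.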

## 4.2 ADAPTIVE FILTER

Digital signal processing (DSP) has been a major player in recent technical advances such as noise filtering, system identification, and voice prediction. Standard DSP techniques, however, are not always enough to solve these problems quickly and with acceptable results. Adaptive filtering techniques must be implemented to promote accurate solutions and timely convergence to those solutions.

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. By way of contrast, a non-adaptive filter has a static transfer function. Adaptive filters are required for some applications because some parameters of the desired processing operation (for instance, the locations of reflective surfaces in a reverberant space) are not known in advance. The adaptive filter uses feedback in the form of an error signal to refine its transfer function to match the changing parameters. Generally, the adaptive process involves the use of a cost function, a criterion for optimum performance of the filter, to feed an algorithm, which determines how to modify the filter transfer function to minimize the cost on the next iteration. As the power of digital signal processors has increased, adaptive filters have become much more common and are now routinely used in devices such as mobile phones and other communication devices, camcorders and digital cameras, and medical monitoring equipment.

Adaptive Filtering System Configurations: There are four major types of adaptive filtering configurations: adaptive system identification, adaptive noise cancellation, adaptive linear prediction, and adaptive inverse system. All of these systems are similar in the implementation of the algorithm but differ in system configuration. All four systems share the same general parts: an input x(n), a desired result d(n), an output y(n), an adaptive transfer function w(n), and an error signal e(n), which is the difference between the desired output d(n) and the actual output y(n). In addition to these parts, the system identification and inverse system configurations have an unknown linear system u(n) that can receive an input and produce a linear output for the given input.


Block diagram: The block diagram shown in Figure 4.1 serves as a foundation for particular adaptive filter realizations, such as Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS), and Recursive Least Squares (RLS). The idea behind the block diagram is that a variable filter extracts an estimate of the desired signal.

The input signal is the sum of a desired signal d(n) and interfering noise v(n):

x(n) = d(n) + v(n)   (4.6)

The variable filter has a Finite Impulse Response (FIR) structure. For such structures the impulse response is equal to the filter coefficients. The coefficients for a filter of order p are defined as

w_n = [w_n(0), w_n(1), …, w_n(p)]^T   (4.7)

[Block diagram: the input X(n) feeds a variable filter W_n, producing the estimate d̂(n); the error e(n) = d(n) - d̂(n) drives the update algorithm, which generates the coefficient correction ΔW_n.]

## Figure 4.1: Basic Adaptive Filter

The error signal, or cost function, is the difference between the desired and the estimated signal:

e(n) = d(n) - d̂(n)   (4.8)

The variable filter estimates the desired signal by convolving the input signal with the impulse response. In vector notation this is expressed as

d̂(n) = w_n^T x(n)   (4.9)

where

x(n) = [x(n), x(n-1), …, x(n-p)]^T

is an input signal vector. Moreover, the variable filter updates the filter coefficients at every time instant:

w_{n+1} = w_n + Δw_n   (4.10)

where Δw_n is a correction factor for the filter coefficients. The adaptive algorithm generates this correction factor based on the input and error signals. LMS and RLS define two different coefficient update algorithms.
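The estimate-error-update cycle of equations (4.8)-(4.10) can be sketched as a generic loop in which the correction rule is passed in as a function; the `adaptive_filter` name and the LMS-style correction used in the example are illustrative assumptions, not the document's own code.

```python
import numpy as np

def adaptive_filter(x, d, p, update):
    """Generic adaptive FIR loop: estimate (4.9), error (4.8), update (4.10)."""
    w = np.zeros(p + 1)
    e = np.zeros(len(x))
    for n in range(len(x)):
        # x(n) = [x(n), x(n-1), ..., x(n-p)]^T, zero-padded at the start
        xn = np.array([x[n - k] if n >= k else 0.0 for k in range(p + 1)])
        d_hat = w @ xn                 # d̂(n) = w_n^T x(n)
        e[n] = d[n] - d_hat            # e(n) = d(n) - d̂(n)
        w = w + update(xn, e[n])       # w_{n+1} = w_n + Δw_n
    return w, e

# hypothetical use: identify the FIR system h = [1.0, 0.5] with an LMS-style Δw
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [1.0, 0.5])[:len(x)]
w, e = adaptive_filter(x, d, p=1, update=lambda xn, en: 0.05 * en * xn)
```

Only the `update` callback changes between LMS, NLMS, and (with extra state) RLS; the surrounding loop is the common structure of Figure 4.1.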

## 4.3 PERFORMANCE MEASURES IN ADAPTIVE SYSTEMS

Six performance measures will be discussed in the following sections: convergence rate, minimum mean square error, computational complexity, stability, robustness, and filter length.

## 4.3.1 CONVERGENCE RATE

The convergence rate determines the rate at which the filter converges to its resultant state. Usually a faster convergence rate is a desired characteristic of an adaptive system. Convergence rate is not, however, independent of the other performance characteristics. There is a tradeoff: improving the convergence rate degrades other performance criteria, and improving other criteria degrades convergence. For example, if the convergence rate is increased, the stability characteristics will decrease, making the system more likely to diverge instead of converging to the proper solution. Likewise, a decrease in convergence rate can make the system more stable. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.

## 4.3.2 MINIMUM MEAN SQUARE ERROR

The minimum Mean Square Error (MSE) is a metric indicating how well a system can adapt to a given solution. A small minimum MSE is an indication that the adaptive system has accurately modeled, predicted, adapted, or converged to a solution for the system. A very large MSE usually indicates that the adaptive filter cannot accurately model the given system, or that the initial state of the adaptive filter is an inadequate starting point for convergence. A number of factors help determine the minimum MSE, including but not limited to quantization noise, the order of the adaptive system, measurement noise, and the error of the gradient due to the finite step size.

## 4.3.3 COMPUTATIONAL COMPLEXITY

Computational complexity is particularly important in real time adaptive filter applications. When a real time system is being implemented, there are hardware limitations that may affect the performance of the system. A highly complex algorithm will require much greater hardware resources than a simplistic algorithm.

## 4.3.4 STABILITY

Stability is probably the most important performance measure for the adaptive system. By the nature of the adaptive system, there are very few completely asymptotically stable systems that can be realized. In most cases the systems that are implemented are marginally stable, with the stability determined by the initial conditions, transfer function of the system and the step size of the input.

## 4.3.5 ROBUSTNESS

The robustness of a system is directly related to the stability of a system. Robustness is a measure of how well the system can resist both input and quantization noise.

## 4.3.6 APPLICATIONS OF ADAPTIVE FILTERS

Signal prediction

Adaptive feedback cancellation

Echo cancellation

Adaptive equalization

Speech coding

Adaptive spectrum analysis

Adaptive noise cancellation.

## 4.4 LMS ALGORITHM

The Least Mean Square (LMS) algorithm is an adaptive algorithm which uses a gradient-based method of steepest descent. The LMS algorithm uses estimates of the gradient vector from the available data, and incorporates an iterative procedure that makes successive corrections to the weight vector in the direction of the negative of the gradient vector, which eventually leads to the minimum mean square error. Compared with other algorithms the LMS algorithm is relatively simple: it requires neither correlation function calculations nor matrix inversions. The steepest-descent weight update is w(n+1) = w(n) - μ ∇J(n), and the gradient vector in this weight update equation can be computed as

∇J(n) = 2 R w(n) - 2 r   (4.11)

where r = E[d(n) x(n)] is the cross-correlation vector and R = E[x(n) x^T(n)] is the input correlation matrix.

In the method of steepest descent the biggest problem is the computation involved in finding the values of the r vector and R matrix in real time. The LMS algorithm, on the other hand, simplifies this by using instantaneous estimates of r and R instead of their actual values, i.e.

r̂(n) = d(n) x(n)   (4.12)

R̂(n) = x(n) x^T(n)   (4.13)

Therefore the weight update can be given by the following equation (with the constant factor absorbed into μ):

w(n+1) = w(n) + μ x(n) [d(n) - x^T(n) w(n)] = w(n) + μ e(n) x(n)   (4.14)

The LMS algorithm is initiated with an arbitrary value w(0) for the weight vector at n = 0. The successive corrections of the weight vector eventually lead to the minimum value of the mean squared error.
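Update rule (4.14) can be sketched directly in NumPy. The system-identification setup below (the unknown filter `h`, the step size, and the signal lengths) is an illustrative assumption chosen so the loop has something to converge to.

```python
import numpy as np

def lms(x, d, p, mu):
    """LMS adaptation: w(n+1) = w(n) + mu * e(n) * x(n), cf. eq. (4.14)."""
    w = np.zeros(p + 1)               # arbitrary initial weights w(0)
    e = np.zeros(len(x))
    for n in range(p, len(x)):
        xn = x[n - p:n + 1][::-1]     # [x(n), x(n-1), ..., x(n-p)]
        en = d[n] - w @ xn            # instantaneous error
        w += mu * en * xn             # gradient-estimate correction
        e[n] = en
    return w, e

# identify an unknown FIR system h from input/output observations
rng = np.random.default_rng(2)
x = rng.standard_normal(4000)
h = np.array([0.8, -0.4, 0.2])        # illustrative "unknown" system
d = np.convolve(x, h)[:len(x)]
w, e = lms(x, d, p=2, mu=0.05)        # w converges toward h
```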

## 4.4.1 CONVERGENCE AND STABILITY OF THE LMS ALGORITHM

The LMS algorithm, initiated with some arbitrary value for the weight vector, is seen to converge and stay stable for

0 < μ < 2 / λ_max   (4.15)

where λ_max is the largest eigenvalue of the correlation matrix R. The convergence speed of the algorithm is inversely proportional to the eigenvalue spread of R: when the eigenvalues of R are widespread, convergence may be slow. The eigenvalue spread of the correlation matrix is estimated by computing the ratio of the largest eigenvalue to the smallest eigenvalue of the matrix. If μ is chosen to be very small the algorithm converges very slowly; a large value of μ may lead to faster convergence but may be less stable around the minimum value. Several approximations also provide a more conservative upper bound for μ as μ ≤ 1/(3 trace(R)).
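Both the eigenvalue spread and the two step-size bounds above can be checked numerically from an estimated correlation matrix; the white-noise input and filter order here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(10000)        # illustrative input signal
p = 4

# estimate the (p+1) x (p+1) Toeplitz autocorrelation matrix R of the input
r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p + 1)] for i in range(p + 1)])

eigs = np.linalg.eigvalsh(R)
spread = eigs.max() / eigs.min()      # large spread -> slow LMS convergence
mu_max = 2.0 / eigs.max()             # stability bound from (4.15)
mu_safe = 1.0 / (3.0 * np.trace(R))   # conservative bound from the text
```

For white noise R is close to the identity, so the spread is near 1 and LMS converges quickly; a strongly colored input would raise the spread and shrink the usable step-size range.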

## 4.4.2 DEPENDENCY OF THE STEP-SIZE PARAMETER μ

The step-size parameter, or convergence factor, μ determines the convergence speed of the LMS algorithm. For the LMS algorithm to converge and be stable, equation (4.15) gives the allowable range of μ. The LMS algorithm is the most commonly used adaptive algorithm because of its simplicity and reasonable performance. Since it is an iterative algorithm it can be used in a highly time-varying signal environment, and it has stable and robust performance under different signal conditions. However, it may not have a really fast convergence speed compared with more complicated algorithms such as Recursive Least Squares (RLS). It converges slowly when the environment yields a correlation matrix R possessing a large eigenvalue spread. Usually traffic conditions are not static: the user and interferer locations and the signal environment vary with time, in which case the weights will not have enough time to converge when adapted at an identical rate.

## 4.4.3 NORMALIZED LEAST MEAN SQUARES (NLMS) ALGORITHM

The step size μ needs to be varied in accordance with the varying traffic conditions. There are several variants of the LMS algorithm that deal with the shortcomings of its basic form. The Normalized LMS (NLMS) introduces a variable adaptation rate, which improves the convergence speed in a non-static environment. In another version, the Newton LMS, the weight update equation includes whitening in order to achieve a single mode of convergence. For long adaptation processes the Block LMS is used to make the LMS faster: the input signal is divided into blocks and the weights are updated block-wise. The NLMS weight update is

w(n+1) = w(n) + [μ / (δ + x^T(n) x(n))] e(n) x(n)   (4.16)

where μ / (δ + x^T(n) x(n)) gives the variable adaptation rate and δ is a regularization constant. The value of δ is very small, approximately the variance of the input speech signal.

The main difference between the NLMS algorithm and the standard LMS algorithm is the time-varying step size μ(n). With this it can be established that the amplitude of the mean square error of the error signal is smaller than for the standard LMS algorithm, at the cost of N additional multiplication operations. It is also found that the impulse response has peaks of double the amplitude of the LMS algorithm after the same number of iterations, which implies the higher convergence rate of NLMS over standard LMS.
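The normalized update (4.16) differs from plain LMS only in the step-size term, as the sketch below shows; the input with an abrupt level jump is an illustrative assumption meant to exercise the normalization.

```python
import numpy as np

def nlms(x, d, p, mu=0.5, delta=1e-4):
    """NLMS: w(n+1) = w(n) + mu/(delta + x(n)^T x(n)) * e(n) * x(n), cf. (4.16)."""
    w = np.zeros(p + 1)
    e = np.zeros(len(x))
    for n in range(p, len(x)):
        xn = x[n - p:n + 1][::-1]
        en = d[n] - w @ xn
        w += (mu / (delta + xn @ xn)) * en * xn   # time-varying step size mu(n)
        e[n] = en
    return w, e

# input whose power jumps by a factor of 6 halfway through
rng = np.random.default_rng(4)
x = rng.standard_normal(3000) * (1 + 5 * (np.arange(3000) > 1500))
h = np.array([0.6, 0.3])              # illustrative "unknown" system
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d, p=1)
```

Because the correction is divided by the instantaneous input energy, the effective step size shrinks automatically when the input level jumps, whereas plain LMS with a fixed μ could become unstable there.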

## 4.4.4 SIMULATION RESULTS

In the LMS and NLMS algorithms the step size plays the key role: it determines the amount of correction applied as the filter adapts from one iteration to the next.

If the step size is too small, the filter takes longer to converge on a set of coefficients, which affects the speed and accuracy of the filter.

If it is too large, it may cause the adapting filter to diverge and never reach convergence, with the result that the filter may be unstable.

Experimentation showed that the results depend heavily on the step-size value. Testing various step sizes with the different datasets makes it clear that smaller step sizes improve the accuracy with which the filter converges to the characteristics of the unknown system. It is also observed that a faster response is attained for a larger step size, but if it is too large the result is not satisfactory. For the experiments the step sizes were chosen as 0.25 and 0.5 for the LMS and NLMS algorithms, respectively, for white noise, and vice versa for the other noises.

Separate noise recordings from the NOIZEUS corpus were added to clean speech signals for the experiments, and performance was evaluated for speech enhancement at different noise levels: Babble noise, Car noise and White noise at 0, 5, 10, and 15 dB SNR. A total of 12 datasets were generated for this work; they are tabulated in Table 4.1.

## 4.5 RECURSIVE LEAST SQUARES FILTER

The Recursive Least Squares (RLS) adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. This is in contrast to other algorithms, such as Least Mean Squares (LMS), that aim to reduce the mean square error. In the derivation of the RLS the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared with most of its competitors the RLS exhibits extremely fast convergence; however, this benefit comes at the cost of high computational complexity and potentially poor tracking performance when the filter to be estimated (the "true system") changes. The idea behind RLS filters is to minimize a cost function C by appropriately selecting the filter coefficients w_n, updating the filter as new data arrive. The error signal e(n) and desired signal d(n) are defined in the negative feedback diagram of Figure 4.4 below.

[Block diagram: the input X(n) feeds a variable filter W_n, producing the estimate d̂(n); the error e(n) = d(n) - d̂(n) drives the update algorithm, which generates the coefficient correction ΔW_n.]

## Figure 4.4: RLS algorithm implementation

The error implicitly depends on the filter coefficients through the estimate d̂(n):

e(n) = d(n) - d̂(n)   (4.16)

The weighted least squares error function C, the cost function we desire to minimize, being a function of e(n), is therefore also dependent on the filter coefficients:

C(w_n) = Σ_{i=0}^{n} λ^{n-i} e(i)^2   (4.17)

where λ, 0 < λ ≤ 1, is the forgetting factor. The smaller λ is, the smaller the contribution of previous samples. This makes the filter more sensitive to recent samples, which means more fluctuations in the filter coefficients. The λ = 1 case is referred to as the growing-window RLS algorithm.

The steps involved in the RLS algorithm are:

Initialize the algorithm by setting

w(0) = 0 and P(0) = δ^{-1} I

where δ is a small positive regularization constant. For each instant of time, k = 1, 2, …, compute

π(k) = P(k-1) u(k)   (4.18)

k(k) = π(k) / (λ + u^T(k) π(k))   (4.19)

ξ(k) = d(k) - w^T(k-1) u(k)   (4.20)

w(k) = w(k-1) + k(k) ξ(k)   (4.21)

and P(k) = λ^{-1} P(k-1) - λ^{-1} k(k) u^T(k) P(k-1)   (4.22)
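The recursion (4.18)-(4.22) translates almost line for line into NumPy; the forgetting factor, the initialization constant, and the test system below are illustrative assumptions.

```python
import numpy as np

def rls(x, d, p, lam=0.99, delta=0.01):
    """RLS recursion of eqs. (4.18)-(4.22), with P(0) = (1/delta) * I."""
    w = np.zeros(p + 1)
    P = np.eye(p + 1) / delta
    e = np.zeros(len(x))
    for n in range(p, len(x)):
        u = x[n - p:n + 1][::-1]
        pi = P @ u                            # pi(k) = P(k-1) u(k)        (4.18)
        k = pi / (lam + u @ pi)               # gain vector k(k)           (4.19)
        xi = d[n] - w @ u                     # a priori error xi(k)       (4.20)
        w = w + k * xi                        # coefficient update         (4.21)
        P = (P - np.outer(k, u @ P)) / lam    # inverse-correlation update (4.22)
        e[n] = xi
    return w, e

rng = np.random.default_rng(5)
x = rng.standard_normal(1500)
h = np.array([0.9, -0.5, 0.1])                # illustrative "unknown" system
d = np.convolve(x, h)[:len(x)]
w, e = rls(x, d, p=2)
```

Note the per-sample cost: the matrix products make each iteration O(p^2), versus O(p) for LMS and NLMS, which is the computational-complexity tradeoff discussed in Section 4.3.3.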

## 4.5.1 SIMULATION RESULTS

The RLS filter was evaluated on the same experimental data as the LMS and NLMS algorithms: noise from the NOIZEUS corpus (Babble, Car and White noise at 0, 5, 10, and 15 dB SNR) added to clean speech signals, giving the 12 datasets tabulated in Table 4.1.

## 4.6 KALMAN FILTER

## 4.6.1 NAMING AND HISTORICAL DEVELOPMENT

The filter is named after Rudolf E. Kalman, though Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the University of Southern California contributed to the theory, leading to it often being called the Kalman-Bucy filter. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. It was during a visit by Kalman to the NASA Ames Research Center that he saw the applicability of his ideas to the problem of trajectory estimation for the Apollo program, leading to the filter's incorporation in the Apollo navigation computer. The Kalman filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).

Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. It is also used in the guidance and navigation systems of the NASA Space Shuttle and the attitude control and navigation systems of the International Space Station.

## 4.6.2 OVERVIEW OF THE CALCULATION

The Kalman filter uses a system's dynamics model (i.e., physical laws of motion), known control inputs to that system, and measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion algorithm.

All measurements and calculations based on models are estimates to some degree. Noisy sensor data, approximations in the equations that describe how a system changes, and external factors that are not accounted for all introduce some uncertainty about the inferred values for a system's state. The Kalman filter averages a prediction of a system's state with a new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", not the entire history, of a system's state to calculate a new state.

When performing the actual calculations for the filter, the state estimate and covariances are coded into matrices to handle the multiple dimensions involved in a single set of calculations. This allows for the representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.

## 4.6.3 TECHNICAL DESCRIPTION

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the Linear-Quadratic Regulator (LQR), the Kalman filter solves the Linear-Quadratic-Gaussian problem (LQG). The Kalman filter, the linear-quadratic regulator and the linear-quadratic-Gaussian controller are solutions to what probably are the most fundamental problems in control theory.

In most applications, the internal state is much larger (more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state. In control theory, the Kalman filter is most commonly referred to as Linear Quadratic Estimation (LQE).

## 4.7 SPEECH ENHANCEMENT USING KALMAN FILTER

The use of the Kalman filter for speech enhancement in the form presented here was first introduced by Paliwal (1987). This method, however, is best suited to the reduction of white noise, to comply with the Kalman assumptions. In deriving the Kalman equations it is normally assumed that the process noise (the additive noise observed in the observation vector) is uncorrelated and has a normal distribution; this assumption leads to the whiteness of this noise. There are, however, different methods developed to fit the Kalman approach to colored noises. It is assumed that the speech signal is stationary during each frame, that is, the AR model of the speech remains the same across the segment. To fit the one-dimensional speech signal to the state-space model of the Kalman filter we introduce the state vector

X(k) = [x(k-p+1), x(k-p+2), …, x(k)]^T   (4.23)

where x(k) is the speech signal at time k. The speech signal is contaminated by additive white noise n(k):

y(k) = x(k) + n(k)   (4.24)

The speech signal can be modeled as an AR process of order p,

x(k) = Σ_{i=1}^{p} a_i x(k-i) + u(k)   (4.25)

u(k) ~ N(0, Q)   (4.26)

where the a_i are the AR (LP) coefficients and u(k) is the prediction error, which is assumed to have a normal distribution N(0, Q). Substituting equation (4.23) into equation (4.25) we get the state equation

X(k) = A X(k-1) + G u(k)   (4.27)

[Block diagram: the Kalman filter operates on the noisy observation y(k), with the speech state model X(k) = A_x X(k-1) + G_x u(k) and an analogous state model for the noise.   (4.28)]

## Figure 4.3: Block diagram for Kalman Filter

where A is the p × p state transition matrix built from the AR coefficients and G = [0, 0, …, 1]^T has length p (the LP order). The observation equation is

y(k) = H X(k) + n(k)   (4.29)

with H = G^T. As stated earlier, n(k) has a Gaussian distribution N(0, R). The remaining formulation of the filter is described in the Kalman filter theory below.
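The companion-form matrices A, G and H of equations (4.27)-(4.29) can be built mechanically from the AR coefficients; the helper name `ar_state_space` and the AR(2) coefficients are illustrative assumptions.

```python
import numpy as np

def ar_state_space(a):
    """Companion-form state space for x(k) = sum_i a_i x(k-i) + u(k).

    State X(k) = [x(k-p+1), ..., x(k)]^T, so that
    X(k) = A X(k-1) + G u(k) and y(k) = H X(k) + n(k).
    """
    p = len(a)
    A = np.zeros((p, p))
    A[:-1, 1:] = np.eye(p - 1)      # rows 1..p-1 just shift the old samples up
    A[-1, :] = a[::-1]              # last row applies the AR recursion
    G = np.zeros((p, 1))
    G[-1, 0] = 1.0                  # excitation u(k) enters only the newest sample
    H = G.T                         # observe only the newest sample (H = G^T)
    return A, G, H

A, G, H = ar_state_space([0.6, -0.3])   # illustrative AR(2) coefficients
```

With a = [a_1, a_2] = [0.6, -0.3], applying A to the state [x(k-2), x(k-1)] shifts x(k-1) into the first slot and puts a_1 x(k-1) + a_2 x(k-2) into the second, exactly the AR prediction of x(k).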

## 4.7.1 KALMAN FILTER THEORY

The Kalman filter is an adaptive least-square-error filter that provides an efficient computational recursive solution for estimating a signal in the presence of Gaussian noise. It is an algorithm which makes optimal use of imprecise data on a linear system with Gaussian errors to continuously update the best estimate of the system's current state. Kalman filter theory is based on a state-space approach, in which a state equation models the dynamics of the signal generation process and an observation equation models the noisy and distorted observation signal. For a signal x(k) and noisy observation y(k), the equations describing the state process model and the observation model are defined as

x(k) = A x(k-1) + w(k)   (4.30)

y(k) = H x(k) + n(k)   (4.31)

where x(k) is the P-dimensional signal vector, or state parameter, at time k; A is a P × P state transition matrix that relates the states of the process at times k-1 and k; w(k), the process noise, is the P-dimensional uncorrelated input excitation vector of the state equation, assumed to be a normal Gaussian process, p(w(k)) ~ N(0, Q), with Q the P × P covariance matrix of w(k) (process noise covariance); y(k) is the M-dimensional noisy observation vector; H is an M × P matrix which relates the observation to the state vector; and n(k) is the M-dimensional noise vector, also known as measurement noise, assumed to have a normal distribution, p(n(k)) ~ N(0, R), with R the M × M covariance matrix of n(k) (measurement noise covariance).

We define x(k|k-1) to be our a priori estimate (prediction) at step k from the previous trajectory of x, and x(k|k) to be our a posteriori state estimate at step k given the measurement y(k). Note that x(k|k-1) is a prediction of the value of x(k) based on the previous values and not on the current observation at time k. x(k|k), on the other hand, uses the information in the current observation (the notation |k is used to emphasize that this value is an estimate of x(k) based on the evidence or observation at time k). The a priori and a posteriori estimation errors are defined as

e^-(k) = x(k) - x(k|k-1)   (4.32)

e(k) = x(k) - x(k|k)   (4.33)

The a priori estimate error covariance is

P^-(k) = E[e^-(k) e^-(k)^T]   (4.34)

and the a posteriori estimate error covariance is

P(k) = E[e(k) e(k)^T]   (4.35)

In deriving the Kalman filter formulation, we begin with the goal of finding an equation that computes an a posteriori state estimate as a linear combination of an a priori estimate (prediction) and a weighted difference between the actual measurement and the measurement prediction (the innovation). Hence each estimate consists of a part that is predictable from the previous values and contains no new information, and a part that contains the new information extracted from the observation:

x(k|k) = x(k|k-1) + K(k) (y(k) - H x(k|k-1))   (4.36)

The difference y(k) - H x(k|k-1) in (4.36) is called the measurement innovation; it reflects the discrepancy between the predicted value and the actual measurement. The P × M matrix K(k) in (4.36) is chosen to be the gain, or blending factor, that minimizes the a posteriori error covariance (4.35). This minimization can be accomplished by first substituting (4.36) into the above definition for e(k), substituting that into (4.35), performing the indicated expectations, taking the derivative of the trace of the result with respect to K, setting that result equal to zero, and then solving for K. One form of the resulting K(k) that minimizes (4.35) is given by

K(k) = P^-(k) H^T (H P^-(k) H^T + R)^{-1}   (4.37)

From equation (4.37) we see that as the measurement noise covariance R approaches zero, the gain K weights the innovation more heavily; specifically,

lim_{R→0} K(k) = H^{-1}   (4.38)

On the other hand, as the a priori estimate error covariance P^-(k) approaches zero, the gain K weights the innovation less heavily; specifically,

lim_{P^-(k)→0} K(k) = 0   (4.39)

Another way of interpreting weighting by K is that as the measurement error covariance approaches zero, the actual measurement is "trusted" more, while the predicted measurement is trusted less. On the other hand, as the a priori estimate error covariance approaches zero the actual measurement is trusted less, while the predicted measurement is trusted more.

The time update equations are responsible for projecting forward (in time) the current state and error covariance estimates to obtain the a priori estimates for the next time step. The measurement update equations are responsible for the feedback, i.e. for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate. The time update equations can also be thought of as predictor equations, while the measurement update equations can be thought of as corrector equations. Indeed the final estimation algorithm resembles a predictor-corrector algorithm for solving numerical problems, as shown in Figure 4.8 below.

Time update (predict):

x(k|k-1) = A x(k-1|k-1)

P^-(k) = A P(k-1) A^T + Q

Measurement update (correct):

K(k) = P^-(k) H^T (H P^-(k) H^T + R)^{-1}

x(k|k) = x(k|k-1) + K(k) (y(k) - H x(k|k-1))

P(k) = (I - K(k) H) P^-(k)

## Figure 4.8: The adaptive operation of the Kalman Filter, illustrating the interaction of the prediction and correction steps
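The predict/correct cycle of Figure 4.8 can be sketched as a compact loop; the scalar constant-level example used to exercise it (the matrices A, H, Q, R and the measurement sequence) is an illustrative assumption, not data from this work.

```python
import numpy as np

def kalman(y, A, H, Q, R, x0, P0):
    """Kalman recursion: time update (predict) then measurement update (correct)."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for yk in y:
        # time update (predict)
        x = A @ x                                  # x(k|k-1) = A x(k-1|k-1)
        P = A @ P @ A.T + Q                        # P^-(k) = A P A^T + Q
        # measurement update (correct)
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # gain, eq. (4.37)
        x = x + K @ (yk - H @ x)                   # correction, eq. (4.36)
        P = (np.eye(len(x)) - K @ H) @ P           # posterior covariance
        estimates.append(x.copy())
    return np.array(estimates)

# hypothetical scalar example: estimate a constant level from noisy measurements
A = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-6]]); R = np.array([[1.0]])
rng = np.random.default_rng(6)
y = 5.0 + rng.standard_normal((400, 1))
est = kalman(y, A, H, Q, R, x0=np.array([0.0]), P0=np.array([[10.0]]))
```

With Q near zero the gain shrinks over time and the filter behaves like a running average, so the final estimate settles close to the true level of 5; for the speech model, A, H and Q would instead come from the AR coefficients of equations (4.27)-(4.29).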

## 4.7.2 ADVANTAGES AND DISADVANTAGES

Advantages: The Kalman filter avoids the influence of possible structural changes on the result. The recursive estimation starts from an initial sample and updates the estimates by adding one new observation at a time until the end of the data. Because the most recent coefficient estimates are affected by the distant history of the series, in the presence of structural changes the data series can be cut; this cut can be corrected through sequential estimation, but with a larger standard error. The Kalman filter, like other recursive methods, uses the whole history of the series, but with one advantage: it tries to estimate a stochastic path for the coefficients instead of a deterministic one. In this way it resolves the possible estimation cut when structural changes happen.

The Kalman filter uses the least squares method to recursively generate a state estimator at moment k which is an unbiased, minimum-variance linear estimator. The filter is consistent with the Gauss-Markov theorem, and this gives the Kalman filter its enormous power to solve a wide range of problems in statistical inference. The filter is distinguished by its ability to estimate the state of a model in the past, present, and future, even when the exact nature of the modeled system is unknown. The dynamic modeling of a system is one of the key features which distinguish the Kalman method.

Disadvantages: Among the filter's disadvantages is that it is necessary to know the initial conditions of the mean and variance of the state vector to start the recursive algorithm, and there is no general consensus on the way of determining these initial conditions.

The development of the Kalman filter also depends on wide knowledge of probability theory, specifically the Gaussian condition on the random variables, which can be a limit for its research and application. When it is developed for autoregressive models, the results are conditioned on the past information of the variable under study. In this sense the forecasts of the series over time represent the inertia that the system actually has, and they are efficient only over short time horizons.

## 4.8 SIMULATION RESULTS OF KALMAN

As the measurement noise reduces from iteration to iteration, the Kalman filter shows good improvement in enhancing the quality of the speech signal. It was evaluated on the same experimental data as the other algorithms: noise from the NOIZEUS corpus (Babble, Car and White noise at 0, 5, 10, and 15 dB SNR) added to clean speech signals, giving the 12 datasets tabulated in Table 4.1.

## 4.9 OVERALL SIMULATION RESULTS ANALYSIS

The simple spectral subtraction processing comes with a price: the subtraction must be done carefully to avoid speech distortion. If too much is subtracted, some speech information may be removed as well; if too little is subtracted, much of the interfering noise remains. Hence for better performance a speech detection algorithm is needed to distinguish between these two types of frames (based on energy, dynamic range, or statistical properties). In the LMS and NLMS algorithms the step size plays the key role, determining the amount of correction applied as the filter adapts from one iteration to the next.

If the step size is too small, the filter takes longer to converge on a set of coefficients, which affects the speed and accuracy of the filter.

If it is too large, it may cause the adapting filter to diverge and never reach convergence, with the result that the filter may be unstable.

Testing various step sizes with the different datasets showed that the results depend heavily on the step-size value: smaller step sizes improve the accuracy with which the filter converges to the characteristics of the unknown system, while a larger step size attains a faster response but gives unsatisfactory results if it is too large. The Kalman filter gives better performance as the measurement noise covariance is increased.

The 12 datasets generated for this research work, formed by adding noise from the NOIZEUS corpus (Babble, Car and White noise at 0, 5, 10, and 15 dB SNR) to clean speech signals, are shown in Table 4.1.

The main objective of the adaptive filters is minimization of the error signal e(k). Their success clearly depends on the length of the adaptive filter, the nature of the input signals, and the adaptive algorithm used. The signal as perceived by listeners reflects the subjective measure of quality of speech signals. At 0 dB the two signals are of equal strength; positive values are usually associated with better intelligibility, whereas negative values are associated with loss of intelligibility due to masking. Positive and higher SNR values are found for all the algorithms. The performances are measured using the metrics MSE and SNR for all the algorithms.

## TABLE 4.1: MSE and SNR comparison of the LMS, NLMS, RLS and Kalman algorithms for White, Babble and Car noises

| Noise Type | Input SNR (dB) | LMS MSE | NLMS MSE | NLMS SNR | RLS MSE | RLS SNR | Kalman MSE | Kalman SNR |
|---|---|---|---|---|---|---|---|---|
| White | 0 | -0.9868 | -0.99072 | 18.75125 | -1.00005 | 23.9304 | -1.0033 | 30.9404 |
| White | 5 | -0.31222 | -0.31338 | 18.71422 | -0.31527 | 21.2528 | -0.31704 | 27.1886 |
| White | 10 | -9.80E-02 | -0.09835 | 16.23554 | -0.09861 | 16.7422 | -0.09994 | 20.9792 |
| White | 15 | -3.12E-02 | -0.03137 | 13.50328 | -3.15E-02 | 13.9873 | -0.03221 | 17.2202 |
| Babble | 0 | -0.00128 | -0.00128 | 12.2837 | -0.00136 | 21.5262 | -0.00135 | 20.6054 |
| Babble | 5 | -0.00035 | -0.00039 | 9.710676 | -0.00043 | 18.937 | -0.00043 | 19.3705 |
| Babble | 10 | -6.27E-05 | -9.75E-05 | 5.43E+00 | -0.00013 | 1.52E+01 | -0.00014 | 1.98E+01 |
| Babble | 15 | 2.92E-05 | -6.80E-06 | 7.44E-01 | -3.97E-05 | 1.10E+01 | -4.22E-05 | 1.66E+01 |
| Car | 0 | -0.00128 | -0.0013 | 13.40597 | -0.00135 | 19.6678 | -0.00134 | 17.499 |
| Car | 5 | -3.53E-04 | -3.84E-04 | 9.521231 | -0.00042 | 17.4039 | -0.00042 | 17.7101 |
| Car | 10 | -6.40E-05 | -9.78E-05 | 5.463718 | -0.00013 | 15.636 | -0.00014 | 19.5113 |
| Car | 15 | 2.86E-05 | -7.62E-06 | 0.842539 | -4.00E-05 | 11.3779 | -4.27E-05 | 19.3836 |