General Description Of Adaptive Filters Computer Science Essay



Adaptive filters are digital filters capable of self-adjustment: they change their behaviour in accordance with their input signals. An adaptive filter is used in applications that require differing filter characteristics in response to variable signal conditions.

The adaptive filter requires two inputs:

- the input signal x(n)

- the reference input d(n)

An adaptive filter has the ability to update its coefficients. New coefficients are sent to the filter from a coefficient generator. The coefficient generator is an adaptive algorithm that modifies the coefficients in response to an incoming signal. The digital filter is typically a finite impulse response (FIR) filter, but it can also be an infinite impulse response (IIR) or other type of filter. Adaptive filters are used in a number of applications, including noise cancellation, linear prediction, adaptive signal enhancement, and adaptive control.

Software requirement: MATLAB 7.4

Operating system: Windows


Adaptive Filter


Figure 1 shows a block diagram of an adaptive filter model. The unknown system is modeled by an FIR filter with adjustable coefficients. Both the unknown system and FIR filter model are excited by an input sequence x(n). The adaptive FIR filter output y(n) is compared with the unknown system output d(n) to produce the error signal e(n). The error signal represents the difference between the unknown system output and the model output. The error e(n) is then used as the input to an adaptive control algorithm, which corrects the individual tap weights of the filter. This process is repeated through several iterations until the error signal e(n) becomes sufficiently small. The resultant FIR filter response now represents that of the previously unknown system.

The FIR (Finite Impulse Response) Filter

Figure 2 shows the structure of a transversal FIR filter with N adjustable tap weights.

The tap-weight vector, w(n), is represented as

w(n) = [w0(n), w1(n), …, wN−1(n)]

the tap-input vector, x(n), as

x(n) = [x(n), x(n-1), …, x(n-N+1)]

The FIR filter output, y(n), can then be expressed as the inner product

y(n) = w0(n)x(n) + w1(n)x(n−1) + … + wN−1(n)x(n−N+1)

where n is the time index and N is the order of the filter.
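In code, this output is just the inner product of the tap-weight and tap-input vectors above; a minimal pure-Python sketch (the function name and example values are illustrative):

```python
def fir_output(w, x_buf):
    """FIR output y(n) = sum over i of w_i(n) * x(n - i).

    w     -- tap-weight vector [w0(n), w1(n), ..., wN-1(n)]
    x_buf -- tap-input vector  [x(n), x(n-1), ..., x(n-N+1)]
    """
    return sum(wi * xi for wi, xi in zip(w, x_buf))

# Example: a 3-tap moving-average filter applied to the last three samples
w = [1/3, 1/3, 1/3]
x_buf = [3.0, 6.0, 9.0]
print(fir_output(w, x_buf))  # the average of the three samples, about 6.0
```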

The LMS (least-mean square) Algorithm

The LMS algorithm is initialised by setting all weights to zero at time n = 0. Tap weights are updated using the relationship

wi(n+1) = wi(n) + µ e(n) x(n−i)

where

wi(n) represents the tap weights of the FIR filter

e(n) is the error signal

x(n−i) represents the tap inputs

the factor µ is the adaptation parameter or step size

The following figure shows a detailed description of the LMS filter implementation. The filter output, y(n), is calculated in terms of the difference equation

y(n) = w0(n)x(n) + w1(n)x(n−1) + … + wN−1(n)x(n−N+1)
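The recursions above can be sketched in a few lines of pure Python (the step size, the 2-tap test system, and the signal values below are illustrative assumptions, not from the text):

```python
import random

def lms_step(w, x_buf, d, mu):
    """One LMS iteration: filter, compute error, update tap weights.

    y(n)     = sum_i w_i(n) x(n-i)
    e(n)     = d(n) - y(n)
    w_i(n+1) = w_i(n) + mu * e(n) * x(n-i)
    """
    y = sum(wi * xi for wi, xi in zip(w, x_buf))
    e = d - y
    w = [wi + mu * e * xi for wi, xi in zip(w, x_buf)]
    return w, y, e

# Identify a known 2-tap FIR "unknown system" h with white Gaussian input
random.seed(0)
h = [0.5, -0.25]
w = [0.0, 0.0]            # weights initialised to zero at n = 0
buf = [0.0, 0.0]          # tap inputs x(n), x(n-1)
for n in range(2000):
    x = random.gauss(0.0, 1.0)
    buf = [x] + buf[:-1]
    d = sum(hi * xi for hi, xi in zip(h, buf))   # unknown-system output
    w, y, e = lms_step(w, buf, d, mu=0.05)
print(w)   # converges toward h = [0.5, -0.25]
```

Iterating the step drives e(n) toward zero, so the weights converge to the unknown system's coefficients.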

Filtering Techniques:

Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters.

Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm.

We derive convergence and tracking properties of NLMS-OCF using a simple model for the input vector. Our analysis shows that the convergence rate of NLMS-OCF (and also APA) is exponential and that it improves with an increase in the number of input signal vectors used for adaptation. While we show that, in theory, the misadjustment of the APA class is independent of the number of vectors used for adaptation, simulation results show a weak dependence. For white input the mean squared error drops by 20 dB in about 5N/(M+1) iterations, where N is the number of taps in the adaptive filter and (M+1) is the number of vectors used for adaptation.

The dependence of the steady-state error and of the tracking properties on the three user-selectable parameters, namely the step size µ, the number of vectors used for adaptation (M + 1), and the input vector delay D used for adaptation, is discussed. While the lag error depends on all of the above parameters, the fluctuation error depends only on µ. Increasing D results in a linear increase in the lag error and hence in the total steady-state mean-squared error. The optimum choices for µ and M are derived. Simulation results are provided to corroborate our analytical results.

We also derive a fast version of our NLMS-OCF algorithm that has a complexity of O(NM). The fast version of the algorithm performs orthogonalization using a forward-backward prediction lattice. We demonstrate the advantages of using NLMS-OCF in a practical application, namely stereophonic acoustic echo cancellation. We find that NLMS-OCF can provide faster convergence, as well as better echo rejection, than the widely used APA.

While the first part of this project attempts to improve adaptive filter performance by refining the adaptation algorithm, the second part of this work looks at improving the convergence rate by using different structures. From an abstract viewpoint, the parameterization we decide to use has no special significance, other than serving as a vehicle to arrive at a good input-output description of the system. However, from a practical viewpoint, the parameterization decides how easy it is to numerically minimize the cost function that the adaptive filter is attempting to minimize.

A balanced realization is known to minimize the parameter sensitivity as well as the condition number for Grammians. Furthermore, a balanced realization is useful in model order reduction. These properties of the balanced realization make it an attractive candidate as a structure for adaptive filtering. We propose an adaptive filtering algorithm based on balanced realizations.

The third part of this project proposes a unit-norm-constrained equation-error based adaptive IIR filtering algorithm. Minimizing the equation error subject to the unit-norm constraint yields an unbiased estimate for the parameters of a system, if the measurement noise is white. The proposed algorithm uses the hyper-spherical transformation to convert this constrained optimization problem into an unconstrained optimization problem. It is shown that the hyper-spherical transformation does not introduce any new minima in the equation error surface. Hence, simple gradient-based algorithms converge to the global minimum. Simulation results indicate that the proposed algorithm provides an unbiased estimate of the system parameters.


Figure 1 is a block diagram of system identification using adaptive filtering. The objective is to change (adapt) the coefficients of an FIR filter, W, to match as closely as possible the response of an unknown system, H. The unknown system and the adapting filter process the same input signal x [n] and have outputs d [n] (also referred to as the desired signal) and y [n].

Gradient-descent adaptation: The adaptive filter, W, is adapted using the least mean-square (LMS) algorithm, which is the most widely used adaptive filtering algorithm. First the error signal, e [n], is computed as e [n] = d [n] − y [n], which measures the difference between the output of the adaptive filter and the output of the unknown system. On the basis of this measure, the adaptive filter changes its coefficients in an attempt to reduce the error. The coefficient update relation is a function of the error signal squared and is given by

W [n + 1] = W [n] − (µ/2) (∂e² [n] / ∂W [n])

The term inside the parentheses represents the gradient of the squared error with respect to the ith coefficient. The gradient is a vector pointing in the direction of the change in filter coefficients that will cause the greatest increase in the error signal. Because the goal is to minimize the error, however, Equation 1 updates the filter coefficients in the direction opposite the gradient; that is why the gradient term is negated. The constant µ is a step size, which controls the amount of gradient information used to update each coefficient. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge; that is, the difference between the unknown and adaptive systems should get smaller and smaller.

To express the gradient-descent coefficient update equation in a more usable manner, we can rewrite the derivative of the squared-error term as

∂e² [n] / ∂W [n] = −2 e [n] X [n]

so that the update becomes W [n + 1] = W [n] + µ e [n] X [n], where X [n] is the vector of current and past input samples.

The step size µ directly affects how quickly the adaptive filter will converge toward the unknown system. If µ is very small, then the coefficients change only a small amount at each update, and the filter converges slowly. With a larger step size, more gradient information is included in each update, and the filter converges more quickly; however, when the step size is too large, the coefficients may change too quickly and the filter will diverge. (It is possible in some cases to determine analytically the largest value of µ ensuring convergence.)
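The convergence/divergence trade-off can be seen in a toy example: adapting a single coefficient toward an unknown gain with a constant unit input, the coefficient error is scaled by (1 − µ) at every step, so the update converges only for 0 < µ < 2. A sketch under these illustrative assumptions:

```python
def adapt_gain(mu, steps=50):
    """Scalar LMS identifying an unknown gain h = 0.8 with constant input x = 1.
    Returns the final absolute coefficient error |h - w|."""
    h, w = 0.8, 0.0
    for _ in range(steps):
        x = 1.0
        e = h * x - w * x        # e[n] = d[n] - y[n]
        w = w + mu * e * x       # LMS update
    return abs(h - w)

print(adapt_gain(0.1))   # small step size: error shrinks by a factor 0.9 per step
print(adapt_gain(2.5))   # step size too large: error grows by a factor 1.5 per step
```

With random rather than constant input the stability bound changes, but the qualitative behaviour is the same.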

Adaptive FIR Filtering

Adaptive Filter Implementation:

First, simulate the system identification block diagram in Figure 1. For the simulation, let the input x [n] be Gaussian random noise, which can be generated in MATLAB using the command randn. Because the adaptive filter coefficients change with time, implement the filter on a sample-by-sample basis (e.g., using a for loop). For the "unknown" system, use the fourth-order, low-pass elliptic IIR filter designed with the following MATLAB command:

Simulate the system with an adaptive filter of length 32 and a step size of 0.02. Initialize all adaptive filter coefficients to zero. Plot the squared error versus sample number as the filter adapts over time. Also plot the frequency response of the adaptive filter coefficients at the end of the simulation. How well does the filter match? How long does it take to converge?

Second, write an assembly program to implement an adaptive FIR filter. Your code can be modified from the FIR filtering laboratory exercise to meet the following specifications:

1. The program should run at an 8 kHz sampling rate.

2. The filter coefficients should be initialized to zero at the start of program execution.

3. The left input channel is the input x [n] to the adaptive filter.

4. The right input channel is the reference input signal d [n].

5. The left channel output is the adaptive filter output y [n].

6. The right channel output is the error signal e [n].

Figure: An adaptive filter

Based on these specifications, your program should implement the block diagram of Figure 1.

The LMS filter coefficient update equation is

W [n + 1] = W [n] + µe [n]X [n] (1)

where W [n] is the vector of FIR filter coefficients at time n, e [n] = d [n] − y [n] is the error between the reference and output, µ is the step size, and X [n] is the vector of current and past input samples. See lecture notes for further details.

Applications of adaptive filtering:

1. Adaptive filter for system identification:

One application of adaptive filters is adaptive modeling or system identification. Figure 2 depicts the system modeling problem. The output of an unknown system to a known input is used as the reference input to an adaptive filter. The known input signal is fed through the adaptive filter in an effort to duplicate the unknown system output. Upon convergence, the adaptive filter parameters represent an estimate of the unknown system.

Step 1

Test your adaptive filter program by applying it to the identification of a known system. Use a filter box as your "unknown" system. Use a random noise generator as your "known" input signal. Run your adaptive filter program using only 10 filter taps and stop the program after the filter has converged. Enter the estimated filter coefficients into MATLAB and verify that the correct filter has been estimated.

2. Adaptive Interference Cancellation

Adaptive filters are used in modern telephony to perform echo suppression. A small amount of speech from talker A, transmitted to talker B, is unavoidably "echoed" back to talker A with talker B's speech.

Step 2

With this application in mind, a stereo speech recording has been made. On one channel, talker A is speaking. On the other channel, speech from talker B is present, along with a delayed "echo" of talker A's speech. Use your adaptive filter to suppress the "echo" from the speech of talker B. Can you estimate the delay of the "echo"? Does the step size, µ, or the number of filter taps, N, have any effect on your results?
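This experiment can be sketched in pure Python (the delay, echo gain, step size, and signal models are all illustrative assumptions): talker A's signal is echoed with a delay and attenuation into talker B's channel, the adaptive filter learns the echo path, and the index of the largest tap then estimates the delay.

```python
import random

random.seed(2)
N, mu = 16, 0.02
true_delay, echo_gain = 5, 0.6      # assumed echo path: d[n] = b[n] + 0.6 x[n-5]

w = [0.0] * N
buf = [0.0] * N

for n in range(4000):
    x = random.gauss(0.0, 1.0)           # talker A (echo source)
    buf = [x] + buf[:-1]
    b = 0.1 * random.gauss(0.0, 1.0)     # talker B's own speech (acts as noise)
    d = b + echo_gain * buf[true_delay]  # reference channel with echo
    y = sum(wi * xi for wi, xi in zip(w, buf))
    e = d - y                            # echo-suppressed talker-B signal
    w = [wi + mu * e * xi for wi, xi in zip(w, buf)]

est_delay = max(range(N), key=lambda i: abs(w[i]))
print(est_delay, w[est_delay])   # largest tap sits at the echo delay, weight near 0.6
```

Talker B's speech is uncorrelated with x [n], so it does not bias the converged weights; it only adds a small fluctuation proportional to µ.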

Figure: Using an adaptive filter to model an unknown system.

DMT: Equalization and Approximation


In equalization, the received spectral coefficient blocks (i.e., after cyclic prefix removal and FFT) are adjusted to compensate for the frequency response of the channel (nothing can be done here about the additive noise). Due to the cyclic prefix, each block has essentially undergone cyclic convolution with the channel's impulse response. In the frequency domain, this is the same as if the spectral coefficients were pointwise multiplied by the frequency response of the channel. If the frequency response has no zeros and is known by the receiver, it is possible to perfectly remove the effect of the channel's filter. Since the channel pointwise multiplied the blocks by its frequency response, all that needs to be done is to multiply the blocks pointwise by the inverse of the frequency response. Because we implemented the channel's impulse response as a non-ideal low-pass filter, its frequency response has no zeros and equalization is rather straightforward.



After equalization, the effect of the channel's low-pass filter is removed, but the additive noise is still there. It manifests itself by causing the received constellation points to deviate from their locations in the original constellation. To enable the bit stream to be recovered, a nearest-neighbor approximation is performed on each point. As long as the noise amplitude is small or the constellation points are far apart, it is unlikely that any single point will deviate enough from its original location to acquire a new nearest neighbor. With high noise power, however, the points are scattered all over the constellation; the nearest neighbor in this case is unlikely to be the original point. In our system, we implemented this approximation with a parser and a look-up table; we would examine each complex value in the blocks and compare it to every point in the constellation, selecting the closest point as the approximation.

Figure: Frequency response of the equalizer. If you compare it to the transfer function of the channel, you will see that the dips there correspond to the peaks here.

Figure: Received constellation points (SNR = 52 dB)

Figure: Received constellation points (SNR = 25 dB)
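The pointwise equalize-then-slice procedure can be sketched without the full DMT chain (the channel response, the QPSK constellation, and the noise level are illustrative assumptions; the channel is chosen to have no zeros):

```python
import random

random.seed(3)
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]    # constellation points
K = 64                                        # subcarriers per block
# Assumed zero-free low-pass channel frequency response H[k]
H = [1.0 / (1.0 + 0.5j * k / K) for k in range(K)]

sent = [random.choice(qpsk) for _ in range(K)]
noise = [complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for _ in range(K)]
# Cyclic convolution with the channel = pointwise multiplication by H[k]
received = [h * s + n for h, s, n in zip(H, sent, noise)]

# Equalize: pointwise multiply by 1/H[k] (possible because H has no zeros)
equalized = [r / h for r, h in zip(received, H)]
# Nearest-neighbor approximation: snap each point to the closest constellation point
decided = [min(qpsk, key=lambda p: abs(z - p)) for z in equalized]

print(sum(d == s for d, s in zip(decided, sent)), "of", K, "symbols recovered")
```

At this low noise level every symbol is recovered; raising the noise standard deviation toward the half-distance between constellation points starts producing nearest-neighbor errors, as described above.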


RLS algorithm

Fast RLS adaptive filtering algorithms represent an attractive way to compute the least-squares solution of growing-length data efficiently. While the conventional RLS algorithm requires O(M²) computations per sample, where M is the filter order, its fast counterparts require only O(M) operations. Examples of such fast schemes include the fast a posteriori error sequential technique (FAEST) [1], the fast transversal filter algorithm (FTF) [2], and least-squares lattice algorithms [3, 4, 5]. The latter class of algorithms deals with order-recursive structures, while the first two examples (FTF and FAEST) deal with fixed-order structures; both FTF and FAEST can also be viewed as special cases of a general fast estimation algorithm for state-space models, known as the (extended) Chandrasekhar recursions [6, 7].

The low complexity that is achieved by these algorithms is a direct consequence of the shift structure that is characteristic of regression vectors in FIR adaptive structures. This fact is evident in the conventional derivations of fast adaptive algorithms, all of which rely heavily on the shift structure. The arguments in [6, 7], however, have shown that fast RLS algorithms can still be derived for certain more general structures in the regression vectors, other than the shift structure. In this paper, we show that input regression vectors that arise from Laguerre-based networks satisfy the conditions required in [7] and, therefore, that fast RLS algorithms can still be derived in this context.


There are several important reasons to consider Laguerre basis functions instead of the usual FIR implementations. First, the use of Laguerre models (or, more generally, orthonormal basis functions) to describe the dynamical behavior of a wide class of systems has been studied extensively in many recent works on system identification and control [8]-[11]. Second, the orthonormality property of Laguerre models offers many benefits in estimation problems, including better numerical conditioning of the data. Third, one of the primary motivations for using all-pass basis functions in adaptive filtering is that fewer parameters are required to model systems with long impulse responses.

In echo cancellation applications, for example, a long FIR filter may be necessary to model the echo path, and adaptive IIR techniques have been proposed as possible alternatives (e.g., [12]-[13]). These techniques are nevertheless known to face stability problems due to the arbitrary pole locations during filter operation. Laguerre-domain adaptive filtering can offer an attractive alternative since, in this case, the pole location is fixed. It has already been suggested for echo cancellation and equalization applications in the works [14, 15]. However, these earlier contributions rely only on a slow RLS-type form for Laguerre adaptive filtering that requires O(M²) operations. Fast O(M) Laguerre adaptive filters have been proposed before in the literature, but only as extensions of the classical gradient adaptive lattice (GAL) algorithm, which relies on LMS-type recursions and not on RLS-type recursions (see [16]).

The RLS algorithm computes the optimal solution of problem (1) recursively as follows:
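The recursions themselves are elided here; the standard exponentially weighted RLS update can be sketched in pure Python (the filter order, forgetting factor, initialisation, and test system below are illustrative assumptions):

```python
import random

def rls_step(w, P, x, d, lam=0.99):
    """One RLS iteration with forgetting factor lam:
       k = P x / (lam + x^T P x)     (gain vector)
       e = d - w^T x                 (a priori error)
       w <- w + k e
       P <- (P - k x^T P) / lam      (inverse-correlation update)
    """
    M = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(M)) for i in range(M)]
    denom = lam + sum(x[i] * Px[i] for i in range(M))
    k = [v / denom for v in Px]
    e = d - sum(w[i] * x[i] for i in range(M))
    w = [w[i] + k[i] * e for i in range(M)]
    xP = [sum(x[i] * P[i][j] for i in range(M)) for j in range(M)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(M)] for i in range(M)]
    return w, P, e

# Identify a 2-tap system; RLS converges in far fewer iterations than LMS
random.seed(4)
h = [0.5, -0.25]
w = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]    # P(0) = (1/delta) I with small delta
buf = [0.0, 0.0]
for n in range(200):
    xn = random.gauss(0.0, 1.0)
    buf = [xn] + buf[:-1]
    d = sum(hi * xi for hi, xi in zip(h, buf))
    w, P, e = rls_step(w, P, buf, d)
print(w)   # very close to [0.5, -0.25]
```

Each step costs O(M²) because of the P update; the fast variants discussed above exploit structure in the regression vector to reduce this to O(M).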

Fast RLS Laguerre Algorithm

Algorithm 1 (Fast Array Laguerre Algorithm):

Implementation Issues:

Choice of the Regularization Matrix:


Once the problem is clearly understood, the next step is to conduct a feasibility study, which is a high-level capsule version of the entire systems analysis and design process. The objective is to determine whether or not the proposed system is feasible. The three tests of feasibility have been carried out.

Technical Feasibility

Economical Feasibility

Operational Feasibility

Technical Feasibility

In the technical feasibility study, one has to test whether the proposed system can be developed using existing technology. It is planned to implement the proposed system using object-oriented technology. It is evident that the necessary hardware and software are available for the development and implementation of the proposed system. Hence, the solution is technically feasible.

Economical Feasibility:

As part of this, the costs and benefits associated with the proposed system are compared, and the project is economically feasible only if tangible or intangible benefits outweigh costs. The system development costs are not significant. So the proposed system is economically feasible.

Operational Feasibility:

Operational feasibility concerns whether the system will work effectively once deployed, ensuring interoperability without stifling competition and innovation among users, to the benefit of the public in terms of both cost and service quality. The proposed system is acceptable to users. So the proposed system is operationally feasible.



Implementation is the stage where the theoretical design is turned into a working system. It is the most crucial stage in achieving a successful new system and in giving users confidence that the new system will work efficiently and effectively.

The system can be implemented only after thorough testing is done and if it is found to work according to the specification.

It involves careful planning, investigation of the current system and its constraints on implementation, design of methods to achieve the changeover, and an evaluation of changeover methods. Two major tasks in preparing for implementation are education and training of the users and testing of the system.

The more complex the system being implemented, the more involved the systems analysis and design effort required just for implementation will be.

The implementation phase comprises several activities. The required hardware and software acquisition is carried out. The system may require some software to be developed; for this, programs are written and tested. The user then changes over to the new, fully tested system and the old system is discontinued.


The testing phase is an important part of software development. It is the process of finding errors and missing operations, and also a complete verification to determine whether the objectives are met and the user requirements are satisfied.

Software testing is carried out in three steps:

The first step is unit testing, wherein each module is tested to prove its correctness and validity, to detect any missing operations, and to verify whether the objectives have been met. Errors are noted down and corrected immediately. Unit testing is a major part of the project: errors are rectified easily within a particular module, and program clarity is increased. In this project the entire system is divided into several modules which are developed individually, so unit testing is conducted on the individual modules.

The second step is integration testing. Software whose modules show perfect results when run individually will not necessarily show perfect results when run as a whole. The individual modules are combined under the major module, tested again, and the results verified. Problems at this stage are typically due to poor interfacing, which may result in data being lost across an interface; a module can have an inadvertent, adverse effect on another module or on global data structures, causing serious problems.

The final step involves validation testing, which determines whether the software functions as the user expects. Here also some modifications were made. On completion of the project, it fully satisfies the end user.

Maintenance and enhancement

As the number of computer-based systems grew, libraries of computer software began to expand. In-house developed projects produced tens of thousands of program source statements. Software products purchased from outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon: all of these programs, all of those source statements, had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance.

The maintenance phase focuses on change that is associated with error correction, adaptations required as the software's environment evolves, and changes due to enhancements brought about by changing customer requirements. Four types of changes are encountered during the maintenance phase.






Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software. Corrective maintenance changes the software to correct defects.

Maintenance is a set of software engineering activities that occur after software has been delivered to the customer and put into operation. Software configuration management is a set of tracking and control activities that begin when a software project begins and terminate only when the software is taken out of operation.

We may define maintenance by describing four activities that are undertaken after a program is released for use:

Corrective maintenance

Adaptive maintenance

Perfective maintenance or enhancement

Preventive maintenance or reengineering

Only about 20 percent of all maintenance work is spent "fixing mistakes". The remaining 80 percent is spent adapting existing systems to changes in their external environment, making enhancements requested by users, and reengineering applications for future use.


Over time, the original environment (e.g., CPU, operating system, business rules, external product characteristics) for which the software was developed is likely to change. Adaptive maintenance modifies the software to accommodate changes to its external environment.


The goal of this project is to give some insight into adaptive filter theory and to show some basic aspects of noise cancellation in speech enhancement. Understanding the filter structure and the properties of each adaptive algorithm plays a key role in the process of implementation on DSP processors. The main impact is cost reduction of the adaptive systems - the final products - as well as retaining high quality of noise suppression. As mentioned in the introduction, when designing an adaptive system it is necessary to accept a number of trade-offs between disparate requirements, which cannot all be satisfied at the same time. For example, the LMS-family algorithms are very simple, tractable, and computationally efficient, but their essential disadvantages are a slow convergence rate and a larger steady-state error. On the other side, the RLS-family algorithms are computationally more complex and structurally complicated, and the proper set-up and tuning of system parameters requires deeper experience in the domain of adaptive filtering. However, a great contribution can be attributed to their precise adaptation mechanism, with a low steady-state error and an extremely high convergence rate.

In the future we can expect that the increasing performance of DSP processors, as well as the quality of DSP services, will render requirements such as computational modesty or memory consumption irrelevant. Still, a higher standard does not come for nothing, and price, as we know, is a key factor for the majority of customers. Finally, adaptive noise cancellation in speech signals is a huge domain of research, full of opportunities and unsolved problems. It is definitely worth spending future time on its exploration.