Implementation Of Adaptive Filter Computer Science Essay



Adaptive systems are a fundamental tool in digital signal processing. Adaptive filters are appropriate in applications with time-varying environments, where the statistical parameters of the signals change over time and, additionally, nonlinear characteristics are foreseen in the analysed or predicted process. Active noise cancellation is one of the important fields where adaptive filtering is used. Multiple adaptive algorithms exist, but most of them assume that the adaptive system obtains an error following a Gaussian distribution. However, in many real problems, the noise encountered is more impulsive than that predicted by a Gaussian distribution, which affects the proper operation of the filter and the adaptation process.

Nowadays, two main design processes can be followed for embedded system design: a hardware description language (e.g. Verilog) and a high-level synthesis tool. This paper deals with the hardware implementation of an adaptive algorithm that is robust to impulsive noise using these approaches. Finally, a comparison is made between the proposed algorithm and the existing algorithms.

Keywords- Adaptive algorithm, hardware description language, impulse noise, RLS algorithm, LMS algorithm, high-level synthesis (HLS).


An adaptive filter is very generally defined as a filter whose characteristics can be modified to achieve some end or objective, and it is usually assumed to accomplish this modification or adaptation automatically. It is also usually assumed that the time scale of the modification is very slow compared to the bandwidth of the signal being filtered. Implicit in this assumption is that the system designer could, in fact, use a time-invariant, non-adaptive filter if only the designer knew enough about the input signals to design the filter before its use. This lack of knowledge may spring from true uncertainty about the characteristics of the signal when the filter is turned on, or from the fact that the characteristics of the input signal can slowly change during the filter's operation. The designer then turns to an "adaptive" filter, which can "learn" the signal characteristics when first turned on and thereafter "track" them as they slowly change.

The reasons for its success are, besides the simplicity of the transversal filter structure, its error performance and the existence of fast and efficient adaptive algorithms to adjust its parameters. Adaptive filters are widely used in communications to perform such functions as equalization, echo cancellation, noise cancellation, and speech compression. The filter coefficients are determined during a training sequence in which a known data pattern is transmitted. The adaptive algorithm adjusts the filter coefficients to force the received data to match the training-sequence data. Active noise cancellation is also an important field where adaptive filtering is used [2]. System identification is also an important area in control systems; in this case, adaptive filtering is used for transfer-function identification of an unknown system by means of its inputs and outputs.

The structure of an adaptive algorithm consists of a temporal variant system where the transfer function directly depends on the statistical characteristics of the input signal, an algorithm connecting the inputs, and a signal informing about the error obtained by the adaptive system. Multiple adaptive algorithms exist [4], [5], but most of them assume that the adaptive system obtains an error following a Gaussian distribution. However, in many real problems, the noise encountered is more impulsive than the one predicted by a Gaussian distribution.

In addition, the presence of outliers, either at the input signal or at the desired signal, may cause the adaptive algorithm to become unstable. For this reason, different adaptive algorithms that are robust to impulsive noise have been proposed [7]-[9] in order to provide proper operation. Usually, impulsive noise cancellation is achieved by means of independent filter techniques. Impulsive noise suppression in speech is very important for clear voice communications.

Due to computational requirements, the hardware implementation of adaptive systems is not always straightforward, restricting the range of applications where they can be applied. However, high-performance, low-cost system-on-chip implementations of DSP algorithms are receiving increased attention. Implementation platforms for DSP algorithms range from application-specific integrated-circuit (ASIC) custom chips to general-purpose processors or DSP microprocessors.

Nowadays, the use of FPGAs is increasing. They are powerful hardware devices combining the main advantages of ASICs and DSP processors, since they provide both a programmable and a dedicated hardware solution, which makes them very attractive devices for rapid prototyping. Even if the final design will be implemented in an ASIC, it is usual to use an FPGA as the prototyping device due to factors such as time and cost. Moreover, an FPGA is more efficient in power consumption, an advantage for battery-operated systems, and, for the same application, requires a lower system clock speed than a DSP or a general-purpose processor, offering better electromagnetic-compatibility properties.

In a traditional approach, a design tool takes the VHDL description of the hardware and generates a bit stream (the configuration file for the FPGA device). Nowadays, the use of high-level synthesis (HLS) tools is increasing, enabling designers to enter designs at a much higher level of abstraction. Such an HLS tool accelerates the design process by taking the algorithmic description of the required hardware, introducing certain area and performance requirements according to the design needs, and automatically generating the bit stream. This paper develops VHDL and HLS implementations of a novel algorithm (adaptive impulsive noise filtering), providing some hints and comparisons for each design process. Xilinx devices and software environment have been chosen.

Fig. 1 Adaptive Filtering-Block Diagram

A combination of adaptive filtering, impulsive noise immunity, and hardware implementation is an area where few references can be found, mainly focused on the digital image processing field [23]-[25]. This paper deals with the FPGA hardware implementation of an adaptive algorithm that is robust to impulsive noise, developed in [26]. This algorithm offers a low computational cost by employing a cost function widely used in independent component analysis (ICA). Some refinements to the original algorithm have been introduced to reduce computational cost without affecting functionality.


An adaptive system is composed of two modules (Fig. 1).

1) A temporal variant system (adaptive system). Most applications propose a digital FIR filter, defined at time instant n by a coefficient vector wn = [wn(0), wn(1), ..., wn(L−1)], using an input vector xn = [x(n), x(n−1), ..., x(n−L+1)] to calculate the output signal y(n) as the convolution of wn and xn.

2) An adaptive algorithm. This module adjusts the coefficients so that the adaptive system output y(n) and the desired signal d(n) are equal. The similarity between both signals is given by the error signal e(n), defined as d(n)−y(n). The error signal e(n) is used to adjust the coefficients of the adaptive system for the next time instant. Different algorithms exist for the coefficient adjustment.

Adaptive algorithms are based on the minimization of a certain cost function (optimization problem). The most widely used cost function is the quadratic error. This cost function is optimum when errors follow a Gaussian distribution [4]. However, it is not a good choice in the presence of outliers or impulsive noise.

Sign LMS Algorithm

The LMS algorithm in figure 3.1 performs the following operations to update the coefficients of an adaptive FIR filter:

Calculates the output signal y(n) from the FIR filter:

y(n) = uT(n)·w(n)

where u(n) = [x(n), x(n−1), ..., x(n−N+1)]T is the filter input vector and w(n) = [w0(n), w1(n), ..., wN−1(n)]T is the filter coefficients vector.

Calculates the error signal e(n) by using the following equation:

e(n) = d(n)-y(n)

Updates the filter coefficients by using the following equation:

w(n+1) = w(n) + μ·e(n)·u(n)

where μ is the step size of the adaptive filter, w(n) is the filter coefficients vector, and u(n) is the filter input vector.
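The three update steps above can be sketched in software form; the following is a minimal Python illustration (the function name lms_step and the use of NumPy are mine, not from the paper, whose actual implementation is in VHDL):

```python
import numpy as np

def lms_step(w, u, d, mu):
    """One LMS iteration: output, error, and coefficient update.

    w  : current coefficient vector w(n)
    u  : input vector u(n) = [x(n), x(n-1), ..., x(n-N+1)]
    d  : desired sample d(n)
    mu : step size
    """
    y = np.dot(u, w)          # y(n) = uT(n)·w(n)
    e = d - y                 # e(n) = d(n) - y(n)
    w_next = w + mu * e * u   # w(n+1) = w(n) + mu·e(n)·u(n)
    return w_next, y, e
```

Each call produces the updated coefficient vector together with the current output and error samples, so the whole filter is just this step applied once per input sample.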

By using the Delta rule for the update of the coefficients, we obtain the following equation, where μ is the learning rate:

wn+1 = wn + μ·tanh[β·e(n)]·xn (2)

Equation (2) shows two attractive features.

The proposed cost function is robust to noise, since the hyperbolic tangent saturates to ±1 for extreme values.

Easy hardware implementation: an efficient hardware implementation of the hyperbolic tangent based on fuzzy logic is used, giving


tanh[β·e(n)] ≈ sign(e(n)), if |e(n)| > 1/β

≈ [2β − β²·|e(n)|]·e(n), if |e(n)| ≤ 1/β (3)

Substituting (3) into (2), the coefficient update becomes

wn+1 = wn + μ·sign[e(n)]·xn, if |e(n)| > 1/β

= wn + μ·[2β − β²·|e(n)|]·e(n)·xn, if |e(n)| ≤ 1/β (4)

The value of β is very relevant, since it controls which outliers are taken into account by the adaptive system. This parameter is related to the variance of the error produced by the adaptive system. The value of β can be modified iteratively according to the evolution of the error in the learning process. We have taken the classical threshold of three standard deviations to consider a pattern as an outlier. Moreover, since tanh(3) ≈ 0.995, i.e., a value close to unity, the parameter β can be obtained as

β = 3/(m + 3·σ) (5)

where m is the mean value of the error signal and σ is its standard deviation estimated in an N-length window containing the last errors.

This threshold for the outliers can be modified, providing different levels of immunity to impulsive noise.
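The piecewise approximation (3) of the hyperbolic tangent can be written directly as a function; a small Python sketch (the name tanh_approx is mine):

```python
def tanh_approx(e, beta):
    """Fuzzy-logic-style approximation of tanh(beta * e), eq. (3).

    Saturates to +/-1 for |e| > 1/beta; otherwise uses the
    quadratic segment (2*beta - beta**2 * |e|) * e.
    """
    if abs(e) > 1.0 / beta:
        return 1.0 if e > 0 else -1.0       # sign(e(n))
    return (2.0 * beta - beta**2 * abs(e)) * e
```

Note that the two segments meet continuously at |e(n)| = 1/β, where the quadratic segment evaluates to exactly ±1.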

Stepwise Procedure for Algorithm Implementation

The calculation process for the coefficient update requires the values of e(n), β, and μ. However, for hardware implementation, some modifications are made. The update algorithm can be divided into four logic steps.

Calculation of y(n): in this case, T denotes the transpose and L = 9:

y(n) = wnT·xn = Σk=0..L−1 wn(k)·x(n−k) (6)

Calculation of e(n): the estimation error e(n) is the difference between the desired response d(n) and the adaptive system output y(n), i.e., e(n) = d(n)−y(n).

Calculation of β: typically, most applications in adaptive filtering have zero-mean signals; then, when the number of samples is high, the value of m can be rounded to zero. Moreover, as β controls the outlier immunity, a fixed value is not necessary; we have experimentally proved that, using either the standard deviation or its square (i.e., the variance), the resultant β does not affect the system behavior. Finally, (5) becomes β ≈ 1/σ². This is very useful because an iterative calculation procedure for the variance can be used and the calculation of β is simplified.


For a high number of samples (n ≫ 1), (7) can be approximated by an iterative update of the variance from its previous value and the current squared error, initialized as

σ²(n) = 1, if n = 0
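Since the exact recursion (7) is not reproduced in the text, the sketch below assumes a standard iterative estimate, the running mean of the squared errors (with m rounded to zero, as discussed above); only the initialization σ²(0) = 1 is taken from the text:

```python
def update_variance(var_prev, e, n):
    """Running variance estimate of a zero-mean error signal.

    Assumed form (the paper's exact recursion (7) is not given
    here): the n-sample average of squared errors, updated
    iteratively from the previous estimate.
    """
    if n == 0:
        return 1.0                         # initial value sigma^2(0) = 1
    return var_prev + (e * e - var_prev) / n

# beta is then approximated as 1 / sigma^2, following the
# simplification of eq. (5) in the text.
```

An iterative form like this needs only the previous variance value and the current error, which is what makes a hardware implementation with a single stored value feasible.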

Calculation of the next-sample coefficient values wn+1(k): the filter coefficients are updated based on (4), which can be rewritten as indicated in the following equations, where wn(k) and wn+1(k) are the current and updated coefficients, respectively (k ∈ {0, ..., 8}), μ is the learning rate, and x(n−k) are the delay-line samples provided by the input of the filter:

wn+1(k) = wn(k) + μ·sign[e(n)]·x(n−k), if |e(n)| > σ²(n)

= wn(k) + μ·[2β − β²·|e(n)|]·e(n)·x(n−k), if |e(n)| ≤ σ²(n)

The major advantage of the LMS algorithm lies in its computational simplicity. However, the price paid for this simplicity is slow convergence, especially when the eigenvalues of the autocorrelation matrix have a large spread. From another point of view, the LMS algorithm has only a single adjustable parameter for controlling the convergence rate, namely, the step-size parameter μ. Since μ is limited, for purposes of stability, to be less than an upper bound, the modes corresponding to the smallest eigenvalues converge very slowly.

To obtain faster convergence, it is necessary to devise more complex algorithms, which involve additional adjustable parameters. In particular, if the correlation matrix has M unequal eigenvalues, we should use an algorithm that contains M parameters, one for each of the eigenvalues. In deriving more rapidly converging adaptive filtering algorithms, we adopt the least-squares criterion instead of the statistical approach based on the MSE criterion. Thus, we deal directly with the data sequence x(n) and obtain estimates of the correlation from the data.

LMS-like algorithms have a step size that determines the amount of correction applied as the filter adapts from one iteration to the next. Choosing the appropriate step size is not always easy, usually requiring experience in adaptive filter design.

A step size that is too small increases the time for the filter to converge on a set of coefficients. This becomes an issue of speed and accuracy.

One that is too large may cause the adapting filter to diverge, never reaching convergence. In this case, the issue is stability: the resulting filter might not be stable.

As a rule of thumb, smaller step sizes improve the accuracy of the convergence of the filter to match the characteristics of the unknown, at the expense of the time it takes to adapt.
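This trade-off can be demonstrated on a one-tap identification problem; the setup below (system d(n) = 0.5·x(n) with a unit-power ±1 input, so the weight error shrinks by a factor (1 − μ) each step) is my own illustration, not an experiment from the paper:

```python
def final_error(mu, steps=60):
    """Identify a one-tap system d(n) = 0.5*x(n) with LMS and
    return |0.5 - w| after `steps` iterations. With x(n) = +/-1
    (unit power), the update is w <- w + mu*(0.5 - w), so the
    weight error is multiplied by (1 - mu) every step."""
    w = 0.0
    for n in range(steps):
        x = 1.0 if n % 2 == 0 else -1.0   # unit-power input
        e = 0.5 * x - w * x               # e(n) = d(n) - y(n)
        w = w + mu * e * x                # LMS coefficient update
    return abs(0.5 - w)

# A moderate step size converges quickly; a very small one
# converges slowly; one beyond the stability bound (mu >= 2
# for this unit-power input) diverges.
```

Here μ = 0.5 drives the error to essentially zero, μ = 0.05 is still far from converged after the same number of steps, and μ = 2.5 blows up, mirroring the three cases described above.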

RLS Algorithm

Compared to least mean squares (LMS) algorithms, recursive least squares (RLS) algorithms have a faster convergence speed and do not exhibit the eigenvalue spread problem. However, RLS algorithms involve more complicated mathematical operations and require more computational resources than LMS algorithms.

The standard RLS algorithm performs the following operations to update the coefficients of an adaptive filter.

Calculates the output signal y(n) of the adaptive filter.

Calculates the error signal e(n) by using the following equation: e(n) = d(n)-y(n).

Updates the filter coefficients by using the following equation:

w(n) = w(n−1) + k(n)·e(n) (8)

where w(n) is the filter coefficients vector and k(n) is the gain vector, defined by the following equation:

k(n) = P(n−1)·u(n) / [λ + uT(n)·P(n−1)·u(n)] (9)

where λ is the forgetting factor and P(n) is the inverse correlation matrix of the input signal. Refer to the book Adaptive Filter Theory for more information about the inverse correlation matrix.

P(n) has the following initial value P(0):

P(0) = δ−1·I

where δ is the regularization factor and I is the identity matrix. The standard RLS algorithm uses the following equation to update this inverse correlation matrix:

P(n) = λ−1·P(n−1) − λ−1·k(n)·uT(n)·P(n−1) (10)

The smaller λ is, the smaller the contribution of previous samples. This makes the filter more sensitive to recent samples, which means more fluctuation in the filter coefficients. The λ = 1 case is referred to as the growing-window RLS algorithm.
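The RLS recursion of equations (8)-(10) can be sketched in a few lines of Python with NumPy (the function name rls_identify is mine; λ and δ follow the text):

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=0.01):
    """Standard RLS adaptive filter, eqs. (8)-(10)."""
    w = np.zeros(order)
    P = np.eye(order) / delta             # P(0) = I / delta
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # [x(n), x(n-1), ...]
        e = d[n] - u @ w                  # a priori error e(n)
        Pu = P @ u
        k = Pu / (lam + u @ Pu)           # gain vector, eq. (9)
        w = w + k * e                     # coefficient update, eq. (8)
        P = (P - np.outer(k, Pu)) / lam   # inverse correlation, eq. (10)
    return w
```

On a noiseless identification problem this sketch recovers the unknown coefficients almost exactly after a few tens of samples, consistent with the fast convergence reported for RLS below.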


Fig 2: Top Level Block Diagram

All the calculations are carried out using signed fixed-point arithmetic (2's complement). In the case of overflow, the chosen option has been to saturate to the largest positive or smallest negative value, and for quantization, rounding is done. Internal calculations are carried out using double precision. The main parts of the proposed system are as follows.
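The saturation and rounding behavior described above can be modeled in software; the sketch below uses an illustrative 16-bit word with 15 fractional bits, which is an assumption for the example rather than the paper's actual word lengths:

```python
def quantize(value, frac_bits=15, word_bits=16):
    """Model signed 2's-complement fixed-point arithmetic:
    round to the nearest representable step, then saturate to
    the largest positive / smallest negative value on overflow."""
    scale = 1 << frac_bits
    q = round(value * scale)              # rounding
    q_max = (1 << (word_bits - 1)) - 1    # e.g. 32767 for 16 bits
    q_min = -(1 << (word_bits - 1))       # e.g. -32768
    q = max(q_min, min(q_max, q))         # saturation on overflow
    return q / scale
```

Values inside the representable range are simply rounded to the nearest step, while out-of-range values pin to the format's extremes instead of wrapping around, which is the behavior chosen for the hardware.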

1) Block "Output Update & Error," responsible for the output calculation y(n) and error calculation e(n) and containing a set of registers to store previous input data samples, nine multipliers, eight adders, and one subtracter. Equation (6) and β value are calculated using this module. Fig. 5 shows its block diagram.

2) Block "Variance," obtaining the variance value according to (8). This block stores the previous variance value for iterative calculation. It is composed of three multipliers, one adder, a ROM for the inverse value, and a counter to address the ROM.

3) Block "Control," calculating the absolute error value and the sign of the error signal according to (9). In addition, the error signal and the variance value are compared, sending the result to the "Weights" block using the Comp signal. Inside, a negate block computes the arithmetic negation (2's complement) of e(n), with a Xilinx Bit Slice Extractor extracting the sign of e(n) and a comparator evaluating whether |e(n)| > σ2(n).

4) Block "Weights," obtaining an internal value named "Var_Weights" to be used further in the coefficients' update. The main part of this calculation is carried out using four adders, one subtracter, one multiplier, one divider, and five shift blocks that perform a right shift on the input signal. By using these resources, products by the constants 0.05 and 0.025 are reduced to shift-and-add operations.

5) Block "Weights Update," responsible for updating the weight values of the adaptive filter using the value obtained in block "Weights". This block completes the calculation of the coefficient update and contains registers to store previous values of the data input samples.


Simulation of Sign LMS Algorithm

The proposed adaptive algorithm implementation was tested on a system identification application with an impulsive noise contribution. In this case, x(n) is the stimulus signal, which is fed directly to the unknown system and to the adaptive filter. The output of the unknown system, d(n) or desired signal, is corrupted by impulsive noise and contaminated with Gaussian noise that is not correlated with the adaptive system input.

Fig 3: Plot of error vs iterations using sign LMS algorithm

The input signal x(n) and desired signal d(n) are wide sense stationary and zero mean processes. The unknown system is an FIR filter with coefficients

wunknown =[−0.1,−0.2,−0.3,−0.4, 0.5, 0.4, 0.3, 0.2, 0.1].

As can be seen, the speed of convergence is fast, requiring fewer than 400 iterations (samples), and the filter obtains an excellent identification of the coefficients of the unknown system.

Fig 4: Plot of filter output vs iterations using sign LMS algorithm

As an example, in iteration 417, the coefficient values obtained by the adaptive system are

wn = [−0.1101, −0.1915, −0.3280, −0.3761, 0.5252, 0.4144, 0.2872, 0.2141, 0.1217], which are close to the unknown system values wunknown.

Fig 5: Plot of error in dB vs iterations

Simulation of RLS Algorithm

In this algorithm, convergence is fast compared to the LMS algorithm. Hence, we see that the coefficient values obtained by the adaptive system are [−0.1000, −0.2000, −0.3000, −0.4000, 0.5000, 0.4000, 0.3000, 0.2000, 0.1000], which are exactly the same as the unknown system values.

The computation times for the conventional LMS algorithm and the sign LMS algorithm are relatively similar and much less than that of the RLS algorithm. The RLS computation time increases rapidly and nonlinearly with the filter order.

Fig 6: Plot of Error using RLS Algorithm

Fig 7: Plot of Filter Output using RLS Algorithm

Fig 8: Plot of Error in dB using RLS Algorithm


The implementation of a useful adaptive filter to remove impulsive noise for system identification or prediction has been presented. It is observed that the system adapts to the varying environment and identifies the filter coefficients very quickly. The performance of the proposed algorithm is better than that of the LMS algorithm but not as good as that of the RLS algorithm.

The convergence of the RLS algorithm is faster than that of the sign LMS algorithm because more than one adjustable parameter is available to control the convergence rate.