Digital Filters Over Analogue Filters Computer Science Essay


The contamination of a signal of interest by interfering or unwanted signals (noise) has long been a problem in communication systems. When signals are transmitted through a channel, especially a wireless medium, they rarely reach their destination without being contaminated or obstructed by a variety of noise sources, such as surrounding background noise (interfering signals from external sources), electronic device noise (from analogue-to-digital converters (ADC), digital-to-analogue converters (DAC), amplifiers in the transmitting device, etc.), modulation noise, signal quantization error and channel noise. Communication over these noisy media often produces effects that are unpleasant at both the transmitting and receiving ends. These noises degrade the signal of interest, appearing as fading, echo, delay, multipath interference or co-channel interference, to mention but a few. To correct this problem, a device capable of extracting the desired signal from the corrupted one, or of suppressing the effects of noise to a minimum, is used; this device is known as a filter. Filters are organized signal processing blocks used in almost all modern electronic devices to carry out filtering operations.

The rest of this chapter discusses in detail the different types of filters, their structures and their areas of application. Some common adaptive algorithms that are widely used in adaptive filters are also treated in detail. In this chapter, yn is the same as y(n).

3.1 Filter

A filter is a device, medium or network that selectively generates an output signal from an input signal according to properties such as wave shape or frequency characteristics (amplitude/phase), extracting only the required information contained in the input signal [1]. A filter works by accepting an input signal, blocking pre-specified frequency components, and passing the desired signal, stripped of its unwanted components, to the output [2]. Filters can minimize or suppress noise effects, or eliminate them from the channel completely while allowing free passage of the needed signal. A filter may take the form of hardware or software. Filters find application in many areas such as communication systems, radar, sonar, navigation, seismology, biomedical engineering and financial engineering, to mention but a few. There are two basic types of filter, analogue filters and digital filters. Either can be used to achieve a desired response, but both have limitations in their applications.

3.2 Analogue Filter

Analogue filters are signal processing building blocks used in electronic systems to separate a desired signal from multiple or noisy signals. They were the first filtering systems, dating back to the 1880s. Analogue filters operate on continuously varying analogue signals. These filters have undoubtedly contributed enormously to the development of electronics, especially in telecommunications. Over the years, signal processing systems gradually moved to digital circuitry, making the implementation of analogue functions on digital chips difficult and impractical cost-wise [L. A. Williams]. Nowadays virtually all signal processing systems are based on digital circuitry; nevertheless, analogue filters still find application where the use of digital filters is impracticable, such as in low-order, simple systems and in high-frequency functions where it is important to keep the integration area and power consumption minimal while still maintaining good linearity [A. Caruson]. Analogue filters exist for all forms of digital filters; in other words, every known analogue filter has a digital counterpart.

3.3 Digital Filters

A digital filter is a mathematical algorithm implemented in software and/or hardware that accepts a digital input signal and produces a digital output signal during a filtering process [3]. In other words, a digital filter has no pre-defined shape or structure; it could be a set of equations, a loop in a program, or a handful of integrated circuits linked on a chip [4]. Digital filters can be classified as finite impulse response (FIR), infinite impulse response (IIR) or adaptive filters. These three classes of filter are very powerful tools in digital signal processing (DSP), but the choice between them depends entirely on the design requirements, the type of channel in which the filter is to be used and the behaviour of the signals involved. IIR filters are best suited to designs where the only important demands are a sharp cut-off and high throughput, because they need fewer coefficients, especially those of the elliptic class. FIR filters, on the other hand, are best implemented when phase distortion must be minimal or completely absent [3]. Although FIR filters are sometimes uneconomical and involve more computation to obtain the desired responses, some of which are not practically realisable [5], most newer DSP processors are designed to suit the use of FIR filters.

The advantages of digital filters over analogue filters

Digital filters can achieve characteristics, such as an exactly linear phase response, that are not attainable with analogue filters [3].

Digital filters do not need periodic calibration, as their performance does not vary with environmental changes.

The frequency response of a digital filter can be automatically adjusted when it is implemented on a programmable processor.

Both the unfiltered and the filtered signal can be retained for future use when a digital filter is used.

Digital filters can be used in very low frequency applications, such as biomedical applications, where analogue filters cannot operate.

There is no need to duplicate hardware when filtering several input signals with one digital filter, since the hardware can be reprogrammed to perform different tasks without modifying its structure.

Digital filters perform consistently from unit to unit, with no unit-to-unit variation.

3.3.1 Finite Impulse Response (FIR) Filters

FIR filters are very important filtering tools in digital signal processing; they are called finite because they contain no feedback, so their impulse response settles to zero in finite time. The output of a finite impulse response filter is obtained by implementing a series of delays, multipliers and adders in the system [2]. The two major equations that characterize an FIR filter are given below.

H(z) = h0 + h1z^(-1) + h2z^(-2) + ... + hN-1z^(-(N-1)) (3.1)

yn = h0xn + h1xn-1 + h2xn-2 + ... + hN-1xn-(N-1) (3.2)

Equation (3.1) describes the realisation of an FIR filter, that is, a way of achieving a particular filter design by transforming the transfer function H(z).

Equation (3.2) is the general method of calculating the filter's output yn, where xn is the input signal, as shown structurally in figure 3.1. z^(-1) represents a delay of one sample; the delay boxes can also be called memory locations or shift registers in a digital implementation. The output yn is the sum of the weighted current input xn and the previous inputs xn-1 to xn-(N-1); the coefficients hn form the impulse response and carry out the multiplication operations. N is the filter length, with n taking values from 0 to N-1 [3].

Fig3.1 Logical Structure of Finite Impulse Response Filter [2]
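The weighted-sum operation of equation (3.2) can be sketched in a few lines of Python (an illustrative example, not part of the original text; the 3-tap averaging filter and step input are arbitrary choices):

```python
import numpy as np

def fir_filter(h, x):
    """Direct-form FIR filter: y[n] = sum_i h[i] * x[n-i], as in equation (3.2)."""
    N = len(h)
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(N):
            if n - i >= 0:          # samples before n = 0 are taken as zero
                y[n] += h[i] * x[n - i]
    return y

# A 3-tap averaging filter applied to a step input
h = [1/3, 1/3, 1/3]
x = [3.0, 3.0, 3.0, 3.0]
print(fir_filter(h, x))  # output settles to 3.0 once the delay line fills
```

The nested loop mirrors figure 3.1 directly: each delayed sample is weighted by its coefficient and accumulated by the adders.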

The two major characteristics of FIR filters are their unconditional stability and the linearity of their phase response. These attributes attract broad interest in the use of FIR filters on most DSP devices. A filter whose phase response is a linear function of frequency is said to have "linear phase" [5]. Linear phase makes the delay introduced by an FIR filter identical at all frequencies, so the filter causes no phase/delay distortion. For an FIR filter to have linear phase, its coefficients must be symmetrical (that is, hn = ±hN-n-1) about the centre coefficient; in other words, the first coefficient is the same as the last, the second the same as the second-to-last, and so on towards the centre. If the number of coefficients is odd, the middle one stands alone without a match.
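The symmetry condition hn = ±hN-n-1 can be checked mechanically; the helper below is a hypothetical sketch (not from the original text) for testing whether a coefficient set qualifies for linear phase:

```python
def is_linear_phase(h, tol=1e-12):
    """True if the coefficients are symmetric or anti-symmetric about the
    centre, i.e. h[n] == h[N-1-n] or h[n] == -h[N-1-n] for all n."""
    N = len(h)
    sym = all(abs(h[n] - h[N - 1 - n]) <= tol for n in range(N))
    antisym = all(abs(h[n] + h[N - 1 - n]) <= tol for n in range(N))
    return sym or antisym

print(is_linear_phase([1, 2, 3, 2, 1]))    # symmetric, odd length -> True
print(is_linear_phase([1, -2, 0, 2, -1]))  # anti-symmetric -> True
print(is_linear_phase([1, 2, 3]))          # neither -> False
```

For the odd-length symmetric example the middle coefficient (3) stands alone, exactly as described above.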

3.3.2 Infinite Impulse Response (IIR)

An infinite impulse response filter, like an FIR filter, is a very important digital signal processing tool; it operates as a recursive feedback system. IIR filters draw their strength from the flexibility of the feedback arrangement. This type of digital filter is preferred where the essential design requirements are a sharp frequency cut-off and high throughput [3]. IIR filters meet a given specification with fewer coefficients than their FIR counterparts, hence their sharp cut-off.

IIR filters are implemented using the transfer function H(z) shown in equation (3.3), where z^(-N) is the delay function of the filter and bN and aM are the coefficients:

H(z) = (b0 + b1z^(-1) + ... + bNz^(-N)) / (1 + a1z^(-1) + ... + aMz^(-M)) (3.3)
IIR filters can be unstable and can suffer severe performance degradation as a result of insufficient coefficient precision. To guarantee stability in an IIR filter, the absolute values of the roots of the denominator of (3.3) should be less than one, an important precaution to take when designing an IIR filter. The frequency response of a stable IIR filter is equal to the Discrete Fourier Transform (DFT) of the filter's impulse response. Infinite impulse response filters are normally realizable [3]; that is, their transfer function H(z) can easily be converted into a desired filter design. Figure 3.2 below illustrates the direct form of IIR realization.

Fig.3.2 Direct Realization Structure of IIR Filter [Hoebenreich]
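The stability condition just described, all roots of the denominator having magnitude less than one, can be verified numerically. The sketch below is illustrative (not from the original text), applying numpy's root finder to the denominator coefficients of (3.3):

```python
import numpy as np

def is_stable(a):
    """a = denominator coefficients [1, a1, ..., aM] of H(z) in powers of z^-1.
    The filter is stable if every pole has magnitude strictly less than 1."""
    poles = np.roots(a)   # roots of the denominator polynomial
    return bool(np.all(np.abs(poles) < 1.0))

print(is_stable([1.0, -0.5]))   # pole at z = 0.5  -> stable (True)
print(is_stable([1.0, -1.5]))   # pole at z = 1.5  -> unstable (False)
```

A single-pole example keeps the check easy to read; the same function applies to any denominator order M.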

3.3.3 Advantages and Disadvantages of FIR and IIR Filters

Whether an FIR or an IIR filter is used for a particular application depends on the type of system and its output requirements. Each of these filters has unique characteristics that may determine the choice of one over the other. Table 3.1 outlines some basic advantages and disadvantages of FIR and IIR filters.

Table 3.1 Advantages and Disadvantages of FIR and IIR Filters

Finite Impulse Response (FIR)

Advantages:

They are highly stable.

They are very simple to design.

They have a good linear phase response.

They can reproduce an exact match of an impulse response.

They can be used to implement any impulse function.

Disadvantages:

They are not cost-effective to use for sharp cut-offs.

They have only zeros and no poles.

Infinite Impulse Response (IIR)

Advantages:

They are easily realised.

They are highly economical to implement.

They have a very sharp cut-off and excellent attenuation.

They are highly flexible.

They have both poles and zeros.

Disadvantages:

They are barely stable.

They have a non-linear phase response.

They operate a transient memory arrangement.

They are more affected by errors and noise because of their limited number of coefficients.

3.3.4 Frequency Magnitude Responses of Digital Filters

The frequency magnitude response of a digital filter describes the filter's behaviour towards its input samples in the frequency domain. Digital filters modify an input signal in a particular way so as to accomplish a specified design objective. The modification depends on which input frequencies are allowed to pass through the filter: low-pass, high-pass, band-pass or band-stop. Hence four distinct classes of filter are derived from the frequency response applied to an input signal: the low-pass filter, high-pass filter, band-pass filter and band-stop filter.

A low-pass filter allows the passage of frequencies below a specified cut-off frequency while rejecting anything above it. The cut-off frequency is the highest pre-defined frequency of interest, that is, the highest usable frequency. Low-pass filters are sometimes referred to as high-cut filters. Low-pass filters are commonly used in audio smoothing.

A high-pass filter is the exact opposite of a low-pass filter: it accepts only frequencies above its cut-off frequency and completely blocks frequencies below it. High-pass filters are used in audio systems to perform frequency crossover.

A band-pass filter, on the other hand, cuts off frequencies outside a specified band and allows frequencies within the desired band to pass. A band-pass filter can be achieved by a direct combination of a high-pass and a low-pass filter [6].

A band-stop filter is the inverse of a band-pass filter. While a band-pass filter accepts frequencies within a certain range, a band-stop filter rejects those frequencies completely and only passes those outside the specified range. A band-stop filter can be constructed by a parallel arrangement of a high-pass and a low-pass filter [7]. Band-stop filters are used, for example, in acoustic instrument amplifiers. Figures 3.3, 3.4, 3.5 and 3.6 below show the frequency magnitude responses of FIR low-pass, band-stop, band-pass and high-pass filters respectively.

Fig3.3 Frequency response of a Low-pass filter Fig3.4 Frequency response of a Band-stop filter

Fig3.5 Frequency response of a Band-pass filter Fig3.6 Frequency response of a High-pass filter

In the figures above, the passband is the range of frequencies that passes through, while the stopband contains the frequency components that are suppressed. The transition band shows how fast the filter moves from passband to stopband and vice versa. The transition width determines whether a filter has a sharp cut-off, while the shape of the passband and stopband reflects the linearity of the phase. A filter becomes very difficult to realise if the required transition is very sharp [2]. Stopband attenuation refers to the minimum degree to which frequencies in the stopband are attenuated [7].
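As a concrete, hypothetical illustration of passband and stopband behaviour, a 5-tap moving average acts as a crude FIR low-pass filter; its magnitude response can be evaluated on a frequency grid with the DFT:

```python
import numpy as np

# 5-tap moving average: a crude low-pass FIR filter
h = np.ones(5) / 5.0

# Evaluate |H(e^{jw})| on a dense grid by zero-padding the impulse response
H = np.abs(np.fft.rfft(h, 512))

print(H[0])    # DC gain is 1.0: low frequencies pass essentially unchanged
print(H[-1])   # gain near the Nyquist frequency is strongly attenuated
```

Plotting H against frequency would reproduce the low-pass shape of the figure: a passband near DC, a gradual transition band, and an attenuated stopband.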

3.4 Digital Adaptive Filters

Adaptive filters are self-adjusting digital filters that depend on adaptive algorithms for their operation. This gives them the ability to perform very well in environments where there is insufficient information about the input signals [9]. The adaptive filtering algorithm presumes some initial conditions representing what is known about the environment. In a stationary environment, adaptive filters converge towards the optimum Wiener solution, and the extent of deviation from this point can be used to measure how well the filter performs. In a non-stationary environment, adaptive filters instead try to track the time variation of the input signals with respect to the desired signal [9]. Adaptive filters are best used when:

there is need for filter characteristics to vary and adjust to dynamic conditions;

there is spectral overlap between the noise and the input signal; and/or

it is difficult to specify the amount of noise in the system, that is, when the noise is unknown, as in electroencephalography (EEG), digital communication using spread spectrum, and high-frequency digital telephone communication systems [3].

Adaptive filters are commonly used for noise cancellation, linear prediction/estimation/tracking, adaptive signal enhancement and adaptive control [9].

An adaptive filter is made up of two important parts: a digital filter with adjustable coefficients, such as an FIR filter, and an adaptive (self-updating) algorithm used to modify the coefficients of the filter [9]. Figure 3.6 below shows a typical structure of an adaptive filter.

Fig.3.6, Typical Structure of an Adaptive Filter

A wide range of adaptive filtering algorithms has been developed over the years, both in theoretical work and in real-time applications, to enhance the efficiency of adaptive filters [9]. The preference for one adaptive algorithm over another depends on several factors, which are fully outlined later in this chapter.

3.5 Adaptive Algorithms

An algorithm, in a general sense, can be defined as a finite set of well-defined instructions for accomplishing some task; given a defined set of inputs, the algorithm returns a predictable output [10]. Adaptive algorithms are algorithms that systematically adjust themselves in an unknown environment to suit their operating conditions, based on the information available to them. These algorithms can adapt and learn quickly in any environment, depending on their computational capacity. Adaptive filtering algorithms are the key element in adaptive filters. Several such algorithms have been developed over the years, with Least Mean Squares (LMS), Normalised Least Mean Squares (NLMS) and Recursive Least Squares (RLS) the most common in real-life applications. These algorithms have broadly similar structures and can work on the same input data, but their strengths and patterns of operation differ. The features of these adaptive filtering algorithms and some of their mathematical expressions and derivations are discussed below.

3.5.1 Least Mean Square Algorithm (LMS)

The LMS algorithm is one of the most commonly used adaptive algorithms in linear adaptive filtering. It was developed by Widrow and Hoff in 1960 [9]. LMS was, at that time, the first-choice linear adaptive filtering algorithm because it exhibits a high level of stability and is simple to design; it requires neither matrix inversion nor explicit computation of the relevant correlation functions. The LMS algorithm is an important member of the family of stochastic gradient algorithms, which use the gradient vector of the filter coefficients to converge to the optimal Wiener solution [9], [11]. The algorithm comprises two main operations: first, the filtering process (computing the output of a linear filter for given input data and estimating the error, the difference between the desired and output signals); second, the adaptation process (self-updating the filter tap-weights in response to the error estimate) [9], [3]. These processes are expressed in equations (3.4), (3.5) and (3.6) below.

The filter output is given by: yn = Σ (i = 0 to N-1) wi(n)xn-i = wnTxn (3.4)

Estimated error: en = dn - yn (3.5)

The tap-weight update: wn+1 = wn + 2µxne*n (3.6)

The estimated error en in (3.5) is based on the present estimate of the tap-weight vector wn, and 2µxne*n in (3.6) is the adjustment applied to that estimate [9]. For each iteration cycle, the LMS algorithm needs only the current values of the input signal xn, the desired signal dn and the tap-weight vector wn. Figure 3.7 illustrates the signal flow of the LMS algorithm as a recursive model. The figure makes the simplicity of LMS clear: it needs only 2M + 1 complex multiplications and 2M additions per cycle (where M denotes the number of tap-weights, the filter length). The parameter µ is the step size, which controls the convergence rate of the algorithm, and z^(-1) represents a delay of one sample. To ensure good performance, the value of µ must lie within a certain range for a given N,

i.e. 0 < µ < 2/(N·Smax), where Smax is the maximum spectral power density of the tap inputs.
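As a sketch of how equations (3.4) to (3.6) fit together, the following illustrative numpy implementation (the test signal, step size and unknown system are arbitrary choices, not from the text) uses LMS to identify an unknown 2-tap FIR system:

```python
import numpy as np

def lms(x, d, M, mu):
    """LMS adaptive filter: y(n) = w(n)^T x(n), e(n) = d(n) - y(n),
    w(n+1) = w(n) + 2*mu*e(n)*x(n)   (equations 3.4 - 3.6)."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        xn = x[n - M + 1:n + 1][::-1]   # tap-input vector [x(n), ..., x(n-M+1)]
        y = w @ xn                       # filter output (3.4)
        e[n] = d[n] - y                  # estimated error (3.5)
        w = w + 2 * mu * e[n] * xn       # tap-weight update (3.6)
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h_true = np.array([0.7, -0.3])           # unknown system to identify
d = np.convolve(x, h_true)[:len(x)]      # desired signal = system output
w, e = lms(x, d, M=2, mu=0.01)
print(w)   # converges towards [0.7, -0.3]
```

With a white input of unit power, µ = 0.01 sits comfortably inside the stability bound quoted above, so the weights settle close to the true system.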


Figure 3.7 Illustration of the signal flow of the LMS algorithm [9].

Derivation of LMS

The operation of the least mean square algorithm is based on the steepest descent algorithm, in which the weight vector is updated sample by sample as given in (3.7) below [3].

wn+1 = wn - µ∆n (3.7)

∆n is the gradient of the error performance surface. LMS makes use of the Wiener equation (3.8), which relates the gradient vector to the autocorrelation matrix and the cross-correlation vector:

∆ = ∂E[en2]/∂w = -2P + 2Rw, (3.8)

P = E[xndn] ≈ xndn (the cross-correlation vector),

R = E[xnxnT] ≈ xnxnT (the autocorrelation matrix) and

∆ is the gradient vector.

Substituting the instantaneous estimates of P and R into (3.8): ∆n = -2Pn + 2Rnwn = -2xndn + 2xnxnTwn

= -2xn(dn - xnTwn)

Recalling that yn = xnTwn and en = dn - yn = dn - xnTwn,

Therefore, ∆n = -2enxn (3.9)

Substituting (3.9) in (3.7), we have;

wn+1 = wn + 2µenxn (3.10)

Equation (3.10) is known as the Widrow-Hoff weight update equation for the least mean square algorithm. The self-adjustment and adaptation of the algorithm are carried out by means of this weight update equation.

3.5.2 Normalized Least Mean Squared (NLMS) Algorithm

The Normalized Least Mean Squared algorithm is an adaptive algorithm proposed by Nagumo and Noda in 1967. It has the same structure as the standard LMS algorithm, but its weight update pattern differs. Its purpose is to solve the gradient noise amplification problem of LMS that arises when the input data are large, since the adjustment in the LMS filter is directly proportional to the input data [9]. The NLMS algorithm is given more preference than LMS in real-time operation because it demonstrates an excellent balance in performance [12] and converges faster than standard LMS [13]. The weight update of NLMS is given by equation (3.11):

wn+1 = wn + µe*nxn/ǁxnǁ2 (3.11)

Where wn+1 is the new tap-weight at iteration n+1, wn is the previous weight, xn is the tap-input vector, µ is the step size (updating factor) and e*n is the estimated error.

Derivation of the Normalized Least Mean Squared (NLMS) Updating Equation

The NLMS algorithm utilizes the principle of minimal disturbance, which states that the weight vector of an adaptive filter should change in a minimal sense from one cycle to the next, subject to the constraints imposed on the updated filter's output [9].

Using the updating equation of standard LMS algorithm in (3.10) and applying a variable convergence factor µk, we have that;

wn+1 = wn + 2µkenxn (3.12)

To achieve our aim of faster convergence, µk is carefully selected so that;

µk = 1/(2ǁxnǁ2) (3.13)

µk is a variable convergence factor that minimizes the convergence time, though it introduces extensive misadjustment between the tap-weight vectors.

By substituting (3.13) in (3.12), we have;

wn+1 = wn + 2enxn/(2ǁxnǁ2)

wn+1 = wn + enxn/ǁxnǁ2 (3.14)

To control the misadjustment caused by µk without changing the direction of the vectors, a positive real scaling factor (fixed convergence factor) µn is introduced in (3.14), since all the derivations are based on instantaneous values of the squared errors rather than on the MSE [1]. Hence:

wn+1 = wn + µnenxn/ǁxnǁ2 (3.15)

In solving the gradient noise amplification problem of the standard LMS filter, NLMS introduces a problem of its own: for a small input vector xn, the squared norm ǁxnǁ2 becomes very small, making the effective scaling factor large, which can completely jeopardise the system. In view of this, (3.15) is modified to (3.16), which includes a control factor δ that is always greater than zero:

wn+1 = wn + µnenxn/(δ + ǁxnǁ2) (3.16)

Equation (3.16) is the general equation for computing the M-by-1 tap-weight vector in the normalized least mean square algorithm [9].

Comparing equations (3.10) and (3.16), we notice that;

The adaptation factor µn for NLMS is dimensionless, while for LMS µ has the dimension of inverse power [9].

NLMS demonstrates faster convergence than the standard LMS algorithm [9].

Furthermore, NLMS can be seen as an LMS filter with a time-varying step-size setting

µn = 1 /ǁ xn ǁ2
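The NLMS update of (3.16) can be sketched in the same style (an illustrative numpy implementation, not from the text; the µ and δ values and the test signal are arbitrary choices):

```python
import numpy as np

def nlms(x, d, M, mu=0.5, delta=1e-6):
    """NLMS: w(n+1) = w(n) + mu*e(n)*x(n) / (delta + ||x(n)||^2)  (eq. 3.16)."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        xn = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ xn
        # The step is normalised by the tap-input power; delta guards against
        # blow-up when ||x(n)||^2 is very small (the problem described above).
        w = w + mu * e[n] * xn / (delta + xn @ xn)
    return w, e

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.7, -0.3])[:len(x)]
w, _ = nlms(x, d, M=2)
print(w)   # approaches [0.7, -0.3], faster than standard LMS
```

Because the step is divided by the instantaneous input power, the same µ works for loud and quiet input segments alike, which is exactly the normalisation the derivation motivates.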

3.5.3 Recursive Least Squares (RLS) Algorithm

The Recursive Least Squares algorithm is an adaptive algorithm based on the least squares method. It recursively estimates the filter coefficients that minimize a weighted least squares cost function of the filter input [14]. The RLS adaptive algorithm is generally known for its fast convergence, an order of magnitude faster than that of the NLMS and LMS algorithms; however, this unique feature comes at the expense of high computational complexity [9].

Derivation of the RLS Algorithm

Recursive least squares aims at minimizing the cost function K by carefully selecting the filter coefficients (weights) wn and updating the filter on the arrival of each new set of data. The cost function depends on wn because it is a function of the estimated error en. Hence:

K(wn) = Σ (i = 1 to n) λ^(n-i) ei2 (3.17)

ei = di - yi = di - wnTxi

λ is the forgetting factor, which weights the older error samples; it takes values 0 < λ ≤ 1. The smaller λ is, the more the algorithm forgets previous samples and the more sensitive it becomes to new samples. For optimum performance the RLS algorithm needs good knowledge of the preceding samples; when λ = 1 it is known as the growing-window algorithm [14]. The inverse of 1 - λ more or less determines the memory of the algorithm [9].
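The "memory" interpretation of λ can be made concrete with a small hypothetical calculation: an error sample from i iterations ago is weighted by λ^i in (3.17), and 1/(1 - λ) approximates the algorithm's effective memory in samples:

```python
# Weight given to past error samples in the RLS cost function (eq. 3.17):
# an error from i iterations ago is scaled by lambda**i.
lam = 0.99
weights = [lam**i for i in range(0, 500, 100)]
print([round(w, 3) for w in weights])   # decays gradually: old data fades out

# Effective memory of the algorithm is roughly 1/(1 - lambda) samples
print(1 / (1 - lam))                    # approximately 100 samples
```

With λ = 1 every past sample keeps full weight, recovering the growing-window behaviour mentioned above.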

We can minimize K by computing the partial derivative with respect to each entry l of the coefficient vector and equating the outcome to zero; that is:

∂K(wn)/∂wn(l) = Σ (i = 1 to n) λ^(n-i) ei ∂ei/∂wn(l) = -Σ (i = 1 to n) λ^(n-i) ei x(i - l) = 0 (3.18)

Substituting the value of ei into (3.18) and rearranging the equation, we have:

Σ (k) wn(k)[Σ (i = 1 to n) λ^(n-i) x(i - k) x(i - l)] = Σ (i = 1 to n) λ^(n-i) di x(i - l) (3.19)

Equation (3.19) can be expressed in matrix form as:

Řx(n)wn = řn(dx) (3.20)

Where Řx(n) is the weighted sample correlation matrix of xn, and řn(dx) is the corresponding estimate of the cross-correlation between dn and xn. From (3.20), the wn that minimizes the cost function is:

wn = Řx(n)-1řn(dx) (3.21)

Determining the RLS Tap-Weight Update

The aim here is to derive a recursive solution for updating the least-squares estimate of the tap-weight vector wn in the form

wn = wn-1 + ∆wn-1

where ∆wn-1 is the correction applied at time n-1.

From (3.20), let řn(dx) = Bn, so that Bn-1 is its value at time n-1. Then

Bn = Σ (i = 1 to n) λ^(n-i) di xi = λ Σ (i = 1 to n-1) λ^(n-1-i) di xi + λ^0 dn xn

= λB(n-1) + dnxn

where xn = [xn, xn-1, ..., xn-p]T has dimension p+1.

Similarly, Řx(n) can be written recursively in terms of Řx(n - 1):

Řx(n) = Σ (i = 1 to n) λ^(n-i) xixiT = λŘx(n - 1) + xnxnT

At this stage we apply the Woodbury matrix identity, which states that

(A + UCV)-1 = A-1 - A-1U(C-1 + VA-1U)-1VA-1, so that

Řx(n)-1 = [λŘx(n - 1) + xnxnT]-1

= λ-1Řx(n - 1)-1 - λ-1Řx(n - 1)-1xn(1 + xTnλ-1Řx(n - 1)-1xn)-1xTnλ-1Řx(n - 1)-1 (3.22)

For convenience of computation, let Pn = Řx(n)-1, so that (3.22) becomes

Pn = λ-1P(n - 1) - gnxTnλ-1P(n - 1) (3.23)

Equation (3.23) is called the Riccati equation for the RLS algorithm [9].

gn = λ-1P(n - 1)xn(1 + xTnλ-1P(n - 1)xn)-1 (3.24)

where (3.24) is referred to as the gain vector.

Rearranging (3.24) such that

gn(1 + xTnλ-1P(n - 1)xn) = λ-1P(n - 1)xn

gn + gnxTn λ-1 P(n - 1)xn = λ-1 P(n - 1)xn (3.25)

Rearranging (3.25) further we have;

gn = λ-1 [P(n - 1) - gnxTn P(n - 1)]xn (3.26)

Observe that the factor λ-1[P(n - 1) - gnxTnP(n - 1)] in (3.26) is equal to Pn by (3.23); hence

gn = Pnxn

Recall from (3.21) that wn = Řx(n)-1řn(dx) = PnBn. Substituting the recursion for Bn,

wn = Pn(λB(n-1) + dnxn) = λPnB(n-1) + dnPnxn

Writing Bn, Pn and gn in their recursive forms, we get

wn = λ(λ-1P(n - 1) - gnxTnλ-1P(n - 1))B(n-1) + dngn (3.27)

= P(n - 1)B(n-1) - gnxTnP(n - 1)B(n-1) + dngn

= P(n - 1)B(n-1) + gn(dn - xTnP(n - 1)B(n-1))

Since wn-1 = P(n - 1)B(n-1),

wn = w(n - 1) + gn(dn - xTnw(n - 1))

With αn = dn - xTnw(n - 1), (3.28)

we therefore have wn = w(n - 1) + gnαn (3.29)

Equation (3.29) is the RLS tap-weight update equation, and (3.28) is the a priori error.

The correction factor is given as:

∆wn-1 = gnαn (3.30)
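The recursions above combine into the RLS loop sketched below (an illustrative numpy implementation, not from the text; the initialisation P(0) = I/δ is a standard choice that the derivation does not cover, and the test signal is arbitrary):

```python
import numpy as np

def rls(x, d, M, lam=0.99, delta=0.01):
    """RLS adaptive filter using the gain vector and Riccati update:
       g(n)  = P(n-1)x(n) / (lam + x(n)^T P(n-1) x(n))      (eq. 3.24)
       w(n)  = w(n-1) + g(n) * (d(n) - x(n)^T w(n-1))       (eqs. 3.28, 3.29)
       P(n)  = (P(n-1) - g(n) x(n)^T P(n-1)) / lam          (eq. 3.23)"""
    w = np.zeros(M)
    P = np.eye(M) / delta            # standard initialisation P(0) = I/delta
    for n in range(M, len(x)):
        xn = x[n - M + 1:n + 1][::-1]
        Px = P @ xn
        g = Px / (lam + xn @ Px)     # gain vector
        alpha = d[n] - xn @ w        # a priori error (3.28)
        w = w + g * alpha            # tap-weight update (3.29)
        P = (P - np.outer(g, xn) @ P) / lam
    return w

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
d = np.convolve(x, [0.7, -0.3])[:len(x)]
w = rls(x, d, M=2)
print(w)   # converges to [0.7, -0.3] within a few dozen samples
```

Each iteration costs O(M^2) because of the P update, against O(M) for LMS and NLMS, which is the computational-complexity price mentioned above.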

3.6 The Determining Factors for Selecting an Adaptive Filtering Algorithm

Every existing adaptive algorithm has unique qualities that distinguish it from the others, and these qualities determine its attractiveness in real-life use. Below are the major factors that can determine the choice of one algorithm over another in laboratory and real-life applications.

Convergence rate;

This is the time it takes the algorithm to reach the optimum Wiener solution, in the mean-square-error sense, for stationary inputs over a number of iterations. Algorithms with a fast convergence rate learn and adapt more quickly in a new environment.

Computational complexity;

This is the amount of computation an algorithm needs to perform to achieve a single task. An algorithm that requires a large number of operations such as multiplications, divisions and additions/subtractions is difficult, complex and time-consuming to design and implement in real time, especially when there are thousands of input data samples to process. Such an algorithm consumes many memory locations and is costly to implement in hardware [9].


Misadjustment;

This measures the amount by which the final mean-square error of the chosen adaptive algorithm differs from the minimum mean-square error of the Wiener filter.


Tracking;

This is the ability of an adaptive algorithm to track the behaviour of the desired signal in a non-stationary environment. An algorithm with good tracking ability shows very little variation at steady state despite inevitable gradient noise [9].


Robustness;

A highly robust algorithm resists internal or external disturbances and experiences only a small estimation error for a small fluctuation in the system.


Structure;

The structural flow of information in an algorithm governs the way the algorithm is implemented in hardware.

Numerical properties;

All algorithms suffer numerical inaccuracy when implemented digitally, owing to the quantization error incurred in converting from analogue to digital form. Numerical stability (an intrinsic characteristic of the adaptive filtering algorithm) and numerical accuracy (determined by the number of bits used to represent the filter coefficients and data samples) are the two main concerns here. An adaptive algorithm with little or no numerical variation is said to be numerically robust and is preferred for real-time applications [9].