# Filter FIR Analog



Introduction

The advent of technology has vastly changed the way people communicate and live. Telephones have shifted from a luxury to a necessity, and their usage has never been higher. Ever more research is done and equipment is built to improve communication over telephones and to make it noiseless.

Telephones are often used in noisy environments; as a result, unwanted noise signals are transmitted over the channel along with the speaker's voice. At the receiving side the speaker's voice can be unclear and hard to distinguish from the noise in the signal.

In order to remove the noise from the signal we adopt the technique of filtering.

Filter

In signal processing the purpose of a filter is to remove unwanted parts of a signal, such as random noise, or to extract useful information from the signal by blocking the undesired part.

Types of Filters

There are two main types of filters: analog and digital.

Analog Filters

Analog filters are built from analog electronic components such as resistors, capacitors and op-amps. They are widely used for noise reduction, in graphic equalizers in hi-fi systems, and so on.

Digital Filters

Digital filters use a digital processor to perform numerical calculations on sampled values of the signal. The processor can be a general-purpose computer or a DSP chip.

The analog input signal is sampled and digitized with the help of an analog-to-digital converter (ADC) and then filtered by the digital filter to remove the unwanted part.

There are two main types of digital filters: IIR (Infinite Impulse Response) and FIR (Finite Impulse Response). Both are commonly used to remove noise from a signal.

FIR (Finite Impulse Response) FILTERS

A finite impulse response (FIR) filter is a digital filter with a finite impulse response. Mathematically,

y(n) = b_{0}x(n) + b_{1}x(n-1) + … + b_{P}x(n-P)

The above equation can also be expressed as

y(n) = ∑_{i=0}^{P} b_{i}x(n-i)

where P is the filter's order, x(n) is the input signal, y(n) is the output signal at discrete time instant n, and b_{i} is the i-th filter coefficient, or feed-forward tap.
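As a quick sketch, the summation above maps directly onto code. The following minimal Python implementation assumes zero initial conditions (x(n) = 0 for n < 0) and uses illustrative two-tap coefficients:

```python
import numpy as np

def fir_filter(b, x):
    """Direct-form FIR: y(n) = sum_i b[i] * x(n-i), with x(n) = 0 for n < 0."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(len(b)):
            if n - i >= 0:
                y[n] += b[i] * x[n - i]
    return y

# Two-tap moving average, b = [0.5, 0.5], applied to a constant input
y = fir_filter([0.5, 0.5], np.array([1.0, 1.0, 1.0, 1.0]))
print(y)  # first sample is 0.5 (only one tap active), then 1.0
```

Note how the output settles after P samples: the transient is exactly as long as the filter's (finite) impulse response.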

Advantages of FIR Filter

- They are easier to design and are inherently bounded-input bounded-output (BIBO) stable

- An FIR filter has linear phase if its taps are symmetric about the centre tap position. This is desirable for many applications such as music, video and image processing

- No feedback is required, so the same relative error occurs in each calculation; rounding errors are not compounded by summed iterations

- They can be implemented simply on most DSP microprocessors because of their low sensitivity to filter coefficient quantization errors

- The FIR calculation can be done by looping a single instruction.

- They are suited to multi-rate applications. By multi-rate, we mean either "decimation" (reducing the sampling rate), "interpolation" (increasing the sampling rate), or both. Whether decimating or interpolating, the use of FIR filters allows some of the calculations to be omitted, providing an important computational efficiency. In contrast, if IIR filters are used, each output must be individually calculated, even if that output will be discarded (since the feedback is incorporated into the filter).

- They have desirable numeric properties. In practice, all DSP filters must be implemented using "finite-precision" arithmetic, that is, a limited number of bits. The use of finite-precision arithmetic in IIR filters can cause significant problems due to the use of feedback, but FIR filters have no feedback, so they can usually be implemented using fewer bits, and the designer has fewer practical problems to solve related to non-ideal arithmetic.

- They can be implemented using fractional arithmetic. Unlike IIR filters, it is always possible to implement a FIR filter using coefficients with magnitude of less than 1.0. (The overall gain of the FIR filter can be adjusted at its output, if desired.) This is an important consideration when using fixed-point DSPs, because it makes the implementation much simpler.

Disadvantages of FIR Filters

FIR filters sometimes need more memory and calculation to achieve a given filter response characteristic.

Certain responses are not practical to implement with FIR filters.

IIR (Infinite Impulse Response) FILTERS

An infinite impulse response (IIR) filter is a digital filter with an infinite impulse response. IIR filters use feedback:

y(n) = ∑_{i=0}^{P} b_{i}x(n-i) + ∑_{k=1}^{Q} a_{k}y(n-k)

where a_{k} is the k-th feedback tap and Q is the feedback order.
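The feedback term can be sketched the same way. This minimal Python direct-form implementation assumes zero initial conditions and follows the sign convention of the equation above; the one-pole smoother used as the example is illustrative:

```python
import numpy as np

def iir_filter(b, a, x):
    """Direct-form IIR with feed-forward taps b and feedback taps a[1..Q],
    using the convention y(n) = sum_i b[i] x(n-i) + sum_k a[k] y(n-k)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for i in range(len(b)):          # feed-forward part
            if n - i >= 0:
                acc += b[i] * x[n - i]
        for k in range(1, len(a) + 1):   # feedback part (past outputs)
            if n - k >= 0:
                acc += a[k - 1] * y[n - k]
        y[n] = acc
    return y

# One-pole smoother y(n) = 0.5 x(n) + 0.5 y(n-1), driven by a unit step
print(iir_filter([0.5], [0.5], np.ones(4)))  # -> [0.5, 0.75, 0.875, 0.9375]
```

Because of the feedback, the step response approaches 1 asymptotically and never exactly settles, which is the "infinite" part of the impulse response.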

Advantages of IIR Filters

- They are suitable for high-speed designs, since fewer multipliers are required than for an FIR filter

- They use less memory and computation to achieve a given filter response

Disadvantages of IIR Filter

- IIR filters do not have linear phase and can be unstable if not designed properly

- Sensitivity to the filter coefficients is considerably greater than for FIR filters; this mostly matters when a finite number of bits is used to represent the filter taps, or coefficients

- The ideal IIR response is non-causal, so an exact practical implementation is not possible; only approximations can be realized

Adaptive Filters

With the advancement of technology there has been a push toward an intelligent filter that can give precise output with maximum noise cancellation. This is unachievable with conventional filters like FIR and IIR filters, which have fixed coefficients; adaptive filters are used to reach that mark.

Adaptive signal processing evolved from techniques developed to enable the adaptive control of time-varying systems.

As the term suggests, an adaptive filter is a filter that *self-adjusts* its transfer function according to an optimizing algorithm.

An adaptive filter is a digital filter whose coefficients are regularly adjusted during the filtering process, with the aid of an adaptive algorithm. Adaptive filters have two basic parts:

(a) A ‘Transversal Filter', which applies the adapted coefficients to the input signal.

(b) An adaptive algorithm, which adjusts the filter coefficients according to the difference between desired and input signal.

Adaptive noise cancellation removes background noise from useful signals. This technique is very useful when communication takes place in a very noisy environment.

A typical example is a jet aircraft. The jet engine can produce noise of over 140 dB. Since normal human speech is at a level between 30 and 40 dB, the pilot's communication is impossible in such an environment if there is no noise cancellation equipment inside the cockpit.

Usually the background noise isn't steady; it changes from time to time. In the above example, the noise from the jet engine differs at various flight states, so noise cancellation must be an adaptive process: it should adjust to the changing environment and cope with changes in conditions.


CHAPTER 2

Adaptive Filter

Adaptive Filter Theory

While beginning the study of “adaptive filters”, the meaning of the terms “adaptive” and “filter” should be understood in a very general sense. To understand the adjective “adaptive”, consider a system that adjusts itself in response to changes in its surroundings: it adjusts its parameters to achieve a goal that depends on the state of the system and its surroundings. Filtering, in turn, is a signal processing technique whose objective is to process a signal in order to manipulate the information it contains. In other words, a filter is a device that maps its input signal to an output signal, extracting only the desired information contained in the input. The filter is the system carrying out the adaptation process. Depending on the time available to meet the final target of the adaptation (the convergence time), we can have a variety of adaptation algorithms and filter structures.

On the basis of the above, adaptive filters can be defined as filters that filter the input signal so that the response is the desired signal.

Adaptive filters are a class of filters that iteratively change their parameters. An adaptive filter minimizes the error between a desired and a reference signal. The process results in N tap values which define the nature of the filtered input signal.

Example

Suppose a hospital is recording a heart beat (an ECG) that is being corrupted by 50 Hz noise (the mains frequency of the power supply in many countries).

One way to remove the noise is to filter the signal with a notch filter at 50 Hz. However, due to slight variations in the power supply to the hospital, the exact frequency of the power supply might (hypothetically) wander between 47 Hz and 53 Hz. A static filter would need to remove all the frequencies between 47 and 53 Hz, which could excessively degrade the quality of the ECG, since the heart beat would also likely have frequency components in the rejected range.

Why Adaptive Filters

Usually signals are interfered with by noise and by other signals residing in the same frequency band.

When the noise and the desired signal are in different frequency bands, conventional filtering techniques using either an FIR or an IIR filter with fixed coefficients can be used.

But in situations where the noise changes randomly, the filter coefficients must be adjusted to remove as much of the noise as possible, and this can be done by adaptive filters. An adaptive filter is necessary when either the fixed specifications are unknown or time-invariant filters cannot satisfy them. Strictly speaking, an adaptive filter is a nonlinear filter, since its characteristics depend on the input signal; consequently the homogeneity and additivity conditions are not satisfied. Additionally, adaptive filters are time-varying, since their parameters change continually in order to meet a performance requirement. In a sense, an adaptive filter is a filter that performs the approximation step on line.

Adaptive Filtering Problem

The purpose of every filter is to extract useful information and reject the unwanted signal (noisy data). A fixed filter is designed with prior knowledge of the signal and the noise. An adaptive filter continuously adjusts its parameters to changes in the surroundings with the aid of recursive algorithms. This technique is useful when we have no prior information about the signal and the noise.

The discrete adaptive filter convolves the input u(n) with the filter taps w(n) and produces an output y(n). A desired reference signal d(n) is compared with the filter output to obtain an estimation error e(n). This error is used to update the filter taps for the next iteration.

Several algorithms exist for this weight adjustment, for example:

- Least Mean Square (LMS) Algorithm

- Recursive Least Square (RLS) Algorithm

The choice of algorithm depends on the complexity, the environment and the convergence time.

Approaches to development of linear adaptive filters

Two basic approaches are used for the development of adaptive filters:

- Stochastic gradient approach

- Least squares estimation

Stochastic gradient approach

In the stochastic gradient approach the basis for implementing the linear adaptive filter is a tapped delay line, or transversal filter. For stationary inputs the cost function (or index of performance) is defined as the mean square error (i.e. the mean square difference between the output of the transversal filter and the desired response).

The cost function is a second-order function of the tap weights of the transversal filter. The recursive algorithm to update the tap weights is developed in two stages.

First, an iterative process is used to solve the Wiener-Hopf equations (the matrix equation defining the optimum Wiener solution). This iterative procedure is based on the method of steepest descent.

Steepest descent method uses a gradient vector whose value is dependent on two parameters

- The correlation matrix of the tap inputs of the transversal filter

- The cross-correlation vector between the same tap inputs and the desired response

To give the gradient vector a stochastic character, instantaneous values are used for the above correlations.

This substitution results in the LMS (least mean square) algorithm. The process can be described generally as

updated tap-weight vector = old tap-weight vector + step-size parameter × tap-input vector × error signal

The error signal here is the difference between the desired response and the actual response of the transversal filter.

Least Squares Estimation

Least squares estimation minimizes a cost function equal to the weighted sum of the squared differences between the actual output of the adaptive filter and the desired response at different time instants. The cost function is recursive in nature, as previous weighted estimation errors are also taken into account. The parameter λ (the forgetting factor) has the range 0 < λ ≤ 1; it is known as the forgetting factor because for λ < 1 the distant past values have negligible effect on the tap updates. The memory of the algorithm is roughly 1/(1-λ).

Adaptive algorithms

Several methods exist that can be used for updating the taps, or weights, of the adaptive filter.

Some of these are

- Wiener filter

- Least mean square (LMS) algorithm

- Recursive least square (RLS) algorithm

- Kalman filter

Wiener Filters

The Wiener filter was developed in 1949 by Norbert Wiener.

It is the optimum linear filter in the sense that its output matches the desired signal as closely as possible. The Wiener filter is often not considered for practical implementation due to its computational complexity; instead it serves as a frame of reference for the linear filtering of stochastic signals, against which other algorithms can be compared. The mean square error (MSE) is used to formulate the Wiener and other adaptive algorithms.

If the input signal u(n) to a filter with M taps is given as

u(n) = [u(n), u(n-1), …, u(n-M+1)]^{T}

The tap coefficients or weight can be expressed in vector form as

w = [w(0), w(1), …, w(M-1)]^{T}

then the square of output error will be

e_{n}^{2}=d_{n}^{2}-2d_{n}u_{n}^{T}w+w^{T}u_{n}u_{n}^{T}w

The mean square error J is obtained by taking the expectation of both sides:

J = E[e_{n}^{2}] = E[d_{n}^{2}] - 2E[d_{n}u_{n}^{T}]w + w^{T}E[u_{n}u_{n}^{T}]w

= σ² - 2p^{T}w + w^{T}Rw

σ² is the variance of the desired output, p is the cross-correlation vector and R is the autocorrelation matrix of u. The plot of the MSE against the weights is a non-negative parabola-shaped bowl, with the minimum point at the optimal weights.

dJ/dw = -2p + 2Rw

We need to solve the Wiener-Hopf equations to find the optimal Wiener filter for a signal.

Let R denote the MxM correlation matrix of u, i.e.

R=E[u(n)u^{H}(n)]

where H denotes the Hermitian transpose and u(n) is the Mx1 tap-input vector.

In expanded form R is the MxM Hermitian Toeplitz matrix of autocorrelations r(l) = E[u(n)u*(n-l)], with first row [r(0), r(1), …, r(M-1)].

p represents the cross-correlation vector between the desired response and the tap inputs:

p = E[u(n)d*(n)]

In expanded form,

p = [p(0), p(-1), …, p(1-M)]^{T}

Either zero or negative lags are used in the definition of p, so the Wiener-Hopf equations in compact form can be written as

Rw_{o}=p

where w_{o} is the Mx1 optimum tap-weight (filter coefficient) vector of the transversal filter:

w_{o} = [w_{o0}, w_{o1}, …, w_{o,M-1}]^{T}
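As an illustrative sketch, the Wiener-Hopf equations can be solved numerically by estimating R and p from data with sample averages and calling a linear solver. The two-tap system below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-tap relation d(n) = 0.7 u(n) - 0.3 u(n-1);
# we recover the taps from data via the Wiener-Hopf solution R w_o = p.
u = rng.standard_normal(10000)
d = 0.7 * u + np.concatenate(([0.0], -0.3 * u[:-1]))

# Tap-input vectors u(n) = [u(n), u(n-1)]^T for n >= 1, as rows of U
U = np.stack([u[1:], u[:-1]], axis=1)
R = U.T @ U / len(U)       # sample autocorrelation matrix estimate
p = U.T @ d[1:] / len(U)   # sample cross-correlation vector estimate
w_o = np.linalg.solve(R, p)
print(w_o)  # -> approximately [0.7, -0.3]
```

With real (finite-precision, nonstationary) data the estimates of R and p would themselves be noisy, which is exactly why the recursive algorithms below are preferred in practice.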

Method of Steepest Descent

We can use the method of steepest descent to converge to the optimal filter weights and so improve the error performance for a given problem. The gradient at a point on a surface points in the direction of maximum increase, and to minimize the error we want the filter taps to converge to the minimum. So if we move opposite to the direction of the gradient and update the weights at each time step, we can reach the minimum adaptively using the equation

w(n+1) = w(n) - μ∇J(n) = w(n) + μ[p - Rw(n)]

The constant μ is the step-size parameter; it controls the speed of convergence of the algorithm. For convergence to be stable, μ should satisfy the condition

0 < μ < 2/λ_{max}

where λ_{max} is the largest eigenvalue of the correlation matrix R.

The method of steepest descent is simpler than the Wiener filter, but its practical use is rare because of the computation needed: to calculate the gradient, p and R must be evaluated at each time step. Its performance is similar to that of the LMS algorithm, but the advantage of LMS is its smaller number of calculations.
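A minimal sketch of the steepest descent iteration, with illustrative R and p assumed known (which is the method's impractical part):

```python
import numpy as np

# Steepest descent on the quadratic MSE surface: w <- w + mu * (p - R w).
# R and p are illustrative values; mu must satisfy 0 < mu < 2/lambda_max.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
mu = 0.5                      # lambda_max = 1.5, so 2/lambda_max ≈ 1.33; 0.5 is safe
w = np.zeros(2)
for _ in range(200):
    w = w + mu * (p - R @ w)  # step along the negative gradient of J
print(w)                      # -> approximately [0.5, 0.0], the Wiener solution
```

Setting mu above 2/λ_max makes the same loop diverge, which is the stability condition stated above in action.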

Least Mean Square(LMS) Algorithm

The LMS algorithm was developed in 1960 by Widrow and Hoff for use in training neural networks. Least mean squares (LMS) is similar to the steepest descent method: it also finds the weights, or filter coefficients, that produce the least mean square of the error signal (the difference between the desired and the actual signal) by iteratively approaching the minimum mean square error (MSE). But instead of calculating the gradient at every time step, a rough estimate of the gradient is used. The idea behind LMS filters is to use the method of steepest descent to find a coefficient vector w_{n} which minimizes the cost function

J(n) = E[e_{n}^{2}]

The error at filter's output is expressed as

e_{n} = d_{n} - w^{T}_{n}u_{n}

The error is simply the difference between the desired response and the actual output of the filter.

Using this error, the gradient can be approximated as

∇J ≈ -2e_{n}u_{n}

Substituting this gradient estimate into the weight-update equation of the steepest descent method gives

w_{n+1} = w_{n} + 2μe_{n}u_{n}

This is the Widrow-Hoff LMS algorithm.

As in the steepest descent method, the step-size parameter μ must satisfy the condition 0 < μ < 2/λ_{max}.

λ_{max} may vary with time, so to avoid computing it another bound on μ is used:

0 < μ < 2/(M·S_{max})

where M is the number of filter taps and S_{max} is the maximum value of the power spectral density of the tap inputs u.

For an N-tap filter the computation is reduced to 2N multiplications and N additions per coefficient update, which is very feasible for real-time applications and is a major reason behind the simplicity and popularity of LMS.

Recursive Least Square (RLS) Algorithm

The RLS algorithm is based on the method of least squares, which fits the most suitable curve to a set of data points by minimizing the sum of the squares of the offsets of the points from the curve. The RLS algorithm solves the least squares problem recursively.
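A compact sketch of the standard RLS recursion (gain vector, weight update, and inverse-correlation matrix update); the data model, forgetting factor, and initialization constant are illustrative:

```python
import numpy as np

def rls(u, d, M=2, lam=0.99, delta=100.0):
    """Recursive least squares: w minimizes the exponentially weighted
    squared error. lam is the forgetting factor (0 < lam <= 1);
    P (the inverse correlation matrix estimate) starts as delta * I."""
    w = np.zeros(M)
    P = delta * np.eye(M)
    for n in range(M - 1, len(u)):
        un = u[n - M + 1:n + 1][::-1]        # u(n) = [u(n), u(n-1), ...]
        k = P @ un / (lam + un @ P @ un)     # gain vector
        e = d[n] - w @ un                    # a priori error
        w = w + k * e                        # weight update
        P = (P - np.outer(k, un @ P)) / lam  # inverse-correlation update
    return w

rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
d = 0.7 * u + np.concatenate(([0.0], -0.3 * u[:-1]))  # hypothetical 2-tap target
print(rls(u, d))  # -> approximately [0.7, -0.3]
```

Note the O(M²) work per sample (the P update), versus O(M) for LMS: this is the computation/convergence trade-off discussed below.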

Choosing the algorithm

Choosing an algorithm for the analysis depends on the following factors:

- Convergence Rate

The convergence rate defines the number of iterations, or the rate, at which the filter converges to the optimum Wiener solution. If the convergence rate is fast, the algorithm adapts to a stationary environment rapidly, but there is a trade-off in stability: the system can become unstable and diverge from the solution instead of converging. Decreasing the convergence rate can lead to a stable system, but at the same time the number of computations increases. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.

- Robustness

Robustness is a measure of how well the system can resist both input and quantization noise; the more robust the filter, the more stable the system. Disturbances may arise from numerous causes, such as internal or external changes in the filter's behaviour and characteristics.

- Computational Requirements

- The number of operations (additions, multiplications, etc.) for a single iteration of the algorithm

- Memory size requirement for data and program storage

- The budget for the algorithm

Adaptive Filtering System Configurations

There are four major types of adaptive filtering configurations;

- Adaptive system identification.

- Adaptive linear prediction.

- Adaptive Inverse System Configuration

- Adaptive noise cancellation.

Digital signal processing (DSP) has been a major player in current technical advancements such as noise filtering, system identification, and voice prediction. Standard DSP techniques, however, are not enough to solve these problems quickly with acceptable results; adaptive filtering techniques must be implemented to obtain accurate solutions and a timely convergence to those solutions. A number of adaptive structures have been used for different applications in adaptive filtering. All of the above systems are similar in the implementation of the algorithm but differ in system configuration. All four systems share the same general parts: an input x(n), a desired result d(n), an output y(n), an adaptive transfer function w(n), and an error signal e(n), which is the difference between the desired output d(n) and the actual output y(n). In addition to these parts, the system identification and inverse system configurations have an unknown linear system u(n) that can receive an input and give a linear output to the given input.

Adaptive System Identification

The adaptive system identification is primarily responsible for determining a discrete estimation of the transfer function for an unknown digital or analog system. The same input x(n) is applied to both the adaptive filter and the unknown system from which the outputs are compared (see figure a). The output of the adaptive filter y(n) is subtracted from the output of the unknown system resulting in a desired signal d(n). The resulting difference is an error signal e(n) used to manipulate the filter coefficients of the adaptive system trending towards an error signal of zero.

After a number of iterations of this process are performed, and if the system is designed correctly, the adaptive filter's transfer function will converge to, or near to, the unknown system's transfer function. For this configuration, the error signal does not have to go to zero to closely approximate the given system, although convergence to zero is the ideal situation. There will, however, be a difference between the adaptive filter's transfer function and the unknown system's transfer function if the error is nonzero, and the magnitude of that difference will be directly related to the magnitude of the error signal.
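A minimal sketch of this configuration, using the LMS update to identify a hypothetical 3-tap unknown system driven by the same input as the adaptive filter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Adaptive system identification: the unknown system and the adaptive
# filter share the input x(n); LMS drives the taps toward the unknown ones.
h = np.array([0.5, -0.4, 0.2])     # unknown system (hypothetical taps)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]     # output of the unknown system
w = np.zeros(3)
mu = 0.01
for n in range(2, len(x)):
    xn = x[n-2:n+1][::-1]          # [x(n), x(n-1), x(n-2)]
    e = d[n] - w @ xn              # error: unknown system vs adaptive filter
    w = w + mu * e * xn            # LMS tap update
print(w)                           # -> approximately [0.5, -0.4, 0.2]
```

Here the desired signal is noiseless, so the error can actually converge toward zero; with measurement noise added to d(n), the taps would still converge but the error would settle at the noise floor, as the paragraph above describes.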

Adaptive Linear Prediction

Adaptive linear prediction is the second type of adaptive configuration. This configuration essentially performs two operations. The first operation, if the output is taken from the error signal e(n), is linear prediction: the adaptive filter coefficients are being trained to predict, from the statistics of the input signal x(n), what the next input sample will be. The second operation, if the output is taken from y(n), is a noise filter similar to the adaptive noise cancellation outlined in the coming section. As described, neither the linear prediction output nor the noise cancellation output will converge to an error of zero. This is true for the linear prediction output because if the error signal did converge to zero, the input signal x(n) would be entirely deterministic, in which case we would not need to transmit any information at all. In the case of the noise filtering output, y(n) will converge to the noiseless version of the input signal.

Adaptive Inverse System Configuration

The adaptive inverse system configuration is shown in figure c. The goal of the adaptive filter here is to model the inverse of the unknown system u(n). This is particularly useful in adaptive equalization where the goal of the filter is to eliminate any spectral changes that are caused by a prior system or transmission line. The way this filter works is as follows. The input x(n) is sent through the unknown filter u(n) and then through the adaptive filter resulting in an output y(n). The input is also sent through a delay to attain d(n). As the error signal is converging to zero, the adaptive filter coefficients w(n) are converging to the inverse of the unknown system u(n). For this configuration, as for the system identification configuration, the error can theoretically go to zero.

This will only be true, however, if the unknown system consists only of a finite number of poles or the adaptive filter is an IIR filter. If neither of these conditions is true, the system will converge only to a constant due to the limited number of zeros available in an FIR system.

Adaptive noise cancellation

Another configuration is adaptive noise cancellation, shown in figure d. In this configuration the input x(n) is compared with a desired signal d(n), which consists of a signal s(n) corrupted by another noise N0(n). The adaptive filter coefficients adapt so that the error signal becomes a noiseless version of the signal s(n). Both of the noise signals in this configuration need to be uncorrelated with the signal s(n). In addition, the noise sources must be correlated with each other in some way, preferably equal, to get the best results. Due to the nature of the error signal, the error signal will never become zero. The error signal should converge to the signal s(n), but not to the exact signal; in other words, the difference between the error signal and s(n) will remain small but nonzero.
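A minimal noise cancellation sketch along these lines; the signal, the noise model (the primary noise N0 is a filtered copy of the reference noise, so the two are correlated), and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Adaptive noise cancellation: primary input d = s + N0; reference input is
# noise correlated with N0; the error output converges toward the clean s.
n = np.arange(4000)
s = np.sin(2 * np.pi * 0.01 * n)                          # desired signal
ref = rng.standard_normal(len(n))                         # reference noise
N0 = 0.8 * ref + np.concatenate(([0.0], 0.4 * ref[:-1]))  # correlated noise
d = s + N0                                                # corrupted primary
w = np.zeros(2)
mu = 0.01
e = np.zeros(len(n))
for i in range(1, len(n)):
    x = np.array([ref[i], ref[i - 1]])
    y = w @ x                      # filter's estimate of N0(i)
    e[i] = d[i] - y                # error output ≈ s(i) after convergence
    w = w + mu * e[i] * x          # LMS tap update
print(np.mean((e[2000:] - s[2000:]) ** 2))  # residual power after convergence
```

Because s(n) is uncorrelated with the reference, the filter can only model the noise path, so subtracting its output strips N0 while leaving s in the error signal, which is exactly the residual nonzero error described above.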

Performance Measures in Adaptive Systems

There are six major performance measures of Adaptive Systems.

Step size parameter

μ determines the speed of convergence (or divergence) and the precision of the adaptive filter coefficients. If μ is large, the filter converges fast, but could diverge if μ is too large; the adaptation is quick, but there is an increase in the average excess MSE, which may be an undesirable result. If μ is small, the filter converges slowly, which is equivalent to the algorithm having a "long" memory, also an undesirable quality. Every application has a different step size that needs to be adjusted.

Convergence Rate

The convergence rate defines the number of iterations, or the rate, at which the filter converges to the optimum Wiener solution. If the convergence rate is fast, the algorithm adapts to a stationary environment rapidly, but there is a trade-off in stability: the system can become unstable and diverge from the solution instead of converging. Decreasing the convergence rate can lead to a stable system, but at the same time the number of computations increases. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.

Minimum Mean Square Error

The minimum mean square error (MSE) is a metric indicating how well a system can adapt to a given solution. A small minimum MSE is an indication that the adaptive system has accurately modeled, predicted, adapted and/or converged to a solution for the system. A very large MSE usually indicates that the adaptive filter cannot accurately model the given system or the initial state of the adaptive filter is an inadequate starting point to cause the adaptive filter to converge.

Computational Complexity

Computational complexity is particularly important in real time adaptive filter applications. When a real time system is being implemented, there are hardware limitations that may affect the performance of the system. A highly complex algorithm will require much greater hardware resources than a simplistic algorithm.

Stability

Stability is probably the most important performance measure for the adaptive system. In most cases the systems that are implemented are marginally stable, with the stability determined by the initial conditions, transfer function of the system and the step size of the input.

Robustness

Robustness is a measure of how well the system can resist both input and quantization noise; the more robust the filter, the more stable the system. Disturbances may arise from numerous causes, such as internal or external changes in the filter's behaviour and characteristics.

Filter length

The filter length (number of taps) determines how complex a response the filter can model: more taps allow a closer fit to the desired response, but increase the computation per iteration and slow the convergence.

Applications of adaptive filter

Adaptive filters perform well in unknown environments and track statistical time variations. They are useful in many fields, including:

- Channel/System Identification

- Noise Cancellation

- Channel Equalization

- Adaptive Controller

- Inverse modeling

- Signal prediction

- Adaptive Feedback Cancellation

- Interference cancellation

The way the desired output is obtained may vary among the applications listed above.

Chapter 3

LMS Algorithm

Introduction

The LMS algorithm was developed in 1960 by Widrow and Hoff, through their studies of pattern recognition, for use in training neural networks. Least mean squares (LMS) is similar to the steepest descent method: it also finds the weights, or filter coefficients, that produce the least mean square of the error signal (the difference between the desired and the actual signal) by iteratively approaching the minimum mean square error (MSE). But instead of calculating the gradient at every time step, a rough estimate of the gradient is used. The idea behind LMS filters is to use the method of steepest descent to find a coefficient vector w_{n} which minimizes the cost function J(n) = E[e_{n}^{2}].

The LMS algorithm is a linear adaptive filtering algorithm consisting of two basic processes:

*A filter process which involves*

- Computing the output of the linear filter in response to an input

- Generating an estimation error by comparing this output with a desired response

*An adaptive process which involves the automatic adjustment of the parameters of filters in accordance with the estimation error*

The combination of these two processes constitutes a feedback loop. First, there is the transversal filter around which the LMS algorithm is built; this component does the filtering.

Second, there is a mechanism for updating the filter taps, adapting to changes in the surroundings after each iteration.

LMS Equation

The LMS algorithm is the same as the steepest descent method, except that the gradient is not calculated exactly; an approximation is used.

The main equation for updating the taps is

w_{k}[n+1] = w_{k}[n] + μ u[n-k] e[n]

The parameter μ plays a very important role in the LMS algorithm. It can be varied with time, but usually a constant μ (the "convergence weight factor") is used, chosen after experimentation for a given application.

where e[n] is the error, expressed as

e[n] = d[n]- y[n]

d[n] is the desired response and y[n] is the filter output

Derivation of LMS Algorithm

Starting from the steepest descent update w_{k}[n+1] = w_{k}[n] - μ ∂J/∂w_{k} with the cost J = E[e^{2}[n]], the LMS algorithm replaces the expectation by the instantaneous value e^{2}[n]. Since e[n] = d[n] - ∑_{k} w_{k}[n]u[n-k], the derivative is ∂e^{2}[n]/∂w_{k} = -2e[n]u[n-k], which gives w_{k}[n+1] = w_{k}[n] + 2μe[n]u[n-k]. Absorbing the factor of 2 into μ yields the update equation above.

Implementation of LMS Algorithm

Three distinct steps are required in each iteration, in the following order:

- The FIR filter output y(n) is calculated as

y[n] = ∑_{k=0}^{N-1} w_{k}[n] u[n-k]

- Error is estimated using

e[n]= d[n] - y[n]

- The tap weights of the FIR filter are updated for the next iteration using

w_{k}[n+1] = w_{k}[n] + μ u[n-k] e[n]
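Putting the three steps together, here is a minimal Python sketch of one LMS pass; the 4-tap target system and the parameter values are illustrative:

```python
import numpy as np

def lms(u, d, N=4, mu=0.05):
    """One pass of the LMS algorithm following the three steps above:
    filter output, error estimate, tap update."""
    w = np.zeros(N)
    for n in range(N - 1, len(u)):
        un = u[n - N + 1:n + 1][::-1]   # [u(n), u(n-1), ..., u(n-N+1)]
        y = w @ un                      # step 1: FIR filter output
        e = d[n] - y                    # step 2: error estimate
        w = w + mu * e * un             # step 3: tap update
    return w

rng = np.random.default_rng(4)
u = rng.standard_normal(5000)
d = np.convolve(u, [0.3, -0.1, 0.2, 0.05])[:len(u)]  # hypothetical target system
print(lms(u, d))  # -> approximately [0.3, -0.1, 0.2, 0.05]
```

Each iteration is just a dot product and a scaled vector addition over the N taps, which is the computational simplicity discussed in the next section.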

Simplicity of LMS

LMS is the simplest adaptive algorithm and the easiest to implement compared with other adaptive algorithms because of its computational simplicity. Each iteration of the LMS algorithm requires 2N additions and 2N+1 multiplications (N for calculating the output y(n), one for 2μe(n), and another N for the scalar-by-vector multiplication).

Advantages of LMS Algorithm

- It was the first

- It is very simple

- In practice it works well (though it sometimes converges slowly)

- It requires relatively little computation

- It updates the tap weights every sample, so it continually adapts the filter

- It tracks slow changes in the signal statistics well

Tradeoffs

Large μ: fast convergence, fast adaptivity.

Small μ: accurate w with less misadjustment error, and better stability.