# Baseband Interference Canceller Based on LMS Algorithms


The chief factor limiting our system performance is interference. As discussed in Chapter 2, an appropriate interference cancellation scheme can effectively increase the efficiency of any system.

We now discuss the first interference cancellation scheme for our system, cross-polar interference cancellation. In this chapter we pay attention only to the performance of the canceller, not to the results from the main system.

## 4.1.1 CROSS-POLAR INTERFERENCE CANCELLER (XPIC)

We consider the transmission of two independent, orthogonally polarized M-ary QAM signals with the same bandwidth and carrier frequency. The baseband signal present at the receiver can be expressed as

s_i(t)=\sum_{m=0}^{\infty}a_i(m)\,h(t-mT),\quad i=V,H \qquad (4.1)

where a_i(m) denotes the complex-valued information symbol stream and h(t) is the complex low-pass equivalent of the channel impulse response. The complex-valued symbols are given by

a_i(m)=b_i(m)+j\,c_i(m),\quad i=V,H \qquad (4.2)

where b_i(m) and c_i(m) take on elements from the QAM constellation set. They are identically distributed and take on the specified values with equal probability.

It is assumed that the data sequences are synchronized and the carrier signals are coherent. As previously discussed, the channel is assumed to be of the slowly time-varying, nondispersive type that takes two independent streams of data and distorts the transmission by introducing a fraction of one stream onto the other.

The dual-channel matrix is characterized by

\mathbf{H}=\begin{bmatrix}h_{VV} & h_{VH}\\ h_{HV} & h_{HH}\end{bmatrix} \qquad (4.3)

in which the h_{ij} are complex-valued quantities representing the amount of cross-polar interference, the channel attenuation and, in some cases, the phase shift. These values are time varying, but slowly in comparison to the symbol rate of transmission.

The received low-pass equivalent signals can be expressed as

r_V(t)=h_{VV}\,s_V(t)+h_{VH}\,s_H(t)+n_1(t) \qquad (4.4)

r_H(t)=h_{HV}\,s_V(t)+h_{HH}\,s_H(t)+n_2(t)

where the n_i(t) are independent, zero-mean, white Gaussian processes. The sampled signals are denoted by x_i(k), i = V, H, and are expressed as

x_V(k)=h_{VV}\,a_V(k)+h_{VH}\,a_H(k)+n_1(k) \qquad (4.5)

x_H(k)=h_{HV}\,a_V(k)+h_{HH}\,a_H(k)+n_2(k)
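As a concrete illustration, the dual-polarization channel model above can be simulated in a few lines of Python. The channel coefficients, noise level, and 4-QAM alphabet below are arbitrary illustrative choices, not values from the system under study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical channel matrix: diagonal entries are the co-polar gains,
# off-diagonal entries the cross-polar leakage (values chosen for illustration).
H = np.array([[1.0 + 0.0j, 0.15 + 0.05j],
              [0.12 - 0.04j, 1.0 + 0.0j]])

def qam4_symbols(n, rng):
    """Draw n unit-power 4-QAM symbols (real and imaginary parts in {-1, +1})."""
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

n = 1000
a = np.vstack([qam4_symbols(n, rng), qam4_symbols(n, rng)])  # rows: V and H streams

# Received samples: x_i(k) = sum_j h_ij a_j(k) + n_i(k)
noise_std = 0.05
noise = noise_std * (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
x = H @ a + noise
```

Each row of `x` then contains a co-polar signal contaminated by a scaled copy of the orthogonally polarized stream plus white Gaussian noise, exactly the situation the canceller must undo.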

The adaptive canceller that attempts to remove the cross-polar interference is characterized by

\mathbf{W}=\begin{bmatrix}w_{VV} & w_{VH}\\ w_{HV} & w_{HH}\end{bmatrix} \qquad (4.6)

where the w_{ij} are the canceller coefficients.

This adaptive device, part of the QAM detector circuit, is studied here for the LMS adaptive method. Figure 4.1 shows the LMS canceller structure. The samples at the matched-filter output of each receiver are inputs to a bank of adaptive filters formed by a set of multiplier-accumulators (MACs). To update the coefficients of the canceller, each MAC contains storage elements for the result of multiplying the signal-sample detection error by the complex conjugate of the corresponding received signal sample at the matched-filter output. The calculated coefficients are multiplied by the signal samples at the receiver input and are used to cancel the cross-polar interference. The detectors shown in Figure 4.1 are QAM demodulators.

The canceller structure consists of a simple adaptive filter, which minimizes the least mean square error in symbol estimation. In our case, the signal-sample estimation error is given by

e_i(k)=a_i(k)-\hat{a}_i(k),\quad i=V,H \qquad (4.7)

These signals can easily be identified in the figure. The canceller coefficients are selected by solving

\min_{\mathbf{W}}\,E\left\{|e_V(k)|^2+|e_H(k)|^2\right\}

The minimization is carried out using the steepest-descent algorithm. The process is recursive:

w_{ij}(k+1)=w_{ij}(k)+\mu\,e_i(k)\,x_j^{*}(k),\quad i=V,H,\ j=V,H \qquad (4.8)

where \mu is the step size of the algorithm.

The signal samples at the canceller output for each channel can be expressed as

\hat{a}_V(k)=w_{VV}\,x_V(k)+w_{VH}\,x_H(k) \qquad (4.9)

\hat{a}_H(k)=w_{HV}\,x_V(k)+w_{HH}\,x_H(k)
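The canceller output and coefficient update above can be sketched as a short training-aided loop, using known symbols in place of detector decisions. The channel matrix, step size, and symbol count below are arbitrary test values:

```python
import numpy as np

def lms_xpic(x, a_ref, mu=0.05):
    """One-tap 2x2 LMS cross-polar canceller (training-aided sketch).

    x     : (2, N) received samples [x_V; x_H]
    a_ref : (2, N) reference symbols, standing in for detector decisions
    Returns the coefficient matrix W and the symbol estimates a_hat.
    """
    W = np.eye(2, dtype=complex)                 # start with no cancellation
    a_hat = np.zeros(x.shape, dtype=complex)
    for k in range(x.shape[1]):
        a_hat[:, k] = W @ x[:, k]                # canceller output
        e = a_ref[:, k] - a_hat[:, k]            # estimation error
        W += mu * np.outer(e, np.conj(x[:, k]))  # LMS coefficient update
    return W, a_hat

# Demo on a hypothetical noiseless channel with cross-polar leakage
rng = np.random.default_rng(1)
H = np.array([[1.0, 0.2 + 0.1j],
              [0.15 - 0.05j, 1.0]])
n = 3000
a = (rng.choice([-1.0, 1.0], (2, n)) + 1j * rng.choice([-1.0, 1.0], (2, n))) / np.sqrt(2)
W, a_hat = lms_xpic(H @ a, a)
```

After convergence the coefficient matrix W approximates the inverse of the channel matrix, so the residual estimation error over the last few hundred symbols is small.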

As discussed previously, the algorithm searches the objective J = E\{|e(n)|^2\} (the performance index), an L-dimensional, bowl-shaped hyperparaboloid surface, for the canceller coefficients w_{ij} that produce J_min. One of the attractions of mean-square-error minimization is that such a surface has a single minimum. The iteration begins with an initial estimate, or guess, of the weighting vector (some starting point on the J surface), and the filter is successively updated using the iterative formula (4.8), which forces the filter towards the optimum solution.

The algorithm has a natural tendency to take large steps when the current weights are far from the solution and progressively smaller steps as they approach the optimum. Provided the step size \mu is not too large, the process eventually converges to the minimum.

Operation of the LMS algorithm requires selection of the step size \mu and of the initial canceller coefficients. Because of the unimodal nature of J, the initial values of the coefficients do not affect the ultimate convergence.

As far as the step size is concerned, a large \mu increases the speed of convergence but also increases the noise generated by the algorithm itself, which may make the system unstable. A smaller step size, on the other hand, ensures stability and low noise but takes much longer to converge, so there is always a trade-off in choosing its value. The LMS algorithm converges in the mean if

0 < \mu < \frac{2}{\lambda_{max}} \qquad (4.10)

where \lambda_{max} is the largest eigenvalue of the input correlation matrix \mathbf{R}. Since \lambda_{max} is generally unknown, and there is no need to compute \mathbf{R} explicitly, the trace of \mathbf{R} is sometimes taken as a conservative estimate, so that \mu is bounded by

0 < \mu < \frac{2}{r(0)} \qquad (4.11)

where r(0) is the power of the input signal. In our case the power is normalized, and the step size should therefore be limited to the range 0 to 1.
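The two bounds can be checked numerically. The sketch below estimates the tap-input correlation matrix from data (the signal statistics and filter length are illustrative assumptions) and confirms that the trace-based bound is the more conservative of the two:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)     # stand-in for matched-filter output samples
M = 2                               # number of taps (illustrative)

# Estimate the tap-input correlation matrix R from the data
X = np.lib.stride_tricks.sliding_window_view(x, M)
R = (X.T @ X) / X.shape[0]

mu_eig = 2.0 / np.max(np.linalg.eigvalsh(R))  # exact bound: needs lambda_max
mu_tr = 2.0 / np.trace(R)                     # conservative bound via tr(R)
```

Since tr(R) is at least as large as the biggest eigenvalue of R, the trace-based bound never exceeds the eigenvalue bound, which is why it is safe to use when the eigenvalues are unknown.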

## 4.2 PERFORMANCE OF LMS ALGORITHM IN CROSS-POLAR INTERFERENCE CANCELLER

The first task is to select an appropriate step size for the canceller, which is expected to work over various noise levels and interference values. The algorithm is therefore tested for a range of SNR and XPI (cross-polar interference) values with different step sizes, and the optimum value is selected from the resulting plots. There are two factors to be traded off: the error-signal fluctuation (algorithm noise) and the rate of convergence. As discussed previously, a higher step size means more algorithm noise and a greater chance of instability, but faster convergence.

Figure 4.1 Block diagram of the 16-coefficient LMS canceller

## 4.3 BASEBAND INTERFERENCE CANCELLER - BASED ON RLS ALGORITHM

In general, the RLS algorithm can be used to solve any problem that can be solved by adaptive filters. For example, suppose that a signal d(n) is transmitted over an echoey, noisy channel that causes it to be received as

x(n)=\sum_{k=0}^q b_n(k)\,d(n-k)+v(n)

where v(n) represents additive noise. We attempt to recover the desired signal d(n) with an FIR filter \mathbf{w} of order p:

\hat{d}(n)=\sum_{k=0}^{p} w_n(k)\,x(n-k)=\mathbf{w}_n^{T}\,\mathbf{x}(n)

where \mathbf{x}(n)=[x(n)\quad x(n-1)\quad\ldots\quad x(n-p)]^T is the vector containing the p+1 most recent samples of x(n). Our goal is to estimate the filter parameters \mathbf{w}; at each time n we refer to the new least-squares estimate as \mathbf{w}_n. As time evolves, we would like to avoid completely redoing the least-squares computation and instead express the new estimate \mathbf{w}_{n+1} in terms of \mathbf{w}_n.

The benefit of the RLS algorithm is that there is no need to invert matrices, thereby saving computational power. Another advantage is that it provides intuition behind such results as the Kalman filter.

The idea behind RLS filters is to minimize a cost function C by appropriately selecting the filter coefficients \mathbf{w}_n, updating the filter as new data arrives. The error signal e(n) and desired signal d(n) are defined in the negative-feedback diagram below:


Figure 4.2 Block Diagram of RLS Error Estimation

The error implicitly depends on the filter coefficients through the estimate \hat{d}(n):

e(n)=d(n)-\hat{d}(n)

The weighted least-squares cost function to be minimized, with 0 < \lambda \le 1 an exponential forgetting factor, is

C(\mathbf{w}_{n})=\sum_{i=0}^{n}\lambda^{n-i}e^{2}(i)

Setting its gradient with respect to \mathbf{w}_{n} to zero yields the normal equations

\mathbf{R}_{x}(n)\,\mathbf{w}_{n}=\mathbf{r}_{dx}(n)

where \mathbf{R}_{x}(n)=\sum_{i=0}^{n}\lambda^{n-i}\mathbf{x}(i)\mathbf{x}^{T}(i) is the weighted (deterministic) autocorrelation matrix of x(n) and \mathbf{r}_{dx}(n)=\sum_{i=0}^{n}\lambda^{n-i}d(i)\mathbf{x}(i) is the weighted cross-correlation between d(n) and x(n). The smaller \lambda is, the smaller the contribution of previous samples. This makes the filter more sensitive to recent samples, which means more fluctuation in the filter coefficients. The case \lambda = 1 is referred to as the growing-window RLS algorithm.

The discussion resulted in a single equation to determine a coefficient vector which minimizes the cost function. In this section we want to derive a recursive solution of the form

\mathbf{w}_{n}=\mathbf{w}_{n-1}+\Delta\mathbf{w}_{n-1}

where \Delta\mathbf{w}_{n-1} is a correction factor at time n-1. From its definition, the cross-correlation vector satisfies the recursion

\mathbf{r}_{dx}(n)=\lambda\,\mathbf{r}_{dx}(n-1)+d(n)\,\mathbf{x}(n)

Similarly, \mathbf{R}_{x}(n)=\lambda\mathbf{R}_{x}(n-1)+\mathbf{x}(n)\mathbf{x}^{T}(n), and applying the matrix inversion lemma to this recursion gives, for \mathbf{P}(n)\equiv\mathbf{R}_{x}^{-1}(n),

\mathbf{P}(n)=\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\,\mathbf{x}^{T}(n)\,\lambda^{-1}\mathbf{P}(n-1)

where the gain vector g(n) is

\mathbf{g}(n)=\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)\left\{1+\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)\right\}^{-1}

=\mathbf{P}(n-1)\mathbf{x}(n)\left\{\lambda+\mathbf{x}^{T}(n)\mathbf{P}(n-1)\mathbf{x}(n)\right\}^{-1}

With the recursive definition of \mathbf{P}(n), the desired form follows:

\mathbf{g}(n)=\mathbf{P}(n)\,\mathbf{x}(n)

Now we are ready to complete the recursion. As discussed

\mathbf{w}_{n}=\mathbf{P}(n)\,\mathbf{r}_{dx}(n)=\lambda\,\mathbf{P}(n)\,\mathbf{r}_{dx}(n-1)+d(n)\,\mathbf{P}(n)\,\mathbf{x}(n)

The second step follows from the recursive definition of \mathbf{r}_{dx}(n). Next we incorporate the recursive definition of \mathbf{P}(n) together with the alternate form of \mathbf{g}(n) and get

\mathbf{w}_{n}=\lambda\left[\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)\right]\mathbf{r}_{dx}(n-1)+d(n)\mathbf{g}(n)

With \mathbf{w}_{n-1}=\mathbf{P}(n-1)\,\mathbf{r}_{dx}(n-1) we arrive at the update equation

\mathbf{w}_{n}=\mathbf{w}_{n-1}+\mathbf{g}(n)\left[d(n)-\mathbf{x}^{T}(n)\mathbf{w}_{n-1}\right]=\mathbf{w}_{n-1}+\mathbf{g}(n)\,\alpha(n)

where \alpha(n)=d(n)-\mathbf{x}^{T}(n)\mathbf{w}_{n-1} is the a priori error. Compare this with the a posteriori error, the error calculated after the filter is updated:

e(n)=d(n)-\mathbf{x}^{T}(n)\mathbf{w}_n

That means we found the correction factor

\Delta\mathbf{w}_{n-1}=\mathbf{g}(n)\alpha(n)

This intuitively satisfying result indicates that the correction factor is directly proportional to both the error and the gain vector, whose sensitivity is controlled through the forgetting factor \lambda.

The RLS algorithm for a p-th order RLS filter can be summarized as

Parameters:

p = filter order

λ = forgetting factor

δ = value to initialize \mathbf{P}(0)

Initialization:

\mathbf{w}(0)=\mathbf{0}

\mathbf{P}(0)=\delta^{-1}\mathbf{I}, where \mathbf{I} is the (p + 1)-by-(p + 1) identity matrix

Computation:

For n=0,1,2,\dots

\mathbf{x}(n) = \left[ \begin{matrix} x(n)\\ x(n-1)\\ \vdots\\ x(n-p) \end{matrix} \right]

\alpha(n) = d(n)-\mathbf{w}(n-1)^{T}\mathbf{x}(n)

\mathbf{g}(n)=\mathbf{P}(n-1)\mathbf{x}(n)\left\{\lambda+\mathbf{x}^{T}(n)\mathbf{P}(n-1)\mathbf{x}(n)\right\}^{-1}

\mathbf{P}(n)=\lambda^{-1}\mathbf{P}(n-1)-\mathbf{g}(n)\mathbf{x}^{T}(n)\lambda^{-1}\mathbf{P}(n-1)

\mathbf{w}(n) = \mathbf{w}(n-1)+\,\alpha(n)\mathbf{g}(n)

Figure 4.3 The RLS Algorithm Illustration
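The summarized recursion translates almost line-for-line into Python. The sketch below applies it to identifying a short FIR channel; the channel taps, signal length, and the noiseless setting are arbitrary illustrative assumptions:

```python
import numpy as np

def rls(x, d, p, lam=0.999, delta=0.01):
    """p-th order RLS filter following the steps summarized above.

    x, d  : input and desired signals (equal-length 1-D arrays)
    lam   : forgetting factor; delta : value used to initialize P(0)
    Returns the final weights w and the a priori errors alpha.
    """
    m = p + 1                                    # number of taps
    w = np.zeros(m)                              # w(0) = 0
    P = np.eye(m) / delta                        # P(0) = delta^{-1} I
    alpha = np.zeros(len(x))
    xpad = np.concatenate([np.zeros(p), x])      # zero prehistory for n < p
    for n in range(len(x)):
        xv = xpad[n:n + m][::-1]                 # [x(n), x(n-1), ..., x(n-p)]
        alpha[n] = d[n] - w @ xv                 # a priori error
        g = P @ xv / (lam + xv @ P @ xv)         # gain vector g(n)
        P = (P - np.outer(g, xv) @ P) / lam      # recursive update of P(n)
        w = w + alpha[n] * g                     # coefficient update
    return w, alpha

# Demo: identify a hypothetical 3-tap channel from noiseless data
rng = np.random.default_rng(3)
b_true = np.array([0.8, -0.3, 0.2])
s = rng.standard_normal(2000)
d = np.convolve(s, b_true)[:len(s)]
w, alpha = rls(s, d, p=2)
```

With noiseless data the recursion recovers the channel taps essentially exactly, up to the small bias introduced by the \delta regularization of P(0).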

## 4.4 COMPARISON OF THE RLS AND LMS ALGORITHMS

1. In the LMS algorithm, the correction that is applied in updating the old estimate of the coefficient vector is based on the instantaneous sample value of the input vector and the error signal. On the other hand, in the RLS algorithm the computation of this correction utilizes all the past available information.

2. In the LMS algorithm, the correction applied to the previous estimate consists of the product of three factors: the (scalar) step-size parameter \mu, the error signal e(n-1), and the input vector \mathbf{x}(n-1). In the RLS algorithm, this correction consists of the product of two factors: the a priori estimation error \alpha(n) and the gain vector \mathbf{g}(n). The gain vector itself consists of \mathbf{P}(n), the inverse of the deterministic correlation matrix, multiplied by the input vector \mathbf{x}(n). The major difference between the LMS and RLS algorithms is therefore the presence of \mathbf{P}(n) in the correction term of the RLS algorithm, which has the effect of decorrelating the successive inputs, thereby making the RLS algorithm self-orthogonalizing. Because of this property, the RLS algorithm is essentially independent of the eigenvalue spread of the correlation matrix of the filter input.

3. The LMS algorithm requires approximately 20M iterations to converge in mean square, where M is the filter order. The RLS algorithm, on the other hand, converges in mean square within less than 2M iterations. The rate of convergence of the RLS algorithm is therefore, in general, faster than that of the LMS algorithm by an order of magnitude.

4. Unlike the LMS algorithm, no approximations are made in the derivation of the RLS algorithm. Accordingly, as the number of iterations approaches infinity, the least-squares estimate of the coefficient vector approaches the optimum Wiener value, and correspondingly the mean-square error approaches the minimum value possible. In other words, the RLS algorithm, in theory, exhibits zero misadjustment. The LMS algorithm, on the other hand, always exhibits a nonzero misadjustment; however, this misadjustment may be made arbitrarily small by using a sufficiently small step-size parameter \mu.

5. The superior performance of the RLS algorithm compared to the LMS algorithm, however, is attained at the expense of a large increase in computational complexity. The complexity of an adaptive algorithm for real-time operation is determined by two principal factors: (1) the number of multiplications (with divisions counted as multiplications) per iteration, and (2) the precision required to perform arithmetic operations. The RLS algorithm requires a total of 3M(3 + M )/2 multiplications, which increases as the square of M, the number of filter coefficients. On the other hand, the LMS algorithm requires 2M + 1 multiplications, increasing linearly with M. For example, for M = 31 the RLS algorithm requires 1581 multiplications, whereas the LMS algorithm requires only 63.
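The convergence claim in point 3 is easy to reproduce numerically. The sketch below runs both algorithms on the same noiseless identification problem (the filter length, step size, and forgetting factor are illustrative choices) and compares the early squared errors:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 4                                          # filter length (illustrative)
b_true = rng.standard_normal(M)                # hypothetical system to identify
n = 2000
s = rng.standard_normal(n)
d = np.convolve(s, b_true)[:n]

# Shared tap-input vectors [x(k), x(k-1), ..., x(k-M+1)]
spad = np.concatenate([np.zeros(M - 1), s])
X = np.array([spad[k:k + M][::-1] for k in range(n)])

# LMS: w <- w + mu * e(k) * x(k)
w = np.zeros(M)
mu = 0.05
e_lms = np.zeros(n)
for k in range(n):
    e_lms[k] = d[k] - w @ X[k]
    w += mu * e_lms[k] * X[k]

# RLS: gain-vector update with forgetting factor lam
w = np.zeros(M)
P = np.eye(M) * 100.0                          # P(0) = delta^{-1} I, delta = 0.01
lam = 0.999
e_rls = np.zeros(n)
for k in range(n):
    xv = X[k]
    e_rls[k] = d[k] - w @ xv
    g = P @ xv / (lam + xv @ P @ xv)
    P = (P - np.outer(g, xv) @ P) / lam
    w += e_rls[k] * g
```

On this problem the RLS a priori error collapses within a few multiples of M samples, while the LMS error is still decaying over the same window, in line with the 2M-versus-20M comparison above.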