Regularization in Adaptive Filtering

Regularization plays a fundamental role in adaptive filtering. An adaptive filter that is not properly regularized will perform very poorly. In spite of this, regularization is, in our opinion, underestimated and rarely discussed in the adaptive filtering literature. There are, very likely, many different ways to regularize an adaptive filter. In this paper, we propose one possible way to do it, based on a condition that intuitively makes sense. From this condition, we show how to regularize four important algorithms: the normalized least-mean-square (NLMS), the signed-regressor NLMS (SR-NLMS), the improved proportionate NLMS (IPNLMS), and the SR-IPNLMS.

Index Terms: Adaptive filters, echo cancellation, improved proportionate NLMS (IPNLMS), normalized least-mean-square (NLMS), regularization, signed-regressor NLMS (SR-NLMS), SR-IPNLMS.

I. INTRODUCTION

REGULARIZATION plays a fundamental role in all ill-posed problems, especially when the observation data is noisy, which is usually the case in applications. In adaptive filtering, we always have a linear system of equations (overdetermined or underdetermined) to solve, explicitly or implicitly, so that we face an ill-conditioned or rank-deficient problem [1]. As a result, regularization is an important part of the design of any adaptive filter if we want it to behave properly.

Let us denote by δ the regularization parameter. In many adaptive filters [2], [3], this regularization is chosen as

\delta = \beta \sigma_x^2 \quad (1)

where σ_x^2 = E[x^2(n)] is the variance of the zero-mean input x(n), with E[·] denoting mathematical expectation, and β is a positive constant. In practice, though, β is more a variable that depends on the level of the additive noise: the higher the noise level, the larger the value of β should be. In the rest of this work, we will refer to β as the normalized (with respect to the variance of the input signal) regularization parameter. The regularization proposed in (1) seems to work well in practice (with, of course, a good choice of β), since the misalignment, which is a distance measure between the true impulse response and the one estimated by the adaptive algorithm, decreases smoothly with time and converges to a stable and small value. Without this δ, the misalignment of the adaptive filter may fluctuate considerably and may never converge.

However, (1) was never really justified from a theoretical point of view. Even popular books [4], [5] discuss the regularization problem in a very superficial way and refer to the regularization parameter as a "small positive number," which is not really accurate. Indeed, in our experience, this parameter can vary from very small to very large, depending on the level of the additive noise.

For the normalized least-mean-square (NLMS) algorithm, for example, we often take β = 20, but we also know from experience that if the noise is very high, we should take β much larger than 20. Many questions then arise: where does this value of β come from? Can it be justified? Can we find an optimal β, and in what sense? What about other adaptive filters?

In this paper, we are not interested in a variable regularization parameter as discussed in many publications [6]-[8], although we could pursue that direction. We are mainly interested in a constant regularization that would guarantee a stable behavior of the adaptive filter. As a consequence, with an appropriate δ, we could fairly compare different adaptive algorithms.

There are, very likely, many different ways to regularize an adaptive filter. In this study, we show how to derive a regularization parameter from a condition that intuitively makes sense. We discuss the regularization of four important algorithms: the NLMS [4], [5], the signed-regressor NLMS (SR-NLMS) [2], [9], the improved proportionate NLMS (IPNLMS) [10], which is an improved version of the PNLMS [11], and the SR-IPNLMS [2].

II. SIGNAL MODEL

We have the observed or desired signal

d(n) = \mathbf{h}^T \mathbf{x}(n) + w(n) = y(n) + w(n) \quad (2)

where n is the discrete-time index,

\mathbf{h} = [\,h_0 \;\; h_1 \;\; \cdots \;\; h_{L-1}\,]^T \quad (3)

is the impulse response (of length L) of the system that we need to identify, superscript T denotes the transpose of a vector or a matrix,

\mathbf{x}(n) = [\,x(n) \;\; x(n-1) \;\; \cdots \;\; x(n-L+1)\,]^T \quad (4)

is a vector containing the L most recent samples of the zero-mean input signal x(n), and w(n) is a zero-mean additive noise signal, which is independent of x(n). The signal y(n) is called the echo in the context of echo cancellation.
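As a simple illustration, the following minimal Python sketch generates signals according to this model. It assumes a hypothetical random impulse response and a white Gaussian input; all variable names are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 512                      # impulse response length
N = 10_000                   # number of samples
h = rng.standard_normal(L)   # hypothetical impulse response to identify
x = rng.standard_normal(N)   # zero-mean white Gaussian input x(n)

y = np.convolve(x, h)[:N]    # echo signal y(n) = h^T x(n)

ENR_dB = 30.0                # target echo-to-noise ratio
sigma_w2 = np.var(y) / 10 ** (ENR_dB / 10)
w = np.sqrt(sigma_w2) * rng.standard_normal(N)  # additive noise w(n)

d = y + w                    # observed (desired) signal, Eq. (2)
```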


From (2), we define the echo-to-noise ratio (ENR) [2], which is also the signal-to-noise ratio (SNR), as

\mathrm{ENR} = \frac{\sigma_y^2}{\sigma_w^2} = \frac{\mathbf{h}^T \mathbf{R}_x \mathbf{h}}{\sigma_w^2} \quad (5)

where σ_y^2 = E[y^2(n)] and σ_w^2 = E[w^2(n)] are the variances of y(n) and w(n), respectively, and \mathbf{R}_x = E[\mathbf{x}(n)\mathbf{x}^T(n)] is the correlation matrix of x(n).

Our objective is to estimate or identify h with an adaptive filter

\hat{\mathbf{h}}(n) = [\,\hat{h}_0(n) \;\; \hat{h}_1(n) \;\; \cdots \;\; \hat{h}_{L-1}(n)\,]^T \quad (6)

in such a way that, for a reasonable value of n, the (normalized) misalignment satisfies

\frac{\|\mathbf{h} - \hat{\mathbf{h}}(n)\|_2}{\|\mathbf{h}\|_2} \le \ell \quad (7)

where ℓ is a predetermined small positive number and ‖·‖_2 is the ℓ_2 norm.

III. REGULARIZATION OF THE NLMS ALGORITHM

The classical NLMS algorithm is summarized by the following two expressions [2]-[5]:

e(n) = d(n) - \mathbf{x}^T(n)\hat{\mathbf{h}}(n-1) \quad (8)

\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \frac{\alpha\,\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \delta_{\mathrm{NLMS}}} \quad (9)

where α (0 < α < 2) is the normalized step-size parameter and δ_NLMS is the regularization parameter of the NLMS.
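To make the update concrete, here is a minimal NLMS sketch in Python following (8) and (9). The function name and the default parameter values are ours, chosen only for illustration.

```python
import numpy as np

def nlms(x, d, L, alpha=1.0, delta=20.0):
    """Minimal NLMS sketch following Eqs. (8)-(9).

    x, d  : input and desired signals (1-D arrays of equal length)
    L     : adaptive filter length
    alpha : normalized step size, 0 < alpha < 2
    delta : regularization parameter delta_NLMS (illustrative default)
    """
    h_hat = np.zeros(L)
    e = np.zeros(len(d))
    for n in range(L, len(d)):
        x_n = x[n:n - L:-1]            # regressor [x(n) ... x(n-L+1)]
        e[n] = d[n] - x_n @ h_hat      # a priori error, Eq. (8)
        h_hat += alpha * x_n * e[n] / (x_n @ x_n + delta)  # Eq. (9)
    return h_hat, e
```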

The question now is how to find δ_NLMS.

Let ε(n) = d(n) - x^T(n)ĥ(n) denote the a posteriori error signal, i.e., the error between the desired signal and the estimated signal after the filter update. We should find δ_NLMS in such a way that the expected value of ε^2(n) is equal to the variance of the noise, i.e.,

E[\varepsilon^2(n)] = \sigma_w^2 \quad (10)

This is reasonable if we want to attenuate the effects of the noise in the estimate ĥ(n).

To derive the optimal δ_NLMS according to (10), we assume in the rest of this paper that L ≫ 1 and that x(n) is stationary. As a result,

\mathbf{x}^T(n)\mathbf{x}(n) \approx L\sigma_x^2 \quad (11)

Substituting the update (9) (with α = 1) into the definition of ε(n) gives ε(n) = δ_NLMS e(n) / [x^T(n)x(n) + δ_NLMS], while in the worst case (at initialization, with ĥ(0) = 0) we have e(n) = y(n) + w(n), so that E[e^2(n)] = σ_y^2 + σ_w^2. Developing (10) and using (11), we easily derive the quadratic equation

\mathrm{ENR}\cdot\delta_{\mathrm{NLMS}}^2 - 2L\sigma_x^2\,\delta_{\mathrm{NLMS}} - L^2\sigma_x^4 = 0 \quad (12)

from which we deduce the obvious (positive) solution

\delta_{\mathrm{NLMS}} = \frac{1 + \sqrt{1 + \mathrm{ENR}}}{\mathrm{ENR}}\, L\sigma_x^2 = \beta_{\mathrm{NLMS}}\,\sigma_x^2 \quad (13)

where

\beta_{\mathrm{NLMS}} = \frac{L\left(1 + \sqrt{1 + \mathrm{ENR}}\right)}{\mathrm{ENR}} \quad (14)

is the normalized regularization parameter of the NLMS.

We see that δ_NLMS depends on three elements: the length of the adaptive filter, the variance σ_x^2 of the input signal, and the ENR. In both network and acoustic echo cancellation, the first two elements (L and σ_x^2) are known, while the ENR is often roughly known or can be estimated. Therefore, it is not hard to find a good value for δ_NLMS in these applications. For example, we often take in simulations L = 512 and ENR = 30 dB. With these values, we find from (14) that β_NLMS ≈ 16.7, which is very close to the value β = 20 discussed in the introduction. This choice is thus clearly justified here, as well as in our simulations.
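A quick numeric check of (14) in Python (the function name is ours):

```python
import numpy as np

def beta_nlms(L, enr_db):
    """Normalized NLMS regularization parameter, Eq. (14)."""
    enr = 10 ** (enr_db / 10)
    return L * (1 + np.sqrt(1 + enr)) / enr

print(beta_nlms(512, 30))  # ~16.7, close to the usual ad hoc beta = 20
print(beta_nlms(512, 0))   # ~1236, i.e., much heavier regularization at low ENR
```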

Furthermore, we have

\lim_{\mathrm{ENR} \to 0} \beta_{\mathrm{NLMS}} = \infty \quad (15)

\lim_{\mathrm{ENR} \to \infty} \beta_{\mathrm{NLMS}} = 0 \quad (16)

which is what we desire.

It is important to check the evolution of β_NLMS as a function of the ENR. In Fig. 1, the normalized regularization parameter (14) is plotted for L = 512 and different values of the ENR (between 0 and 50 dB). As expected, the importance of β_NLMS becomes more apparent for low ENRs. Also, as can be noticed from the detailed view in Fig. 2, the usual ad hoc choice β = 20 corresponds to a value of the ENR close to 30 dB, which is also a common choice in many simulation scenarios related to echo cancellation.

Fig. 1. Normalized regularization parameter β_NLMS as a function of the ENR, with L = 512. The ENR varies from 0 to 50 dB.

Fig. 2. Normalized regularization parameter β_NLMS as a function of the ENR, with L = 512. The ENR varies from 20 to 50 dB.

IV. REGULARIZATION OF THE SR-NLMS ALGORITHM

The equations of the SR-NLMS algorithm are [2], [9]

e(n) = d(n) - \mathbf{x}^T(n)\hat{\mathbf{h}}(n-1) \quad (17)

\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \frac{\alpha\,\mathrm{sgn}[\mathbf{x}(n)]\,e(n)}{\mathbf{x}^T(n)\,\mathrm{sgn}[\mathbf{x}(n)] + \delta_{\mathrm{SR\text{-}NLMS}}} \quad (18)

where sgn[x(n)] is the vector whose elements are the signs of the components of x(n) and δ_SR-NLMS is the regularization parameter of the SR-NLMS. This algorithm is very interesting from a practical point of view because its performance is equivalent to that of the NLMS while requiring fewer multiplications at each iteration, as can be noticed in (18).
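A minimal sketch of the SR-NLMS update, mirroring the NLMS sketch above (names and defaults are ours):

```python
import numpy as np

def sr_nlms(x, d, L, alpha=1.0, delta=170.0):
    """Minimal SR-NLMS sketch following Eqs. (17)-(18)."""
    h_hat = np.zeros(L)
    e = np.zeros(len(d))
    for n in range(L, len(d)):
        x_n = x[n:n - L:-1]            # regressor [x(n) ... x(n-L+1)]
        s_n = np.sign(x_n)             # sgn[x(n)], componentwise
        e[n] = d[n] - x_n @ h_hat      # a priori error, Eq. (17)
        h_hat += alpha * s_n * e[n] / (x_n @ s_n + delta)  # Eq. (18)
    return h_hat, e
```

Note that the update direction s_n requires no multiplications by the regressor, which is the practical appeal mentioned above.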

For L ≫ 1 and a stationary, zero-mean Gaussian signal x(n), we have x^T(n) sgn[x(n)] ≈ L E[|x(n)|] = Lσ_x √(2/π). Following the same derivation as in Section III, we obtain

\delta_{\mathrm{SR\text{-}NLMS}} = \frac{1 + \sqrt{1 + \mathrm{ENR}}}{\mathrm{ENR}}\, L\sigma_x\sqrt{\frac{2}{\pi}} = \beta_{\mathrm{SR\text{-}NLMS}}\,\sigma_x^2 \quad (19)

where

\beta_{\mathrm{SR\text{-}NLMS}} = \beta_x\,\beta_{\mathrm{NLMS}}, \quad \text{with } \beta_x = \frac{1}{\sigma_x}\sqrt{\frac{2}{\pi}}, \quad (20)

is the normalized regularization parameter of the SR-NLMS.

V. REGULARIZATION OF THE IPNLMS ALGORITHM

When the target impulse response is sparse, it is possible to take advantage of this sparsity to improve the performance of the classical adaptive filters. Duttweiler was one of the first researchers to come up with an elegant idea more than a decade ago by proposing the PNLMS algorithm [3], [11]. The idea behind the PNLMS is to update each coefficient of the filter independently of the others by adjusting the adaptation step size in proportion to the magnitude of the estimated filter coefficient. It redistributes the adaptation gains among all coefficients and emphasizes the large ones (in magnitude) in order to speed up their convergence and, consequently, to achieve a fast initial convergence rate. The IPNLMS [3], [10] is an improved version of the PNLMS and works very well even if the impulse response is not sparse, which is not the case for the PNLMS. The IPNLMS expressions are


e(n) = d(n) - \mathbf{x}^T(n)\hat{\mathbf{h}}(n-1) \quad (21)

\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \frac{\alpha\,\mathbf{G}(n-1)\,\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{G}(n-1)\,\mathbf{x}(n) + \delta_{\mathrm{IPNLMS}}} \quad (22)

where δ_IPNLMS is the regularization parameter of the IPNLMS and

\mathbf{G}(n-1) = \mathrm{diag}\left[\,g_0(n-1) \;\; g_1(n-1) \;\; \cdots \;\; g_{L-1}(n-1)\,\right]

is an L × L diagonal matrix whose elements, as defined in [10], are

g_l(n-1) = \frac{1-k}{2L} + \frac{(1+k)\,|\hat{h}_l(n-1)|}{2\,\|\hat{\mathbf{h}}(n-1)\|_1 + \epsilon}, \quad 0 \le l \le L-1,

where k (-1 ≤ k < 1) controls the amount of proportionality and ε is a small positive constant that avoids division by zero at initialization.
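A minimal sketch of the IPNLMS update with the proportionate gains above (names and defaults are ours; the default delta roughly corresponds to β_IPNLMS at 30 dB ENR for unit-variance input, see (24) below):

```python
import numpy as np

def ipnlms(x, d, L, alpha=1.0, k=0.0, delta=0.03, eps=1e-6):
    """Minimal IPNLMS sketch following Eqs. (21)-(22)."""
    h_hat = np.zeros(L)
    e = np.zeros(len(d))
    for n in range(L, len(d)):
        x_n = x[n:n - L:-1]                  # regressor [x(n) ... x(n-L+1)]
        # proportionate gains: uniform term + magnitude-proportional term
        g = (1 - k) / (2 * L) + (1 + k) * np.abs(h_hat) \
            / (2 * np.sum(np.abs(h_hat)) + eps)
        e[n] = d[n] - x_n @ h_hat            # a priori error, Eq. (21)
        gx = g * x_n                         # G(n-1) x(n), with G diagonal
        h_hat += alpha * gx * e[n] / (x_n @ gx + delta)  # Eq. (22)
    return h_hat, e
```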

For L ≫ 1 and a stationary signal x(n), and since the diagonal elements of G(n-1) sum to one, we have x^T(n) G(n-1) x(n) ≈ σ_x^2, so that

\delta_{\mathrm{IPNLMS}} = \frac{1 + \sqrt{1 + \mathrm{ENR}}}{\mathrm{ENR}}\,\sigma_x^2 = \beta_{\mathrm{IPNLMS}}\,\sigma_x^2 \quad (23)

where

\beta_{\mathrm{IPNLMS}} = \frac{1 + \sqrt{1 + \mathrm{ENR}}}{\mathrm{ENR}} = \frac{\beta_{\mathrm{NLMS}}}{L} \quad (24)

is the normalized regularization parameter of the IPNLMS.

VI. REGULARIZATION OF THE SR-IPNLMS ALGORITHM

The SR-PNLMS was proposed in [2]. The extension of the SR principle to the IPNLMS is straightforward. Therefore, the SR-IPNLMS is summarized by the following two equations:

e(n) = d(n) - \mathbf{x}^T(n)\hat{\mathbf{h}}(n-1) \quad (25)

\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \frac{\alpha\,\mathbf{G}(n-1)\,\mathrm{sgn}[\mathbf{x}(n)]\,e(n)}{\mathbf{x}^T(n)\,\mathbf{G}(n-1)\,\mathrm{sgn}[\mathbf{x}(n)] + \delta_{\mathrm{SR\text{-}IPNLMS}}} \quad (26)

where δ_SR-IPNLMS is the regularization parameter of the SR-IPNLMS and G(n-1) is defined in the previous section.

For L ≫ 1 and a stationary, zero-mean Gaussian signal x(n), we have x^T(n) G(n-1) sgn[x(n)] ≈ σ_x √(2/π), so that

\delta_{\mathrm{SR\text{-}IPNLMS}} = \frac{1 + \sqrt{1 + \mathrm{ENR}}}{\mathrm{ENR}}\,\sigma_x\sqrt{\frac{2}{\pi}} = \beta_{\mathrm{SR\text{-}IPNLMS}}\,\sigma_x^2 \quad (27)

where

\beta_{\mathrm{SR\text{-}IPNLMS}} = \beta_x\,\beta_{\mathrm{IPNLMS}} \quad (28)

is the normalized regularization parameter of the SR-IPNLMS.
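Pulling the four results together, a small sketch computing the "optimal" normalized regularization parameters of (14), (20), (24), and (28) for a given ENR. It assumes a unit-variance input by default, so that β_x = √(2/π); the function name is ours.

```python
import numpy as np

def betas(L, enr_db, sigma_x=1.0):
    """'Optimal' normalized regularization, Eqs. (14), (20), (24), (28)."""
    enr = 10 ** (enr_db / 10)
    b_nlms = L * (1 + np.sqrt(1 + enr)) / enr      # Eq. (14)
    b_x = np.sqrt(2 / np.pi) / sigma_x             # sign-regressor factor
    return {
        "NLMS": b_nlms,                            # Eq. (14)
        "SR-NLMS": b_x * b_nlms,                   # Eq. (20)
        "IPNLMS": b_nlms / L,                      # Eq. (24)
        "SR-IPNLMS": b_x * b_nlms / L,             # Eq. (28)
    }

print(betas(512, 10))  # SR-NLMS entry ~176, close to beta = 170 cited below
```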

VII. SIMULATIONS

Simulations were performed in the context of acoustic echo cancellation. This application is basically a system identification problem [4], where an adaptive filter is used to identify an unknown system, i.e., the acoustic echo path between the loudspeaker and the microphone. In this context, the level of the background noise (i.e., the noise that corrupts the microphone signal) can be high. As a result, low ENR values can be expected and, consequently, the importance of the regularization parameter becomes more apparent.

Fig. 3. Acoustic impulse response used in simulations.

The measured acoustic impulse response used in simulations is depicted in Fig. 3. It has 512 coefficients, and the same length is used for the adaptive filter (i.e., L = 512); the sampling rate is 8 kHz. The far-end (input) signal x(n) is either a white Gaussian noise or a speech sequence. An independent white Gaussian noise w(n) is added to the echo signal with different values of the ENR. Only the single-talk case is considered, i.e., the near-end talker is absent. In order to evaluate the tracking capabilities of the algorithms, an echo path change scenario is simulated by shifting the impulse response to the right by 12 samples. The performance is evaluated in terms of the normalized misalignment (in dB), defined as

20\log_{10}\frac{\|\mathbf{h} - \hat{\mathbf{h}}(n)\|_2}{\|\mathbf{h}\|_2} \quad (29)

and the results are averaged over 20 independent trials.
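The performance measure (29) is straightforward to compute; a minimal sketch:

```python
import numpy as np

def misalignment_db(h, h_hat):
    """Normalized misalignment in dB, Eq. (29)."""
    return 20 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```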

In order to outline the influence and the importance of the regularization parameter, the normalized step-size parameter of the adaptive algorithms is set to α = 1 for most of the experiments (except when a speech sequence is used as input). In this way, we provide the fastest convergence rate for the adaptive filters, so that the difference between the algorithms (in terms of the misalignment level) is influenced only by the regularization parameter.

In the first set of experiments, the performance of the NLMS algorithm is evaluated. Fig. 4 presents the misalignment of this algorithm using different values of the normalized regularization constant β [see (1)], as compared to the "optimal" normalized regularization β_NLMS given in (14).

The ENR is set to 30 dB and the input signal is white and Gaussian. According to this figure, it is clear that a lower misalignment level is achieved for a higher normalized regularization constant, but with a slower convergence rate and tracking. Also, it can be noticed that the performance obtained using the "optimal" normalized regularization is similar to that of the "classical" β = 20, and the convergence rate or tracking is not affected as compared to the case where there is little regularization. The same experiment is repeated in Fig. 5, but using a lower value of the ENR, i.e., 0 dB. It is clear that the importance of the "optimal" regularization becomes more apparent. In order to match the performance obtained with β_NLMS, the normalized regularization constant needs to be further increased (i.e., β = 1200). All these results are consistent with Fig. 1, which provides the values of β_NLMS as a function of the ENR.

Fig. 4. Misalignment of the NLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, L = 512, and ENR = 30 dB.

Commonly, the SR-NLMS algorithm uses a regularization similar to that of the NLMS algorithm [see (1)]. However, as was proved in Section IV, the regularization parameters of these two algorithms differ by the factor β_x, i.e., β_SR-NLMS = β_x β_NLMS. Fig. 6 presents the misalignment of the SR-NLMS algorithm with different values of β from (1), as compared to the "optimal" normalized regularization β_SR-NLMS given in (20). The input signal is white and Gaussian, and ENR = 10 dB. It can be noticed that the "classical" normalized regularization β = 20 is not appropriate in this case. The SR-NLMS algorithm with β_SR-NLMS (which is close to the value β = 170) performs much better in terms of both fast convergence/tracking and misalignment. However, for lower values of the ENR, the normalized regularization constant needs to be further increased. The experiment reported in Fig. 7 is performed with ENR = 0 dB. Again, the SR-NLMS algorithm with β_SR-NLMS (which is now close to the value β = 1000) gives the best performance.

Fig. 5. Misalignment of the NLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, L = 512, and ENR = 0 dB.

Fig. 6. Misalignment of the SR-NLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, L = 512, and ENR = 10 dB.

The IPNLMS algorithm is very useful when we need to identify sparse impulse responses, which is often the case in network and acoustic echo cancellation. In [10], it was intuitively suggested that the regularization parameter of this algorithm should be taken as δ_IPNLMS = δ_NLMS (1 - k)/(2L). However, as was proved in Section V, the regularization of the IPNLMS algorithm does not depend on the parameter k (which controls the amount of proportionality in the algorithm).

The "optimal" regularization of the IPNLMS algorithm is given in (23) and it is based on the parameter from (24). In fact, it is equivalent to the regularization of the NLMS up to the scaling factor L, i.e, βIPNLMS=βNLMS/L.

The next set of experiments evaluates the performance of the IPNLMS algorithm. The proportionality parameter is set to k=0.

Fig. 7. Misalignment of the SR-NLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, L = 512, and ENR = 0 dB.

Fig. 8. Misalignment of the IPNLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, k = 0, L = 512, and ENR = 10 dB.

The misalignment of this algorithm is first evaluated using the "classical" normalized regularization constant β = 20/(2L), as compared to the "optimal" normalized regularization β_IPNLMS. The input signal is white and Gaussian, and ENR = 30 dB. It can be noticed that the performance of the algorithms is very similar. However, this is not the case for lower ENRs. Indeed, the previous experiment is repeated in Fig. 8, but with ENR = 10 dB. In this case, a much higher value of the normalized regularization constant is required [i.e., β = 400/(2L)] in order to match the performance obtained using β_IPNLMS. This fact is also supported by Fig. 9, where ENR = 0 dB, so that the normalized regularization constant needs to be further increased [up to β = 2400/(2L)] in order for the IPNLMS to perform similarly to the case where the "optimal" choice is used.

Fig. 9. Misalignment of the IPNLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, k = 0, L = 512, and ENR = 0 dB.

Finally, the performance of the SR-IPNLMS algorithm is evaluated. Usually, the regularization of this algorithm is chosen identical to that of the IPNLMS. However, as was shown in Section VI, the relation between the regularization parameters of the SR-IPNLMS and IPNLMS algorithms is similar to the one between the SR-NLMS and NLMS algorithms, i.e., β_SR-IPNLMS = β_x β_IPNLMS. In Fig. 10, the input signal is white and Gaussian, and ENR = 10 dB.

According to this figure, it is clear that the SR-IPNLMS algorithm using the "optimal" value β_SR-IPNLMS performs better as compared to the regular normalized regularization β = 20/(2L). Also, it can be noticed that a lower misalignment level can be obtained by using a higher normalized regularization parameter, i.e., β = 400/(2L). However, this value is not appropriate anymore when the ENR decreases. Indeed, in Fig. 11, we consider ENR = 0 dB. It is clear that a higher normalized regularization parameter is now required [i.e., β = 2400/(2L)] to match the performance obtained with β_SR-IPNLMS.

Fig. 10. Misalignment of the SR-IPNLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, k = 0, L = 512, and ENR = 10 dB.

Fig. 11. Misalignment of the SR-IPNLMS algorithm using different values of the normalized regularization parameter. The input signal is white and Gaussian, α = 1, k = 0, L = 512, and ENR = 0 dB.

VIII. CONCLUSION

In this paper, we have proposed a simple condition, one that intuitively makes sense, for the derivation of an optimal regularization parameter. From this condition, we have derived the optimal regularization parameters of four algorithms: the NLMS, the SR-NLMS, the IPNLMS, and the SR-IPNLMS. Extensive simulations have shown that, with the proposed regularization, the adaptive algorithms behave very well at all ENR levels.