Analysis Of Seismic Data Using Constrained LMS Computer Science Essay


The key objective of this project is the development of a constrained LMS (CLMS) algorithm and a relatively straightforward approach to the associated system-identification problem. A correction procedure is used to adjust the records in accelerograms, and an adaptive filter can be utilized to perform this operation. An inverse filter is implemented on the data records to deconvolve the instrument response, and MATLAB code plays a vital role in correcting the accelerogram records. This project also discusses the development and implementation of the CLMS algorithm. De-noising is utilized to achieve noise reduction: the deconvolved seismic records are de-noised either with conventional filters or with the translation-invariant wavelet transform, which gives rise to considerable differences in the power spectral density characteristics of the corrected seismic records.


High-fidelity digital accelerometers and analogue instruments are utilized to obtain ground-motion records during earthquakes. Comparing the two, analogue instruments are considerably cheaper to manufacture and maintain, whereas digital instruments are expensive to maintain. Records from the former must be processed before the dynamic behaviour of structures during earthquakes can be understood. The advantage of digital instruments is that they determine more accurate outcomes than analogue ones for use in earthquake studies.

An accelerometer on a seismograph records acceleration as a function of time. The recorded data are really the response of the instrument to ground motion rather than the precise motion of the ground itself. In addition, the recorded data are certainly noisy due to factors such as ocean waves, heavy traffic, piling and wind, in both the low- and high-frequency ranges. Adding to the woe is the fact that the largest part of the historical data that exists today comes from records of analogue instruments of unidentified characteristics or questionable reliability. It is hence essential to process the recorded data by means of digital filtering techniques to recover, to the extent feasible, the records that describe the ground motion.


A record of the motion of the earth is known as a seismogram, and the study of the earth through its vibrations is called seismology. This information is stored in the form of records commonly known as seismograms. Seismograms usually capture vibrations caused mechanically by human beings, volcanoes and the sea; seismology can thus respond to the vibrations of volcanic eruptions and earthquakes. Previously, seismograms were recorded on paper attached to rotating drums, using pens for ordinary paper or light beams for photosensitive paper. Nowadays seismograms are recorded digitally for computer analysis. Drum seismometers are the ones generally used for public display. The seismogram plays a vital role in measuring earthquakes.


In optimum filtering, the M tap weights of the FIR filter used to filter the reference signal are fixed. The filter output is compared with the primary signal d(n) to give an estimation error e(n); to achieve minimum MSE at the output, the estimation error e(n) must be orthogonal to every input sample involved in the filtering procedure at time n, written as

E[u(n − k) e0*(n)] = 0,  k = 0, 1, 2, …, M − 1

This equation represents the principle of orthogonality. Here e0*(n) denotes the (conjugated) optimal error, and the weights resulting from the above procedure are the optimal weights w00, w01, …, w0,M−1. Substituting the expression for e0*(n) and rearranging, we arrive at

Σ(i = 0 to M − 1) w0i E[u(n − k) u*(n − i)] = E[u(n − k) d*(n)],  k = 0, 1, 2, …, M − 1

Expressing the expectations as r(i − k), the autocorrelation of the filter input at lag i − k, and p(−k), the cross-correlation between the filter input and the desired response at lag −k, gives the Wiener-Hopf equations,

The optimal weights are derived by solving the above M simultaneous equations. However, to obtain the optimal weights we need to know the signal characteristics, namely the autocorrelation and cross-correlation functions, in advance, which is not possible in many practical applications. To overcome this, adaptive filtering utilizing the LMS algorithm was derived.
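To make the Wiener-Hopf step concrete, here is a minimal sketch in Python/NumPy (the project's own code is in MATLAB); the white-noise input and the two-tap desired response d(n) = 0.5 u(n) + 0.3 u(n − 1) are assumed purely for illustration:

```python
import numpy as np

# Solve the Wiener-Hopf equations R w = p for a 2-tap filter.
# For a white, unit-variance input u(n), the autocorrelation matrix R
# is the identity; with d(n) = 0.5 u(n) + 0.3 u(n-1), p = [0.5, 0.3].
M = 2
R = np.eye(M)                    # autocorrelation matrix, entries r(i - k)
p = np.array([0.5, 0.3])         # cross-correlation vector, entries p(-k)

w_opt = np.linalg.solve(R, p)    # optimal (Wiener) tap weights
print(w_opt)                     # -> [0.5 0.3]
```

In practice R and p are unknown and must be estimated from data, which is exactly the limitation the LMS algorithm addresses.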


SDOF techniques:

Seismic correction techniques generally assume a second-order single-degree-of-freedom (SDOF) instrument model and deconvolve the accelerometer response with an inverse filter. Unfortunately, the relevant instrument parameters are neither specified nor recoverable for many seismic time histories. Researchers therefore either use approximate instrument parameters to deconvolve the instrument response, or do not deconvolve it at all. More recently, modern inverse system-identification techniques have been implemented to perform the instrument deconvolution; the most significant include QR-RLS and Total Least Squares with wavelet de-noising and wavelet decomposition.

Correction of seismic data records

Correction of seismic data records involves standard techniques to deconvolve the instrument and structural responses from the seismic accelerograms. Instrument and structural responses are the result of convolving the ground motion with the transfer process of the recording instrument and of the structure on which the instrument is mounted. To recover an estimate of the ground motion of the seismic event, the instrument response can be deconvolved in the time or frequency domain, which is usually followed by band-pass filtering of the record. A band-pass filter removes any ground motion outside the band, so the accelerograms may not effectively represent an approximation of the true ground motion. To overcome this, it is proposed to utilize wavelet de-noising as a substitute for band-pass filtering. The key idea of de-noising is that it removes low- and high-frequency corrupting signals but retains the relevant data, providing a better approximation of the true ground motion. Utilizing the stationary wavelet transform (SWT), some sample seismic signals are de-noised to a threshold and compared with the standard band-pass filtering techniques.

De-noising by a band-pass filtering method

Digital band-pass filters are designed to remove both low- and high-frequency noise. The Ormsby filter is the most widely utilized digital band-pass filter, but to reach the required band-pass design specification it needs large values of N and therefore considerable computational effort. To reach stringent band-pass specifications while reducing computation time, infinite impulse response (IIR) digital filters are now normally employed. The phase response of these filters is non-linear, whereas a zero-phase response is essential. Although there is a loss of computational optimality, the effort is still smaller than that required for the FIR filter. A zero-phase response can be secured by running the data through the filter in both the forward and reverse directions: the data are first filtered in the forward direction, then the filtered record is reversed and run back through the filter. This method is computationally more efficient than the former ones.
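A minimal sketch of this forward-backward (zero-phase) filtering, using SciPy's filtfilt with an IIR Butterworth band-pass; the 1-20 Hz band, 200 Hz sampling rate and test tones are assumed for illustration only:

```python
import numpy as np
from scipy import signal

fs = 200.0                                # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)         # in-band component (5 Hz)
noise = 0.5 * np.sin(2 * np.pi * 60 * t)  # out-of-band noise (60 Hz)
x = clean + noise

# 4th-order Butterworth band-pass, 1-20 Hz
b, a = signal.butter(4, [1, 20], btype='bandpass', fs=fs)

# filtfilt runs the filter forward, reverses the result and runs it back
# through the filter: the phase distortions cancel, giving a zero-phase
# response (at the cost of squaring the magnitude response).
y = signal.filtfilt(b, a, x)

mid = slice(200, -200)                    # ignore edge transients
err = np.sqrt(np.mean((y[mid] - clean[mid]) ** 2))
print(err)
```

Because the forward and backward passes cancel each other's phase, the 5 Hz component comes through with essentially no delay, while the 60 Hz tone is strongly attenuated.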

De-noising utilizing wavelets

In this method the DWT of the signal is taken, the coefficients are passed through a threshold that removes those below a given value, and an inverse DWT reconstructs the de-noised signal. Most of the signal energy is concentrated in a small number of large wavelet coefficients as the signal passes through the filter bank.

Noise passed through the high-pass (detail) filters, by contrast, is spread across many small coefficients, while the signal produces a few large ones. Hence shrinking or thresholding the wavelet coefficients eliminates low-amplitude noise in the wavelet domain, and the inverse DWT recovers the desired signal with little loss of detail.
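A minimal sketch of this shrink-and-invert procedure, using a hand-rolled Haar DWT and the universal soft threshold (NumPy only; the piecewise-constant test signal and noise level are assumed for illustration):

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar DWT: approximation and detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=4):
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    # Universal threshold sigma*sqrt(2 log N), with sigma estimated from
    # the finest-scale details via the median absolute deviation.
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    # Soft-threshold the details: small coefficients (mostly noise) go to
    # zero, large ones (mostly signal) are shrunk towards zero.
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
n = 1024
clean = np.repeat([0.0, 2.0, -1.0, 3.0], n // 4)  # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(n)
den = denoise(noisy)
print(np.linalg.norm(den - clean), np.linalg.norm(noisy - clean))
```

The de-noised record ends up closer to the clean signal than the noisy one, without any choice of a frequency cut-off.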

SWT (stationary wavelet transform, translation-invariant DWT)

The key problem with the DWT approach is that it is not translation invariant: a shift of the signal changes the DWT coefficients, so the shifted signal is no longer aligned with the majority of the basis functions. A large number of coefficients, each smaller in magnitude, is then needed to describe the signal, which decreases the effectiveness of any de-noising scheme.

From the outcomes obtained it can be clearly stated that the differences between utilizing the SWT and the DWT for de-noising are insignificant; output plots such as acceleration response spectra and power spectral densities are virtually the same. However, since discontinuities in seismic records do occur, the SWT seems a more effective choice than the DWT or digital band-pass filters for de-noising purposes. The SWT eliminates the "noise" with the lowest energy, independently of frequency. SWT de-noising is computationally efficient and removes the requirement to adjust the filter cut-offs to fit particular seismic events. From the work of several researchers around the world it is apparent that the choice of filter cut-off frequencies varies. Rather unsurprisingly, the differences between band-pass filtering and SWT techniques appear at the low- and high-frequency ends of the spectrum. The low-frequency (long-period) end is of importance in the design of large dams or tall building structures.

Different techniques of seismic data correction:

RLS Algorithm

This is an adaptive filter algorithm that recursively finds the filter coefficients minimizing a least-squares cost on the error signal. The RLS algorithm stands in contrast to algorithms that aim to decrease the mean square error: the key difference is that RLS filters are dependent on the signals themselves, whereas MSE filters are dependent on their statistics.

Recursive Least Squares and wavelet de-noising based seismic correction demonstrates inverse filtering with the RLS algorithm, and it achieves comparable outcomes when matched against second-order deconvolution.

The resulting inverse filter is then implemented on the data to deconvolve the instrument response. In this process the total acceleration response spectra and power spectral plots of two sets of seismic events are compared. The key advantage of RLS over LMS is that the RLS algorithm depends on the incoming data samples themselves rather than on the ensemble-average statistics used in the LMS algorithm.
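An illustrative RLS sketch (Python/NumPy, real-valued signals; the two-tap unknown system, forgetting factor and initialization are assumed values), showing how the update works directly from the incoming samples:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 2000, 2
lam, delta = 0.99, 100.0          # forgetting factor, initialization

# Assumed unknown system to identify: d(n) = 0.7 u(n) - 0.2 u(n-1)
u = rng.standard_normal(N)
d = 0.7 * u + np.concatenate(([0.0], -0.2 * u[:-1]))

w = np.zeros(M)
P = delta * np.eye(M)             # inverse correlation matrix estimate
for n in range(1, N):
    x = np.array([u[n], u[n - 1]])        # tap-input vector
    k = P @ x / (lam + x @ P @ x)         # gain vector
    e = d[n] - w @ x                      # a priori error
    w = w + k * e                         # coefficient update
    P = (P - np.outer(k, x @ P)) / lam    # update of the inverse correlation

print(w)                          # converges towards [0.7, -0.2]
```

Note that no statistics are supplied in advance: R^-1 is built up recursively (the matrix P) from the samples themselves.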

Total Least Squares

The TLS algorithm is a good tool for correcting seismic data records in the absence of the instrument parameters. The essential advantage of the total least squares method is that removing the instrument response from the seismic records does not necessitate any data regarding the instrument; it necessitates only the recorded seismic data. Given only the original seismograph recording, the algorithm can produce an inverse filter to deconvolve the instrument response. The algorithm was tested utilizing records from four instrument types and showed excellent agreement in all respects with the second-order SDOF responses and with QR-RLS; the robustness of the QR-RLS responses, however, cannot be guaranteed. In some cases the TLS performance is better than the second-order SDOF and QR-RLS with the standard filter, because it reflects the anti-alias filter, whose details were available for the record in this case. A particular problem for seismic correction techniques is that the transfer process of the recording instrument is sometimes not specified. When the instrument parameters are afforded, a second-order SDOF model is implemented to decouple the instrument response.

In the absence of the instrument parameters, and without making assumptions about them, it is not possible to deconvolve (decouple) the instrument response from the seismic records utilizing a second-order SDOF expression. To overcome this, an LMS approach is embedded in an inverse system-identification problem to deconvolve the instrument response; this approach uses a least-squares minimization.

Approach of Partial Total Least Squares

The Partial Total Least Squares (PTLS) subroutine solves the TLS problem AX ≈ B by utilizing a partial singular value decomposition, improving the computational efficiency of the classical TLS algorithm. The improvement rests on the observation that the TLS solution can be computed from any orthogonal basis of the right singular subspace corresponding to the smallest singular values of the augmented matrix [A; B]. The dimension of this subspace can be specified by the rank approximation of [A; B] or determined from a specified upper bound on the smallest singular values. The routine PTLS supports more than a single vector in B, and it also solves underdetermined and determined sets of equations by computing the minimum-norm solution.

It is demonstrated that the PTLS algorithm produces good outcomes when matched against the standard TLS. However, not all of the inverse responses are representative of the instrument response when more rows and/or columns are left unmodified; applying PTLS as opposed to TLS is therefore not a net gain.

Method of QR-RLS

This algorithm is mainly based on the RLS algorithm and is developed as the square-root counterpart of the Kalman filter. The square root here is a factorization of the inverse correlation matrix obtained by orthogonal triangularization, characterized as a QR decomposition.

The matrix used in this algorithm is derived by Householder reflections or Givens rotations, which zero out the elements of the input data vector against the square root of the inverse correlation matrix. To obtain an approximation of the true ground motion, the original seismic data are convolved with the resulting inverse filter weights, inverting the transfer process.

Iteratively reweighted least squares (IRLS)

This method is implemented to suppress burst noise, and its sensitivity is assessed on synthetic and real seismic data. Noise bursts are of short duration but large amplitude; they are associated with unstable instrument operation but can also occur as naturally occurring transients.

IRLS is an algorithm for solving Lp optimization problems with p ≠ 2, whereas least squares uses L2 optimization. At each iteration the algorithm applies different weights. As far as possible, the seismic data utilized for the analysis were chosen such that the instrument parameters were available in the seismic record. This includes in particular data from the SMART 1 array in Taiwan, for which some particulars of the anti-alias filter used are available. For other digital records without the particulars of an anti-alias filter, these can be derived from the sampling rate, on the assumption that an anti-alias filter with a cut-off at half the sampling rate would have been utilized. The IRLS is implemented on the seismic data after de-noising and baseline correction, without any frequency-selective filtering.
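A minimal IRLS sketch for the p = 1 case (Python/NumPy; the linear trend, noise level and burst positions are assumed purely for illustration), showing why the reweighting suppresses large-amplitude bursts:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    # Iteratively reweighted least squares for min ||A x - b||_1.
    # Each pass solves a weighted L2 problem with weights 1/|residual|,
    # which de-emphasizes samples with large (burst) errors.
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary L2 start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x

# Synthetic record: a linear trend plus small noise, contaminated by
# sparse large-amplitude "noise bursts" (all values here are assumed).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
y = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(t.size)
y[::25] += 5.0                                    # the bursts
A = np.column_stack([t, np.ones_like(t)])

x_l2 = np.linalg.lstsq(A, y, rcond=None)[0]       # plain least squares
x_l1 = irls_l1(A, y)                              # IRLS, p = 1
print(x_l2, x_l1)                                 # true parameters: [2, 1]
```

The L2 fit is pulled away from the true trend by the bursts, while the IRLS (L1) fit is nearly insensitive to them.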

Existing correction procedures

1. Low-pass filtering utilizing an Ormsby filter (cut-off = 25 Hz, roll-off termination frequency = 27 Hz)

2. Elimination of the least-squares regression line, baseline correction,

3. Elimination of the least-squares regression line, baseline correction, and elimination of the least-squares regression line again.

4. Recursive least-squares algorithm.

These are some of the techniques for correcting seismic data records. The procedure follows many of the features of BAP, with some significant modifications; a correction procedure similar to BAP has been proposed by other researchers, such as the PEER Strong Motion Database.

The most significant point to discuss here is the variation in the acceleration response spectra and PSDs. For some seismic records, in this case the Nahanni aftershock, the PSDs indicate that utilizing a high-cut filter at 15 Hz simply eliminates the energy at frequencies in the range 15-25 Hz, giving an inadequate picture of how these frequency components affect the structure. Moreover, selecting an inappropriate high cut-off can produce an error of the order of 20% in the Nahanni acceleration response spectrum, and the instrument correction also alters the corrected records. A standard choice of filter bands thus results in variation between records, so the standards must be set up carefully.

Fundamentals of Correction Procedures

1. Re-sampling, interpolation

Interpolation, so that the data points are uniformly spaced, is the essential first step in every correction procedure. Interpolation can be described as a process that takes on specified values at specified points. A sampling rate of 600 Hz is utilized for the strong-motion records examined in this study. A standard overlapping method was utilized in earlier correction procedures (BAP) for data segmentation and recombination, with each segment extended by zero padding before the correction procedures are applied.

2. Correction of baseline error

Both the SSA-1 and the SMA-1 are designed to start recording only when triggered by a small threshold acceleration of 0.01 g, to reduce running costs; the vibrations before the trigger at the beginning of the ground motion are therefore lost. To make the acceleration curve zero-mean, the accelerogram record is translated about the zero line, and the outcome is adjusted so as to make the final velocity zero. This introduces an error at the beginning and end of the earthquake: velocity and displacement obtained from such an accelerogram contain linear and quadratic errors respectively. The baseline correction is performed by subtracting the least-squares regression line from the accelerogram.
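A minimal sketch of the baseline correction step, assuming a synthetic accelerogram with an artificial linear drift (NumPy for illustration; the decaying sinusoid and drift values are made up):

```python
import numpy as np

fs = 200.0
t = np.arange(0, 10, 1 / fs)

# Hypothetical accelerogram with an added offset and linear drift
accel_true = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)
drift = 0.02 + 0.005 * t
record = accel_true + drift

# Baseline correction: subtract the least-squares regression line
coeffs = np.polyfit(t, record, 1)            # fitted slope and intercept
corrected = record - np.polyval(coeffs, t)
print(coeffs)                                # the removed baseline
```

After the subtraction the record is zero-mean and a refitted regression line has zero slope, which is exactly the property the correction targets.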

3. Instrument Response

Instruments such as the SSA-1 and the SMA-1 have dynamic characteristics of their own which influence the motion they record. Therefore, to reach a better approximation of the ground motion, an instrument correction is necessary. The instrument is modeled as an SDOF system whose dynamic parameters are evaluated, and this model is implemented to decouple the instrument response from the ground motion. Over the frequency range of interest, the relative motion of the suspended mass is related to the ground acceleration, velocity and displacement.
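As a sketch of this instrument correction, the SDOF model x'' + 2*zeta*wn*x' + wn^2*x = -a_g can be simulated and then inverted to recover the ground acceleration from the recorded relative motion (Python/NumPy; the instrument parameters fn = 25 Hz and zeta = 0.6 and the 2 Hz test motion are assumed values for illustration):

```python
import numpy as np

fn, zeta = 25.0, 0.6                 # assumed natural frequency (Hz), damping
wn = 2 * np.pi * fn
dt = 1e-3
t = np.arange(0, 4, dt)
a_g = np.sin(2 * np.pi * 2 * t)      # "true" ground acceleration (2 Hz)

# Simulate the SDOF instrument (central-difference time stepping):
# x'' + 2*zeta*wn*x' + wn^2*x = -a_g
x = np.zeros_like(t)
c1 = 1 / dt**2 + zeta * wn / dt
for n in range(1, len(t) - 1):
    x[n + 1] = (-a_g[n] + (2 / dt**2 - wn**2) * x[n]
                + (zeta * wn / dt - 1 / dt**2) * x[n - 1]) / c1

# Instrument correction: recover a_g from the recorded relative motion x
v = np.gradient(x, dt)               # numerical velocity
a = np.gradient(v, dt)               # numerical acceleration
a_rec = -(a + 2 * zeta * wn * v + wn**2 * x)

err = np.max(np.abs(a_rec[500:-500] - a_g[500:-500]))
print(err)
```

Away from the record edges (where the one-sided differences are less accurate) the recovered acceleration matches the assumed ground motion closely.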

4. Phase Correction and Filtering

Seismic records taken from accelerometers generally include errors from external vibrations due to wind, piling, ocean waves and heavy traffic, in both the low- and high-frequency zones. An estimate of the local noise included with strong-motion records would be helpful as an indication of the local signal-to-noise ratio, but this kind of data is almost never supplied with record databases. In terms of filter order and roll-off, elliptic filters offer an improvement over the Butterworth response, but at the expense of ripples in the pass band which may introduce distortions into the seismic trace.

5. Decimation, down-sampling

Decimation can be described as a technique that reduces the number of samples of a discrete-time signal.

This is a two-step procedure:

Low-pass anti-aliasing filter

Down sampling

Uncorrected records from analogue instruments have an average sampling rate of 600 Hz. A high sampling rate is preferable, but the rate is decreased to 200 Hz in order to decrease the execution time of the structural analysis. If the record is already in digital format, decimation is not essential. Decimation inevitably requires the rejection of data points lying between the essential time intervals. If data collected at 600 Hz are down-sampled to 200 Hz directly, the resulting signal is aliased: components above 100 Hz fold back and corrupt the accelerogram with noise. The signal is therefore low-pass filtered to reduce its bandwidth before down-sampling. In Trifunac's method and in BAP, the anti-aliasing filter corrupts the phase, although this can be eliminated by utilizing an approach such as two-directional filtering.
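The two-step procedure can be sketched with SciPy's decimate, which applies a low-pass anti-aliasing filter and then down-samples; zero_phase=True uses the two-directional filtering mentioned above. The 600 Hz to 200 Hz rates follow the text, while the test tones are assumed:

```python
import numpy as np
from scipy import signal

fs = 600.0
t = np.arange(0, 1, 1 / fs)                   # 600 samples at 600 Hz
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 140 * t)

# Decimate by 3 (600 Hz -> 200 Hz). decimate() low-pass filters first
# (anti-aliasing), then keeps every 3rd sample; zero_phase=True runs the
# filter forward and backward so the phase is not corrupted.
y = signal.decimate(x, 3, zero_phase=True)
print(len(x), len(y))                          # 600 -> 200
```

Without the anti-aliasing filter, the 140 Hz component would fold back to 60 Hz at the new 200 Hz rate and corrupt the record.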


Adaptive Filtering

Real-world signals are usually corrupted by noise. Noise cancellation provides a good solution for separating a signal hidden in noise; its key advantage is the fact that no a priori knowledge of the signal or noise is essential. The merits of adaptive filtering with the CLMS algorithm, over optimum filtering utilizing the solution afforded by the Wiener-Hopf equations, were confirmed utilizing experimental outcomes.

Adaptive signal processing has been an active research area since the pioneering work of Widrow and Hoff in the early 1960s. Practical applications of adaptive filtering include channel equalization and echo cancellation. This project discusses one of the most widely utilized applications, adaptive noise cancellation; the uses of this technique range from wireless communication to biomedical engineering. Recently, Chen and Moir have utilized adaptive noise cancellation to perform speech recognition on non-stationary real data with background noise utilizing three microphones, and Elko has investigated noise cancellation utilizing combinations of differential microphones.

Figure 1 depicts a typical adaptive noise canceller. The primary signal is the desired signal corrupted by noise. The reference signal is assumed to consist of noise only, uncorrelated with the desired signal but correlated with the noise present in the primary signal. When the signal characteristics are known prior to filtering, the tap weights can be assigned the optimal values dictated by the Wiener-Hopf equations; this is optimum filtering, and the weights are fixed. Generally, however, the signal characteristics are unknown in real time and can at best be guessed. Further, optimum filtering fails in the case of non-stationary data, and hence the need arises to adapt the weights as the signal characteristics change, converging towards the optimal solution. This is achieved utilizing adaptive filtering based on the CLMS algorithm.


The LMS filter adapts its coefficients to approach the ideal filter by minimizing the mean of the squared error signal; at each step the filter is adapted depending on the error at the present time.

This algorithm is an approximation of the steepest-descent algorithm that utilizes an instantaneous estimate of the gradient vector of the cost process. The estimate is based on the current values of the error signal and the tap inputs; at every iteration each coefficient is moved along this approximate gradient.

For the LMS algorithm the reference signal d[n] defines the desired output; the difference between the actual output of the filter and the reference signal is the error signal,

e[n] = d[n] − c^H[n] x[n]

The main task of this algorithm is to find the coefficients that minimize the expected value of the squared error signal, i.e. to attain the least mean square error (also called the minimum squared error). The expected values are as follows:



The expected squared error E(e2) is a quadratic process of the vector c and hence has a single minimum, which can be found if the expected values are available.

The gradient-descent approach states that the coefficients must be moved over the error surface in the steepest-descent direction, i.e. along the negative gradient of the cost process J = E(e2) with respect to the coefficient vector c.

The expected values in the equation are E(d x) = p, the cross-correlation vector between the tap-input vector and the desired output signal, and E(x xH) = R, the autocorrelation matrix of the input vector; both can be estimated from many samples of x and d.

In practice only the current samples are available, so short-time estimates are used as follows:

E(d x) ≈ d x, and E(x xH) ≈ x xH,

The coefficients of the filter are updated as follows:

Here we introduce the 'step size' parameter μ, which controls the distance travelled over the error surface. The coefficient update is performed at every instant n.

Choice of step size

The 'step size' parameter μ introduced in the above equations controls the size of the steps taken during the error updates.

μ must be positive and should be small, since the LMS algorithm uses a local estimate of R and p when computing the gradient of the cost process; the estimated cost process therefore differs from the global one at every instant.

Moreover, a large step size makes the LMS algorithm unstable: the coefficients oscillate instead of converging to fixed values. The stability of LMS depends upon the maximum eigenvalue λmax of the autocorrelation matrix R of the input signal; for stable behaviour the range should be 0 < μ < 2/λmax.

LMS algorithm description:

1. Filter operation:

y[n] = c^H[n] x[n]

2. Calculation of error:

e [n] = d[n] − y[n]

where d[n] is the desired output

3. Adaptation of the coefficients:

c[n + 1] = c[n] + μ e*[n] x[n]
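The three steps above can be sketched directly (Python/NumPy for illustration, real-valued signals so c^H reduces to c^T; the two-tap unknown system and step size are assumed toy values):

```python
import numpy as np

rng = np.random.default_rng(4)
N, mu = 5000, 0.01

# Assumed unknown system to identify: d[n] = 0.7 x[n] - 0.2 x[n-1]
x = rng.standard_normal(N)
d = 0.7 * x + np.concatenate(([0.0], -0.2 * x[:-1]))

c = np.zeros(2)                        # real signals: c^H reduces to c^T
for n in range(1, N):
    xn = np.array([x[n], x[n - 1]])    # tap-input vector
    y = c @ xn                         # 1. filter operation
    e = d[n] - y                       # 2. error calculation
    c = c + mu * e * xn                # 3. coefficient adaptation

print(c)                               # converges towards [0.7, -0.2]
```

For this white input, λmax of R is 1, so the assumed μ = 0.01 sits comfortably inside the stable range 0 < μ < 2/λmax.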

Constrained LMS

The adaptive filter block diagram is shown in the following figure.

Fig 1:

From the above figure the error signal can be derived as


Then the mean square error (MSE) can be expressed as


Substituting (1) in (2), we get


Fig 2: From equation (3) we can observe

The MSE is quadratic in w(n), and the weights are adjusted in order to minimize the error process of (3). The algorithm descends along the quadratic surface towards the lowest point of the bowl, as shown in Figure 2. Like other gradient algorithms, steepest descent locally estimates the gradient and slides down the surface in the direction in which the performance surface has the greatest rate of fall.

The weight-update equation of steepest descent can be written as


where µ is the convergence factor, which affects the algorithm's stability and descent speed. Performing a sequence of iterations moves the weight vector in the direction of steepest descent towards the optimum value w0. The constrained LMS filter minimizes the cost process J(w) = w^H R w

subject to the linear set of constraints C^H w = f,

where 'w' is the coefficient vector of length M, 'R' is the autocorrelation matrix of the input signal, 'C' is the M x p constraint matrix and f is the p x 1 gain vector.

The constrained problem can also be solved utilizing Lagrange multipliers; the cost process can then be expressed as


where λ is the p x 1 vector of Lagrange multipliers. Taking the gradient with respect to the coefficients w gives


Substituting the optimality condition from (6) into (4), we obtain the update equation as

w(n + 1) = P[w(n) + µ e*(n) x(n)] + F,  where P = I − C(C^H C)^-1 C^H and F = C(C^H C)^-1 f    …(7)
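An illustrative sketch of this projected update in Frost's form (Python/NumPy; for demonstration only, a minimum-output-power cost is assumed, so the output y(n) plays the role of the error, with a single unity-gain constraint and a white-noise input, all hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, mu = 4, 5000, 0.002

C = np.ones((M, 1))                    # constraint matrix (M x p), p = 1
f = np.array([1.0])                    # gain vector (p x 1)

CtC_inv = np.linalg.inv(C.T @ C)
P = np.eye(M) - C @ CtC_inv @ C.T      # projector onto the constraint null space
F = C @ CtC_inv @ f                    # minimum-norm weight vector with C^T w = f

w = F.copy()                           # start from a feasible point
for n in range(N):
    x = rng.standard_normal(M)         # input snapshot (white noise, assumed)
    y = w @ x                          # filter output
    w = P @ (w - mu * y * x) + F       # projected gradient step, as in (7)

print(C.T @ w)                         # stays equal to f
```

Because C^T P = 0 and C^T F = f, every iterate satisfies the constraint exactly: the gradient step is projected back onto the constraint set at each update.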

This algorithm is implemented for instrument correction on six different accelerogram records and its performance is studied. The data record is first corrected for baseline error, and the low- and high-frequency components are then treated utilizing the wavelet transform.


The CLMS algorithm is implemented for instrument correction by studying and comparing six different sets of accelerogram records. Seismic analysis is aided by de-noising the low- and high-frequency components of the seismic records. This project also discusses the efficiency and the insensitivity to noise bursts of the L1 deconvolution algorithm; Fourier transforms are utilized to improve the speed of convergence, and seismic deconvolution is achieved through several computations. Several techniques, assumptions and algorithms are utilized to analyze earthquakes, and several records of seismic data were therefore acquired for earthquakes. In this project the CLMS adaptive filtering technique is used to filter the seismic records. Even without knowledge of the instrument, the ground motion can be retrieved by the constrained LMS: it makes no assumptions regarding the instrument and acquires only the recorded data, and the derived estimate of the inverse does not suffer from phase uncertainty, as the phase is changed in a linear manner. The signal-to-noise ratio is maximized by the adaptive algorithm to prevent noise amplification by the deconvolution procedure.