Interpolating and Re-sampling Accelerogram Data


Accelerogram data (the record of ground motion captured during an earthquake) is almost always sampled unevenly, and this is unavoidable. Interpolating the points onto an equally spaced grid is therefore the first step in the correction procedures. Earlier correction procedures segmented the data and recombined it using a standard overlap-add method. Segmentation is no longer required, since much more powerful processing systems are now available that can handle lengthy records sampled at high rates. The UEL correction method uses zero-padding to bypass segmentation and its related processing. [7]
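As a sketch of this first step, unevenly spaced sample times can be interpolated onto an equally spaced grid. The record below is synthetic and the sample interval is invented for illustration, not taken from [7]:

```python
import numpy as np

# Synthetic unevenly sampled "accelerogram" (times in s, amplitude arbitrary)
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, 500))
a = np.sin(2 * np.pi * 1.0 * t)        # 1 Hz test signal

dt = 0.01                              # target uniform sample interval
t_even = np.arange(t[0], t[-1], dt)    # equally spaced time points
a_even = np.interp(t_even, t, a)       # linear interpolation onto the grid
```

In practice a higher-order (e.g. spline) interpolant may be preferred; zero-padding, as in the UEL method, would then be applied before the FFT stage.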

2.2 Baseline error correction and Detrending:

The SMA-1 and SSA-1 are designed so that recording starts only when the ground acceleration exceeds a small trigger threshold of 0.01g. Although the intention is to save running cost, any vibration before triggering is lost, so data at the start (and, similarly, at the end) of the earthquake is missing. This lost data gives rise to linear and quadratic baseline errors. A baseline error correction method is used to correct these errors, and a detrending algorithm is applied. [7]

In the FFT processing, the linear trend is removed by detrending the data vector. The baseline correction [Matlab: detrend] is performed by subtracting the least-squares regression line from the accelerogram. [7]
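The detrending step can be sketched with SciPy's equivalent of MATLAB's detrend. The drift rate and signal below are invented for illustration:

```python
import numpy as np
from scipy.signal import detrend

t = np.arange(0.0, 10.0, 0.01)
# 2 Hz signal plus an artificial linear baseline drift
accel = np.sin(2 * np.pi * 2.0 * t) + 0.05 * t + 0.1
corrected = detrend(accel, type="linear")   # subtract least-squares regression line
```

After detrending, the least-squares line fitted to the corrected record is zero, which is exactly the baseline correction described above.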

2.3 Instrument response:

Instruments such as the SMA-1 and the SSA-1 have their own dynamic response characteristics, which affect the motion that the instruments record. In such cases an instrument correction is applied to obtain a better estimate of the true ground motion. The instrument is modelled as a single-degree-of-freedom spring-mass-damper system, in which the spring has no damping or mass, and the instrument's dynamic parameters are evaluated. Using this model, the instrument response is decoupled from the actual ground movement. The equation of motion of the modelled mass, driven by the ground acceleration a(t), may be written as

x''(t) + 2ζωn x'(t) + ωn² x(t) = −a(t)

where

ωn - natural angular frequency

ζ - ratio of critical damping

and ωn² x(t) gives the approximate acceleration output of the instrument. [7]
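The spring-mass-damper model can be simulated directly. The sketch below uses invented instrument parameters (fn and ζ are illustrative values, not the constants of an actual SMA-1); it passes a harmonic ground acceleration through the single-degree-of-freedom model and recovers the instrument's acceleration estimate:

```python
import numpy as np
from scipy.signal import lti, lsim

fn = 25.0                  # assumed natural frequency, Hz
wn = 2 * np.pi * fn        # natural angular frequency
zeta = 0.6                 # assumed ratio of critical damping

# Relative displacement x(t) satisfies x'' + 2*zeta*wn*x' + wn^2*x = -a_g(t)
sys = lti([-1.0], [1.0, 2 * zeta * wn, wn ** 2])

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
a_g = np.sin(2 * np.pi * 2.0 * t)     # 2 Hz ground acceleration
_, x, _ = lsim(sys, a_g, t)
a_est = -wn ** 2 * x                  # instrument's acceleration reading
```

Because the 2 Hz excitation is well below the assumed 25 Hz natural frequency, a_est closely tracks a_g; the instrument correction matters most near and above ωn.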

2.4 Filtering phase correction:

External vibrations such as ocean waves, wind, traffic and piling introduce errors into the seismic data collected by accelerometers. These errors occur in both the low- and high-frequency zones. Records of the local noise, and estimates derived from them, are useful: they indicate the local signal-to-noise ratio and are included with strong-motion data. A typical accelerogram amplitude spectrum has its frequency content between fc (the corner frequency) and fmax (the cut-off frequency). fmax is hard to define; its characterisation is governed by local site effects and source mechanism effects, and it is usually constant for a given geographical region. Finite impulse response (FIR) and infinite impulse response (IIR) digital filters are used to impose the fmax cut-off, and several other methods and technologies are also used. [7]
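One common realisation of this band-pass step is a zero-phase Butterworth filter between fc and fmax. The sketch below uses invented corner values (fc = 0.1 Hz, fmax = 25 Hz at 200 samples/s), not values prescribed by [7]; filtering forward and backward with sosfiltfilt avoids the phase distortion that this section's correction is concerned with:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 200.0       # assumed sampling rate, Hz
fc = 0.1         # assumed corner frequency, Hz
fmax = 25.0      # assumed cut-off frequency, Hz

sos = butter(4, [fc, fmax], btype="bandpass", fs=fs, output="sos")

t = np.arange(0.0, 30.0, 1.0 / fs)
# 5 Hz in-band signal plus a slow out-of-band drift
accel = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.sin(2 * np.pi * 0.01 * t)
filtered = sosfiltfilt(sos, accel)    # zero-phase (forward-backward) filtering
```

The second-order-sections form is used because a band-pass design with a very low normalized corner frequency is numerically fragile in transfer-function form.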


Seismology is the study of earthquakes and the related vibrations of the earth. Experts use seismometers to record the waves caused by earthquakes. A seismogram is a record of ground movement as a function of time; it provides the basic data that seismologists use to study the different waves as they spread out after an earthquake. The size of an earthquake is measured from the amplitude of the motion recorded on seismograms and is given in terms of magnitude and moment.

Seismology has made tremendous progress over the past 20 years, mainly because of the advent of new computing technology and improvements in data acquisition systems, which can record digital and analog ground motion over a frequency range of five orders of magnitude. This technological advancement has enabled seismologists to make recordings and measurements with great precision and sophistication. Advanced computational analysis is applied to the collected data, and elaborate theoretical models have been devised to interpret it.

Two types of deformation, static and dynamic, are produced when an earthquake fault ruptures. If the event causes permanent displacement of the ground, it is called static deformation. The second type, dynamic motion, consists of the waves radiated by the rupture. Fault ruptures are driven by plate tectonic energy; most of this energy goes into static deformation, and up to 10% radiates away directly in the form of seismic waves.

Seismic waves are classified into different types depending on how they travel. The two foremost types are body waves and surface waves. Waves that move only along the surface of the planet are surface waves, whereas waves travelling through the inner layers are body waves. Body waves are again classified into two types: compressional or P (primary) waves, and S (secondary) waves.


Figure.4 A typical seismogram


P waves are the fastest waves and the first to arrive at a seismic station. Primary waves travel through both solid rock and fluids. P waves push and pull the rock they travel through, just as sound waves push and pull the air. Scientists believe that the P waves of an earthquake can be heard by animals. In a P wave, the particles move in the same direction as the wave itself, the direction in which the energy is travelling. The diagram below illustrates a P wave travelling through a medium.


The secondary wave in an earthquake is slower. S waves travel only through solid material, not through liquids; this property led seismologists to conclude that the earth's outer core is liquid. S waves move rock particles up and down, or side to side, perpendicular to the direction in which the wave is travelling. The diagram below illustrates the S-wave motion.


Surface waves are lower-frequency waves that travel only through the crust. Surface waves arrive after the body waves, but they are the ones chiefly responsible for the damage and destruction caused during earthquakes.


In deeper earthquakes the strength of the surface waves, and the damage they cause, is reduced. The first kind of surface wave is the Love wave, named after A.E.H. Love, a British mathematician. Confined to the surface of the crust, Love waves produce entirely horizontal motion.


Rayleigh waves are the second kind of surface wave, named after the scientist Lord Rayleigh. These waves roll along the ground much like waves rolling across a lake or an ocean. Most of the shaking felt during an earthquake is due to the Rayleigh wave.

P waves and S waves also allow scientists to study the internal structure of the earth indirectly. Because of their different speeds and the different materials they travel through, it is possible to determine the exact location of an earthquake.

Sensitive seismographs are the principal tool of the scientists who study earthquakes. Today thousands of seismograph stations are in operation, and instruments have even been installed on the Moon, Mars and Venus. A simple seismograph is structured like a pendulum. When the ground shakes, the base and frame of the instrument move with it, while inertia keeps the pendulum bob in place. The bob then moves relative to the shaking ground, and the pendulum displacement is recorded as it changes with time, tracing out a record called a seismogram. Each seismograph station has three pendulums, sensitive to the north-south, east-west and vertical motions of the ground. The resulting seismograms allow scientists to estimate the distance, direction, Richter magnitude and type of faulting of the earthquake, and a network of seismograph stations allows them to determine its location.


A filter is a piece of software or hardware used to separate noise from a given set of information in order to achieve greater reliability. Noise arises under different conditions and from different sources, for example when data is collected using noisy sensors, or when a signal is corrupted during transmission through a communication channel. Filters perform three basic information-processing tasks.

Filtering: extracting information about a quantity of interest at time t by using data measured up to and including time t.

Smoothing: this gives more accurate results than filtering, and differs from it in that the information need not be available at time t; data measured later than time t can be used to obtain it. The price is that the result of interest is produced with a delay.

Prediction: prediction is the forecasting side of information processing. The goal is to derive information about what the quantity of interest will be like at some time t + τ in the future, for some τ > 0, by using data measured up to and including time t.

Filters can be linear or nonlinear. In a linear filter, the filtered, smoothed or predicted quantity at the output of the device is a linear function of the observations applied to the filter input. Otherwise the filter is nonlinear.

Designing a Wiener filter requires prior information about the statistics of the data to be processed. The filter is optimum only when the statistical characteristics of the input data match the prior information on which the design is based. When this information is not known completely, the design may no longer be optimum, or it may even be impossible to design the Wiener filter. A straightforward approach in such cases is the "estimate and plug" procedure: in the first stage the filter estimates the statistical parameters of the relevant signals, and in the second stage these results are plugged into a non-recursive formula for computing the filter parameters. The main problem with this method is that it requires complicated and costly hardware for real-time operation.

This is where adaptive filters come in, and they are considered the more effective methodology. Adaptive filters are useful when proper knowledge of the relevant signal characteristics is not available, because they rely on a recursive algorithm for their operation. The algorithm starts from a predetermined set of initial conditions, representing whatever is known about the environment. In a stationary environment, it is found that after successive iterations the algorithm converges to the optimum Wiener solution in some statistical sense. In a nonstationary environment the algorithm offers a tracking capability, following the time variations in the statistics of the input data, provided those variations are sufficiently slow. As a consequence of the recursive algorithm, whereby the filter parameters are updated from one iteration to the next, the parameters become data dependent. This means that an adaptive filter is in reality a nonlinear device, in the sense that it does not obey the principle of superposition.

Notwithstanding this property, adaptive filters are still classified as linear or nonlinear. An adaptive filter is said to be linear if the estimate of the quantity of interest is computed adaptively as a linear combination of the available set of observations applied to the filter input; otherwise it is a nonlinear adaptive filter.

Based on the literature, various recursive algorithms have been developed for the operation of linear adaptive filters. The choice of algorithm depends on one or more of the factors discussed below.

Rate of convergence: the number of iterations the algorithm needs, in response to stationary inputs, to converge close enough to the optimum Wiener solution in the mean-square sense. A rapid rate of convergence allows the algorithm to adapt faster to a stationary environment of unknown statistics.

Misadjustment: a parameter that provides a quantitative measure of the amount by which the final value of the mean-squared error, averaged over an ensemble of adaptive filters, deviates from the minimum mean-squared error produced by the Wiener filter.

Tracking: when an adaptive filtering algorithm operates in a nonstationary environment, it needs to track the statistical variations in that environment. The tracking performance of the algorithm is influenced by two contrary features:

1. Rate of convergence

2. Steady-state fluctuation due to algorithm noise.

Robustness: disturbances in a filter arise from various factors, internal or external to the filter. An adaptive filter is said to be robust when small disturbances result in only small estimation errors.

Computational requirements: the issues of concern here are the number of operations making up a single complete iteration of the algorithm, the memory needed to store the data and the program, and the investment required to program the algorithm on a computer.

Structure: the structure of information flow in the algorithm, which determines the manner in which it is implemented in hardware form.

Numerical properties: when an algorithm is implemented numerically, quantization errors lead to inaccuracies. These errors arise from the analog-to-digital conversion of the input data and from the digital representation of internal calculations, and they can cause serious design problems. The two basic concerns are numerical stability and numerical accuracy. The former is an inherent characteristic of the adaptive filter algorithm, whereas numerical accuracy is determined by the number of bits used to represent the data samples and filter coefficients. An adaptive filter algorithm is said to be numerically robust when its digital implementation is insensitive to variations in the word length used.

These factors, in their own ways, also enter into the design of nonlinear adaptive filters, except that we no longer have a well-defined frame of reference in the form of the Wiener filter.

(Adaptive Filter Theory, 3rd edition)

The diagram below explains the general setup of an adaptive- filtering environment.



Fig: General adaptive-filter configuration

n - iteration number

x(n) - input signal

y(n) - adaptive filter output signal

d(n) - desired signal

e(n) - error signal

The error signal is calculated as the difference between the desired signal d(n) and the adaptive filter output y(n). The adaptation algorithm uses an objective (performance) function formed from the error signal e(n) to determine the appropriate updating of the filter coefficients. Minimization of the objective function implies that the adaptive filter output matches the desired signal in some sense. The complete specification of an adaptive system consists of three items.

1) Application: the type of application is defined by the choice of the signals acquired from the environment to be the input and desired-output signals. Examples in which adaptive techniques are used successfully include signal enhancement, system identification, noise cancelling and control.

2) Adaptive filter structure: the adaptive filter can have a variety of structures, and the choice of structure affects the computational complexity of the process. The two important types of adaptive filters are IIR and FIR.

3) Algorithm: the algorithm adjusts the adaptive filter coefficients so as to minimize a prescribed criterion. The choice of algorithm determines several crucial aspects, such as whether the optimal solution is biased and what the computational complexity is. (Adaptive Filtering: Algorithms and Practical Implementation)

FIR Adaptive Filters: (Statistical Digital Signal Processing and Modeling)

In contrast to IIR (recursive) adaptive filters, FIR (non-recursive) filters are routinely used in adaptive filtering applications ranging from adaptive equalizers in digital communication systems to adaptive noise control systems. There are several reasons for the popularity of FIR adaptive filters. First, stability is easily controlled by ensuring that the filter coefficients are bounded. Second, there are simple and efficient algorithms for adjusting the filter coefficients. Third, the performance of these algorithms is well understood in terms of convergence and stability. Finally, FIR adaptive filters very often perform well enough to satisfy the design criteria.

An FIR adaptive filter estimates a desired signal d(n) from a related signal x(n), as illustrated in the figure, by forming the output

y(n) = Σ(k=0 to p) wn(k) x(n−k) = Wnᵀ X(n)

FIGURE: A direct form FIR adaptive filter

Here it is assumed that x(n) and d(n) are nonstationary random processes, and the goal is to find the coefficient vector Wn at time n that minimizes the mean-square error

Ԑ(n) = E{|e(n)|²}, where e(n) = d(n) − y(n) = d(n) − Wnᵀ X(n)

As in the derivation of the FIR Wiener filter, the solution to this minimization problem may be found by setting the derivative of Ԑ(n) with respect to Wn*(k) equal to zero for k = 0, 1, ..., p. The result is

E{e(n) x*(n−k)} = 0, k = 0, 1, ..., p

Substituting the expression for e(n) into this condition, we have

E{[d(n) − Σ(l=0 to p) wn(l) x(n−l)] x*(n−k)} = 0

which, after rearranging terms, becomes

Σ(l=0 to p) wn(l) E{x(n−l) x*(n−k)} = E{d(n) x*(n−k)}, k = 0, 1, ..., p

This is a set of p+1 linear equations in the p+1 unknowns wn(l).

The steepest descent adaptive filter:

In designing an FIR adaptive filter, the goal is to find the vector Wn at time n that minimizes the quadratic function

Ԑ(n) = E{|e(n)|²}

Although the vector that minimizes Ԑ(n) may be found by setting its derivatives with respect to Wn*(k) equal to zero, another approach is to search for the solution using the method of steepest descent. The basic idea of this method is as follows. Let Wn be an estimate of the vector that minimizes the mean-square error Ԑ(n) at time n. At time n+1 a new estimate is formed by adding a correction to Wn that is designed to bring Wn closer to the desired solution. The correction involves taking a step of size µ in the direction of maximum descent down the quadratic error surface. For example, shown in figure (a) is a three-dimensional plot of a quadratic function of two real-valued coefficients, w(0) and w(1), given by


Note that the contours of constant error, when projected onto the (w(0), w(1)) plane, form a set of concentric ellipses. The direction of steepest descent is given by the gradient, the vector of partial derivatives of Ԑ with respect to the coefficients w(k).

FIGURE: (a) A quadratic function of two weights and (b) the contours of constant error. The gradient vector, which points in the direction of maximum increase in Ԑ, is orthogonal to the line that is tangent to the contour, as illustrated in (b).

For the quadratic function in eq. (9.12), the gradient vector is

As shown in figure (b), for any vector w, the gradient is orthogonal to the line that is tangent to the contour of constant error at w. Since the gradient points in the direction of maximum increase in Ԑ, the direction of steepest descent is the negative gradient direction. Thus the update equation for Wn is

Wn+1 = Wn − µ ∇Ԑ(n)

The steepest descent algorithm may be summarized as follows.

1. Initialize the steepest descent algorithm with an initial estimate, W0, of the optimum weight vector w.

2. Evaluate the gradient of Ԑ at the current estimate, Wn, of the optimum weight vector.

3. Update the estimate at time n by adding a correction formed by taking a step of size µ in the negative gradient direction.

4. Go back to step 2 and repeat the process.
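The four steps can be sketched numerically when the statistics are known. In the toy example below the correlation matrix R and cross-correlation vector r are simply assumed (invented numbers), and the iteration converges to the Wiener solution R⁻¹r:

```python
import numpy as np

# Assumed (invented) second-order statistics of the input and desired signal
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])          # autocorrelation matrix of the tap inputs
r = np.array([0.7, 0.3])            # cross-correlation E{d(n) x(n)}
w_opt = np.linalg.solve(R, r)       # optimum Wiener solution, for comparison

mu = 0.1                            # step size
w = np.zeros(2)                     # step 1: initial estimate W0
for _ in range(500):
    grad = R @ w - r                # step 2: gradient (constant factor absorbed into mu)
    w = w - mu * grad               # step 3: step in the negative gradient direction
                                    # step 4: the loop repeats the process
```

Convergence requires µ to be small relative to the largest eigenvalue of R; here the eigenvalues are 1.5 and 0.5, so µ = 0.1 converges comfortably.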

Let us evaluate the gradient vector. Assuming that W is complex, the gradient is the derivative of E{|e(n)|²} with respect to W*. With

Ԑ(n) = E{|e(n)|²} and e(n) = d(n) − Wᵀ X(n)

it follows that

∇Ԑ(n) = −E{e(n) X*(n)}

Thus, with a step size of µ, the steepest descent algorithm becomes

Wn+1 = Wn + µ E{e(n) X*(n)}
The main intention of the correction procedures is to achieve the best estimate of the signal. The correction procedures discussed earlier did not use optimum digital filters, and they assume that the seismic signal is stationary, which is clearly not the case. Other methods account for the nonstationarity and can give better estimates of the seismic signal; least-squares adaptive techniques are among them. Recursive least squares (RLS) minimizes a least-squares error, while the LMS algorithm minimizes the mean-square error. Minimization with respect to the mean-square error produces the same set of filter coefficients for all sequences that have the same statistics, since the coefficients are derived from an ensemble average of the data rather than the data itself. Least-squares minimization, in contrast, minimizes the squared error with an explicit dependence on the data values themselves, which means that different sets of signal data yield different filter coefficients even if the statistics of the data sequences are the same. For nonstationary seismic events the RLS adaptive predictor is best suited. [7]
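A minimal sketch of an RLS filter is given below (exponentially weighted, with an invented forgetting factor, applied to a synthetic system-identification task rather than real seismic data):

```python
import numpy as np

def rls(x, d, p, lam=0.99, delta=100.0):
    """Exponentially weighted RLS for a pth-order FIR filter (real-valued data)."""
    w = np.zeros(p + 1)
    P = delta * np.eye(p + 1)       # estimate of the inverse correlation matrix
    e = np.zeros(len(x))
    for n in range(p, len(x)):
        xn = x[n - p:n + 1][::-1]   # tap-input vector [x(n), ..., x(n-p)]
        e[n] = d[n] - w @ xn        # a priori error
        k = P @ xn / (lam + xn @ P @ xn)   # gain vector
        w = w + k * e[n]            # coefficient update
        P = (P - np.outer(k, xn @ P)) / lam
    return w, e

# Identify an (invented) unknown FIR system from its input and output
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
w_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, w_true)[:len(x)]
w, e = rls(x, d, p=2)
```

Because the coefficients depend on the data values themselves, two records with the same statistics generally yield different coefficient trajectories, which is exactly the data dependence described above.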

Stochastic gradient approach:

In implementing linear adaptive filters, the transversal filter can be used as the structural basis. For stationary inputs the cost function, also called the index of performance, is defined as the mean-squared error. This cost function is precisely a second-order function of the tap weights of the transversal filter, so the dependence of the mean-squared error on the unknown tap weights may be viewed as a multidimensional paraboloid with a uniquely defined bottom or minimum point. This paraboloid is the error-performance surface, and the tap weights corresponding to its minimum point define the optimum Wiener solution. [9]

To develop a recursive algorithm for updating the tap weights of the adaptive transversal filter, we proceed in two stages. First, the system of Wiener-Hopf equations is modified using the method of steepest descent, a well-known technique in optimization theory. This modification requires the gradient vector, whose value depends on two parameters:

the correlation matrix of the tap inputs in the transversal filter, and

the cross-correlation vector between the tap inputs and the desired response.

Second, instantaneous values of these correlations are used to derive an estimate of the gradient vector, which therefore assumes a stochastic character in general. [9]

The algorithm resulting from this process is known as the LMS algorithm. For the case of a filter operating on real-valued data, its essence can be described as

updated value of tap-weight vector = old value of tap-weight vector + learning-rate parameter × tap-input vector × error signal

where the error signal is defined as the difference between some desired response and the actual response of the transversal filter produced by the tap-input vector.

Under the right conditions the LMS algorithm is simple and yet capable of achieving satisfactory performance. Its major limitations are a relatively slow rate of convergence and sensitivity to variations in the condition number of the correlation matrix of the tap inputs (the condition number of a Hermitian matrix is defined as the ratio of its largest eigenvalue to its smallest). Nevertheless the LMS algorithm is highly popular and is used in a wide variety of applications.

In a nonstationary environment, the orientation of the error-performance surface varies continuously with time, and the LMS algorithm has the added task of continually tracking the bottom of the surface. Tracking will indeed occur, provided that the input data vary slowly compared with the learning rate of the LMS algorithm.

The stochastic gradient approach can also be pursued in the context of a lattice structure; the resulting adaptive algorithm is the gradient adaptive lattice (GAL) algorithm. The LMS and GAL algorithms work in their own different ways, but both are members of the stochastic gradient family of linear adaptive filters, with LMS by far the most popular member of the family.

LMS algorithm:

The steepest descent adaptive filter has a weight-vector update equation given by

Wn+1 = Wn + µ E{e(n) X*(n)}

A practical limitation of this algorithm is that the expectation E{e(n) X*(n)} is generally unknown. Therefore it must be replaced with an estimate, such as the sample mean

Ê{e(n) X*(n)} = (1/L) Σ(l=0 to L−1) e(n−l) X*(n−l)

Incorporating this estimate into the steepest descent algorithm, the update for Wn becomes

Wn+1 = Wn + (µ/L) Σ(l=0 to L−1) e(n−l) X*(n−l)

A special case of this update occurs if we use a one-point sample mean (L = 1). In this case the weight-vector update equation assumes a particularly simple form, known as the LMS algorithm:

Wn+1 = Wn + µ e(n) X*(n)

The simplicity of the algorithm comes from the fact that the update for the kth coefficient,

wn+1(k) = wn(k) + µ e(n) x*(n−k)

requires only one multiplication and one addition (the value µe(n) need only be computed once and may be used for all of the coefficients). Therefore, an LMS adaptive filter having p+1 coefficients requires p+1 multiplications and p+1 additions to update the filter coefficients. In addition, one addition is necessary to compute the error e(n) = d(n) − y(n), and one multiplication is needed to form the product µe(n). Finally, p+1 multiplications and p additions are necessary to calculate the output, y(n), of the adaptive filter. Thus a total of 2p+3 multiplications and 2p+2 additions per output point are required. The complete LMS algorithm is summarized in the table below.

TABLE: The LMS algorithm for a pth-order FIR adaptive filter

Parameters: p = filter order

µ = step size

Initialization: W0 = 0

Computation: For n = 0, 1, 2, ...

y(n) = Wnᵀ X(n)

e(n) = d(n) − y(n)

Wn+1 = Wn + µ e(n) X*(n)
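Putting the table together, a complete LMS filter can be sketched as follows (a synthetic system-identification task with invented coefficients; µ is chosen small enough for stability with a unit-variance input):

```python
import numpy as np

def lms(x, d, p, mu):
    """pth-order FIR LMS adaptive filter (real-valued data)."""
    w = np.zeros(p + 1)             # initialization: W0 = 0
    e = np.zeros(len(x))
    for n in range(p, len(x)):
        xn = x[n - p:n + 1][::-1]   # tap-input vector [x(n), ..., x(n-p)]
        y = w @ xn                  # y(n) = Wn^T X(n)
        e[n] = d[n] - y             # e(n) = d(n) - y(n)
        w = w + mu * e[n] * xn      # Wn+1 = Wn + mu e(n) X(n)
    return w, e

# Identify an (invented) unknown FIR system from its input and output
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, w_true)[:len(x)]
w, e = lms(x, d, p=2, mu=0.01)
```

With noiseless data the coefficients converge to w_true and the error decays toward zero; with noisy data a steady-state misadjustment proportional to µ would remain, as discussed earlier.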