Interpolating Re Sampling Computer Science Essay


Accelerogram data, recorded while the ground vibrates during an earthquake, is almost always sampled unevenly, and this is unavoidable. Interpolation is therefore the first step in the correction procedure, producing equally spaced points. Earlier correction procedures segmented the data and recombined it using a standard overlap-add method. With today's more powerful processing systems there is no need for segmentation, and long records sampled at high rates are readily processed. The UEL correction method uses zero-padding to bypass segmentation and its related processing.
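As a sketch of this first step, unevenly spaced samples can be mapped onto an even time grid by linear interpolation. The sample instants and values below are hypothetical, and production correction software would typically use a higher-order interpolant:

```python
# Resample unevenly spaced accelerogram samples onto an even time grid
# by linear interpolation. The data below is made up for illustration.

def resample_even(times, values, dt):
    """Linearly interpolate (times, values) onto an even grid of step dt."""
    out_t, out_v = [], []
    i = 0          # index of the segment [times[i], times[i+1]] containing t
    j = 0          # index of the output sample
    t = times[0]
    while t <= times[-1] + 1e-12:
        # advance to the segment that contains t
        while i < len(times) - 2 and times[i + 1] < t:
            i += 1
        t0, t1 = times[i], times[i + 1]
        frac = (t - t0) / (t1 - t0)
        out_t.append(t)
        out_v.append(values[i] + frac * (values[i + 1] - values[i]))
        j += 1
        t = times[0] + j * dt
    return out_t, out_v

times = [0.0, 0.011, 0.019, 0.032, 0.04]   # uneven sample instants (s)
values = [0.0, 0.2, 0.5, 0.3, 0.1]         # accelerations (arbitrary units)
t_even, v_even = resample_even(times, values, 0.01)
```

The even grid step of 0.01 s is an arbitrary choice for the sketch; in practice it would be matched to the target sampling rate of the correction procedure.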

2.2 Baseline error correction and Detrending:

The SMA-1 and SSA-1 are designed so that recording starts only when the motion exceeds a small threshold acceleration of 0.01g. Although the intention is to save running cost, any vibration before triggering is lost, so data at the start and end of the earthquake record are missing. This lost data introduces linear and quadratic errors. A baseline error correction method is used to correct these errors, and a detrending algorithm is applied.

In the FFT processing, the linear trend is removed by detrending the vector. This baseline correction (cf. Matlab's detrend) is performed by subtracting a least-squares regression line from the accelerogram.
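A minimal pure-Python sketch of this least-squares detrending is shown below; the input record is a synthetic linear drift, not real accelerogram data:

```python
# Baseline correction by removing a least-squares regression line,
# analogous to Matlab's detrend(). The record below is synthetic.

def detrend(y):
    """Subtract the least-squares straight-line fit from y."""
    n = len(y)
    xs = range(n)
    sx = sum(xs)
    sy = sum(y)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * v for x, v in zip(xs, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return [v - (slope * x + intercept) for x, v in zip(xs, y)]

record = [0.1 + 0.02 * i for i in range(10)]   # pure linear trend
corrected = detrend(record)                    # residuals should be ~0
```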

2.3 Instrument response:

Instruments such as the SMA-1 and the SSA-1 have their own dynamic response characteristics, which influence the motions they record. Instrument correction is therefore used to obtain a better estimate of the true ground motion. The instrument is modelled as a single-degree-of-freedom spring-mass-damper system, in which the spring has no damping or mass, and the instrument's dynamic parameters are evaluated; the instrument response is then decoupled from the actual ground motion using this model. The equation of motion of the modelled mass, in its standard form, is

x''(t) + 2ζω x'(t) + ω² x(t) = -z''(t)

where z''(t) is the ground acceleration,

ω - natural angular frequency

ζ - ratio of critical damping.

For frequencies well below ω, the approximate acceleration output of the instrument is -ω² x(t).

2.4 Filtering phase correction:

External vibrations from sources such as ocean waves, wind, traffic and piling introduce errors into the seismic data collected by accelerometers, in both the high and low frequency ranges. Estimates of the local noise are therefore useful, and they are included with any strong-motion data as an indication of the local signal-to-noise ratio. A typical accelerogram amplitude spectrum has its frequency content between fc (the corner frequency) and fmax (the cut-off frequency). fmax is hard to define and is characterised by considering near-site effects and source-mechanism effects; most of the time fmax is constant for a given geographical region. Finite impulse response (FIR) and infinite impulse response (IIR) convolvers are the digital filters used to impose the fmax cut-off, and several other methods and technologies are also used.
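As a hedged illustration of FIR filtering in general (not the specific filters used in accelerogram processing), a short moving-average convolver attenuates a high-frequency component; the 5-tap coefficients are illustrative, not a production design:

```python
# A minimal FIR filter applied by direct convolution, sketching how
# high-frequency content might be attenuated. The taps and the test
# signal are made up for the illustration.

def fir_filter(signal, taps):
    """Convolve signal with FIR taps (causal, zero initial conditions)."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

taps = [0.2] * 5                 # simple 5-tap moving average (low pass)
signal = [1.0, -1.0] * 8         # alternating samples: highest frequency
smoothed = fir_filter(signal, taps)
# after the filter fills up, the alternating input is strongly attenuated
```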


Seismology is the study of earthquakes and the related vibrations within the earth. Experts use seismometers to record the waves caused by earthquakes. A recording of ground motion as a function of time is a seismogram; seismograms provide the basic data that seismologists use to study the different waves as they spread out after an earthquake. The size of an earthquake is measured from the amplitude of the motion recorded on seismograms and is given in terms of magnitude and moment.

Seismology has made tremendous progress over the past 20 years, mainly because of the advent of new computing technology and improvements in data acquisition systems, which are capable of recording digital and analog ground motion over a frequency range spanning five orders of magnitude. This technological advancement has helped seismologists record and measure with great precision and sophistication; computational analysis of the collected data is now at an advanced stage, and elaborate theoretical models have been devised to interpret it.

When an earthquake fault ruptures, it causes two types of deformation: static and dynamic. A permanent displacement of the ground due to the event is referred to as static deformation. The earthquake cycle progresses from a fault that is not under stress, to a stressed fault as the plate-tectonic motions driving the fault slowly proceed, to rupture during an earthquake, and finally to a newly relaxed but deformed state. The second type of deformation, dynamic motion, is essentially sound waves radiated from the earthquake as it ruptures. While most of the plate-tectonic energy driving fault ruptures is taken up by static deformation, up to 10% may dissipate immediately in the form of seismic waves.

Seismic waves come in different kinds, which move in different ways. The major types of seismic waves are body waves and surface waves.
Surface waves move only along the surface of the planet, whereas body waves can travel through the inner layers of the earth. Body waves are divided into two types: compressional waves, also known as P (primary) waves, and S (secondary) waves.


Primary waves are the fastest kind of wave and the first to arrive at a seismic station. They travel through both solid rock and fluids, pushing the rock they travel through much as sound waves push air; scientists believe that animals can hear the P waves of an earthquake. In a P wave, the particles move in the same direction that the wave, and hence the energy, is travelling. The diagram below illustrates a P wave travelling through a medium.


The secondary wave in an earthquake is slower and can move only through solid, not liquid, media. It is this property of the S wave that led seismologists to conclude that the earth's outer core is liquid. S waves move rock particles up and down, or side to side, perpendicular to the direction in which the wave is travelling. The diagram below illustrates the S wave motion.


Surface waves are low-frequency waves that travel only through the crust. They arrive after the body waves, but they are mainly responsible for the damage and destruction caused by earthquakes.


In deeper earthquakes the strength of the surface waves, and hence the damage they cause, is reduced. The first kind of surface wave is the Love wave, named after A.E.H. Love, a British mathematician. Confined to the surface of the crust, Love waves produce entirely horizontal motion.


The other kind of surface wave is the Rayleigh wave, named after Lord Rayleigh. This wave rolls along the ground just as a wave rolls across a lake or an ocean; as it rolls, it moves the ground up and down and side to side in the direction the wave is moving. Most of the shaking felt during earthquakes is due to the Rayleigh wave.

P waves and S waves indirectly allow scientists to study the internal structure of the earth. Because of their different speeds and the different materials they travel through, they also make it possible to determine the location of an earthquake.

Sensitive seismographs are the principal tool of scientists who study earthquakes, and thousands of seismograph stations are in operation today; such instruments have even been installed on the Moon, Mars and Venus. A simple seismograph resembles a pendulum. Whenever the ground shakes during an earthquake, the base and frame of the instrument move with it, but inertia keeps the pendulum bob in place. The bob then moves relative to the shaking ground, and the pendulum displacements are recorded as they change with time, tracing out a record called a seismogram. Each seismograph station has three pendulums, sensitive to north-south, east-west and vertical motions of the ground. The resulting seismograms allow scientists to estimate the distance, direction, Richter magnitude and type of faulting of the earthquake, and a network of seismograph stations allows scientists to determine the location of the earthquake. (Theoretical Global Seismology)

ADAPTIVE FILTERING: (Adaptive Filter Theory, third edition)

The term filter is often used to describe a device, in the form of a piece of physical hardware or software, that is applied to a set of noisy data in order to extract information about a prescribed quantity of interest. The noise may arise from a variety of sources: for example, the data may have been gathered by means of noisy sensors, or may represent a useful signal component that has been corrupted by transmission through a communication channel. In any event, we may use a filter to perform three basic information-processing tasks:

Filtering: the extraction of information about a quantity of interest at time t by using data measured up to and including time t.

Smoothing: this differs from filtering in that information about the quantity of interest need not be available at time t, and data measured later than time t can be used in obtaining it. Smoothing thus involves a delay in producing the result of interest, but it can be more accurate than filtering.

Prediction: the forecasting side of information processing. The aim here is to derive information about what the quantity of interest will be like at some time t + τ in the future, for some τ > 0, by using data measured up to and including time t.
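A toy sketch of the three tasks, using simple averages and a linear extrapolation; the data and the estimator choices are arbitrary and only meant to show which samples each task is allowed to use:

```python
# Filtering, smoothing and prediction on a toy sampled signal.
# The estimators (running mean, global mean, linear extrapolation)
# are made-up stand-ins chosen to keep the sketch tiny.

data = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]

def filter_est(x, t):
    """Filtering: use only samples up to and including time t."""
    return sum(x[: t + 1]) / (t + 1)

def smooth_est(x, t):
    """Smoothing: samples measured after time t may also be used."""
    return sum(x) / len(x)

def predict_est(x, t, tau):
    """Prediction: extrapolate to t + tau from samples up to t."""
    slope = x[t] - x[t - 1]
    return x[t] + tau * slope

f_est = filter_est(data, 3)       # only x[0..3] available
s_est = smooth_est(data, 3)       # all samples available (delayed answer)
p_est = predict_est(data, 3, 2)   # forecast two steps ahead of t = 3
```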

Filters can be classified into linear and nonlinear filters. A filter is linear if the filtered, smoothed or predicted quantity at the output of the device is a linear function of the observations applied to the filter input; otherwise the filter is nonlinear.

The design of a Wiener filter requires a priori information about the statistics of the data to be processed, and the filter is optimum only when the statistical characteristics of the input data match the a priori information on which the design of the filter is based. When this information is not completely known, it may not be possible to design the Wiener filter, or the design may no longer be optimum. A straightforward approach in such situations is the "estimate and plug" procedure. This is a two-stage process whereby the filter first "estimates" the statistical parameters of the relevant signals and then "plugs" the results into a non-recursive formula for computing the filter parameters. For real-time operation, this procedure has the disadvantage of requiring excessively elaborate and costly hardware. A more efficient method is to use an adaptive filter. By such a device we mean one that is self-designing, in that the adaptive filter relies for its operation on a recursive algorithm, which makes it possible for the filter to perform satisfactorily in an environment where complete knowledge of the relevant signal characteristics is not available. The algorithm starts from some predetermined set of initial conditions, representing whatever is known about the environment. In a stationary environment, after successive iterations the algorithm converges to the optimum Wiener solution in some statistical sense. In a nonstationary environment, the algorithm offers a tracking capability: it can track time variations in the statistics of the input data, provided that the variations are sufficiently slow. As a direct consequence of the application of a recursive algorithm, whereby the parameters of an adaptive filter are updated from one iteration to the next, the parameters become data dependent.

This means that an adaptive filter is in reality a nonlinear device, in the sense that it does not obey the principle of superposition. Notwithstanding this property, adaptive filters are commonly classified as linear or nonlinear. An adaptive filter is said to be linear if the estimate of the quantity of interest is computed adaptively as a linear combination of the available set of observations applied to the filter input; otherwise, the adaptive filter is said to be nonlinear.

A wide variety of recursive algorithms have been developed in the literature for the operation of linear adaptive filters. In the final analysis, the choice of one algorithm over another is determined by one or more of the following factors.

Rate of convergence: the number of iterations required for the algorithm, in response to stationary inputs, to converge "close enough" to the optimum Wiener solution in the mean-square sense. A fast rate of convergence allows the algorithm to adapt rapidly to a stationary environment of unknown statistics.

Misadjustment: for an algorithm of interest, this parameter provides a quantitative measure of the amount by which the final value of the mean-squared error, averaged over an ensemble of adaptive filters, deviates from the minimum mean-squared error produced by the Wiener filter.

Tracking: when an adaptive filtering algorithm operates in a nonstationary environment, it is required to track statistical variations in the environment. The tracking performance of the algorithm, however, is influenced by two contradictory features: (a) rate of convergence, and (b) steady-state fluctuation due to algorithm noise.

Robustness: for an adaptive filter to be robust, small disturbances can only result in small estimation errors. The disturbances may arise from a variety of factors, internal or external to the filter.

Computational requirements: the issues of concern here include (a) the number of operations required to make one complete iteration of the algorithm, (b) the size of the memory locations required to store the data and the program, and (c) the investment required to program the algorithm on a computer.

Structure: this refers to the structure of information flow in the algorithm, which determines the manner in which it is implemented in hardware form.

Numerical properties: when an algorithm is implemented numerically, inaccuracies are produced by quantization errors, which are due to the analog-to-digital conversion of the input data and the digital representation of internal calculations. Ordinarily, it is the latter source of quantization error that poses a serious design problem. In particular, there are two basic issues of concern: numerical stability and numerical accuracy. Numerical stability is an inherent characteristic of an adaptive filter algorithm. Numerical accuracy, on the other hand, is determined by the number of bits used in the numerical representation of data samples and filter coefficients. An adaptive filtering algorithm is said to be numerically robust when it is insensitive to variations in the word length used in its digital implementation.

These factors, in their own ways, also enter into the design of nonlinear adaptive filters, except that we no longer have a well-defined frame of reference in the form of the Wiener filter.

(Adaptive Filter Theory, 3rd edition)

The diagram below explains the general setup of an adaptive-filtering environment: the input signal x(n) is applied to the adaptive filter, producing the output y(n); the error signal e(n) drives the algorithm that updates the filter coefficients.

[Figure omitted in this copy]

Fig: General adaptive-filter configuration

n is the iteration number

x(n) is the input signal

y(n) is the adaptive filter output signal

d(n) is the desired signal

e(n) is the error signal

The error signal is calculated as the difference between the desired signal d(n) and the adaptive filter output y(n), i.e. e(n) = d(n) - y(n). The adaptation algorithm uses a performance (objective) function formed from the error signal e(n) to determine the appropriate update of the filter coefficients. Minimizing this objective function implies that the adaptive filter output is matching the desired signal in some sense. Three items constitute the complete specification of an adaptive system:

1) Application: the choice of the signals acquired from the environment to be the input and output signals defines the type of application. Examples in which adaptive techniques are being successfully used include signal enhancement, system identification, noise cancelling and control.

2) Adaptive filter structure:

Adaptive filters can have different structures, and the choice of structure influences the computational complexity of the process. There are two major types of adaptive filter: IIR and FIR.

3) Algorithm: the adaptive filter coefficients are adjusted by an algorithm so as to minimize a prescribed criterion. The choice of algorithm determines several crucial aspects, such as whether a biased optimal solution is obtained and the computational complexity. (Adaptive Filtering: Algorithms and Practical Implementation)

FIR adaptive filters: (Statistical Digital Signal Processing and Modelling)

In contrast to IIR (recursive) adaptive filters, FIR (non-recursive) filters are routinely used in adaptive filtering applications that range from adaptive equalizers in digital communication systems to adaptive noise control systems. There are several reasons for the popularity of FIR adaptive filters. First, stability is easily controlled by ensuring that the filter coefficients are bounded. Second, there are simple and efficient algorithms for adjusting the filter coefficients. Third, the performance of these algorithms is well understood in terms of their convergence and stability. Finally, FIR adaptive filters very often perform well enough to satisfy the design criteria.

An FIR adaptive filter for estimating a desired signal d(n) from a related signal x(n), as illustrated in the figure, forms the output

y(n) = Σ_{l=0}^{p} w_n(l) x(n - l)     (1)

FIGURE: A direct-form FIR adaptive filter

Here it is assumed that x(n) and d(n) are nonstationary random processes, and the goal is to find the coefficient vector w_n at time n that minimizes the mean-square error

ξ(n) = E{ |e(n)|² }     (2)

where

e(n) = d(n) - y(n)     (3)

As in the derivation of the FIR Wiener filter, the solution to this minimization problem may be found by setting the derivative of ξ(n) with respect to w_n*(k) equal to zero for k = 0, 1, ..., p. The result is

E{ e(n) x*(n - k) } = 0,   k = 0, 1, ..., p     (4)

Substituting equations (1) and (3) into equation (4), we have

E{ [ d(n) - Σ_{l=0}^{p} w_n(l) x(n - l) ] x*(n - k) } = 0     (5)

which, after rearranging terms, becomes

Σ_{l=0}^{p} w_n(l) E{ x(n - l) x*(n - k) } = E{ d(n) x*(n - k) },   k = 0, 1, ..., p     (6)

Equation (6) is a set of p + 1 linear equations in the p + 1 unknowns w_n(l).
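As a sketch, this set of p + 1 linear equations can be estimated from sample averages and solved directly. The signals here are synthetic and real-valued, the correlations are crude time averages rather than ensemble statistics, and the small Gaussian-elimination solver is only for illustration:

```python
# Estimate the (p+1) normal equations from sample averages for a
# real-valued process and solve them by Gaussian elimination.
# The signals are synthetic: d(n) is a known scaling of x(n).

def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def normal_equations(x, d, p):
    """Sample autocorrelation matrix R and cross-correlation vector r."""
    N = len(x)
    R = [[sum(x[n - l] * x[n - k] for n in range(p, N)) for l in range(p + 1)]
         for k in range(p + 1)]
    r = [sum(d[n] * x[n - k] for n in range(p, N)) for k in range(p + 1)]
    return R, r

x = [1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0, 0.0] * 4
d = [0.5 * xi for xi in x]          # desired signal: a simple scaling of x
R, r = normal_equations(x, d, 1)    # p = 1, so two equations
w = solve(R, r)                     # expect roughly w = [0.5, 0.0]
```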

The steepest descent adaptive filter:

In designing an FIR adaptive filter, the goal is to find the vector w_n at time n that minimizes the quadratic function

ξ(n) = E{ |e(n)|² }
Although the vector that minimizes ξ(n) may be found by setting the derivatives of ξ(n) with respect to w_n*(k) equal to zero, another approach is to search for the solution using the method of steepest descent. The method of steepest descent is an iterative procedure that has been used to find extrema of nonlinear functions since before the time of Newton. The basic idea of this method is as follows. Let w_n be an estimate of the vector that minimizes the mean-square error at time n. At time n + 1 a new estimate is formed by adding a correction to w_n that is designed to bring w_n closer to the desired solution. The correction involves taking a step of size µ in the direction of maximum descent down the quadratic error surface. For example, shown in figure (a) is a three-dimensional plot of a quadratic function of two real-valued coefficients, w(0) and w(1).


Note that the contours of constant error, when projected onto the (w(0), w(1)) plane, form a set of concentric ellipses. The direction of steepest descent at any point in the plane is the direction a marble would take if it were placed on the inside of this quadratic bowl. Mathematically, this direction is given by the gradient, which is the vector of partial derivatives of ξ with respect to the coefficients w(k).

FIGURE: (a) A quadratic function of two weights and (b) the contours of constant error. The gradient vector, which points in the direction of maximum increase in ξ, is orthogonal to the line that is tangent to the contour, as illustrated in (b).

For a quadratic function such as ξ(n), the gradient vector is the vector of partial derivatives of ξ(n) with respect to the weights. As shown in figure (b), for any vector w the gradient is orthogonal to the line that is tangent to the contour of constant error at w. Since the gradient vector points in the direction of maximum increase, the direction of steepest descent points in the negative gradient direction. Thus the update equation for w_n is

w_{n+1} = w_n - µ ∇ξ(n)
The step size µ affects the rate at which the weight vector moves down the quadratic surface and must be a positive number (a negative value for µ would move the weight vector up the quadratic surface, in the direction of maximum ascent, and would result in an increase in the error). For very small values of µ, the correction to w_n is small and the movement down the quadratic surface is slow; as µ is increased, the rate of descent increases. However, there is an upper limit on how large the step size may be: for values of µ that exceed this limit, the trajectory of w_n becomes unstable and unbounded. The steepest descent algorithm may be summarized as follows.

1. Initialize the steepest descent algorithm with an initial estimate, w_0, of the optimum weight vector.

2. Evaluate the gradient of ξ(n) at the current estimate, w_n, of the optimum weight vector.

3. Update the estimate at time n + 1 by adding a correction that is formed by taking a step of size µ in the negative gradient direction.

4. Go back to step (2) and repeat the process.
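The loop above can be sketched on a made-up two-weight quadratic error surface; the surface J(w) = (w0 - 1)² + 4(w1 + 2)², with its minimum at (1, -2), and the step size are arbitrary choices for the illustration:

```python
# Steepest descent on a hypothetical two-weight quadratic error surface
# J(w) = (w0 - 1)^2 + 4*(w1 + 2)^2, whose minimum is at w = (1, -2).

def gradient(w):
    """Partial derivatives of J with respect to w0 and w1."""
    return [2.0 * (w[0] - 1.0), 8.0 * (w[1] + 2.0)]

mu = 0.1            # step size: small enough here to keep the iteration stable
w = [0.0, 0.0]      # step 1: initial estimate w_0
for _ in range(200):
    g = gradient(w)                                # step 2: evaluate gradient
    w = [w[0] - mu * g[0], w[1] - mu * g[1]]       # step 3: negative-gradient step
# after repeating (step 4), w is close to the minimum (1, -2)
```

With µ = 0.1 the weight-error factors per iteration are 0.8 and 0.2, so both coordinates contract toward the minimum; a µ above the stability limit would instead make the iterates diverge.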

Let us now evaluate the gradient vector. Assuming that w is complex, the gradient is the derivative of ξ(n) with respect to w*. With

ξ(n) = E{ |e(n)|² } and e(n) = d(n) - w^T x(n)

it follows that

∇ξ(n) = -E{ e(n) x*(n) }

Thus, with a step size of µ, the steepest descent algorithm becomes

w_{n+1} = w_n + µ E{ e(n) x*(n) }
The main intention of the correction procedures is to achieve the best estimate of the signal. The correction procedures discussed earlier did not use optimum digital filters, and the techniques above assume that the seismic signal is stationary, which is clearly not the case. Other methods take the nonstationarity into account and can give better estimates of the seismic signal; least-squares adaptive techniques are among them. These adaptive techniques minimize either the recursive least-squares error (RLS) or the least-mean-square error (LMS). Minimizing the mean-square error produces the same set of filter coefficients for all sequences that share the same statistics, because the coefficients are derived from the ensemble average of the data rather than the data itself. The least-squares error, by contrast, minimizes the squared error with an explicit dependence on the data values themselves, which means that different sets of signal data yield different filter coefficients, even if the statistics of the data sequences are the same. For nonstationary seismic events, the RLS adaptive predictor is best suited. (A review of procedures used for the correction of seismic data - journal)

Stochastic gradient Approach:

A tapped-delay-line or transversal filter can be used as the structural basis for implementing the linear adaptive filter. For the case of stationary inputs, the cost function, also referred to as the index of performance, is defined as the mean-squared error. This cost function is precisely a second-order function of the tap weights in the transversal filter, so the dependence of the mean-squared error on the unknown tap weights may be viewed as a multidimensional paraboloid with a uniquely defined bottom, or minimum, point. This paraboloid is considered the error-performance surface, and the tap weights corresponding to the minimum point of the surface define the optimum Wiener solution.

There are two stages in developing a recursive algorithm for updating the tap weights of the adaptive transversal filter. First, the system of Wiener-Hopf equations (i.e. the matrix equation defining the optimum Wiener solution) is modified through the use of the method of steepest descent, a well-known technique in optimization theory. This modification requires the use of a gradient vector, whose value depends on two parameters: the correlation matrix of the tap inputs in the transversal filter, and the cross-correlation vector between the desired response and the same tap inputs. Second, instantaneous values are used for these correlations so as to derive an estimate of the gradient vector, making it stochastic in character in general. The resulting algorithm is widely known as the least-mean-square (LMS) algorithm, the essence of which, for the case of a filter operating on real-valued data, may be described as follows:

updated value of tap-weight vector = old value of tap-weight vector + learning-rate parameter × tap-input vector × error signal

where the error signal is defined as the difference between some desired response and the actual response of the transversal filter produced by the tap-input vector.

The LMS algorithm is simple and yet capable of achieving satisfactory performance under the right conditions. Its major limitations are a relatively slow rate of convergence and a sensitivity to variations in the condition number of the correlation matrix of the tap inputs; the condition number of a Hermitian matrix is defined as the ratio of its largest eigenvalue to its smallest eigenvalue. Nevertheless, the LMS algorithm is highly popular and widely used in a variety of applications.

In a nonstationary environment, the orientation of the error-performance surface varies continuously with time. In this case, the LMS algorithm has the added task of continually tracking the bottom of the error-performance surface. Tracking will indeed occur, provided that the input data vary slowly compared with the learning rate of the LMS algorithm.

The stochastic gradient approach may also be pursued in the context of a lattice structure; the resulting adaptive filtering algorithm is called the gradient adaptive lattice (GAL) algorithm. In their own individual ways, the LMS and GAL algorithms are just two members of the stochastic gradient family of linear adaptive filters, although it must be said that the LMS algorithm is by far the most popular member of this family. (Adaptive Filter Theory, 3rd edition)

LMS algorithm:

The steepest descent adaptive filter has a weight-vector update equation given by

w_{n+1} = w_n + µ E{ e(n) x*(n) }     (1)
A practical limitation of this algorithm is that the expectation E{e(n)x*(n)} is generally unknown. Therefore, it must be replaced with an estimate, such as the sample mean

Ê{ e(n) x*(n) } = (1/L) Σ_{l=0}^{L-1} e(n - l) x*(n - l)     (2)
Incorporating this estimate into the steepest descent algorithm, the update for w_n becomes

w_{n+1} = w_n + (µ/L) Σ_{l=0}^{L-1} e(n - l) x*(n - l)     (3)
A special case of equation (3) occurs if we use a one-point sample mean (L = 1),

Ê{ e(n) x*(n) } = e(n) x*(n)     (4)
In this case the weight-vector update equation assumes a particularly simple form and is known as the LMS algorithm:

w_{n+1} = w_n + µ e(n) x*(n)     (5)
The simplicity of the algorithm comes from the fact that the update for the kth coefficient,

w_{n+1}(k) = w_n(k) + µ e(n) x*(n - k)     (6)
requires only one multiplication and one addition (the value for µe(n) need only be computed once and may be used for all of the coefficients). Therefore, an LMS adaptive filter having p + 1 coefficients requires p + 1 multiplications and p + 1 additions to update the filter coefficients. In addition, one addition is necessary to compute the error e(n) = d(n) - y(n) and one multiplication is needed to form the product µe(n). Finally, p + 1 multiplications and p additions are necessary to calculate the output y(n) of the adaptive filter. Thus a total of 2p + 3 multiplications and 2p + 2 additions per output point are required. The complete LMS algorithm is summarized in the table below, and a Matlab program is given.

TABLE: The LMS algorithm for a pth-order FIR adaptive filter

Parameters: p = filter order

µ = step size

Initialization: w_0 = 0

Computation: For n = 0, 1, 2, ...

y(n) = w_n^T x(n)

e(n) = d(n) - y(n)

w_{n+1} = w_n + µ e(n) x*(n)

The LMS algorithm Matlab program:

function [A,E] = lms(x,d,mu,nord,a0)
% LMS adaptive filter; convm (from the accompanying toolbox) builds the
% matrix whose rows are the tap-input vectors.
X = convm(x,nord);
[M,N] = size(X);
if nargin < 5, a0 = zeros(1,N); end
a0 = a0(:).';
E(1) = d(1) - a0*X(1,:).';
A(1,:) = a0 + mu*E(1)*conj(X(1,:));
if M > 1
    for k = 2:M-nord+1
        E(k) = d(k) - A(k-1,:)*X(k,:).';
        A(k,:) = A(k-1,:) + mu*E(k)*conj(X(k,:));
    end
end

Although the above program is based on a very crude estimate of E{e(n)x*(n)}, the LMS adaptive filter often performs well enough to be used successfully in a number of applications.
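For readers working outside Matlab, a pure-Python sketch of the same LMS recursion is shown below for real-valued signals; the system-identification scenario is synthetic, with a made-up two-tap "unknown system" [1.0, 0.5]:

```python
# A pure-Python LMS adaptive filter paralleling the Matlab routine above,
# for real-valued signals. The "unknown system" being identified is the
# hypothetical FIR filter with taps [1.0, 0.5].

import random

def lms(x, d, mu, nord):
    """Return final weights and error history of an order-nord LMS filter."""
    w = [0.0] * nord
    errors = []
    for n in range(nord - 1, len(x)):
        tap = [x[n - k] for k in range(nord)]             # tap-input vector
        y = sum(wk * xk for wk, xk in zip(w, tap))        # filter output y(n)
        e = d[n] - y                                      # error e(n)
        w = [wk + mu * e * xk for wk, xk in zip(w, tap)]  # weight update
        errors.append(e)
    return w, errors

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [x[n] + 0.5 * x[n - 1] if n > 0 else x[n] for n in range(len(x))]
w, errors = lms(x, d, mu=0.05, nord=2)
# w should approach [1.0, 0.5] and the late errors should be near zero
```

Since the desired signal here contains no measurement noise, the weights settle essentially at the true taps; with noise present, the steady-state error would instead fluctuate at a level set by the step size µ.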