Digital Signal Processing (DSP) is the processing of signals by digital means. Here the term signal can represent a number of things. Historically, signal processing grew out of electrical engineering, where a signal meant an electrical signal carried by a telephone line, wire, radio wave or microwave. In a broader context, however, a signal is any stream of information, representing anything from experimental measurements to data from a satellite. A digital signal consists of a stream of numbers, usually in binary form, though other representations can also be used.
1.2 Analog and digital signals
In analog technology, a signal such as audio or video is converted into electronic pulses, whereas in digital technology the signal is broken down into binary format, with the data encoded as 0's and 1's. In some cases the signal is therefore already in analog form and in others in digital form. For digital signal processing, an analog signal must first be converted into a digital one. This conversion is done by an electronic circuit called an analog-to-digital converter, or ADC. When fed a continuous analog signal, the converter generates a digital output in the form of binary numbers that represent the input signal at each sampling instant.
1.3 Signal processing
Most signals need to be processed before they can be used reliably. For example, a signal may be contaminated with unwanted noise, which may be electrical or environmental in origin. The ECG of a patient sometimes contains an unwanted component called "mains interference" caused by the mains supply. This shifts the voltage levels, so the signal has to be processed before it can be used properly. Such processing can be carried out by various means. Digital filters can remove or reduce the unwanted noise components from a signal. Nowadays, almost all such filtering to improve signal quality is done by digital signal processing rather than analog electronics.
1.4 Sampling theorem
The sampling theorem states that an analog signal can be reconstructed exactly from its samples if the sampling rate is at least twice the highest frequency component of the signal. That is, if the signal contains frequencies up to 5 kHz, the sampling rate has to be at least 10 kHz. This minimum rate is termed the Nyquist rate.
Figure 1: Signal sampling representation
If the sampling theorem is not satisfied, i.e. if the sampling rate is less than the Nyquist rate, the signal is distorted. This is known as aliasing. In aliasing, components of the signal with frequencies above the Nyquist frequency appear after sampling as lower frequencies. This distorts the original signal, and the signal can no longer be reconstructed from its samples.
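A short Python sketch (rather than the MATLAB used later in this essay) makes the folding effect concrete; the helper function name is ours:

```python
def apparent_frequency(f_signal, f_sample):
    """Frequency a sampled sinusoid appears to have, folded into [0, fs/2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 3 kHz sine sampled at 10 kHz (above the 6 kHz Nyquist rate) keeps its
# identity; the same sine sampled at only 4 kHz aliases down to 1 kHz.
assert apparent_frequency(3000.0, 10000.0) == 3000.0
assert apparent_frequency(3000.0, 4000.0) == 1000.0
```

The second assertion shows exactly the distortion described above: once aliased, the 1 kHz samples are indistinguishable from those of a true 1 kHz sine, so reconstruction is impossible.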
1.5 Sampling and quantization
Sampling is the conversion of a continuous signal into discrete values. It can be visualized as multiplying the signal by an impulse train of unit amplitude; after the multiplication only discrete values remain, and the new signal has values only where an impulse was present. To use and manipulate this signal with microcontrollers and processors, the discrete values must then be quantized to a finite set of numbers, so the samples are rounded to a finite number of quantization levels. For example, if the levels are 0, 1, 2 and so on, a sample magnitude between 0 and 1 will be quantized to either 0 or 1. This process introduces an error into the new signal known as quantization noise.
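A minimal Python sketch of uniform quantization (the `quantize` helper and the ±1 V range are our assumptions) shows that the error never exceeds half a quantization step:

```python
import numpy as np

def quantize(signal, n_bits, vmin=-1.0, vmax=1.0):
    """Round each sample to the nearest of 2**n_bits uniformly spaced levels."""
    levels = 2 ** n_bits
    step = (vmax - vmin) / (levels - 1)
    codes = np.round((signal - vmin) / step)   # integer level index
    return vmin + codes * step                 # back to the signal's units

x = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 1000, endpoint=False))
xq = quantize(x, 8)

# the quantization noise is bounded by half a step
step = 2.0 / (2 ** 8 - 1)
assert np.max(np.abs(xq - x)) <= step / 2 + 1e-12
```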
1.6 ADC and DAC
Analog-to-digital converters carry out the sampling process described above. An ADC takes an analog signal as input and produces the corresponding digital signal as output. Analog-to-digital converters work on several different principles, namely:
Flash ADC
Successive approximation ADC
Sigma-delta ADC
The flash ADC is the fastest of all but is very expensive, and hence it is not usually used. The successive approximation ADC is the most widely used type, as it is relatively both cheap and fast.
After the signals have been processed by microcontrollers and processors, they have to be converted back into analog form. This is carried out by a DAC (digital-to-analog converter), which performs the reverse of the ADC's operation: it takes digital data as input and gives the corresponding analog signal as output. DACs are less expensive than analog-to-digital converters.
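The successive approximation principle is easy to sketch in Python: one bit is decided per comparison, from the most significant bit down. The function below is a hypothetical model (names and the ideal comparator are our assumptions), not a description of any particular chip:

```python
def sar_adc(v_in, n_bits, v_ref):
    """Successive-approximation ADC: decide one bit per comparison, MSB first."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        # internal DAC output for the trial code, compared with the input
        if trial * v_ref / (2 ** n_bits) <= v_in:
            code = trial      # keep the bit if the trial does not overshoot
    return code

# 8-bit converter with a 1 V reference: 0.5 V lands at mid-scale
assert sar_adc(0.5, 8, 1.0) == 128
assert sar_adc(0.0, 8, 1.0) == 0
```

An n-bit conversion thus needs only n comparisons, which is why this architecture is a good compromise between the one-comparison-per-level flash converter and slower designs.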
2. Classification of systems:
A system is a collection of individual elements bound together in a particular configuration. Systems can be categorized in the following ways:
2.1 Lumped and distributed parameter system:
A system is said to be a lumped parameter system if a disturbance originating at one point in the system propagates instantaneously to all points in the system. This assumption is valid if the system is physically small compared to the wavelength of the highest frequency present in the signal. Such systems can be modeled with ordinary differential equations.
In a distributed parameter system, a disturbance takes a finite amount of time to spread through the system. Such systems are therefore modeled by partial differential equations. In reality, all systems are distributed parameter systems to some extent.
Engineering systems use lumping very frequently. It is well justified in electronic circuits and systems, as the physical dimensions of the components are very small compared to the wavelength of the signal. However, lumped models must make allowance for distributed effects when applied to transmission lines, and in optical circuits travelling-wave phenomena have to be considered.
2.2 Time invariant and Time varying:
In a time invariant system the characteristics remain fixed with time, whereas in a time varying system they change with time. Equivalently, if the input of a time invariant system is shifted by some finite amount of time, the output is shifted by the same amount. Consider the two systems:
System 1: y(t) = a r(t) + d r^4(t)
System 2: y(t) = a t r(t) + d r^4(t)
If the input is shifted by z to r(t + z), the output of the first system becomes
y(t + z) = a r(t + z) + d r^4(t + z)
which is just the original output shifted by z; hence the first system is time invariant.
The output of the second system becomes
y(t + z) = a t r(t + z) + d r^4(t + z)
which is not simply the original output shifted in time; the system is therefore time varying, due to the multiplier t in the first term.
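The shift test can be carried out numerically. The Python sketch below (discrete-time, random test inputs, names our own) compares "shift the input, then apply the system" with "apply the system, then shift the output":

```python
import numpy as np

a, d = 2.0, 0.5
sys1 = lambda r, t: a * r + d * r**4        # time invariant
sys2 = lambda r, t: a * t * r + d * r**4    # time varying: explicit t in first term

def is_time_invariant(system, shift=3, n=40):
    rng = np.random.default_rng(1)
    r = rng.standard_normal(n)
    t = np.arange(n, dtype=float)
    # shift the input first, then apply the system
    y_of_shifted = system(np.roll(r, shift), t)[shift:]
    # apply the system first, then shift the output
    shifted_y = np.roll(system(r, t), shift)[shift:]
    return np.allclose(y_of_shifted, shifted_y)

assert is_time_invariant(sys1)
assert not is_time_invariant(sys2)
```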
2.3 Causal and Non-causal:
Causality is an intrinsic property of a system. In a causal system, also called a non-anticipating system, the output at any instant depends only on past and present inputs; it does not depend on future inputs. In a non-causal, or anticipating, system, the output at any instant depends not only on past and present inputs but also on future inputs. Consider:
y(t) = r(t - 1)
y(t) = r(t + 1)
In the first equation, the output at any time depends only on an input applied at an earlier time, so it describes a causal system.
In the second equation, the output depends on an input that occurs at a later time, so it describes a non-causal system.
2.4 Static and Dynamic:
A dynamic system is characterized by "memory": it remembers inputs received earlier. Static systems, in contrast, are memoryless and do not depend on previous inputs.
Hence, a causal system whose output depends on present as well as past inputs is a dynamic system.
2.5 Linear and Non linear system:
A linear system satisfies the following two conditions:
Homogeneity: if the input is scaled by some factor, the output is scaled by the same factor.
Consider input r(t) and output y(t). If
r1(t) = a r(t), then
y1(t) = a y(t).
Superposition: if the system is given the sum of two inputs, its output is the sum of the individual outputs corresponding to those inputs.
If r1(t) gives y1(t)
and r2(t) gives y2(t),
then r(t) = r1(t) + r2(t)
gives y(t) = y1(t) + y2(t).
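Both conditions can be checked numerically on random inputs. In the Python sketch below (systems and helper chosen by us for illustration), a pure scaling passes both tests while a squaring system fails them:

```python
import numpy as np

linear_sys = lambda r: 3.0 * r      # scaling: linear
nonlinear_sys = lambda r: r**2      # squaring: violates both conditions

def is_linear(system, n=32):
    rng = np.random.default_rng(2)
    r1, r2 = rng.standard_normal(n), rng.standard_normal(n)
    a = 2.5
    homogeneous = np.allclose(system(a * r1), a * system(r1))
    superposed = np.allclose(system(r1 + r2), system(r1) + system(r2))
    return homogeneous and superposed

assert is_linear(linear_sys)
assert not is_linear(nonlinear_sys)
```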
3. Z transform
A mathematical representation of the sampled signal is:
xs(t) = x(t) · Σ (k = 0 to ∞) δ(t − kT)
The above equation is equivalent to modulating a train of delta functions by the continuous analog signal. The delta function effectively sifts out the values of the signal at the times corresponding to the zeros of its argument. This process is sometimes referred to as "ideal" sampling, since it results in samples of zero width and a perfectly periodic spectrum.
Since the delta function makes x(t) contribute only at the times t = kT, the above is equal to:
xs(t) = Σ (k = 0 to ∞) x(kT) δ(t − kT)
Now, taking the Laplace transform of the sampled signal, using the integral definition and the sifting property of the delta function, gives:
X(s) = Σ (k = 0 to ∞) x(kT) e^(−skT)
Here the Laplace variable appears in the exponent, which can be awkward to handle. Substituting z = e^(sT) removes the exponentials and gives the definition of the Z transform:
X(z) = Σ (k = 0 to ∞) x(kT) z^(−k)
If the sampling time T is fixed, the Z transform can also be written in terms of the sample index alone:
X(z) = Σ (k = 0 to ∞) x(k) z^(−k)
Thus, the final result is a power series in z^(−1). The Z transform and Laplace transform play similar roles: the Z transform is used in the processing of sampled signals, whereas the Laplace transform is used in the processing of continuous signals.
In the above equations, x(kT) and x(k) denote the numbers that arise from the sampling and digitizing processes. For example, for 8-bit quantization x(k) would take integer values from 0 to 255 (or −128 to +127), and for n-bit quantization from 0 to 2^n − 1.
Since the Laplace variable s is complex, the variable z is also complex, and therefore X(z) is a complex function with real and imaginary parts, or equivalently magnitude and phase.
Because digital signal processing is increasingly common, and because some people come to it without a background in analog signal processing, the above equation can also be taken as the definition of the Z transform in its own right, without reference to the Laplace transform.
The above definition of X(z) uses only non-negative values of k and is therefore referred to as the one-sided Z transform. A two-sided definition is also possible, in which negative values of k are included whenever convergence is assured:
X(z) = Σ (k = −∞ to ∞) x(k) z^(−k)
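For a finite sequence the one-sided definition can be evaluated directly as a polynomial in z^(−1). A small Python sketch (function name ours):

```python
import numpy as np

def z_transform(x, z):
    """One-sided Z transform of a finite sequence: X(z) = sum_k x[k] * z**-k."""
    return sum(xk * z ** (-k) for k, xk in enumerate(x))

# A unit step truncated to 50 samples: X(z) approaches the closed form
# 1 / (1 - z**-1) as more terms are included; at z = 2 that limit is 2.
x = np.ones(50)
assert abs(z_transform(x, 2.0) - 2.0) < 1e-12

# Delaying a sequence by one sample multiplies X(z) by z**-1
assert z_transform([0.0, 1.0], 2.0) == 0.5
```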
3.2 Properties of the Z transform:
Linearity: Z{x1(k) + x2(k)} = X1(z) + X2(z)
Shifting: Z{x(k + m)} = z^m X(z) − z^m Σ (j = 0 to m−1) x(j) z^(−j)
Initial value: x(0) = lim (z → ∞) X(z), if the limit exists
Final value: x(∞) = lim (z → 1) (1 − z^(−1)) X(z), if the limit exists
Multiplication by n: Z{n x(n)} = −z dX(z)/dz
4. Digital Filter Classification
Digital filters can be classified by their use into three divisions: time domain, frequency domain and custom. Time domain filters are used when the information is encoded in the shape of the signal's waveform; time domain filtering is applied for tasks such as smoothing, DC removal and waveform shaping. Frequency domain filters are used when the information is carried in the amplitude, frequency or phase of the component sinusoids; the aim of these filters is to separate one band of frequencies from another. Custom filters are used when something more complex than the four basic responses (high-pass, low-pass, band-pass and band-reject) is required; for example, custom filters are used for deconvolution, a way of undoing an unwanted convolution. The table below classifies filters according to their implementation and use.
FILTER USED FOR                              IMPLEMENTED BY CONVOLUTION    IMPLEMENTED BY RECURSION
Time domain (smoothing, DC removal)          Moving average                Single pole
Frequency domain (separating frequencies)    Windowed-sinc                 Chebyshev
Custom (deconvolution)                       Custom FIR design             Iterative design
Digital filters can be implemented either by convolution (finite impulse response) or by recursion (infinite impulse response). Filters implemented by convolution usually have far better performance than filters implemented by recursion, but they execute much more slowly. Convolution and recursion are thus rival approaches.
4.1 Moving Average Filters
This is the most common filter in digital signal processing, and it is the easiest digital filter to understand and use. Despite its simplicity, the moving average filter is optimal for the common task of reducing random noise while preserving a sharp step response, which makes it the premier filter for time domain encoded signals. However, the moving average filter is the worst filter for frequency domain encoded signals, with little ability to separate one band of frequencies from another. The Gaussian, Blackman and multiple-pass moving average filters are all relatives of the moving average filter. They have somewhat better performance in the frequency domain, at the expense of increased computation time.
Implementation by Convolution
The moving average filter works by averaging a number of points from the input signal to generate each point in the output signal:
y[i] = (1/M) Σ (j = 0 to M−1) x[i + j]
For instance, in a 5 point moving average filter, point 80 in the output signal is given by:
y[80] = (x[80] + x[81] + x[82] + x[83] + x[84]) / 5
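The equation above is a convolution with a box-shaped kernel, which a short Python sketch makes explicit (function name ours):

```python
import numpy as np

def moving_average(x, m):
    """M-point moving average implemented as convolution with a box kernel."""
    kernel = np.ones(m) / m
    return np.convolve(x, kernel, mode='valid')

x = np.arange(100, dtype=float)
y = moving_average(x, 5)
# Point 80 of the output is the average of input points 80..84
assert np.isclose(y[80], (x[80] + x[81] + x[82] + x[83] + x[84]) / 5)
```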
Noise Reduction vs. Step Response
The moving average filter is well suited to a very common problem: reducing random noise while keeping the sharpest possible step response.
The signal in (a) is a pulse buried in random noise. In (b) and (c), the smoothing action of the moving average filter reduces the amplitude of the random noise, but also reduces the sharpness of the edges, which is undesirable. Of all linear filters, the moving average produces the lowest noise for a given edge sharpness.
The moving average filter can be implemented with a very fast algorithm, which makes it especially useful and easy to apply. To see how, suppose an input signal x[ ] is passed through a 7 point moving average filter to give an output signal y[ ], and consider how two adjacent output points, y[i] and y[i+1], are calculated:
y[i] = (x[i] + x[i+1] + x[i+2] + x[i+3] + x[i+4] + x[i+5] + x[i+6]) / 7
y[i+1] = (x[i+1] + x[i+2] + x[i+3] + x[i+4] + x[i+5] + x[i+6] + x[i+7]) / 7
These are almost the same calculation. Hence, the most efficient way to calculate y[i+1] is:
y[i+1] = y[i] + (x[i+7] − x[i]) / 7
After the first point has been calculated, every remaining output point can be found with only one addition and one subtraction.
This algorithm is faster than those of other digital filters for several reasons. There are only two computations per point, regardless of the filter length. Addition and subtraction are the only math operations needed, whereas most digital filters require time-consuming multiplication. The indexing scheme is very simple, and the entire algorithm can be carried out with integer arithmetic.
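A minimal Python sketch of this running-sum trick (function name ours) confirms that it produces exactly the same output as the convolution form:

```python
import numpy as np

def moving_average_recursive(x, m):
    """Running sum: each output point needs one addition and one subtraction."""
    y = np.empty(len(x) - m + 1)
    acc = np.sum(x[:m])           # first output computed the slow way
    y[0] = acc / m
    for i in range(1, len(y)):
        acc += x[i + m - 1] - x[i - 1]   # add the newest point, drop the oldest
        y[i] = acc / m
    return y

x = np.random.default_rng(3).standard_normal(200)
slow = np.convolve(x, np.ones(7) / 7, mode='valid')
assert np.allclose(moving_average_recursive(x, 7), slow)
```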
4.2 Windowed-Sinc Filters
Windowed-sinc filters are used to separate one band of frequencies from another. They are very stable and can deliver outstanding performance. These excellent frequency domain characteristics are obtained at the cost of poor time domain performance, including excessive ripple and overshoot in the step response. When implemented by standard convolution, windowed-sinc filters are easy to program but slow to execute.
Strategy of the windowed-sinc:
The figure above depicts the idea behind the windowed-sinc filter. The frequency response of the ideal low-pass filter is shown in (a). All frequencies below the cutoff frequency, fc, are passed with unity amplitude, while all higher frequencies are blocked. The passband is perfectly flat, the attenuation in the stopband is infinite, and the transition between the two is infinitesimally narrow.
Taking the inverse Fourier transform of this ideal frequency response gives the ideal filter kernel (impulse response) shown in (b), the sinc function. Convolving an input signal with this kernel gives an ideal low-pass filter. The trouble is that the sinc function extends to both negative and positive infinity without ever decaying to zero amplitude. This infinite length is no problem for mathematics, but it is a show stopper for a computer.
To get around this problem, two modifications are made to the sinc function in (b), resulting in the waveform shown in (c). First, it is truncated to M + 1 points, chosen symmetrically around the main lobe; all samples outside these M + 1 points are set to zero, or simply ignored. Second, the entire sequence is shifted to the right so that it runs from 0 to M. This allows the filter kernel to be represented using only positive indexes; although many programming languages permit negative indexes, they are a nuisance to work with.
Since the modified kernel is only an approximation to the ideal filter kernel, it will not have an ideal frequency response. To find the resulting frequency response, the Fourier transform of the signal in (c) is taken, giving the curve in (d). There is excessive ripple in the passband and poor attenuation in the stopband. These problems result from the abrupt discontinuity at the ends of the truncated sinc function, and increasing the length of the filter kernel does not reduce them.
Figure (e) shows a smoothly tapered curve called a Blackman window. Multiplying the truncated sinc in (c) by the Blackman window in (e) gives the windowed-sinc filter kernel shown in (f). The idea is to reduce the abruptness of the truncated ends and thereby improve the frequency response. This improvement is shown in (g): the passband is now flat, and the stopband attenuation is so good that it cannot be seen on this graph.
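The whole construction (truncate, shift, window, normalize) fits in a few lines of Python. This is a sketch, with the kernel length and the 1e-3 stopband threshold chosen by us for the test; the Blackman coefficients 0.42, 0.5, 0.08 are the standard ones:

```python
import numpy as np

def windowed_sinc_kernel(fc, m):
    """Low-pass kernel: truncated shifted sinc at cutoff fc (fraction of the
    sampling rate), multiplied by a Blackman window, normalized to unity DC gain."""
    i = np.arange(m + 1)
    h = np.sinc(2 * fc * (i - m / 2))     # np.sinc handles the i = m/2 singularity
    blackman = (0.42 - 0.5 * np.cos(2 * np.pi * i / m)
                + 0.08 * np.cos(4 * np.pi * i / m))
    h *= blackman
    return h / np.sum(h)                  # unity gain at DC

h = windowed_sinc_kernel(0.1, 100)
H = np.abs(np.fft.rfft(h, 1024))
assert np.isclose(np.sum(h), 1.0)         # DC passes with unity amplitude
assert H[int(0.25 * 1024)] < 1e-3         # deep attenuation well into the stopband
```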
4.3 Custom Filters
The four standard frequency responses are low-pass, high-pass, band-pass and band-reject. Some applications, however, require a special response, and there is a general method for designing digital filters with an arbitrary frequency response. The most important application of custom filters is deconvolution.
4.3.1 Arbitrary Frequency Response:
The approach used to derive the windowed-sinc filter can also be employed to design filters with virtually any frequency response. The only difference is the method of moving the desired response from the frequency domain to the time domain. For the windowed-sinc filter, the frequency response and the filter kernel are both described by equations, and the translation between them is made by the Fourier transform.
The frequency response we want to create is shown in (a) of the figure above. This response is highly irregular and virtually impossible to obtain with analog electronics. The ideal frequency response is defined by an array of chosen numbers; in this example, there are 513 samples spread between 0 and 0.5 of the sampling rate. More points can be used to represent the desired frequency response more accurately, whereas a smaller number reduces the computation time during filter design.
Next, the inverse DFT is taken to move the filter into the time domain. This results in a 1024 sample signal running from 0 to 1023, shown in (b). This is the impulse response that corresponds to the desired frequency response, but it is not yet suitable as a filter kernel: it needs to be shifted, truncated and windowed. The filter kernel is then tested by taking the DFT to find its actual frequency response, as in (d). To obtain better resolution in the frequency domain, pad the filter kernel with zeros before the FFT.
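The shift/truncate/window recipe can be sketched in Python. For a testable example we use a simple half-band response rather than the irregular one in the figure; the kernel length of 81 points and the pass/stop thresholds are our choices:

```python
import numpy as np

n = 512                       # frequency samples from 0 to 0.5 of fs
desired = np.ones(n + 1)      # start from an all-pass...
desired[n // 2:] = 0.0        # ...and zero the upper half: cutoff at 0.25 fs

impulse = np.fft.irfft(desired)            # 1024-point impulse response, peak at 0
m = 80                                     # kernel length - 1
kernel = np.roll(impulse, m // 2)[:m + 1]  # shift the peak to the center, truncate
kernel *= np.blackman(m + 1)               # window to tame the truncation

# test the kernel by going back to the frequency domain (zero-padded FFT)
H = np.abs(np.fft.rfft(kernel, 1024))
assert H[50] > 0.9        # inside the pass band (about 0.05 fs)
assert H[400] < 0.05      # inside the stop band (about 0.39 fs)
```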
Unwanted convolution is an inherent problem in transferring analog information. Deconvolution is the process of filtering a signal to compensate for such an undesired convolution; the goal is to recreate the signal as it existed before the convolution took place. Deconvolution is nearly impossible to understand in the time domain, but quite straightforward in the frequency domain. Each sinusoid that composes the original signal is changed in amplitude and/or phase as it passes through the undesired convolution. To restore the original signal, the deconvolution filter must undo these amplitude and phase changes.
We will use the example of a gamma ray detector to illustrate deconvolution. The figure shows pulses produced by the detector in response to randomly arriving gamma rays. The information we want to extract from this signal is the amplitude of each pulse, which is proportional to the energy of the gamma ray that produced it. As shown in (a), two or more pulses may overlap, shifting the measured amplitude. This problem can be solved by deconvolving the detector's output signal, making the pulses narrower so that less pile-up occurs.
The figure above shows the general approach. The frequency spectrum of the original audio signal is shown in (a). The frequency response of the recording equipment, a relatively smooth curve except for several sharp resonance peaks, is shown in (b). The spectrum of the recorded signal, shown in (c), is equal to the true spectrum, (a), multiplied by the uneven frequency response, (b). The frequency response of the deconvolution filter, (d), must therefore be the inverse of (b): each peak in (b) is cancelled by a corresponding dip in (d). The resulting signal then has a spectrum, (e), equal to that of the original.
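The "invert the response in the frequency domain" idea can be sketched in a few lines of Python. This toy uses a circular convolution and a smearing kernel with no zeros on the unit circle, both our assumptions, so that the inversion is exact; real deconvolution must cope with near-zero response values and noise:

```python
import numpy as np

rng = np.random.default_rng(4)
original = rng.standard_normal(256)

# an unwanted convolution: a short smearing kernel (zero-padded to full length)
smear = np.zeros(256)
smear[:3] = [0.5, 0.3, 0.2]

# apply the unwanted (circular) convolution in the frequency domain
H = np.fft.fft(smear)
recorded = np.fft.ifft(np.fft.fft(original) * H).real

# the deconvolution filter's response is the inverse of the smearing response
restored = np.fft.ifft(np.fft.fft(recorded) / H).real
assert np.allclose(restored, original)
```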
4.4 Single Pole Recursive Filters
Single pole recursive filters are certainly useful and worth keeping in your DSP toolbox. They can be used to process digital signals in the same way that RC networks are used to process analog electronic signals, covering tasks such as DC removal, high-frequency noise suppression, wave shaping and smoothing. They are easy to program and fast to execute.
The characteristics of these filters are controlled by the parameter x, a value between zero and one. Physically, x is the amount of decay between adjacent samples. For example, x is 0.86 in the figure above, which means that the value of each sample in the output signal is 0.86 times the value of the previous sample. The larger the value of x, the slower the decay. The filter becomes unstable if x is made greater than one: any nonzero value in the input makes the output grow until an overflow occurs.
The value of x can be specified directly, or it can be found from the desired filter time constant. Just as R × C is the number of seconds it takes an RC circuit to decay to 36.8% of its final value, d is the number of samples it takes a recursive filter to decay to the same level:
x = e^(−1/d)
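A minimal Python sketch of the basic single pole low-pass recursion, y[n] = (1 − x)·input[n] + x·y[n−1], lets us check the time-constant relation numerically (function name and test signal are ours):

```python
import numpy as np

def single_pole_lowpass(x_in, decay):
    """y[n] = (1 - decay) * x[n] + decay * y[n-1], with decay in (0, 1)."""
    y = np.empty(len(x_in), dtype=float)
    acc = 0.0
    for n, sample in enumerate(x_in):
        acc = (1.0 - decay) * sample + decay * acc
        y[n] = acc
    return y

decay = 0.86
d = -1.0 / np.log(decay)            # time constant in samples, from x = e**(-1/d)

# feed a step that drops back to zero and watch the output decay
x = np.zeros(200)
x[:100] = 1.0
y = single_pole_lowpass(x, decay)

# about d samples after the step ends, the output has fallen to ~36.8% (1/e)
assert abs(y[99 + round(d)] / y[99] - np.exp(-1)) < 0.05
```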
The figure above demonstrates one use of single pole recursive filters. As shown in (a), the original signal is a smooth curve except for a burst of a high frequency sine wave, while (b) shows the signal after passing through low-pass and high-pass filters.
The figure above shows the frequency responses of a variety of single pole recursive filters. These curves are generated by passing a delta function through the filter to obtain its impulse response, and then using the FFT to convert the impulse response into the frequency response. The key feature is that single pole recursive filters cannot separate one band of frequencies from another efficiently: they perform well in the time domain but poorly in the frequency domain. The frequency response can be improved slightly by cascading several stages. This can be done in two ways: either the signal is passed through the filter several times, or the z-transform is used to derive recursion coefficients that merge the cascade into a single stage. The graph in (c) shows the frequency response of four cascaded low-pass filters.
4.5 Chebyshev Filters
Chebyshev filters can be used to separate one band of frequencies from another. Although they cannot match the performance of windowed-sinc filters, their main attraction is speed: they are typically more than an order of magnitude faster, because they are implemented by recursion rather than convolution.
4.5.1 The Chebyshev and Butterworth Responses
The Chebyshev response is a strategy for obtaining a faster roll-off by allowing ripple in the frequency response. Digital filters that use this approach are called Chebyshev filters, after the Chebyshev polynomials they employ, developed by the Russian mathematician Pafnuty Chebyshev (1821-1894).
The figure above shows the frequency response of low-pass Chebyshev filters with passband ripples of 0%, 0.5% and 20%. As the ripple increases, the roll-off becomes sharper; the Chebyshev response is an optimal trade-off between these two parameters. When the ripple is 0%, the filter is called a maximally flat or Butterworth filter. The Chebyshev filters discussed here are called type 1 filters, meaning that ripple is permitted only in the passband, whereas type 2 Chebyshev filters allow ripple in the stopband. The elliptic filter allows ripple in both the passband and the stopband; elliptic filters give the fastest roll-off for a given number of poles, but are much harder to design.
4.5.2 Frequency response
Four parameters are needed to design a Chebyshev filter: a high-pass or low-pass response, the cutoff frequency, the percentage of ripple in the passband and the number of poles.
The figure above shows the frequency responses of several Chebyshev filters with 0.5% ripple and an even number of poles. The cutoff frequency of each filter is measured at the point where the amplitude crosses 0.707 (−3 dB). Filters with a cutoff frequency near 0 or 0.5 have a sharper roll-off than those in the center of the frequency range.
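A numerical sketch of this design, using SciPy's filter design routines rather than a from-scratch derivation: `scipy.signal.cheby1` takes the ripple in decibels, so the 0.5% figure must be converted first, and its cutoff `Wn` is given in units of the Nyquist frequency (fs/2). The pass/stop test frequencies and thresholds are our choices:

```python
import numpy as np
from scipy import signal

# 0.5% passband ripple expressed in dB, as scipy.signal.cheby1 expects
ripple_db = -20 * np.log10(1 - 0.005)      # about 0.0436 dB

# 6-pole type-1 Chebyshev low-pass; a cutoff of 0.2*fs becomes Wn = 0.4
b, a = signal.cheby1(6, ripple_db, 0.4, btype='low')

w, h = signal.freqz(b, a, worN=4096)
f = w / (2 * np.pi)                        # frequency as a fraction of fs
gain = np.abs(h)

assert np.all(gain[f < 0.19] > 0.99)       # passband stays inside the ripple band
assert np.all(gain[f > 0.30] < 0.05)       # stopband is strongly attenuated
```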
5. Filter Comparison
5.1 Windowed-Sinc vs. Chebyshev
The windowed-sinc is a finite impulse response (FIR) filter implemented by convolution, whereas the Chebyshev is an infinite impulse response (IIR) filter implemented by recursion. Here we compare the two.
The recursive filter used here is a 6 pole, 0.5% ripple Chebyshev low-pass filter. The Chebyshev's frequency response changes with the cutoff frequency; here a cutoff frequency of 0.2 is used. The windowed-sinc filter kernel is taken as 51 points, so that both filters have the same 90% to 10% roll-off, as shown in (a) of the figure below.
The recursive filter has a 0.5% ripple in the passband, whereas the passband of the windowed-sinc is flat. The graph in (b) shows that the windowed-sinc also has much better stopband attenuation than the Chebyshev.
The figure above shows the step responses of the same filters. The recursive filter has a nonlinear phase, which can be corrected by using bidirectional filtering. Neither filter has a particularly good step response.
Now consider the two main parameters: maximum performance and speed. The windowed-sinc is a very powerful filter, while the Chebyshev is very fast and flexible. When maximum performance is required, the windowed-sinc proves far better than the Chebyshev.
However, when the speeds of the two filters are compared, as shown in the figure above, the windowed-sinc takes much longer to execute than the Chebyshev. Since the recursive filter has a faster roll-off at low and high frequencies, the windowed-sinc kernel must be made longer to match its performance; this increase in length accounts for the increase in the windowed-sinc's execution time near frequencies 0 and 0.5.
5.2 Moving Average vs. Single Pole
The single pole filter used here has a sample-to-sample decay of x = 0.70. The figure below shows the frequency responses of both filters. Neither is very impressive, since these filters are not used for frequency separation.
The figure above shows the step responses of the two filters. The graph in (a) shows the step response of the moving average, which moves in a straight line and is the quickest way of getting from one level to another. As shown in (b), the step response of the recursive filter is smoother, which may be better for some applications.
In terms of performance, the two filters are roughly equivalent. The difference appears in the trade-off between development time and execution time. If development time is to be minimized and a slower filter is acceptable, the single pole recursive filter is the choice; if the fastest possible filter is needed and extra development time is no obstacle, the moving average implemented by recursion is used.
6. Applications of DSP
Nowadays, Digital Signal Processing technology is common in devices such as mobile phones, computers, audio and video recorders, Compact Disc players, hard disk drive controllers and modems, and is a suitable replacement for analog circuitry in television sets and telephones. A very important application of DSP is signal compression and decompression. Signal compression is used in digital cellular phones so that a greater number of calls can be handled simultaneously. DSP compression technology also permits people not only to talk to one another but to see one another on video screens, using small, compact video cameras mounted on the screens. In audio Compact Disc systems, DSP technology is used to carry out complex error detection and correction on the raw data.
Although some of the mathematical theory underlying DSP techniques such as Fourier transforms, digital filter design and signal compression is quite complex, the numerical operations required to implement these techniques are fairly simple, consisting chiefly of operations that could be carried out on a basic calculator. The architecture of a DSP chip is designed to perform such operations extremely quickly, processing millions of samples per second and providing real-time performance: the ability to work on a signal as it is sampled and deliver the processed output, for instance to a video display, without perceptible delay. Most practical applications of DSP require this real-time operation.
Major electronics manufacturers are investing heavily in DSP because it has applications in mass-market devices and equipment. DSP chips account for a significant share of the world market for electronic devices. Billions of dollars are invested in this field every year, and exceptional progress is expected.
6.1 Biomedical application: ECG
The electrocardiogram (ECG) has significant diagnostic importance, and its applications are numerous and widespread. An ECG is a transthoracic record of the electrical activity of the human heart over time, captured externally by skin electrodes. It works by detecting and then amplifying the tiny electrical changes on the skin caused by the depolarization of the heart muscle during each heartbeat. These very delicate readings can, however, be distorted by external interference. For diagnosis to be accurate and error free, the ECG recordings and signal acquisition must be free of noise.
The ECG signal is vulnerable to interference from several biological and environmental sources. These interferences can come from the mains supply or from external devices such as pumps, televisions and drilling machines operating near the ECG machine. Such disturbances introduce unwanted components into the graphical output and make accurate treatment of the patient nearly impossible. Here, therefore, we explore digital filters that can remove such interference and give doctors accurate electrocardiographs. The filters used are the Savitzky-Golay filter, smooth filter, moving average filter, weighted window filter, Gaussian filter, median filter, finite impulse response filter and Butterworth filter.
The MATLAB code for filtering the ECG with these filters is given below:
%# sampling rate (Fs was undefined in the original listing; the ecg()
%# helper function from the MATLAB filtering demo is assumed)
Fs = 500;

%# simulate noisy ECG
x = repmat(ecg(Fs), 1, 8);
x = x + randn(1,length(x)).*0.18;

%# plot noisy signal
subplot(911), plot(x), set(gca, 'YLim', [-1 1], 'xtick',[])

%# Savitzky-Golay filter
frame = 15;
degree = 0;
y = sgolayfilt(x, degree, frame);
subplot(912), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# smooth filter (robust loess)
window = 30;
%#y = smooth(x, window, 'moving');
%#y = smooth(x, window/length(x), 'sgolay', 2);
y = smooth(x, window/length(x), 'rloess');
subplot(913), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# moving average filter
window = 15;
h = ones(window,1)/window;
y = filter(h, 1, x);
subplot(914), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# moving weighted window (Gaussian weights)
window = 7;
h = gausswin(2*window+1)./window;
y = zeros(size(x));
for i = 1:length(x)
    for j = -window:window
        if j > -i && j < (length(x)-i+1)
            %#y(i) = y(i) + x(i+j) * (1-(j/window)^2)/window;
            y(i) = y(i) + x(i+j) * h(j+window+1);
        end
    end
end
subplot(915), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# Gaussian filter
window = 7;
h = normpdf(-window:window, 0, fix((2*window+1)/6));
y = filter(h, 1, x);
subplot(916), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# median filter
window = 15;
y = medfilt1(x, window);
subplot(917), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# FIR filter (rectangular window)
order = 15;
h = fir1(order, 0.1, rectwin(order+1));
y = filter(h, 1, x);
subplot(918), plot(y), set(gca, 'YLim', [-1 1], 'xtick',[])

%# lowpass Butterworth filter
fNorm = 25 / (Fs/2); %# normalized cutoff frequency
[b,a] = butter(10, fNorm, 'low'); %# 10th order filter
y = filtfilt(b, a, x);
subplot(919), plot(y), set(gca, 'YLim', [-1 1])