A Report On Wavelet Analysis Communications Essay




Fourier Analysis

Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the best known of these is Fourier analysis, which breaks down a signal into constituent sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical technique for transforming our view of a signal from a time-based to a frequency-based one.

For many signals, Fourier analysis is extremely useful because the signal's frequency content is of great importance. So why do we need other techniques, like wavelet analysis?

Fourier analysis has a serious drawback. In transforming to the frequency domain, the time information is lost. When looking at the Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal properties do not change much over time - that is, if it is what is called a stationary signal - this drawback isn't very important. However, most interesting signals contain numerous nonstationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics are often the most important part of the signal, and Fourier analysis is not suited to detecting them.
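This loss of time information is easy to demonstrate numerically. The following sketch (in Python with NumPy, standing in for the MATLAB environment the essay's commands come from) shows that two signals containing the same impulsive event at different times have identical Fourier magnitude spectra; the choice of a single-sample "event" is an illustrative simplification.

```python
import numpy as np

# Two signals, each a single impulsive event, occurring at different times.
early = np.zeros(64)
early[10] = 1.0          # event near the beginning
late = np.zeros(64)
late[50] = 1.0           # event near the end

# A time shift only changes the phase of the Fourier coefficients,
# so the magnitude spectra are identical: timing information is gone.
mag_early = np.abs(np.fft.fft(early))
mag_late = np.abs(np.fft.fft(late))
same_spectrum = np.allclose(mag_early, mag_late)
```

Both spectra are perfectly flat, so nothing in the magnitude view reveals when the event occurred.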

Short-Time Fourier Analysis:

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time - a technique called windowing the signal. Gabor's adaptation, known as the Short-Time Fourier Transform (STFT), maps a signal into a two-dimensional function of time and frequency.

The STFT represents a compromise between the time- and frequency-based views of a signal. It provides some information about both when and at what frequencies a signal event occurs. However, you can only obtain this information with limited precision, and that precision is determined by the size of the window. While the STFT's compromise between time and frequency information can be useful, the drawback is that once you choose a particular size for the time window, that window is the same for all frequencies. Many signals require a more flexible approach - one where we can vary the window size to determine more accurately either the time or the frequency.
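A minimal STFT sketch in Python/NumPy makes the fixed-window idea concrete. The rectangular window, the window size of 200 samples, and the test signal are all illustrative assumptions (practical STFTs use tapered windows such as Hann):

```python
import numpy as np

def stft(signal, window_size, hop):
    """Naive short-time Fourier transform with a rectangular window.

    Returns one row of Fourier coefficients per window position,
    i.e. a time-frequency map. The same window_size is used at
    every frequency -- the fixed-precision compromise of the STFT.
    """
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size]
        frames.append(np.fft.rfft(frame))
    return np.array(frames)

# A signal whose frequency changes halfway through: 5 Hz, then 20 Hz.
fs = 1000
t = np.arange(fs) / fs
sig = np.where(t < 0.5, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))

coeffs = stft(sig, window_size=200, hop=100)
```

The early frames peak at the 5 Hz bin and the late frames at the 20 Hz bin, so the map does localize events in time - but only to within one 200-sample window.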

Wavelet Analysis

Wavelet analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information.

Here's what it looks like in contrast with the time-based, frequency-based, and STFT views of a given signal:

You may have noticed that wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see “How to Connect Scale to Frequency?”

What Can Wavelet Analysis Do?

One of the major advantages afforded by wavelets is the ability to perform local analysis - that is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a small discontinuity - one so tiny as to be barely visible. Such a signal could easily be generated in the real world, perhaps by a power fluctuation or a noisy switch.

A plot of the Fourier coefficients (as provided by the fft command) of this signal shows nothing particularly interesting: a flat spectrum with two peaks representing a single frequency. However, a plot of the wavelet coefficients clearly shows the exact location in time of the discontinuity.
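The fft command quoted above is MATLAB's; as an illustrative stand-in, here is a Python/NumPy sketch using level-1 Haar detail coefficients (scaled pairwise differences) - a deliberately simple choice of wavelet - which spike exactly where the jump occurs:

```python
import numpy as np

# A sinusoid with a tiny jump starting at sample 501.
n = np.arange(1000)
sig = np.sin(2 * np.pi * 5 * n / 1000)
sig[501:] += 0.2   # a barely visible discontinuity

# Level-1 Haar detail coefficients: scaled pairwise differences.
# On the smooth parts of the sinusoid the differences are tiny;
# at the jump they are not.
details = (sig[0::2] - sig[1::2]) / np.sqrt(2)

jump_pair = int(np.argmax(np.abs(details)))
# jump_pair * 2 + 1 == 501: the detail coefficients pinpoint the event
# in time, which no plot of Fourier magnitudes could do.
```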

Wavelet analysis is capable of revealing aspects of the data that other signal analysis techniques miss: trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Furthermore, because it affords a different view of the data than that presented by traditional techniques, wavelet analysis can often compress or de-noise a signal without appreciable degradation. Indeed, in their brief history within the signal processing field, wavelets have already proven themselves an indispensable addition to the analyst's collection of tools and continue to enjoy a burgeoning popularity today.

What Is Wavelet Analysis?

Now that we know of some situations in which wavelet analysis is useful, it is worthwhile asking “What is wavelet analysis?” and, even more,

“What is a wavelet?”

A wavelet is defined as a waveform of effectively limited duration that has an average value of zero.

Compare wavelets with sine waves, which are the basis of Fourier analysis.

Sinusoids do not have limited duration - they extend from minus to plus infinity. And where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.
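Both defining properties - zero average and effectively limited duration - can be checked numerically. This sketch uses the Mexican-hat wavelet (the second derivative of a Gaussian, written here unnormalized) as an illustrative example, and contrasts it with a sinusoid:

```python
import numpy as np

# The (unnormalized) Mexican-hat wavelet: second derivative of a Gaussian.
t = np.linspace(-8, 8, 4001)
psi = (1 - t**2) * np.exp(-t**2 / 2)

# Property 1: an average value of zero (the numerical integral vanishes).
avg = psi.sum() * (t[1] - t[0])

# Property 2: effectively limited duration (negligible beyond a few units).
tail = np.max(np.abs(psi[np.abs(t) > 6]))

# A sinusoid, by contrast, never decays: it is as large far out as anywhere.
sine_tail = np.max(np.abs(np.sin(t[np.abs(t) > 6])))
```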

Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet. Just looking at pictures of wavelets and sine waves, we can see intuitively that signals with sharp changes might be better analyzed with an irregular wavelet than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It also makes sense that local features can be described better with wavelets, which have local extent.

Number of Dimensions:

Thus far, we've discussed only one-dimensional data, which encompasses most ordinary signals. However, wavelet analysis can be applied to two-dimensional data (images) and, in principle, to higher-dimensional data. This toolbox uses only one- and two-dimensional analysis techniques.

The Continuous Wavelet Transform:

Mathematically, the process of Fourier analysis is represented by the Fourier transform:

F(w) = ∫ f(t) e^(-jwt) dt

which is the sum over all time of the signal f(t) multiplied by a complex exponential. (Recall that a complex exponential can be broken down into real and imaginary sinusoidal components.) The results of the transform are the Fourier coefficients F(w), which when multiplied by a sinusoid of frequency w yield the constituent sinusoidal components of the original signal. Graphically, the process looks like:
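This "sum over all time" can be approximated directly on a sampled signal. The Riemann-sum approach and the 7 Hz test tone below are illustrative choices made for this Python/NumPy sketch, not the toolbox's own method:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)
f = np.sin(2 * np.pi * 7 * t)          # a 7 Hz sinusoid

def fourier_coeff(f, t, w):
    """Riemann-sum approximation of F(w): the sum over all (sampled)
    time of the signal multiplied by a complex exponential."""
    return np.sum(f * np.exp(-1j * w * t)) * (t[1] - t[0])

# The magnitude of F(w) peaks where w matches the signal's frequency.
freqs = np.arange(1, 15)                # trial frequencies in Hz
mags = [abs(fourier_coeff(f, t, 2 * np.pi * fr)) for fr in freqs]
peak_hz = int(freqs[np.argmax(mags)])   # 7
```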

Similarly, the continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function:

C(scale, position) = ∫ f(t) ψ(scale, position, t) dt

The result of the CWT is a set of many wavelet coefficients C, which are a function of scale and position.

Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the constituent wavelets of the original signal:
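A direct, naive implementation of this definition is only a few lines. The Mexican-hat mother wavelet, the single scale of 0.5, and the Gaussian test bump are illustrative assumptions for this Python/NumPy sketch (production CWT code uses far more efficient convolution-based algorithms):

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat mother wavelet (unnormalized)."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt_coeff(f, t, scale, position):
    """One CWT coefficient: the sum over all time of the signal
    multiplied by a scaled, shifted version of the wavelet."""
    dt = t[1] - t[0]
    wavelet = mexican_hat((t - position) / scale) / np.sqrt(scale)
    return np.sum(f * wavelet) * dt

# A signal with a bump at t = 3: the coefficient is largest when the
# shifted wavelet lines up with the bump.
t = np.linspace(0, 6, 2001)
f = np.exp(-((t - 3) ** 2) * 10)

positions = np.linspace(0.5, 5.5, 11)
coeffs = [cwt_coeff(f, t, scale=0.5, position=b) for b in positions]
best = float(positions[np.argmax(coeffs)])   # the peak sits near t = 3
```

Sweeping over many scales and positions in this way fills in the full time-scale view of the signal.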


We've already alluded to the fact that wavelet analysis produces a time-scale view of a signal, and now we're talking about scaling and shifting wavelets.

What exactly do we mean by scale in this context?

Scaling a wavelet simply means stretching (or compressing) it.

To go beyond colloquial descriptions such as “stretching,” we introduce the scale factor, often denoted by the letter a.

If we're talking about sinusoids, for example, the effect of the scale factor is very easy to see:

The scale factor works exactly the same with wavelets. The smaller the scale factor, the more “compressed” the wavelet.

It is clear from the diagrams that for a sinusoid sin(wt), the scale factor a is related (inversely) to the radian frequency w. Similarly, with wavelet analysis, the scale is related to the frequency of the signal.


Shifting a wavelet simply means delaying (or hastening) its onset. Mathematically, delaying a function y(t) by k is represented by y(t - k).

The Discrete Wavelet Transform:

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. What if we choose only a subset of scales and positions at which to make our calculations? It turns out, rather remarkably, that if we choose scales and positions based on powers of two - so-called dyadic scales and positions - then our analysis will be much more efficient and just as accurate. We obtain such an analysis from the discrete wavelet transform (DWT).

An efficient way to implement this scheme using filters was developed in 1988 by Mallat. The Mallat algorithm is in fact a classical scheme known in the signal processing community as a two-channel subband coder. This very practical filtering algorithm yields a fast wavelet transform - a box into which a signal passes, and out of which wavelet coefficients quickly emerge. Let's examine this in more depth.

One-Stage Filtering: Approximations and Details:

For many signals, the low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content on the other hand imparts flavor or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different but you can still tell what's being said. However, if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal. The details are the low-scale, high-frequency components.

The filtering process at its most basic level looks like this:

The original signal S passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. Suppose, for instance, that the original signal S consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a total of 2000.

These signals A and D are interesting, but we get 2000 values instead of the 1000 we had. There exists a more subtle way to perform the decomposition using wavelets. By looking carefully at the computation, we may keep only one point out of two in each of the two 1000-length sequences to get the complete information. This is the notion of downsampling. We produce two sequences called cA and cD.
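One decomposition stage is short enough to write out in full. This Python/NumPy sketch uses the simplest (Haar) filter pair - an illustrative choice; with Haar filters the downsampled outputs are exactly half-length, whereas the longer filters discussed later give slightly more than half because of convolution:

```python
import numpy as np

s2 = np.sqrt(2)

def one_stage_dwt(s):
    """One decomposition stage with Haar filters.

    Filtering alone would double the data; keeping only one point
    out of two (downsampling) yields cA and cD at half the length.
    """
    s = np.asarray(s, dtype=float)
    cA = (s[0::2] + s[1::2]) / s2   # approximation: low-pass, downsampled
    cD = (s[0::2] - s[1::2]) / s2   # detail: high-pass, downsampled
    return cA, cD

# 1000 samples in, 500 + 500 coefficients out -- not 2000.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * np.arange(1000) / 50) + 0.1 * rng.standard_normal(1000)
cA, cD = one_stage_dwt(signal)
```

As the text describes for its noisy-sinusoid example, cD comes out small (mostly the high-frequency noise) while cA keeps the smooth sinusoidal identity of the signal.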

The process on the right, which includes downsampling, produces the DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it.

Here is our schematic diagram with real signals inserted into it:

Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original signal.

You may observe that the actual lengths of the detail and approximation coefficient vectors are slightly more than half the length of the original signal. This has to do with the filtering process, which is implemented by convolving the signal with a filter. The convolution “smears” the signal, introducing several extra samples into the result.

Multiple-Level Decomposition:

The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower resolution components. This is called the wavelet decomposition tree.

Looking at a signal's wavelet decomposition tree can yield valuable information.

Number of Levels:

Since the analysis process is iterative, in theory it can be continued indefinitely. In reality, the decomposition can proceed only until the individual details consist of a single sample or pixel. In practice, you'll select a suitable number of levels based on the nature of the signal, or on a suitable criterion such as entropy.

Wavelet Reconstruction:

We've learned how the discrete wavelet transform can be used to analyze, or decompose, signals and images. This process is called decomposition or analysis. The other half of the story is how those components can be assembled back into the original signal without loss of information. This process is called reconstruction, or synthesis. The mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal in the Wavelet Toolbox, we reconstruct it from the wavelet coefficients:

Where wavelet analysis involves filtering and downsampling, the wavelet reconstruction process consists of upsampling and filtering. Upsampling is the process of lengthening a signal component by inserting zeros between samples:
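Upsampling itself is a one-line operation; here is a small Python/NumPy sketch of it (the reconstruction filters that follow the zero-insertion step are discussed below):

```python
import numpy as np

def upsample(x):
    """Lengthen a signal component by inserting zeros between samples."""
    up = np.zeros(2 * len(x))
    up[0::2] = x
    return up

c = np.array([1.0, 2.0, 3.0])
# upsample(c) -> [1, 0, 2, 0, 3, 0]
```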

The Wavelet Toolbox includes commands, like idwt and waverec, that perform single-level and multilevel reconstruction, respectively, on the components of one-dimensional signals. These commands have their two-dimensional analogs, idwt2 and waverec2.

Reconstruction Filters:

The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The downsampling of the signal components performed during the decomposition phase introduces a distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and reconstruction phases that are closely related (but not identical), we can “cancel out” the effects of aliasing.

The low- and high-pass decomposition filters (L and H), together with their associated reconstruction filters (L' and H'), form a system of what are called quadrature mirror filters:

Reconstructing Approximations and Details:

We have seen that it is possible to reconstruct our original signal from the coefficients of the approximations and details.

It is also possible to reconstruct the approximations and details themselves from their coefficient vectors.

As an example, let's consider how we would reconstruct the first-level approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we used to reconstruct the original signal. However, instead of combining it with the level-one detail cD1, we feed in a vector of zeros in place of the detail coefficients.


The process yields a reconstructed approximation A1, which has the same length as the original signal S and which is a real approximation of it. Similarly, we can reconstruct the first-level detail D1, using the analogous process:

The reconstructed details and approximations are true constituents of the original signal. In fact, we find when we combine them that:

A1 + D1 = S

Note that the coefficient vectors cA1 and cD1 - because they were produced by downsampling and are only half the length of the original signal - cannot directly be combined to reproduce the signal.

It is necessary to reconstruct the approximations and details before combining them. Extending this technique to the components of a multilevel analysis, we find that similar relationships hold for all the reconstructed signal constituents.
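The A1 + D1 = S relationship can be verified end to end in a few lines. This Python/NumPy sketch again uses the simple Haar filter pair as an illustrative stand-in for the toolbox's own reconstruction commands:

```python
import numpy as np

s2 = np.sqrt(2)

def haar_dwt(s):
    """One-stage Haar decomposition: half-length cA and cD vectors."""
    s = np.asarray(s, dtype=float)
    return (s[0::2] + s[1::2]) / s2, (s[0::2] - s[1::2]) / s2

def haar_reconstruct(cA, cD):
    """Rebuild a full-length signal from one level of coefficients."""
    out = np.empty(2 * len(cA))
    out[0::2] = (cA + cD) / s2
    out[1::2] = (cA - cD) / s2
    return out

S = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA1, cD1 = haar_dwt(S)

# Reconstruct A1 by zeroing the details, and D1 by zeroing the
# approximations. Note cA1 and cD1 are only half the length of S,
# so they cannot simply be added together; A1 and D1 can.
A1 = haar_reconstruct(cA1, np.zeros_like(cD1))
D1 = haar_reconstruct(np.zeros_like(cA1), cD1)
# A1 holds the pairwise averages of S (approximately 5, 5, 11, 11, ...),
# and A1 + D1 recovers S exactly.
```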

That is, there are several ways to reassemble the original signal:

Relationship of Filters to Wavelet Shapes:

In the section “Reconstruction Filters”, we spoke of the importance of choosing the right filters. In fact, the choice of filters not only determines whether perfect reconstruction is possible, it also determines the shape of the wavelet we use to perform the analysis. To construct a wavelet of some practical utility, you seldom start by drawing a waveform. Instead, it usually makes more sense to design the appropriate quadrature mirror filters, and then use them to create the waveform. Let's see how this is done by focusing on an example.

Consider the low pass reconstruction filter (L') for the db2 wavelet.


The filter coefficients can be obtained from the dbaux command:

Lprime = dbaux(2)

Lprime = 0.3415 0.5915 0.1585 -0.0915

If we reverse the order of this vector (see wrev), and then multiply every even sample by -1, we obtain the high pass filter H':

Hprime = -0.0915 -0.1585 0.5915 -0.3415

Next, up sample Hprime by two (see dyadup), inserting zeros in alternate positions:

HU = -0.0915 0 -0.1585 0 0.5915 0 -0.3415 0

Finally, convolve the up sampled vector with the original low pass filter:

H2 = conv(HU,Lprime);


If we iterate this process several more times, repeatedly upsampling and convolving the resultant vector with the four-element filter vector Lprime, a pattern begins to emerge:

The curve begins to look progressively more like the db2 wavelet. This means that the wavelet's shape is determined entirely by the coefficients of the reconstruction filters. This relationship has profound implications. It means that you cannot choose just any shape, call it a wavelet, and perform an analysis. At least, you can't choose an arbitrary wavelet waveform if you want to be able to reconstruct the original signal accurately. You are compelled to choose a shape determined by quadrature mirror decomposition filters.
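The iteration described above - the so-called cascade algorithm - is easy to reproduce. This Python/NumPy sketch mirrors the MATLAB steps in the text (dbaux, wrev, dyadup, conv), starting from the db2 filter coefficients the text quotes:

```python
import numpy as np

# db2 low-pass reconstruction filter from the text (as given by dbaux(2)).
Lprime = np.array([0.3415, 0.5915, 0.1585, -0.0915])

# High-pass H': reverse L' and multiply every second sample by -1.
Hprime = Lprime[::-1].copy()
Hprime[1::2] *= -1

def upsample(x):
    """Insert zeros in alternate positions (the dyadup step)."""
    up = np.zeros(2 * len(x))
    up[0::2] = x
    return up

# Cascade: repeatedly upsample and convolve with Lprime. The curve
# grows in length and converges toward the shape of the db2 wavelet,
# showing that the filter coefficients alone determine that shape.
curve = Hprime
for _ in range(5):
    curve = np.convolve(upsample(curve), Lprime)
```

Each pass roughly doubles the number of points (each length n becomes 2n + 3), so after five passes the four starting coefficients have become a smooth-looking 221-point curve.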

The Scaling Function:

We've seen the interrelation of wavelets and quadrature mirror filters. The wavelet function ψ is determined by the high-pass filter, which also produces the details of the wavelet decomposition.

There is an additional function associated with some, but not all, wavelets. This is the so-called scaling function φ. The scaling function is very similar to the wavelet function. It is determined by the low-pass quadrature mirror filters, and thus is associated with the approximations of the wavelet decomposition. In the same way that iteratively upsampling and convolving the high-pass filter produces a shape approximating the wavelet function, iteratively upsampling and convolving the low-pass filter produces a shape approximating the scaling function.

Multi-step Decomposition and Reconstruction:

A multistep analysis-synthesis process can be represented as:

This process involves two aspects: breaking up a signal to obtain the wavelet coefficients, and reassembling the signal from the coefficients. We've already discussed decomposition and reconstruction at some length. Of course, there is no point breaking up a signal merely to have the satisfaction of immediately reconstructing it. We may modify the wavelet coefficients before performing the reconstruction step. We perform wavelet analysis because the coefficients thus obtained have many known uses, de-noising and compression being foremost among them. But wavelet analysis is still a new and emerging field. No doubt, many uncharted uses of the wavelet coefficients lie in wait. The Wavelet Toolbox can be a means of exploring possible uses and hitherto unknown applications of wavelet analysis. Explore the toolbox functions and see what you discover.