# A Survey on Lifting Schemes

For the last two decades, wavelet theory has been studied by many researchers in response to the demand for functions better suited to representing signals than those offered by Fourier analysis. Wavelets study each component of the signal at different resolutions and scales. One of the most attractive features of wavelet transforms is their capability to analyze signals containing sharp spikes and discontinuities. Early implementations of the wavelet transform were based on filter convolution algorithms. This approach requires a huge amount of computational resources: at each resolution, the algorithm requires the convolution of the filters with the approximation image. Relatively recent approaches use the Lifting Scheme (LS). In this paper we provide a taxonomy and the current state of the art in Lifting Schemes.

Keywords: Lifting Schemes (LS), image compression, Discrete Wavelet Transform (DWT), Adaptive Lifting Scheme (ALS), Vector Lifting Scheme (VLS).

## 1. Introduction


The Lifting Scheme is a method for constructing biorthogonal wavelets. The main difference with classical constructions is that it does not rely on the Fourier transform. In this way, lifting can be used to construct second-generation wavelets, which are not necessarily translates and dilates of one function. Wavelets that are, we refer to as first-generation wavelets.

In the case of first-generation wavelets, the Lifting Scheme will never come up with wavelets which could not somehow be found by the Cohen-Daubechies-Feauveau machinery. Nevertheless, it has the following advantages:

1. It allows a faster implementation of the wavelet transform. Traditionally, the fast wavelet transform is calculated with a two-band subband transform scheme. In each step the signal is split into a high pass and low pass band and then sub-sampled. Recursion occurs on the low pass band. The Lifting Scheme makes optimal use of similarities between the high and low pass filters to speed up the calculation. The number of flops can be reduced by a factor of two.

2. The Lifting Scheme allows a fully in-place calculation of the wavelet transform. In other words, no auxiliary memory is needed and the original image can be replaced with its wavelet transform.

3. In the classical case, it is not immediately clear that the inverse wavelet transform actually is the inverse of the forward transform. Only with the Fourier transform one can convince oneself of the perfect reconstruction property. With the Lifting Scheme, the inverse wavelet transform can immediately be found by undoing the operations of the forward transform. In practice, this comes down to simply changing each + into a - and vice versa.
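As a concrete illustration of points 2 and 3, here is a minimal sketch of one (unnormalized) Haar lifting step; the function names and the plain-list representation are illustrative choices, not taken from any specific implementation. The inverse is obtained by undoing the steps in reverse order with each + turned into a - and vice versa, so perfect reconstruction holds by construction.

```python
def haar_lift_forward(x):
    """Split x into even/odd samples, predict odds from evens, update evens."""
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]          # predict step
    approx = [e + d / 2 for e, d in zip(evens, detail)]    # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order with the signs flipped."""
    evens = [a - d / 2 for a, d in zip(approx, detail)]    # undo update
    odds = [d + e for e, d in zip(evens, detail)]          # undo predict
    x = [0.0] * (2 * len(evens))
    x[0::2], x[1::2] = evens, odds                         # interleave in place
    return x

signal = [4.0, 6.0, 5.0, 9.0, 2.0, 2.0, 7.0, 3.0]
a, d = haar_lift_forward(signal)
assert haar_lift_inverse(a, d) == signal                   # perfect reconstruction
```

Note that the approximation and detail coefficients can overwrite the even and odd slots of the input array, which is exactly the in-place property of point 2.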

Secondly, the Lifting Scheme can be used in situations where no Fourier transform is available. Typical examples are:

1. Wavelets on bounded domains: The construction of wavelets on general possibly non-smooth domains is needed in applications such as data segmentation and the solution of partial differential equations. A special case is the construction of wavelets on an interval, which is needed to transform finite length signals without introducing artifacts at the boundaries.

2. Wavelets on curves and surfaces: To analyze data that live on curves or surfaces, or to solve equations on curves or surfaces, one needs wavelets intrinsically defined on these manifolds, independent of parameterization.

3. Weighted wavelets: Diagonalization of differential operators and weighted approximation requires a basis adapted to weighted measures. Wavelets that are biorthogonalized with respect to a weighted inner product are needed.

4. Wavelets and irregular sampling: Many real life problems require basis functions and transforms adapted to irregularly sampled data.

It is obvious that wavelets adapted to these settings cannot be formed by translation and dilation. The Fourier transform can thus no longer be used as a construction tool. The Lifting Scheme (LS) provides an alternative.

## 2. Taxonomy of Lifting Scheme

## 2.1. Designing wavelets

The key to understanding this question starts with the observation that digital data, i.e., data from speech, images, video and graphics, are highly correlated and contain redundancy. Wavelets exploit this structure by providing a system in which this type of digital data can be represented accurately with a few parameters. Moreover, the computation involved in obtaining the representation is fast and efficient, usually linear in complexity. Because of this property, wavelets have been used in data compression, data transmission, geometric modeling as well as in numerical computations.


For clarity of presentation, let us consider a scalar function of a single variable, f(x). We wish to find an economical representation of this function. The easiest way to think of wavelets is to imagine a basis generated by a function which

1. is compactly supported,

2. is smooth,

3. has vanishing moments.

We call this generating function the mother wavelet, denoted ψ(x). The basic rule for generating this basis is to take translates and dilates of ψ(x), i.e. ψ(2^j x - l), where j and l are integers. The original function f(x) is represented as a linear combination of these basis functions, with wavelet coefficients aj,l:

f(x) = Σj Σl aj,l ψ(2^j x - l)

This representation is economical because the basis functions are localized in time and frequency. This means that if f(x) has high-frequency components at x0, the cost of accurately representing this feature is not global, i.e., we do not have to pay for it at points far away from x0. To see this more clearly, assume that the mother wavelet is such that its dilates are as displayed on the left side of Figure 1, where the function f(x) is also shown. The wavelet coefficients of f(x) are organized in a table whose columns are indexed by increasing l and whose rows by increasing j. For the function f(x) shown in Figure 1, the distribution of the aj,l would be as in the figure.

The computational machinery for actually finding the wavelet coefficients from the function, the forward wavelet transform, and its inverse can be made very efficient. It has properties similar to those of the Discrete Fourier Transform (DFT), which can be exploited in much the same way as in the Fast Fourier Transform (FFT).

Figure 1. This figure explains why wavelets can be efficient in representing signals.

At the top of Figure 1 there is a signal with a "jump". The dilates of the mother wavelet are shown on the left. The wavelet coefficients can be thought of as the inner products of the translates of the dilates with the function. We expect the wavelet coefficients to be distributed as displayed.

Another point worth noting is that wavelets provide a natural framework for the multiresolution analysis of functions: the wavelet transform in effect provides coarser and coarser versions of the same function while keeping track of the details missed in going from fine to coarse. This realization gives a powerful tool for the analysis and manipulation of functions.

## 2.2. Wavelet transformation: discrete approach

Although the discretized continuous wavelet transform enables the computation of the Continuous Wavelet Transform (CWT) by computers, it is not a true discrete transform. As a matter of fact, the wavelet series is simply a sampled version of the CWT, and the information it provides is highly redundant as far as the reconstruction of the signal is concerned. This redundancy requires a significant amount of computation time and resources. The Discrete Wavelet Transform (DWT), on the other hand, provides sufficient information both for the analysis and the synthesis of the original signal, with a significant reduction in computation time, and it is considerably easier to implement than the CWT. The basic concepts of the DWT are introduced in this section along with its properties and the algorithms used to compute it. The DWT is a representation of a signal x(t) ∈ L2 using an orthonormal basis consisting of a discrete set of wavelets. Denoting the wavelet basis as {ψk,n(t) | k ∈ Z, n ∈ Z}, the DWT transform pair is

x(t) = Σk Σn dk,n ψk,n(t) -------------- (1)

dk,n = ∫ x(t) ψk,n(t) dt ------ (2)

where {dk,n} are the wavelet coefficients. Note the relationship to Fourier series and to the sampling theorem: in both cases we can perfectly describe a continuous-time signal x(t) using a discrete set of coefficients. Specifically, Fourier series enable us to describe periodic signals using Fourier coefficients {X[k] | k ∈ Z}, while the sampling theorem enables us to describe band-limited signals using signal samples {x[n] | n ∈ Z}. In both cases, signals within a limited class are represented using a coefficient set with a single countable index. The DWT can describe any signal in L2 using a coefficient set parameterized by two countable indices: {dk,n | k ∈ Z, n ∈ Z}. Wavelets are orthonormal functions in L2 obtained by shifting and stretching a mother wavelet ψ(t) ∈ L2.

For example,

ψk,n(t) = 2^(-k/2) ψ(2^(-k)t - n),  for all k, n ∈ Z --------------(3)

defines a family of wavelets {ψk,n(t) | k ∈ Z, n ∈ Z} which are related by power-of-two stretches. As k increases, the wavelet stretches by a factor of two; as n increases, the wavelet shifts right. Power-of-two stretching is a convenient, though somewhat arbitrary, choice. In our treatment of the Discrete Wavelet Transform (DWT), however, we will focus on this choice. Even with power-of-two stretches, there are various possibilities for ψ(t), each giving a different flavor of DWT.

Wavelets are constructed so that {ψk,n(t) | n ∈ Z} (i.e., the set of all shifted wavelets at fixed scale k) describes a particular level of 'detail' in the signal. As k becomes smaller (i.e., closer to -∞), the wavelets become more "fine grained" and the level of detail increases. In this way, the DWT can give a multi-resolution description of a signal, very useful in analyzing "real-world" signals. Essentially, the DWT gives us a discrete multiresolution description of a continuous-time signal in L2.

In the modules that follow, these DWT concepts will be developed "from scratch" using Hilbert space principles. To aid the development, we make use of the so-called scaling function φ(t) ∈ L2, which will be used to approximate the signal up to a particular level of detail. As with wavelets, a family of scaling functions can be constructed via shifts and power-of-two stretches of the mother scaling function φ(t).

φk,n(t) = 2^(-k/2) φ(2^(-k)t - n),  for all k, n ∈ Z ----------- (4)
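A small numerical illustration of Eqs. (3) and (4): the Haar mother wavelet and scaling function are used here only as a concrete choice of ψ and φ, and the helper names and the Riemann-sum inner product are illustrative assumptions. Orthonormality of two family members is checked numerically.

```python
def haar_psi(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return 1.0 if 0 <= t < 0.5 else (-1.0 if 0.5 <= t < 1 else 0.0)

def haar_phi(t):
    """Haar scaling function: 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0 <= t < 1 else 0.0

def psi_kn(k, n):
    """Eq. (3): psi_{k,n}(t) = 2^(-k/2) psi(2^(-k) t - n)."""
    return lambda t: 2 ** (-k / 2) * haar_psi(2 ** (-k) * t - n)

def phi_kn(k, n):
    """Eq. (4): phi_{k,n}(t) = 2^(-k/2) phi(2^(-k) t - n)."""
    return lambda t: 2 ** (-k / 2) * haar_phi(2 ** (-k) * t - n)

def inner(f, g, a=-8.0, b=8.0, steps=16000):
    """Riemann-sum approximation of the L2 inner product on [a, b]."""
    h = (b - a) / steps
    return sum(f(a + i * h) * g(a + i * h) for i in range(steps)) * h

# <psi_{0,0}, psi_{0,0}> = 1 and <psi_{0,0}, psi_{1,0}> = 0 (orthonormality).
assert abs(inner(psi_kn(0, 0), psi_kn(0, 0)) - 1.0) < 1e-2
assert abs(inner(psi_kn(0, 0), psi_kn(1, 0))) < 1e-2
```

As k grows, psi_kn(k, n) stretches by powers of two and the 2^(-k/2) factor keeps each family member at unit norm, which is exactly the structure Eqs. (3) and (4) describe.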

## 2.3. Lifting

Figure 2. Taxonomy of Lifting Schemes

## 2.3.1. Integer lifting:

Invertible wavelet transforms that map integers to integers have important applications in lossless coding. Wavelets, wavelet packets, and local cosine transforms are used in a variety of applications, including image compression [1, 2]. In most cases, the filters used have floating-point coefficients. For instance, if one prefers to use orthonormal filters with an assigned number N (N ≥ 2) of vanishing moments and minimal filter length, then the resulting filter coefficients are real numbers which can be computed with high precision, but for which we do not even have a closed-form expression if N > 3 [3]. When the input data consist of sequences of integers (as is the case for images), the resulting filtered outputs no longer consist of integers. Yet, for lossless coding one would like to be able to characterize the output completely with integers again. For the Fourier and cosine transforms, this problem has been solved in [4]. An integer version of the orthogonal Haar transform has been known for some time as the S (sequential) transform [5, 6]. In [2], a redundant integer version of the Haar transform is introduced which uses rounded integer arithmetic and which yields near-optimal approximation in certain Besov spaces. Relaxing the constraint of orthonormality of the wavelets makes it possible to obtain filter coefficients that are dyadic rationals [7]; up to scaling, these filters can be viewed as mapping integers to integers.
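A minimal sketch of such an integer-to-integer step, using the integer Haar (S) transform mentioned above as the example: rounding inside the lifting step keeps every intermediate value an integer, yet the transform stays exactly invertible because the inverse recomputes the same rounded quantity. The function names are illustrative.

```python
def s_transform_forward(x):
    """Integer Haar step: detail d = odd - even, approx s = even + floor(d/2).

    Note s equals floor((even + odd) / 2), the rounded pair average.
    """
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]
    approx = [e + (d >> 1) for e, d in zip(evens, detail)]  # >> 1 is floor(d/2)
    return approx, detail

def s_transform_inverse(approx, detail):
    """Exactly invert the forward step by recomputing the same floor term."""
    evens = [s - (d >> 1) for s, d in zip(approx, detail)]
    odds = [d + e for e, d in zip(evens, detail)]
    out = [0] * (2 * len(evens))
    out[0::2], out[1::2] = evens, odds
    return out

pixels = [12, 10, 7, 7, 255, 0, 3, 8]       # integer image samples
a, d = s_transform_forward(pixels)
assert all(isinstance(v, int) for v in a + d)   # integers throughout
assert s_transform_inverse(a, d) == pixels       # lossless reconstruction
```

Python's `>>` performs a floor shift even for negative values, which is what makes the rounding reproducible at the decoder.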

## 2.3.2. Multi-dimensional lifting:

In this category, lifting filters are designed to compute a multidimensional nonseparable wavelet transform. In this lifting model, each lifting step can be designed either to increase the number of vanishing moments of the wavelet or to preserve the mean of the signal approximation at each level.

## 2.3.3. Classical lifting:

For images, filter banks and lifting filters are usually developed for the 1-D case and then extended to the separable 2-D case by a succession of a vertical and a horizontal 1-D filtering. This structure leads to a four-band decomposition per resolution level (Figure 3), which may be successively iterated on the LL band (the band filtered low-pass both vertically and horizontally). For image compression purposes, the bands with high-frequency components (the HL, LH and HH bands) are not recursively filtered.
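The separable row-then-column structure can be sketched as follows, using a 1-D integer Haar lifting split purely to keep the example short (any 1-D lifting filter could take its place); the helper names are illustrative.

```python
def split_1d(x):
    """One 1-D integer Haar lifting split: low-pass and high-pass halves."""
    lo = [(a + b) >> 1 for a, b in zip(x[0::2], x[1::2])]   # rounded averages
    hi = [b - a for a, b in zip(x[0::2], x[1::2])]          # differences
    return lo, hi

def decompose_2d(img):
    """One level of separable 2-D decomposition -> (LL, HL, LH, HH) bands."""
    # Horizontal filtering: split every row into low/high halves.
    rows_lo, rows_hi = zip(*(split_1d(row) for row in img))

    # Vertical filtering: split the columns of each half.
    def split_cols(rows):
        cols = list(zip(*rows))                              # transpose
        lo, hi = zip(*(split_1d(list(c)) for c in cols))
        return [list(r) for r in zip(*lo)], [list(r) for r in zip(*hi)]

    LL, LH = split_cols(rows_lo)   # horizontally low-pass half
    HL, HH = split_cols(rows_hi)   # horizontally high-pass half
    return LL, HL, LH, HH

img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
LL, HL, LH, HH = decompose_2d(img)
assert LL == [[10, 20], [30, 40]]                  # coarse version of the image
assert HL == [[0, 0], [0, 0]] and HH == [[0, 0], [0, 0]]
```

Iterating `decompose_2d` on the returned LL band gives the recursive multiresolution structure described above.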

## 2.3.4. Space-varying lifting:

Space-varying lifting chooses the filter at each sample n according to local signal characteristics (LC). In general, no side information needs to be coded.

P(x) = P(x, LC(x)) ------------------- (5)

U(y`) = U(y`,LC(y`)) -----------------(6)

The lifting steps depend on the same samples as in the non-varying case; this is the most significant difference from adaptive lifting. Typically, LC indicates flat zones, edges or textures. The filter may vary in many ways.
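A hedged sketch of Eq. (5): the predictor applied at each odd sample is chosen from local characteristics of the even samples only. Because the decoder sees the same even samples, it recomputes LC and picks the same predictor, so no side information is needed. The edge test, the threshold value, and the two candidate predictors below are illustrative choices, not a standard design.

```python
EDGE_THRESHOLD = 16  # assumed tuning parameter

def lc_is_edge(left, right):
    """Local characteristic: is there an edge between two even neighbors?"""
    return abs(right - left) > EDGE_THRESHOLD

def predict(evens, i):
    """Space-varying P: average in flat zones, one-sided across edges."""
    left, right = evens[i], evens[min(i + 1, len(evens) - 1)]
    if lc_is_edge(left, right):
        return left              # do not smooth across the edge
    return (left + right) / 2    # flat zone: usual average predictor

def forward(x):
    evens, odds = x[0::2], x[1::2]
    detail = [o - predict(evens, i) for i, o in enumerate(odds)]
    return evens, detail

def inverse(evens, detail):
    # The decoder recomputes the SAME local characteristic from the evens.
    odds = [d + predict(evens, i) for i, d in enumerate(detail)]
    out = [0] * (len(evens) + len(odds))
    out[0::2], out[1::2] = evens, odds
    return out

x = [10, 11, 12, 60, 100, 101, 102, 103]    # a flat zone, then an edge
e, d = forward(x)
assert inverse(e, d) == x                    # invertible without side info
```

The key design constraint, per the text, is that LC depends only on samples the decoder already has, so the decomposition stays reversible.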

## 2.3.5. Adaptive lifting:

The adaptive Lifting Scheme is a modification of classical lifting. Its simplest version was stated in [8, 9, 10], and the generalization and lossy coding versions in [11] and [12], respectively. Figure 3 shows an example in which an adaptive update step is followed by a fixed prediction. At each sample n, an update operator is chosen according to a decision function D(x[n], y) which depends on y, as in the classical step and other 'adaptive' approaches [13, 14], but which also depends on the sample being updated. For this reason a problem arises, since the decoder does not dispose of the sample x[n] used at the coder for the decision; the decoder only has the sample x`[n], a version of x[n] updated through an unknown update filter. The challenge is to find a decision function and a set of filters which allow the decision D(x[n], y) to be reproduced at the decoder, as in (7), thus obtaining a reversible decomposition scheme. This property is known as the decision conservation condition.

D(x[n], y) = D`(x`[n], y) --------------------(7)

Figure 3. Adaptive Update lifting step followed by a classical prediction.

## 2.3.6. Vector Lifting:

Vector Lifting Schemes (VLS) offer the advantage of generating two compact multiresolution representations of the left and the right views of a stereo pair. The idea behind VLS is that the information coming from the left view, which is used for the prediction of the right one, is also used, through the update operator, to compute the approximation coefficients of the right view.

## 3. Sources of compression

## 3.1.1. Image compression:

For images, filter banks and lifting filters are usually developed for the 1-D case and then extended to the separable 2-D case by a succession of a vertical and a horizontal 1-D filtering. This structure leads to a four-band decomposition per resolution level. The decomposition may be iterated on the LL band (the vertically and horizontally low-pass filtered band); the bands with high-frequency components (the HL, LH and HH bands) are not recursively filtered.

## 3.1.2. Video compression:

The multiresolution representation of videos has the advantage of scalability, i.e. the possibility of transmitting the same sequence at different resolutions: high-resolution television, normal television, videophone, and videoconferencing. In addition, this representation seems to match the way images are decomposed and understood in the human visual system [1]. Among the different possibilities for multiresolution analysis and synthesis, wavelet functions are the best adapted to these purposes thanks to their scaling and translation properties. The present difficulty in obtaining the motion-compensated image from the multiresolution representation is that the operators of translation and wavelet transform cannot simply be inverted to obtain the right answer.

## 3.1.3. Data compression:

One of the advantages of the wavelet transform for data compression is that it tends to compact the input signal energy into a relatively small number of wavelet coefficients. The lowest-resolution band coefficients have high energy, while the high-frequency bands account for the majority of the transformed samples and most of their coefficients are zero or have low energy. The exceptions are samples lying near strong edges in the orientation relevant to the band. For instance, a vertical edge produces significant wavelet coefficients in the HL band, which is obtained by applying a horizontal high-pass filter, so the edge's high-frequency components are not eliminated; at the same time, the LH band is not affected by such an edge. Equivalent statements hold for the other bands: in general, horizontal and diagonal edges produce significant coefficients in the LH and HH bands, respectively.
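The energy-compaction claim can be illustrated numerically with a two-level (normalized-average) Haar split of a piecewise-flat signal: almost all coefficients in the detail bands are zero, and the only non-zero details sit next to the jump. The signal and helper name are illustrative.

```python
def haar_step(x):
    """One Haar analysis step: pair averages (approximation) and half-differences (detail)."""
    s = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    d = [(b - a) / 2 for a, b in zip(x[0::2], x[1::2])]
    return s, d

signal = [5.0] * 7 + [20.0] * 9          # flat, with one sharp jump
s1, d1 = haar_step(signal)               # level 1
s2, d2 = haar_step(s1)                   # level 2, applied to the approximation
coeffs = s2 + d2 + d1                    # full two-level coefficient set
assert len(coeffs) == len(signal)

# Only the details straddling the jump are non-zero; everything else is 0.
nonzero = [c for c in d1 + d2 if c != 0]
assert nonzero == [7.5, 3.75]
```

Of the 12 detail coefficients, only the two adjacent to the discontinuity survive, which is exactly the behavior a compressor exploits.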

## 3.2. Current State of the Art:

The 5-band motion-compensated temporal lifting structure proposed by Maria Trocan et al. [15] can be used in hierarchical open-loop subband motion-compensated decomposition. This 5-band Lifting Scheme produces one approximation subband and four detail subbands, and uses two bidirectional and two monodirectional predictors; it is therefore well suited to video-surveillance sequences that are poor in motion. The approach preserves the low-pass behaviour of the filter while minimizing the reconstruction error, and it also addresses the problems raised by unconnected or multiply connected pixels. As motivation, the authors argue that wavelet coding schemes have proved to be among the best for video compression applications. Video surveillance systems are of growing importance for their low cost and their role in providing public security, and the efficiency of video coding technology largely determines the design of digital surveillance systems. Instead of encoding each video several times to provide an optimized bit stream to each client, a scalable approach is preferable: a single encoded bit stream, with a syntax enabling flexible, low-complexity extraction of the information matching the requirements of different devices and networks.

Wavelet-based video coding schemes are popular for their coding efficiency, spatio-temporal scalability, high energy compaction of subbands, and resilience to transmission errors. Lifting-based coding schemes have low complexity and are therefore widely used in 3-D subband wavelet coding. Dyadic motion-compensated temporal filtering (MCTF) has seen many developments, concerning for example the lifting predict/update operators or the precision of motion estimation; M-band lifting schemes with perfect reconstruction and 3-band temporal decompositions have also opened the way to non-dyadic scalability factors, and the decomposition methodology used for the 3-band motion-compensated temporal structures can be extended to other numbers of channels as well. The authors conclude that the proposed 5-band temporal Lifting Scheme achieves good coding efficiency for the temporal approximation subband, resulting in better temporal scalability, a property that is very useful in video surveillance, where motion activity is very low. Having more channels has two uses. First, it allows a wider choice of scalability factors: for example, it permits temporal subsampling by factors of 3, 5 and their combinations (a 3-band scheme followed by a 5-band scheme reduces the frame rate by a factor of 15), and approximation bands can be created with fewer temporal decompositions. Second, the structure can be selected according to the required frame rate or temporal scalability factor; the coding performance remains strongly dependent on the sequence characteristics, the structure of the motion model, etc.

The proposed scheme also preserves the unitary norm of the impulse responses of the filters involved in the 5-band structure, and hence the energy of the input sequence. The quantization error in each detail and approximation frame is taken into account so as to balance the reconstruction errors of the five consecutive frames against the quantization errors of the approximation and detail frames. The scheme was tested on sequences whose frames are too weak to provide good motion compensation of the detail frames. The experiments show that the SNR quality obtained with the 5-band filter is comparable to that obtained with four or five levels of dyadic decomposition and with three levels of the 3-band filter, while the new approach yields a better SNR for the approximation subband. The 5-band scheme therefore has better coding efficiency for the temporal approximation subbands, leading to improved temporal scalability; it is also effective in motion zones and offers a better video compression methodology overall.

Gemma Piella et al. [16] suggest a new approach in their paper entitled 'A Three-Step Nonlinear Lifting Scheme for Lossless Image Compression'. The authors first argue that wavelet-based image compression plays a very important and effective role in the still-image coding standard JPEG2000, but not without some shortcomings: these compression algorithms rely on the wavelet representation, which effectively decorrelates and approximates the data with few non-zero wavelet coefficients, yet classical linear wavelets cannot efficiently represent the edges and contours in an image. There is therefore a growing demand for better multiresolution decompositions which can take different aspects of the input signal or image into account. The paper develops an adaptive wavelet transform built as a three-step nonlinear Lifting Scheme: a fixed prediction, a space-varying update, and a non-additive prediction. Many researchers have proposed solutions, the first being the construction of fixed but tailor-made bases by means of nonlinear wavelets or adaptive subband structures; such constructions rely on the Lifting Scheme, which is flexible and can introduce nonlinearity or different kinds of adaptivity. Effective image compression can also be achieved with a new class of adaptive wavelet decompositions that capture the directional nature of the picture information. In this method, seminorms are used to build lifting structures that select the best update filters. With the Lifting Scheme it becomes possible to build 'any wavelet you like' on 'every geometrical structure'; Piella et al. exploit this by adapting both the update and the prediction operators to the local properties of the signal. The extension of the adaptive scheme to the two-dimensional case is achieved by applying the one-dimensional filters to the image data in a separable way.

The images are first filtered vertically and then horizontally, resulting in a four-band decomposition. In the three-step nonlinear Lifting Scheme, the adaptive part behaves like the Haar wavelet where the local gradient is small; where the gradient is large, the scheme recognizes an edge and applies no smoothing in the update. Applied to a one-dimensional signal with a two-level decomposition, and unlike linear schemes, the considered decomposition does not smooth the discontinuities in the approximation signal and avoids oscillation effects; in addition, it yields fewer and smaller non-zero details, so the nonlinear decomposition also achieves lower bit rates.

Annabelle Gouze et al. [17] proposed a new method for designing lifting filters that compute a multidimensional nonseparable wavelet transform; the scheme is generalized and illustrated for the 2-D separable case and for quincunx images. The authors argue that the design of efficient quincunx filters is a difficult challenge which had previously been addressed only for specific cases; the main aim of the model is to enable the design of less expensive filters adapted to the signal statistics, so as to enhance compression efficiency in a more general setting. The model is based on a two-step Lifting Scheme that combines lifting theory with Wiener optimization. The first, prediction step is mainly designed to minimize the variance of the signal, and the second, update step is defined so as to minimize a reconstruction error. The multidimensional two-channel lifting filters are derived from the statistical properties of the source signal. From the empirical results discussed in the literature, one can conclude that the method can only be exploited for regular sampling schemes. The model yields an optimized coding scheme offering optimal decorrelation. In the prediction step, the optimization depends on the prediction operator; the choice of this operator plays an essential role, since it allows a bit-rate gain, mainly in the high-frequency subband, and the criterion retained to define the prediction step is the minimization of the variance of the high-frequency subband. The update step determines the low-resolution signal; this signal is only slightly compressed and contains the essential information needed to reconstruct the signal, so the update operator is defined to minimize the reconstruction error when the high-frequency image is removed, i.e., by imposing an optimal reconstruction.

The resulting filters are called LS(2,2), LS(4,2) and LS(6,2); the separable Optimized Lifting Schemes (OLS) are OLS(4,2) and OLS(6,2), starting from the low-frequency image. The optimization of the filters was also exploited within the framework of the Quincunx Lifting Scheme (QLS).

For high-speed implementation and reduced resource consumption, Wei Wang et al. [18], in their research paper entitled 'High Speed FPGA Implementation for DWT of Lifting Scheme', present a new architectural design: a high-speed 9/7 lifting DWT with a multi-stage pipelined structure and rational 9/7 coefficients, implemented on an FPGA. A 1-D DWT pipeline structure implementing the Lifting Scheme needs only integer additions, reducing the area cost and increasing the maximum operating frequency; however, it also introduces discontinuities in finite-sample filtering, resulting in a loss at the edges. The DWT architecture proposed in this paper is naturally pipelined: the arithmetic stages are pipelined directly by inserting registers inside the logic, so that there is just one addition per pipeline stage. This kind of pipelining raises the area cost of the architecture, but it shortens the paths between registers, which automatically increases the throughput rate of the DWT and also saves power. The new design consists of integer shift-adders arranged in an 18-stage pipeline and achieves an operating frequency about three times that of the old design, producing a better area/throughput compromise. A further improvement over the old design is the use of rational 9/7 DWT coefficients, which yields an effectively larger data throughput; the rational 9/7 lifting factorization proposed in this paper is also easier to compute than the original 9/7 lifting factorization, requiring only 2 floating-point multipliers, 1 integer multiplier, 10 integer adders and 5 shifters. With the rational 9/7 DWT coefficients, the PSNR loss of the reconstructed image does not exceed 0.1 dB.

Beyond the pipelined architecture, the original arithmetic stage structure is improved by combining a tree-type arrangement with Horner's rule for the accumulation of partial products in multiplication: Horner's rule reduces the truncation error of the accumulated partial products, while tree-height reduction lowers latency. The proposed architecture thus achieves a much higher operating frequency than the same architecture without multi-stage pipelining.

Pierre Ele et al. [19] discuss the compression of electromyography (EMG) signals and present a new decomposition of order N. This order-N decomposition with a modified Lifting Scheme (MLS) can resolve problems related to storage and transmission, and can outperform both the wavelet packet transform and the plain Lifting Scheme for effective transmission. Effective transmission and storage of biomedical signals is closely linked to bandwidth: compressed signals occupy less transmission time and need less bandwidth or storage space, so the compression of these signals needs to be improved. The compression methods followed for ECG signals and images cannot solve the problems of EMG, since the statistics of EMG signals are not similar to those of ECG; developments have therefore concerned the transformation, quantization and coding steps. The authors study MLS decompositions of orders 1, 2 and 3 on EMG signals and evaluate them in comparison with the Lifting Scheme and wavelet packet decomposition, measuring effectiveness in terms of distortion, compression ratio, and mean frequency distortion (MFD). In the modified Lifting Scheme, a separate wavelet is used to split the signal into two sub-signals corresponding to the samples of odd and even index; this subsamples the input signal and removes correlation in the EMG signal, and the transform is built gradually, ensuring better coding. In the order-N MLS approach, the MLS is built up gradually to order N by adding a function from the base; with this principle of adding, summing and oversampling of the input signal, redundancy is reduced.

In this decomposition procedure only one of the two sub-signals undergoes the transformation, while the other remains unchanged. The MLS decomposition algorithm has several advantages: the calculations are made on the fly, the number of operations required to compute the transform is reduced, and the transform is done without any reference to wavelet coefficients. After experiments and comparisons, it was found that the behaviours of these methods are largely identical, the slight improvement being a rise in the visual quality of the reconstructed EMG signal when MLS of order 1, 2 or 3 is applied. The approach developed produces a better signal in terms of visual quality, and the distortion of the mean frequency of the EMG signal is minimal compared with the Lifting Scheme and the wavelet packet transform; the MLS approach therefore performs better. The paper presents the generalization of the MLS decomposition to order N, with tests for N equal to 1, 2 and 3, which show some improvement in the reconstruction of EMG signals at a low computational load. The experiments also indicate that a threshold of 19 to 20 dB represents the minimum quality for the reconstruction of EMG signals. The proposal is, however, limited to EMG signals and provides no insight into other electrophysiological signals.

Danilo Basicu et al. [20] discussed the implementation of the 5/3 integer Lifting Scheme wavelet transform on a VLIW CPU core to improve the computational performance of image coding. The lifting approach has the main advantage of reducing the computational cost of image transforms, so the authors put their effort into optimal coding of the kernel on embedded VLIW processors. They aimed to elaborate the key optimization strategies available for VLIW processors offering a predefined degree of Instruction Level Parallelism (ILP). The two steps, the prediction step and the update step, are performed on a 1-D signal; denoting the coefficients by Cn and the original samples by Xn, the prediction step is performed as:

C(2n+1)=X(2n+1)-[X(2n)+X(2n+2)]/2 -------------- (8)

And the update step is computed as:

C(2n)=X(2n)+[C(2n-1)+C(2n+1)+2]/4 --------(9)
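The two lifting steps above can be sketched in plain Python (our own illustration of the standard 5/3 lifting formulation, not the authors' VLIW kernel; floor division stands in for the integer rounding, and symmetric border extension is assumed):

```python
def lift_53_forward(x):
    """One level of the 5/3 integer wavelet transform via lifting."""
    n = len(x)
    c = list(x)
    # Predict step: odd samples become high-pass detail coefficients.
    for i in range(1, n, 2):
        left = c[i - 1]
        right = c[i + 1] if i + 1 < n else c[i - 1]  # symmetric extension
        c[i] = c[i] - (left + right) // 2
    # Update step: even samples become low-pass approximation coefficients.
    for i in range(0, n, 2):
        left = c[i - 1] if i > 0 else c[i + 1]       # symmetric extension
        right = c[i + 1] if i + 1 < n else c[i - 1]
        c[i] = c[i] + (left + right + 2) // 4
    # Split the interleaved buffer into approximation and detail bands.
    return c[0::2], c[1::2]

low, high = lift_53_forward([10, 12, 14, 16, 18, 20, 22, 24])
```

On this linear ramp the interior detail coefficients come out exactly zero, since the predictor interpolates linear signals; only the border sample leaves a residual.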

The coefficients computed in the prediction step can be viewed as the output of high-pass filters, and those in the update step as the output of low-pass filters. These concepts extend to the 2-D image signal: the JPEG2000 standard applies the 2-D wavelet transform as a sequence of 1-D transforms. Any software DWT implementation must also handle border extension; symmetric boundary extension is known to offer perfect reconstruction under some constraints. As the target for optimization, the ST220 processor, which belongs to the VLIW core family and is particularly dedicated to DVD recording, was taken up; it can execute up to four 32-bit RISC instructions per clock cycle. The processor is the most efficient of its type, is well organized, and is known for heavy-load data handling. The general purpose of this work is to show how an optimization can be obtained, starting from the 'naïve' coding practice. It is assumed that the image corresponds to a single tile. In order to group the homogeneous coefficients together, a reordering of the output is applied. The reference algorithm implements the Lifting Scheme without exploiting any particular software optimization. The 4x4-block algorithm splits the tile into 16 blocks. This again poses problems, the first being how to compute a complete level of decomposition; this algorithm avoids the symmetric boundary extension. Another key aspect is that while it is possible to read 4 input image pixels at a time using 32-bit loads, only 2 coefficients can be loaded/stored at a time, because their dynamic range is greater than 8 bits and they must therefore be stored with 16 bits. In the algorithm based on 8x2 blocks, which is analogous to the 4x4 one, there are 9 possible positions, requiring 41 registers to map all coefficients. The 2-D algorithm represents an evolution of the 4x4 algorithm: the levels are decomposed according to the wavelet scheme.
Only the base band is further decomposed, and memorization is followed by decomposition. A full solution to the problems of the 2-D algorithm has not been implemented so far because of its excessive computational cost. By and large, the final 2-D algorithm has been compared with a reference 2-D algorithm and has yielded better results. The variations in the I-cache do not affect the gain in terms of cycles, and the stalls caused by misses in the D-cache have been lowered. This work has shown that effective exploitation of a VLIW embedded processor is feasible within a fairly straightforward programming model, and that the kernel of the JPEG2000 transform phase can be cast in effortless ways.

Marijn J.H. Loomans et al. [21] proposed a multi-level two-dimensional Discrete Wavelet Transform implementation; the wavelet transform is a state-of-the-art technology used in several video and image codecs. The research extends the work of Chatterjee and Meerwald [25], who changed the behavior of the transform to improve the use of automatic cache memories in various architectures. The drawback of the latter approach is removed by the authors through the use of the 5/3 filter coefficients, implemented in the lifting framework, which in turn exploits two features commonly found in embedded processors: SIMD (Single Instruction Multiple Data) and DMA (Direct Memory Access). The main feature of the technique is a data transfer rate high enough to meet the data requirements of the ALUs. The resulting implementation performs a 4-level transform at CCIR-601 broadcast resolution in 3.65 MCycles on a processor running at 600 MHz, which satisfies the performance requirements of a real-time image/video encoding system. The authors' goal of a real-time implementation is achieved by careful use of SIMD, which allows processing of multiple data points while executing a single instruction, and hence optimal use of the wide data paths. SIMD is used in horizontal as well as vertical filtering, whose basic principle is splitting the input data into odd and even samples, generating a prediction from the even samples to adjust the odd samples, and later updating the even samples with the help of the odd samples. An overview of horizontal and vertical filtering under SIMD follows.

Horizontal filtering under SIMD consists of three steps: the predict step, the update step and finally the split step. The governing equation of the predict step, which calculates the high-pass output and stores the result back in the original buffer at the odd positions, is

X[n] = X[n] - [X[n-1] + X[n+1]]/2 ------- (10)

The authors make maximal use of the 64-bit load and store paths while concurrently using the 32-bit ALUs, in three stages of calculation: the prolog, kernel and epilog stages. After the predict step, the calculation of the low-pass output and the replacement of the result in the original buffer at the even positions are done in the update step, whose governing equation is

X[n] = X[n] + [X[n-1] + X[n+1] + 2]/4 ------ (11)

Finally, the split step takes the interleaved low- and high-pass data created by the previous stages and splits it into two separate low- and high-pass bands. The vertical filtering is performed on the previously horizontally filtered lines and proceeds in the same way as the horizontal filtering. On the significant issue of cache management, the algorithm is organized into prolog, kernel and epilog steps. First, in the prolog, pointers are initialized and the first image line is copied to pLINEX using DMA; throughout the loop the DMA implicitly performs the vertical splitting. After the initialization and symmetric extension in the prolog, the rest of the image is processed in the kernel, where once again the DMA performs the vertical splitting. The authors also proposed a novel implementation of the multi-level wavelet transform in which only two buffers are used and the complete output is placed in the output buffer; the wavelet filtering illustrated above performs a single iteration of a multi-level 2-D wavelet transform. The extension to a multi-level transform is complicated by the SIMD and DMA optimizations and the buffering involved. From experiments conducted on a DM642 DSP using the described techniques, it was found that the transform becomes more efficient at higher resolutions, requiring fewer cycles per pixel. The authors achieved a frame rate of 12.5-15 frames per second for the complete codec on a surveillance camera with an embedded DM642 DSP, and hence could claim that the reduced complexity of the wavelet transform lowers the constant load on the processor, enabling the real-time implementation of other advanced algorithms such as scalable video codecs or video analysis algorithms.
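The predict/update/split pattern, and the recursion on the low band that yields a multi-level transform, can be sketched in scalar form (an illustrative sketch, not the authors' SIMD/DMA implementation; `predict_update` and `multilevel` are our own names, and symmetric border extension is assumed):

```python
def predict_update(c):
    """In-place 5/3 predict and update on an interleaved, even-length
    buffer; floor division stands in for the integer rounding."""
    n = len(c)
    for i in range(1, n, 2):                     # predict: high-pass at odd i
        r = c[i + 1] if i + 1 < n else c[i - 1]  # symmetric border extension
        c[i] -= (c[i - 1] + r) // 2
    for i in range(0, n, 2):                     # update: low-pass at even i
        l = c[i - 1] if i > 0 else c[i + 1]
        r = c[i + 1] if i + 1 < n else c[i - 1]
        c[i] += (l + r + 2) // 4
    return c

def multilevel(x, levels):
    """Split the interleaved result into bands and recurse on the low band."""
    low, bands = list(x), []
    for _ in range(levels):
        c = predict_update(low)
        low, high = c[0::2], c[1::2]             # the split step
        bands.append(high)
    return low, bands

low, bands = multilevel([5] * 8, 2)
```

A constant signal survives both levels unchanged in the low band while every detail band is zero, which is a quick way to check that predict and update are consistent.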

M. Kaaniche et al. [22] discussed a lossy-to-lossless image compression model based on a 2-D non-separable Lifting Scheme decomposition that enables progressive reconstruction and exact decoding of images, overcoming the limitations of separable Lifting Schemes. Focusing on the optimization of the decomposition operators involved, the prediction filters are designed by minimizing the variance of the detail signals. A new criterion is also proposed for the update filters, with successful simulations carried out on still images. The authors present an effective method for the optimization of the update filter and hence exploit the flexibility offered by non-separable Lifting Schemes. The proposed method adapts the filters to the contents of the input image while ensuring perfect reconstruction. Experimental results on still images and residual images illustrate the benefits of optimizing both the prediction and the update filters. As future work, the authors plan to extend the optimization method to the vector Lifting Scheme recently presented for stereo image coding.
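The idea of designing a prediction filter by minimizing the variance of the detail signal can be illustrated in one dimension (a toy sketch under our own simplifications; Kaaniche et al. optimize 2-D non-separable filters, and `optimal_predictor` is a hypothetical name):

```python
import numpy as np

def optimal_predictor(x):
    """Least-squares 2-tap predictor of each odd sample from its two even
    neighbours; the residual is the minimum-variance detail signal."""
    even = np.asarray(x[0::2], dtype=float)
    odd = np.asarray(x[1::2], dtype=float)
    left, right = even[:-1], even[1:]   # even neighbours of each odd sample
    odd = odd[: len(left)]
    A = np.stack([left, right], axis=1)
    w, *_ = np.linalg.lstsq(A, odd, rcond=None)  # minimizes ||odd - A w||^2
    detail = odd - A @ w                         # detail signal
    return w, detail

# On a linear ramp the optimal predictor averages the two neighbours.
w, detail = optimal_predictor(list(range(20)))
```

Here the content-adaptive weights recover the familiar (1/2, 1/2) interpolating predictor and drive the detail variance to zero; on real images the weights adapt to the local statistics instead.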

Mounir Kaaniche et al. [23] argued that "one of the potential drawbacks of the previous VLS structure is that it generates an update leakage effect, in the sense that the information coming from the left view, which is used for the prediction of the right one, is also used, through the update operator, to compute the approximation coefficients of the right view". In response to this argument they proposed and analyzed two vector lifting schemes: VLS-I, based on a "predict-update-predict (P-U-P) lifting structure", and VLS-II, based on a prediction stage with a conventional lifting structure. The authors analyzed the two Vector Lifting Schemes (VLS) for lossy-to-lossless compression of stereo image pairs, and justify VLS-I and VLS-II by arguing that the two proposed schemes are optimal for exploiting the correlations between the two images. They show empirically that, unlike conventional methods which generate a residual image to encode the stereo pair, the proposed schemes use a joint multiscale decomposition applied directly to the left and right views. The two schemes exploit the intra- and inter-image redundancies by using the estimated disparity map between the two views. A theoretical analysis in terms of prediction error variance was conducted in order to show the benefits of the underlying VLS structure. A noteworthy point of the model is that the decomposition could be further optimized by taking the effect of occlusions into account; an extension of the scheme to multiview/video coding is also envisaged as future work.

Jinsheng Xiao et al. [24] analyzed second generation wavelets based on the Lifting Scheme and derived filters that can compress images; by increasing the number of vanishing moments they could increase the compression ratio. In contrast to first generation wavelet constructions, which rely on the Fourier transform and on translation and dilation, second generation wavelets are suitable for irregular sampling of data and can also be applied in finite regions. The authors relied on the basic idea of multi-resolution analysis and developed the theory with the help of Lifting Schemes, drawing on the work of Donoho and Lounsbery [26] to fortify it. They developed filters that operate in the space domain, independent of translation and dilation in the frequency domain. The compression ratio, which determines the quality of the image, depends not only on the length of the filter but also on orthogonality, biorthogonality, vanishing moments, regularity and local frequency. By maintaining biorthogonality they controlled these degrees of freedom and thus increased the number of vanishing moments. The filter mainly affects the regularity and vanishing properties of the wavelet through compression and encoding; if N is the number of vanishing moments of the wavelet, then N corresponds to the multiplicity of the root at w=0 of the high-pass filter G(w). They emphasized that the smoothness of the wavelet can be increased by increasing the regularity index, which is determined by the convergence rate, so as to improve the compression ratio. Certain conditions on the filter are considered to improve the symmetry and the vanishing moments. For an equal number of vanishing moments, the larger the regularity index, the better the encoding effect, which is the key to image compression.
Let G(w) be the Laurent polynomial of the corresponding filter and H(w) the corresponding scaling polynomial; then for each natural number ΔN>0 there exists a polynomial t(w) of degree ΔN such that

Gnew(w) = G(w) - t(2w)H(w) ------------- (12)

has N+ΔN vanishing moments; the new Gnew(w) constructed via the wavelet Lifting Scheme can be written Gnew(w) = w^(N+ΔN)·qnew(w). The Lifting Scheme cannot change the number of vanishing moments of the dual wavelet, although it is much like the dual Lifting Scheme; this is one drawback that has not been explained. The new filter has the transfer functions

Hnew(w) = (-e^(2iw) + e^(iw) + 8 + 8e^(-iw) + e^(-2iw) - e^(-3iw))/16 ------- (13)

and

Gnew(w) = (e^(2iw) + e^(iw) - 8 + 8e^(-iw) - e^(-2iw) - e^(-3iw))/16 ------- (14)

for the low-pass and high-pass filters, respectively.
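As a quick numerical sanity check (our own sketch, not from the paper), one can verify that Gnew in (14) annihilates constant and linear signals, i.e. has two vanishing moments, while Hnew in (13) passes DC with unit gain:

```python
import numpy as np

# Coefficients c_k of sum_k c_k e^{ikw}, read off (13)-(14)
# for k = 2, 1, 0, -1, -2, -3.
k = np.array([2, 1, 0, -1, -2, -3])
h = np.array([-1, 1, 8, 8, 1, -1]) / 16.0   # Hnew(w) coefficients
g = np.array([1, 1, -8, 8, -1, -1]) / 16.0  # Gnew(w) coefficients

dc_gain_h = h.sum()        # Hnew(0): response to a constant signal
moment0_g = g.sum()        # Gnew(0) = 0: constants are annihilated
moment1_g = (k * g).sum()  # ~ Gnew'(0) = 0: linear signals are annihilated
```

Both moments of Gnew vanish while Hnew has unit DC gain, consistent with the claim that lifting has raised the number of vanishing moments.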

The Haar wavelet is reconstructed using the first analyzed low-pass coefficients, and the image is reconstructed using the second analyzed low-pass coefficients. The high-frequency coefficients after the lifted Haar wavelet transform are smaller and more of them approach zero, so the compression effect is better.

This new Lifting Scheme is a method to construct biorthogonal wavelets which operates entirely in the space domain, and its particular structure, which completes the frequency analysis of the signal, reduces computation time. Some characteristic features of the wavelet are:

It inherits the MRA of the first generation wavelets.

It is independent of the Fourier transform, constructing the wavelet entirely in the space domain.

It demands very little memory and can be realized on DSP chips.

The wavelet transform is made easy to realize by mapping integers to integers, which was the main aspect of their research.

The reconstruction quality of the image can be controlled, and the transform can be applied to images of any size.

Thus the second generation wavelet transform is much better than the first. The dynamic range of the second generation wavelet is wider than that of the first, which influences the compression effect on the image, and this can be achieved without any loss. The Lifting Scheme is also part of the JPEG2000 image compression standard and can thus be used as a main algorithm in wavelet constructions.

## 4. Conclusion

We carried out a review of recent literature to summarize recent advances in Lifting Schemes for second generation wavelet transforms. Based on this review we conclude that all or most Lifting Schemes are based purely on prediction or statistical models. We also argue that research is needed into better ways of combining Evolutionary Computation (EC) and Lifting Schemes for better optimization of the lifting steps.