The Contourlet Transformation Localisation Computer Science Essay


Improving image quality has always been a priority in digital image processing applications because of the loss and noise introduced during image transmission. [4] A good image representation should have the following characteristics:

A) Multiresolution. The representation should allow images to be successively approximated, from coarse to fine resolutions.

B) Localization. The basic elements in the representation should be localized in both the spatial and the frequency domains.

C) Directionality. The representation should contain basis elements oriented at a variety of directions, much more than the few directions that are offered by separable wavelets.

[4] A digital image is composed of edges, the details associated with those edges, and the overall appearance of the image, and these parts carry most of the information needed to reconstruct it. In image compression or enhancement, it is therefore essential to preserve this information in order to obtain a good-quality reconstructed image.


Many image transformations have been developed to increase image quality for high-end applications. However, most of them try to improve quality by applying their effect to the image as a whole, which may not handle all attributes of a digital image efficiently, especially the edges and the target objects.

[6] Enhancing an image means improving its visual appearance without introducing distortion. Wavelet bases have a drawback here: they are not well suited to detecting highly anisotropic features, such as the oriented alignments (edges and contours) in an image.

To overcome this limitation of the existing, less efficient image processing algorithms, one approach is to process a composite set of representations of the same image with the same algorithm and combine the processed results to extract the required object. [3] One such algorithm is the Contourlet image transformation, which uses multiple scales to process a single image at different resolutions, stores these multi-resolution representations in a filter bank, and combines them to obtain better detection. This method has several advantages, such as avoiding additional noise while transforming noisy images and giving good denoising results.

[6] The contourlet transform represents salient image features such as edges, lines, curves and contours more efficiently than the wavelet transform because of its anisotropy and directionality, which is why it is well suited to multi-scale image enhancement. The contourlet transformation involves two steps: sub-band decomposition and a directional transformation. The Laplacian pyramid first detects point discontinuities, and the directional filter banks then link these point discontinuities into linear structures. The final result is an expanded representation of the image built from elementary contour segments, which is why the transform is called the contourlet transform. First, multi-resolution decomposition is performed by the Laplacian pyramid, and then directional filter banks are applied to the high-frequency components of each band-pass channel.

1.2. Block diagram:

[1] Fig 1.1. Block Diagram for understanding the operation of Contourlet Transformation

[1] The Contourlet transformation is a directional transform capable of capturing contours and fine details in an image. The approach begins with a discrete-domain construction and then studies its sparse expansion in the continuous domain. The key point that differentiates the Contourlet from other transformations is that it uses the Laplacian pyramid [14] together with Directional Filter Banks. As a result, it not only detects edge discontinuities but also links them into smooth contour segments. The block diagram illustrates the Contourlet Transformation: the input image contains frequency components LL, LH, HL and HH. At each level the Laplacian Pyramid generates a low-pass output (LL) and a band-pass output (LH, HL and HH). The band-pass output is passed into the Directional Filter Bank [13], which produces the contourlet coefficients. The low-pass output is passed through the Laplacian Pyramid again to obtain further coefficients, and this is repeated until the fine details of the image are obtained.
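A compact MATLAB-style sketch of this iterated structure is given below. The helper functions lp_decompose and dfb_decompose are hypothetical stand-ins for one Laplacian Pyramid level and one Directional Filter Bank stage (a real implementation could use a publicly available contourlet toolbox); the number of levels, the directions per level and the demo image are illustrative choices, not the settings used in this project.

% Iterated contourlet decomposition (conceptual sketch using hypothetical helpers).
% lp_decompose  : one Laplacian Pyramid level -> [low-pass image, band-pass image]
% dfb_decompose : Directional Filter Bank     -> cell array of directional sub-bands
img     = im2double(imread('cameraman.tif'));    % input image (Image Processing Toolbox demo image assumed)
nlevels = 3;                                     % number of pyramid levels (illustrative)
ndirs   = [8 8 4];                               % directions per level (illustrative)

coeffs = cell(1, nlevels + 1);
low    = img;
for k = 1:nlevels
    [low, band] = lp_decompose(low);             % LL output and band-pass output of the LP
    coeffs{k}   = dfb_decompose(band, ndirs(k)); % contourlet coefficients at this scale
end
coeffs{nlevels + 1} = low;                       % the coarsest low-pass approximation is kept as well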


[1] The contourlet coefficients are derived from the equations,

Ylow[n] = Σ_k H[Mn - k] X[k]

Yband[n] = X[n] - Σ_k G[n - Mk] Ylow[k] (1)

[1] where X[k] is the original image, H and G are the low-pass analysis and synthesis filters, M is the sampling matrix, and the coefficients are obtained from the low-pass and the band-pass filtering of the image. [12]

Working Principle:

[2] Wavelets provide a multi-resolution, time-frequency representation of a digital image. However, wavelets are not efficient at representing images with smooth contours oriented in many directions. The Contourlet Transformation (CT) addresses this issue because it has two further important characteristics, directionality and anisotropy.

Directionality: The representation should contain basis elements oriented at a variety of directions, much more than the few directions that are offered by wavelets.

Anisotropy: To capture smooth contours in images, the representation should contain basis elements using a variety of elongated shapes with different aspect ratios.

[2] The contourlet transform is a multi-scale and directional image representation that first uses a wavelet-like structure to detect edges and then a directional transformation to detect contour segments. The double filter bank structure of the contourlet is designed to obtain a sparse expansion of typical images with smooth contours. In this double filter bank, a Laplacian Pyramid [15] is first used to detect point discontinuities, followed by a Directional Filter Bank (DFB) [16] that links the point discontinuities into linear structures. The resulting contourlets have elongated supports at different scales, directions and aspect ratios, which makes them behave like smooth contours at multiple resolutions.

[3] Consider the situation in which a smooth contour is approximated, as shown in Fig. 1.2. Since 2-D wavelets are constructed from tensor products of 1-D wavelets, the "wavelet-style" approximation is restricted to square-shaped brush strokes along the contour, at the various sizes corresponding to the multi-scale structure of wavelets. As the resolution becomes finer, the drawback of the wavelet-style approximation becomes clear: it needs many finely spaced dots to capture the contour. The newer approximation, the contourlet, follows the smooth contour efficiently, using brush strokes of various elongated shapes and in a variety of directions along the contour.

[3] Fig 1.2. Wavelet versus contourlet approximation: successive refinement by the two transforms near a smooth contour, shown as a thick curve separating two smooth regions.

[2] In the transform domain, the contourlet transformation gives a multi-resolution and directional decomposition. It introduces redundancy (up to 33%) because of the LP stage. These properties of the contourlet transform, directionality and anisotropy, make it an effective method for content-based image retrieval.
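For intuition about the 33% figure: at each LP level the band-pass image is kept at the current resolution while the low-pass image is down-sampled by two in each direction, so for an N-pixel image the total number of stored samples is approximately

N (1 + 1/4 + 1/16 + ...) = (4/3) N ≈ 1.33 N,

i.e. about 33% more samples than the original image.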

1.3. Laplacian Pyramid Decomposition [2]:

The Laplacian Pyramid is used to obtain a multi-resolution decomposition. At each stage, the Laplacian Pyramid decomposition produces a down-sampled, low-pass filtered version of the input image and the difference between the original and the prediction made from it, which gives the band-pass filtered image. The Laplacian Pyramid decomposition is shown in Fig 1.3, where H and G are the one-dimensional low-pass analysis and synthesis filters and M is the sampling matrix. The band-pass image obtained in the Laplacian Pyramid decomposition is then processed by the DFB stage. A Laplacian Pyramid with orthogonal filters provides a tight frame with frame bound equal to one.

In the Laplacian Pyramid decomposition of an image, f(i, j) denotes the original image, flo(i, j) its low-pass filtered version, and f̂(i, j) the predicted image. The prediction error is given by

Pe(i, j) = f(i, j) - f̂(i, j) (2)

The directional decomposition is applied to Pe(i, j) because it is largely decorrelated and requires fewer bits than f(i, j). In equation (2), Pe(i, j) is a band-pass filtered image. The decomposition of equation (2) can be repeated on flo(i, j) to obtain fl1(i, j), fl2(i, j), ..., fln(i, j), where n is the number of pyramid levels. In the Laplacian Pyramid reconstruction, the image is recovered by simply adding the difference back to the prediction obtained from the coarse image.
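As an illustration, the following MATLAB sketch carries out one level of Laplacian Pyramid decomposition. The separable 5-tap kernel, the factor-of-two sampling and the demo image are illustrative assumptions rather than the exact filters used in this project.

% One level of Laplacian Pyramid decomposition (illustrative sketch).
f = im2double(imread('cameraman.tif'));  % original image f(i, j) (Image Processing Toolbox demo image assumed)

h = [1 4 6 4 1] / 16;                    % separable 5-tap low-pass kernel (assumed)
lp = conv2(h, h, f, 'same');             % low-pass filter the image
flo = lp(1:2:end, 1:2:end);              % down-sample by 2 in each direction -> flo(i, j)

up = zeros(size(f));                     % up-sample by zero insertion
up(1:2:end, 1:2:end) = flo;
fhat = 4 * conv2(h, h, up, 'same');      % interpolate to obtain the prediction of eq. (2)

Pe = f - fhat;                           % prediction error = band-pass image, as in eq. (2)

In a full decomposition, the same two steps would be repeated on flo(i, j) to obtain the further levels fl1, fl2, ..., fln.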


[2] Fig 1.3. Laplacian Pyramid Decomposition for one level

1.4. Directional Filter Bank Decomposition [2]:

The Directional Filter Bank is designed to capture the high-frequency content of an image, such as smooth contours and directional edges. It is implemented as a k-level binary tree decomposition that produces 2^k directional sub-bands with wedge-shaped frequency partitioning, as shown in Fig 1.4. The Directional Filter Bank discussed in this report is a simplified DFB built from two building blocks. The first block is a two-channel quincunx filter bank with fan filters, which splits the 2-D spectrum into two directions, horizontal and vertical. The second block is a shearing operation, which re-arranges the image pixels. Because of these two operations, the directional information is preserved, which is the desirable property in a CBIR system for improving retrieval accuracy. The combination of the Laplacian Pyramid filter and the Directional Filter Bank gives the double filter bank structure known as the contourlet filter bank. The band-pass images from the Laplacian Pyramid are fed to the Directional Filter Bank so that the directional information can be captured, and this scheme can be iterated on the coarse image. The result is a double iterated filter bank, the contourlet filter bank, which decomposes the original image into directional sub-bands at multiple resolutions.

[2] Fig 1.4. Directional Filter Bank Frequency Partitioning

[5] The Directional Filter Banks are arranged in series, so that the input image is processed through a cascade of Directional Filter Banks to obtain the enhanced output. This can be seen in Fig 1.5: it is informative to view the l-level tree structure in its equivalent multichannel form.

[5] Fig 1.5. The multichannel view of tree-structured directional filter bank.

An l-level tree-structured Directional Filter Bank is equivalent to a 2^l parallel-channel filter bank with equivalent (directional) synthesis filters and overall sampling matrices, as seen in Fig 1.5.

The corresponding sampling matrices have the following diagonal forms:

S_k = diag(2^(l-1), 2) for 0 ≤ k < 2^(l-1), and S_k = diag(2, 2^(l-1)) for 2^(l-1) ≤ k < 2^l (3)

which correspond to the mostly horizontal and the mostly vertical directions, respectively.

This construction shows both the directionality and localization properties.

1.5. Algorithm Description [1]:

A. The process starts with the noisy image.

B. This image is converted into the Contourlet transform domain using the decomposition process. In the wavelet transform, the coefficients are determined with a scaling filter and a wavelet filter; in the Contourlet transform, a discrete-domain multi-scale and multi-direction expansion is constructed using non-separable filter banks. This construction gives a flexible multi-scale, localized and directional expansion of the image in terms of contour segments, which is why it is named the Contourlet transform.

C. Thus from the decomposition process the coefficients are determined.

D. Then the variance is estimated for each noisy image pixel.

E. The resulting values are compared with a threshold value to determine whether the pixel is corrupted.

F. If a pixel is corrupted, it is suppressed or modified; otherwise it is preserved for further processing.

G. Finally, the remaining coefficients are used in the reconstruction, which yields the denoised image.

1.6. Thresholding:

[1] For denoising, the coefficients of the noisy image are generally compared with a threshold value, usually obtained by trial and error. Since human eyes are very sensitive to the intensity of neighbouring pixels, the variance in homogeneous regions must remain small after denoising. By choosing the threshold values as a function of the variance, the noise level in the corrupted image can be reduced further. In this algorithm, the threshold is set according to the variance of the corrupted image. The threshold is fixed based on results from various noise variance levels (nvar); the intensity of the noise added to the image (th) and the standard deviation of the noise-free image (sigma) are also factors in fixing the threshold values. Based on the results obtained, the threshold values should be chosen differently for high and low noise levels. For speckle noise the default variance is 0.04, so a speckle variance (nvar) above 0.05 is treated as a high noise level and below 0.05 as a low noise level, and two separate threshold values are introduced. The results also show that these two separate thresholds improve the denoising ability of the algorithm. To reconstruct the image, the coefficients above the threshold are retained for the Contourlet reconstruction and the coefficients below the threshold are suppressed; the retained coefficients are then used to reconstruct the denoised image. This process is shown in Fig 1.6. [1] There are many other algorithms for denoising images, particularly speckle-corrupted images in remote-sensing applications, but this algorithm is simpler and effective in comparison.
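A minimal MATLAB sketch of this variance-dependent hard thresholding is given below. It assumes the contourlet coefficients are already available as a cell array coeffs of band-pass sub-band matrices; the value of nvar, the 0.05 cut-off and the threshold formulas are illustrative, since the exact values used in the project are not reproduced here.

% Variance-dependent hard thresholding of sub-band coefficients (illustrative sketch).
% coeffs : cell array of band-pass sub-band matrices (assumed to exist already)
nvar  = 0.06;                        % estimated speckle noise variance of the corrupted image
sigma = 0.02;                        % estimated standard deviation used in the threshold (assumed)

if nvar > 0.05                       % high noise level -> larger threshold
    th = 3.0 * sqrt(nvar) * sigma;   % illustrative threshold formula
else                                 % low noise level -> smaller threshold
    th = 1.5 * sqrt(nvar) * sigma;
end

for b = 1:numel(coeffs)              % loop over the band-pass sub-bands
    c = coeffs{b};
    c(abs(c) < th) = 0;              % suppress coefficients below the threshold
    coeffs{b} = c;                   % retained coefficients go to the contourlet reconstruction
end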

[1]Fig 1.6. Thresholding algorithm.

Because the algorithm is simple, the time needed to denoise the image completely is short, and a hardware implementation is also feasible with high-memory VLSI technologies.

CHAPTER 2

HADAMARD TRANSFORMATION

2.1. Introduction:

The fast Fourier transform (FFT) technique motivated a Fourier-based image transmission scheme in which the two-dimensional Fourier transform of an image is sent over a channel instead of the image itself. Refining this idea led to a related image-coding scheme in which the image is transformed by a Hadamard matrix operator. The Hadamard matrix is a square array of plus and minus ones whose rows and columns are orthogonal to one another. A high-speed computational algorithm, similar to the FFT algorithm, was developed for computing the Hadamard transform. Since only real additions and subtractions are required, the Hadamard transform offers an efficiency advantage compared with the complex-valued Fourier transform. Transmitting the Hadamard transform of an image instead of its spatial-domain representation gives a potential tolerance to channel errors and the possibility of reduced-bandwidth transmission.

The Fourier transform is the natural candidate for the image encoding and transmission problem because it is widely used in other applications and because highly efficient computational algorithms exist for it. For image coding specifically, what is required is a two-dimensional invertible transform that has an image-averaging property and redistributes the image energy uniformly. It is a further benefit, from the point of view of a fast computational implementation, if the operator is its own inverse. A symmetric Hadamard matrix satisfies all of these requirements, which is why the Hadamard transform performs better in many respects than Fourier-domain image encoding.

2.2. Hadamard Matrices:

The Hadamard matrix is a square array of plus and minus ones in which the columns (and rows) are orthogonal to one another. If H is an N x N Hadamard matrix, then the product of H with its own transpose is N times the identity matrix. Hence,

H H^T = N I (4)

where I is the identity (unity) matrix. If H is a symmetric Hadamard matrix, the relation becomes

H H = N I (5)

The columns (and rows) of a Hadamard matrix can be interchanged without disturbing its orthogonality properties.

Fig 2.1. Hadamard matrices of order N = 2^n

The Hadamard matrix can be given a frequency interpretation: in each row, the number of sign changes plays the role of frequency. The term "sequency" is used for this sign-change count. Fig. 2.1 shows the sequency interpretation for several Hadamard matrices of power-of-two order. Hadamard matrices of order N = 2^n can be constructed in which the rows contain every number of sign changes from 0 to N - 1.
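As a quick illustration, the following MATLAB snippet builds a Hadamard matrix with the built-in hadamard function, counts the sign changes (sequency) of each row, and checks the orthogonality relation of equation (4); the order 8 is an arbitrary choice.

% Hadamard matrix, sequency of each row, and the orthogonality check of eq. (4).
N = 8;                                      % order must be a power of two for hadamard()
H = hadamard(N);                            % entries are +1 and -1

sequency = sum(abs(diff(H, 1, 2)) > 0, 2);  % number of sign changes along each row
disp([(1:N)' sequency]);                    % in natural (Hadamard) order the sequencies are not sorted

disp(isequal(H * H', N * eye(N)));          % H*H' = N*I, as in equation (4)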

2.3. Hadamard Transformation of Images:

Let the array f(x, y) contain the intensity samples of the original image, taken over an array of N^2 points. The two-dimensional Hadamard transform F(u, v) of f(x, y) is then given by the matrix product

F = (1/N) H f H (6)

For a symmetric Hadamard matrix of order N = 2^n, the 2D Hadamard transform can be written in series form as

F(u, v) = (1/N) Σ_{x=0..N-1} Σ_{y=0..N-1} f(x, y) (-1)^q(x, y, u, v) (7)

where

q(x, y, u, v) = Σ_{i=0..n-1} (u_i x_i + v_i y_i) (8)

The variables x_i, u_i, y_i and v_i are the bits of the binary representations of x, u, y and v respectively. Another series representation exists for the "ordered" Hadamard matrices, in which the sequency of each row is larger than that of the preceding row:

F(u, v) = (1/N) Σ_{x=0..N-1} Σ_{y=0..N-1} f(x, y) (-1)^p(x, y, u, v) (9)

where

p(x, y, u, v) = Σ_{i=0..n-1} [g_i(u) x_i + g_i(v) y_i], with g_0(u) = u_{n-1} and g_i(u) = u_{n-i} + u_{n-i-1} for i ≥ 1 (10)

The 2D Hadamard transform can be computed in either the natural or the ordered format using an algorithm similar to the fast Fourier transform computational algorithm.
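A minimal MATLAB sketch of the matrix-product form of equation (6) is shown below. With the 1/N normalisation used above, applying the same operation twice recovers the original block, and the energy is conserved between the two domains (see Section 2.4); the 8 x 8 magic square simply stands in for an image block.

% 2D Hadamard transform of an N x N block via the matrix product of eq. (6).
N = 8;
H = hadamard(N);                         % symmetric Hadamard matrix, H*H = N*I
f = magic(N);                            % any N x N array stands in for an image block here

F = (H * f * H) / N;                     % forward transform, eq. (6)
f_rec = (H * F * H) / N;                 % applying the same operation again inverts it

disp(max(max(abs(f - f_rec))));          % reconstruction error (should be 0)
disp(abs(sum(f(:).^2) - sum(F(:).^2)));  % energy difference between domains (should be ~0)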

2.4. Hadamard Image Transformation Properties:

The Hadamard transform has several interesting properties. From the point of view of digital image encoding, the most important are dynamic range, energy conservation and entropy.

The zero-sequency term

F(0, 0) = (1/N) Σ_{x=0..N-1} Σ_{y=0..N-1} f(x, y) (11)

is a measure of the average brightness of the image. An energy conservation property holds between the Hadamard domain and the spatial domain.

2.5. Computational Algorithm:

The Hadamard transform is computed in two stages. In the first stage, a 1D Hadamard transform is computed for every row of the array f(x, y), producing

F(u, y) = Σ_{x=0..N-1} f(x, y) (-1)^(Σ_i u_i x_i) (12)

In the second stage, a 1D Hadamard transform is computed for every column of F(u, y), producing

F(u, v) = (1/N) Σ_{y=0..N-1} F(u, y) (-1)^(Σ_i v_i y_i) (13)

Computing a 1D Hadamard transform directly requires on the order of N^2 operations, where each operation is an addition or a subtraction. To reduce the number of operations, a fast Hadamard algorithm was developed that performs the computation in N log2 N operations.
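The following MATLAB sketch shows the kind of butterfly recursion behind that N log2 N operation count. It computes the unnormalized 1D transform in natural (Hadamard) order, i.e. the result equals hadamard(N) * x(:); it is an illustration rather than the specific implementation referred to above.

function y = fwht_natural(x)
% Fast Walsh-Hadamard transform (unnormalized, natural/Hadamard order).
% x must have a power-of-two length; the result equals hadamard(N) * x(:).
y = x(:);
N = numel(y);
h = 1;
while h < N
    for i = 1:2*h:N
        for j = i:(i + h - 1)        % butterfly on the pair (j, j+h)
            a = y(j);  b = y(j + h);
            y(j)     = a + b;
            y(j + h) = a - b;
        end
    end
    h = 2 * h;                       % log2(N) passes of N/2 butterflies each
end
end

For example, fwht_natural(ones(8, 1)) returns [8; 0; 0; 0; 0; 0; 0; 0], since only the zero-sequency row of the Hadamard matrix survives a constant input.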

The Hadamard transform of an image has to be quantized for subsequent image encoding and transmission over a channel. To quantize the Hadamard-domain samples, the number and placement of the quantization levels must be determined.

2.6. Advantages[8]:

The simple implementation of the fast Hadamard transform gives it a considerable advantage: short processing time and easier hardware implementation than other existing orthogonal transformations, such as the discrete cosine transform (DCT) and the wavelet transform.

Another advantage of the Hadamard transform is that the elements of the transformation matrix H are real, simple binary-valued numbers, and the rows and columns of H are orthogonal to each other.

[9] Because of its computational simplicity, it can be used in demanding applications such as face recognition and fibre-optic sensing.

2.7. Disadvantages[10]:

The disadvantages of the Hadamard transform in a parallel pipelined design are that the unnormalized transform uses a large number, N, of accumulators, each performing (N - 1) additions, and that the normalized transform needs N extra multipliers. The disadvantage of the earlier FHT parallel pipelined design is that the unnormalized transform uses log2(N) stages with N adders in each stage; the normalized transform again needs N additional multipliers.
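To make these operation counts concrete under the assumptions just stated: for N = 1024, the direct form needs N(N - 1) = 1024 x 1023 ≈ 1.05 x 10^6 additions per transform, whereas the FHT pipeline needs N log2(N) = 1024 x 10 = 10240 additions; in both cases the normalized transform requires 1024 additional multipliers.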

The transform also affects the degree of image degradation caused by quantization errors and the entropy of the quantized symbols.

To overcome these disadvantages, a more capable Contourlet image transformation is used, which enhances the image quality by combining multiple resolutions of the image.

CHAPTER 3

MATLAB

3.1. Introduction:

[17] MATLAB is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis and numeric computation. With MATLAB, solutions to technical computing problems can be reached faster than with traditional compiled programming languages such as COBOL, C and C++. MATLAB can be applied to a wide range of applications, including signal and image processing, communications, control system design, test and measurement, financial modelling and analysis, and computational biology. Add-on toolboxes extend the MATLAB environment to solve particular classes of problems in these application areas. MATLAB offers many options for documenting and sharing work, and MATLAB code can be integrated with other languages and applications and distributed to end users.

3.2. History:

[19] MATLAB was initially developed in the late 1970s by Cleve Moler, then chairman of the computer science department at the University of New Mexico, who designed it to give his students access to LINPACK and EISPACK without their having to learn Fortran. It soon spread to other universities and found a strong following in the applied mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983. Recognizing its commercial potential, he joined with Moler and Steve Bangert; they rewrote MATLAB in C and founded MathWorks in 1984 to continue its development. These rewritten libraries were known as JACKPAC. In 2000, MATLAB was rewritten to use a newer set of libraries for matrix manipulation, LAPACK.

3.3. Main features[17]:

Advanced language for technical computing

Improved platform for managing programs, files, and data

Interactive tools for iterative exploring, designing, and problem solving

Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration

2-D and 3-D graphics functions for visualizing data

Tools for building custom GUIs

Functions for integrating MATLAB-based algorithms with external applications and languages, such as C, C++, Fortran, Java, COM, and Microsoft Excel

At the same time, MATLAB provides all the important features of a standard programming language, including arithmetic operations, flow control, structured data types, object-oriented programming (OOP), and debugging features.

3.4. Analysis of data[18]:

The MATLAB platform provides interactive tools and command-line functions for data analysis, including:

Interpolating and decimating

Extracting sections of data, scaling, and averaging

Thresholding and smoothing

Correlation, Fourier analysis, and filtering

1-D peak, valley, and zero finding

Basic statistics and curve fitting

Matrix analysis

3.5. Accessing the data[18]:

MATLAB is an efficient platform for accessing data from files, other applications, databases, and external hardware devices. Data can be read from popular file formats such as Microsoft Excel, ASCII text or binary files; image, sound and video files; and scientific file formats such as HDF and HDF5. Low-level ASCII and binary file I/O functions allow working with data files in any format. Additional functions allow data to be retrieved from web pages and XML. Data can also be acquired from hardware devices such as a computer's serial port or sound card; with the Data Acquisition Toolbox, live measured data can be streamed directly into MATLAB for analysis and visualization, and the separately available Instrument Control Toolbox enables data transfer with hardware such as GPIB and VXI instruments. MATLAB provides mathematical, statistical and engineering functions to support all common engineering and science operations, including the following functions for mathematics, arithmetic and data analysis:

Matrix manipulation and linear algebra

Polynomials and interpolation

Fourier analysis and filtering

Data analysis and statistics

Optimization and numerical integration

Ordinary differential equations (ODEs)

Partial differential equations (PDEs)

Sparse matrix operations

MATLAB can perform arithmetic on a wide range of data types, including doubles, singles, and integers.

CHAPTER 4

MODELSIM

4.1. Introduction:

[20] ModelSim combines high performance and large capacity with the code coverage and debugging capabilities required to simulate large blocks and systems and to achieve ASIC gate-level sign-off. ModelSim's support for Verilog, VHDL and SystemC provides a solid foundation for single- and multi-language verification environments. Using 'vopt' mode, ModelSim achieves leading performance and capacity through aggressive, global compile and simulation optimizations for Verilog and VHDL, raising Verilog and mixed VHDL/Verilog RTL simulation performance by up to 10 times; for gate-level Verilog, performance can rise by up to 4 times and capacity by more than 2 times. ModelSim also offers faster time-to-next-simulation and efficient library management, while maintaining good performance, through its black-box model, called bbox. With bbox, unchanged blocks can be compiled and optimized once and reused while simulating different versions of the testbench; bbox can improve throughput by around 3 times when running a large number of test cases.

ModelSim's code coverage capability provides improved metrics for systematic verification. The types of coverage supported are:

Statement coverage: the number of statements executed during the run.

Branch coverage: the if/else and case branches that affect the control flow of the HDL execution.

Condition coverage: the breakdown of branch conditions into the individual terms that produce the result.

Expression coverage: similar to condition coverage, but applied to expressions on the right-hand side of signal assignments rather than to branch decisions.

Focused expression coverage: coverage data for expressions presented so that each independent input to an expression contributes to the coverage result.

Enhanced toggle coverage: in the default mode, counts low-to-high and high-to-low transitions of each signal; in the extended mode, it also counts transitions to and from X.

Finite state machine coverage: state and state-transition coverage.

In ModelSim, user-defined enumeration values can easily be assigned for easier analysis of results. For better debugging productivity, ModelSim also includes graphical and textual dataflow capabilities.

ModelSim is the tool for verifying and simulating VHDL, Verilog, SystemVerilog, and mixed language designs.

4.2. Basic Simulation[21]:

Fig 4.1. Basic Simulation Flow - Simulation Lab

This chapter uses a sample Verilog file, counter.v, to describe the design and simulation flow. The process is the same for .v and .vhd files.

4.2.1. Creating the Working Library:

Before simulating the design, a library must be created and the source code compiled into that library.

Copy the following project files into a newly created directory.

Verilog: copy the two Verilog files counter.v and tcounter.v from their source directory.

Begin ModelSim.

a. Start ModelSim using the icon on Windows. When ModelSim is launched for the first time, a Welcome screen appears; close it.

b. Select File > Change Directory and change to the directory created in step 1.

The working library should be created.

a. Select File > New > Library.

This launches a dialog where the physical and logical library names can be specified. A new library can be created, or a mapping to an existing library can be made.

Fig 4.2. Create a New Library Dialog

b. Type work in the Library Name field (if it isn't already entered automatically).

c. Click OK.

ModelSim creates a new directory named work and writes a specially formatted file called _info into it. The _info file must remain in the directory to identify it as a ModelSim library. The folder contents should not be edited from the operating system; all changes should be made from within ModelSim. ModelSim adds the library to the list in the Workspace (Fig 4.3) and records the library mapping for future use in the ModelSim initialization file (modelsim.ini).

Fig 4.3. Workspace listing Work Library

When OK is clicked in step 3c above, the following commands are printed to the Transcript:

vlib work

vmap work work

These two lines are the command-line equivalents of the menu selections made by the user. Many menu-driven functions echo their command-line equivalents in this way.

4.2.2. Compiling the Design:

After creating the working library, the source files are ready for compiling.

Compilation can be done using the menu options of the graphical interface, as in the example below, or by entering a command at the ModelSim prompt.

Compile the verilog files counter.v and tcounter.v.

a. Select Compile > Compile, which launches the Compile Source Files dialog (Fig 4.4).

Fig 4.4. Compile Source Files Dialog

b. Select both counter.v and tcounter.v in the Compile Source Files dialog and click Compile. The files are compiled into the work library.

c. When compiling is finished, select Done.

View the compiled design units.

Expand the Library tab; the compiled project contains two design units (Fig 4.5). The tab also shows their type (module, entity, etc.) and the path to the source files.

Fig4.5. Verilog Modules Compiled into work Library

Load the design by selecting Simulate > Start Simulation from the menu bar, which launches the Start Simulation dialog. After expanding the Design tab, the counter and test_counter modules are visible. Click the test_counter module and select OK.

Observe the design objects in the Objects pane. Select the View menu and click Objects. The Objects pane (Fig 4.6) displays the names and current values of the data objects in the current region. Data objects include signals, nets, registers, constants, variables not declared in a process, generics, and parameters.

Fig 4.6. Object Pane Displays Design Objects

Other windows and dialogs can be launched with the View menu or with the view command.

CHAPTER 5

RESULTS

5.1. Hadamard Transformation:

The two-dimensional discrete Hadamard transform was simulated in MATLAB by adding random noise to the image. The resulting noisy image was processed with a Hadamard transform using a 2 X 2 Hadamard matrix as the mask. Various combinations of the mask were applied to the noisy image, and the best result obtained is shown in Fig 5.1.
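A small MATLAB sketch of this kind of block-wise processing is given below. It assumes the Image Processing Toolbox is available for imnoise and the demo image; the speckle variance, the normalized 2 X 2 Hadamard mask and the threshold value are illustrative stand-ins for the actual experimental settings, which are not reproduced here.

% Block-wise 2 x 2 Hadamard denoising (illustrative sketch, not the exact experiment).
f = im2double(imread('cameraman.tif'));  % original image (Image Processing Toolbox demo image assumed)
g = imnoise(f, 'speckle', 0.04);         % noisy image (speckle noise, default variance)

H2 = [1 1; 1 -1] / sqrt(2);              % normalized 2 x 2 Hadamard mask (self-inverse)
th = 0.05;                               % illustrative threshold
den = zeros(size(g));

for r = 1:2:size(g, 1) - 1               % process the image in non-overlapping 2 x 2 blocks
    for c = 1:2:size(g, 2) - 1
        B = H2 * g(r:r+1, c:c+1) * H2;   % forward 2 x 2 Hadamard transform of the block
        B(abs(B) < th) = 0;              % suppress small, noise-dominated coefficients
        den(r:r+1, c:c+1) = H2 * B * H2; % inverse transform of the block
    end
end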

Fig 5.1. a) Original Image b) Noised Image c) Denoised Image using hadamard transformation

5.2. Contourlet Transformation:

For comparison, the same image, distorted to the same degree, was denoised using the Contourlet transformation; the result can be seen in Fig 5.2.

Fig 5.2. a) Original Image b) Noised Image c) Denoised Image using Contourlet Transform

5.3. Result Comparison:

For a closer comparison, the output images from the two transformations are shown side by side in Fig 5.3.

Fig 5.3. a) Denoised Image using Contourlet Transform, b) Denoised Image using Hadamard Transformation

5.4. VHDL Simulation of Hadamard Transformation:

The Hadamard transformation was simulated in ModelSim with the design written in VHDL. The following results were obtained.

Fig 5.4. Hadamard simulation for an 8 X 10 input.

CHAPTER 6

CONCLUSION

The results of this project show that the Contourlet image transformation is more effective than the Hadamard transformation, which has been used in many image detection methods. The Contourlet transformation removes noise more accurately using the different contourlet coefficients. The simulations show that the Contourlet transform achieves a higher SNR when denoising the image than the Hadamard transformation. The reduced denoising performance of the Hadamard transformation on the distorted image is due to the degradation caused by quantization errors and the entropy of the quantized symbols; it is also more expensive to implement, since it needs more memory to perform the many additions and arithmetic operations.