Diagram Of An Image Retrieval System Computer Science Essay


Computer assistance has reached virtually every domain within the field of medical imaging. Dedicated computer-aided diagnosis (CAD) tools with proven clinical impact exist for a narrow range of applications. Medical imaging modalities such as X-ray, CT, MRI, CT-PET, and PET provide visual information for accurate diagnosis and indexed medical treatment. Nowadays, medical databases are used to automatically classify visual features for image retrieval, which provides an indexed reference for easy therapy. Medical image retrieval provides an archive for identifying features similar to those of a given query image. In this work, it is proposed to implement a novel feature selection mechanism using the discrete sine transform. Classification uses a support vector machine (SVM), with its kernel function and regression values, together with a multilayer perceptron neural network, with its synaptic weights and activation functions. The retrieved results are processed to remove noise and blur, and the resulting noise-free images are further evaluated with statistical values and histogram processing to determine the accuracy of the similar features extracted.

Keywords: Support Vector Machine (SVM), Multilayer Perceptron Neural Network, Statistical Values

II. Introduction:

In the clinical practice of reading and interpreting medical images, clinicians (i.e., radiologists) often refer to and compare similar cases with verified diagnostic results when detecting and diagnosing suspicious diseases. However, searching for and identifying similar reference cases (or images) from large and diverse clinical databases is a quite difficult task. Advances in digital technologies for computing, networking, and database storage have enabled automated searching of large image databases for medical examinations (cases) that are clinically relevant and visually similar to the queried case.

There are two general approaches in medical image retrieval, namely text (or semantic) based image retrieval (TBIR) and content-based image retrieval (CBIR). Features from the query image are extracted by the same indexing mechanism. These query image features are then matched with the feature database using a similarity metric and, finally, similar images are retrieved. A majority of indexing techniques are based on pixel-domain features such as color, texture and shape. Some frequency-domain techniques use wavelet-domain features, the Gabor transform and Fourier-domain features for feature extraction. Texture refers to visual patterns that have properties of homogeneity not resulting from the presence of only a single color or intensity. It is an innate property of virtually all surfaces, including clouds, trees, bricks, hair, fabric, etc. It contains important information about the structural arrangement of surfaces and their relationship to the surrounding environment.
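As a minimal sketch of the matching step described above (the pre-computed feature vectors and the Euclidean distance metric below are illustrative assumptions, not choices stated in the text):

```python
import numpy as np

def retrieve_similar(query_features, db_features, top_k=5):
    """Rank database images by Euclidean distance to the query feature vector.

    query_features : 1-D array of features extracted from the query image.
    db_features    : 2-D array, one row of features per database image.
    Returns the indices of the top_k closest database images.
    """
    distances = np.linalg.norm(db_features - query_features, axis=1)
    return np.argsort(distances)[:top_k]

# Hypothetical example: 100 database images described by 64-dimensional features.
rng = np.random.default_rng(0)
db = rng.random((100, 64))
query = rng.random(64)
print(retrieve_similar(query, db))
```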

There are many pattern matching and machine learning tools and techniques for clustering and classification of linearly separable and non separable data. Support vector machine (SVM) is a relatively new classifier and it is based on strong foundations from the broad area of statistical learning theory.

Due to the huge growth of the World Wide Web, medical images are available in large numbers in online repositories, atlases, and other health-related resources. In such a web-based environment, medical images are generally stored and accessed in common formats such as JPEG (Joint Photographic Experts Group), GIF (Graphics Interchange Format), etc. These formats are used because they are easy to store and transmit compared with the large size of images in the DICOM format, but also for anonymization purposes.

However, unlike the DICOM format, these image formats carry no header information. In this case, the text-based approach is both expensive and ambiguous, because manually annotating these images is extremely time-consuming, highly subjective and requires domain-related knowledge. Content-based image retrieval (CBIR) systems overcome these limitations, since they can search for images based on modality, anatomic region and different acquisition views by automatically extracting visual information from the medical images. Currently, there exist some CBIR systems for medical images, such as medGIFT, COBRA and IRMA.

CBIR extracts low-level visual features such as color, texture, or spatial location automatically, and the images are retrieved based on these low-level visual features. Experiments demonstrate that retrieval performance can be enhanced by employing multiple features, since each feature extracted from an image characterizes only a certain aspect of the image content, and multiple features can provide a more adequate description of it. Further experiments also show that a given feature is not equally important for different image queries, since it has different importance in reflecting the content of different images.

Although some research efforts have been reported that enhance image retrieval performance through feature fusion, most existing feature fusion methods for image retrieval only use query-independent feature fusion, which applies a single feature fusion model to all image queries and does not consider that a given feature is not equally important for different image queries; the remaining methods usually require the users to tune appropriate parameters of the feature fusion models for each image query.

The CombSum Score, CombMax Score, CombSum Rank and CombMax Rank fusion models are used to fuse the multiple similarities obtained with multi-feature, multi-example queries; they treat different features equally for all queries and can be called average fusion models. Obviously, the average fusion models are not optimal, as different features usually have different retrieval performances. In the literature, a genetic algorithm is used to learn the best weights for the different features, and the learned feature fusion model is then applied to all image queries. Alternatively, different features are assigned different weights according to their average retrieval precision, and the adjusted feature fusion model is then applied to all image queries.

The feature fusion methods presented in these works can enhance retrieval performance to some extent, as the different retrieval performances of different features are considered. However, firstly, a certain amount of training data is needed. Secondly, the learned fusion models are not optimal for each image query, as a given feature is not equally important for different image queries. In other approaches, the combined similarity between images is measured using one of the features selected by a feature fusion model expressed with logic operations based on the Boolean model.

To overcome the limitations of the traditional Boolean model, a hierarchical decision fusion framework formulated on fuzzy logic was introduced to extend the AND and OR operations of Boolean logic. In this framework, the feature fusion models for different image queries are presented as logic-based expressions, and they usually require the users to tune appropriate parameters for the fusion models, which requires the user to have a good understanding of the low-level features of the query images. In the literature, a query-dependent feature fusion method for image retrieval based on support vector machines (LSVMC) has also been proposed. Regarding the multiple image examples provided by the user as positive examples and randomly selected image examples from the image collection as negative examples, the author formulates the query-dependent feature fusion problem as a strict two-class classification problem and solves it with support vector machines, with equal treatment of positive and negative examples.

III. Literature Survey:

However, the strict two-class classification approach is not always reasonable, since the negative examples randomly selected from the image collection can belong to any class and usually do not cluster.

In this paper, with multiple image examples provided by the user, we propose a new query-dependent feature fusion method for medical image retrieval based on one-class support vector machines. The query-dependent feature fusion problem is formulated as a one-class classification problem and solved with one-class support vector machines because of their good generalization ability. The proposed query-dependent feature fusion method for medical image retrieval can learn different feature fusion models for different image queries based only on the multiple image samples provided by the user, and the learned feature fusion models can reflect the different importance of a given feature for different queries.

The remainder of the paper is organized as follows. In Section 2, we give the formal definition of the query-dependent feature fusion problem as a one-class classification problem.

Fig. 1 shows the block diagram of an image retrieval system.

Image retrieval plays a fundamental role in handling the large amounts of visual information in medical applications. The performance of an image retrieval system depends upon the multi-dimensional feature vector formed from information extracted from the images, the computation of the similarity measures, and the accurate identification of the database images with the lowest distance metrics with respect to the query image. Transform methods are widely used in image processing, as a large number of coefficients can be ignored to reduce the size of the feature vector.

In this paper, it is proposed to extract the frequency vector from medical images using the Discrete Sine Transform (DST) and to perform feature reduction using Information Gain (IG). The proposed Gaussian poly kernel SVM is used to classify the obtained feature vectors into the given classes. The following sections introduce the proposed DST, Information Gain, and the proposed Support Vector Machine.
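As a rough illustration of this pipeline (the exact DST variant, the number of retained coefficients, and the kernel of the proposed GPK-SVM are not specified here, so the choices below are assumptions), a sketch in Python could look like:

```python
import numpy as np
from scipy.fftpack import dst
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

def dst_features(image, keep=64):
    """Flatten the 2-D DST coefficients and keep the first `keep` values
    as a frequency-domain feature vector (a simple truncation choice)."""
    coeffs = dst(dst(image, type=2, axis=0), type=2, axis=1)
    return np.abs(coeffs).ravel()[:keep]

# Hypothetical training data: 40 small grayscale images in two classes.
rng = np.random.default_rng(1)
images = rng.random((40, 32, 32))
labels = np.repeat([0, 1], 20)

X = np.array([dst_features(img) for img in images])

# Information-gain style reduction via mutual information between feature and class.
selector = SelectKBest(mutual_info_classif, k=16).fit(X, labels)
X_reduced = selector.transform(X)

# Stand-in classifier: a polynomial-kernel SVM (the paper's GPK-SVM formula
# is not given in this section, so a standard kernel is used here).
clf = SVC(kernel="poly", degree=2).fit(X_reduced, labels)
print(clf.score(X_reduced, labels))
```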

IV. DISCRETE WAVELET TRANSFORM

2.1 Introduction

The transform of a signal is just another form of representing the signal; it does not change the information content present in the signal. The Wavelet Transform provides a time-frequency representation of the signal. It was developed to overcome the shortcomings of the Short Time Fourier Transform (STFT), which can also be used to analyze non-stationary signals. While the STFT gives a constant resolution at all frequencies, the Wavelet Transform uses a multi-resolution technique by which different frequencies are analyzed with different resolutions.

A wave is an oscillating function of time or space and is periodic. In contrast, wavelets are localized waves. They have their energy concentrated in time or space and are suited to analysis of transient signals. While Fourier Transform and STFT use waves to analyze signals, the Wavelet Transform uses wavelets of finite energy.

Figure 2.1 Demonstration of (a) a Wave and (b) a Wavelet [2].

The wavelet analysis is done similar to the STFT analysis. The signal to be analyzed is multiplied with a wavelet function just as it is multiplied with a window function in STFT, and then the transform is computed for each segment generated. However, unlike STFT, in Wavelet Transform, the width of the wavelet function changes with each spectral component. The Wavelet Transform, at high frequencies, gives good time resolution and poor frequency resolution, while at low frequencies, the Wavelet Transform gives good frequency resolution and poor time resolution.

The Continuous Wavelet Transform and the Wavelet Series:

The Continuous Wavelet Transform (CWT) is given by equation 2.1, where x(t) is the signal to be analyzed and ψ(t) is the mother wavelet or basis function. All the wavelet functions used in the transformation are derived from the mother wavelet through translation (shifting) and scaling (dilation or compression).
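Equation 2.1 is not reproduced in this copy; the standard definition of the CWT, consistent with the translation parameter τ and scale parameter s described below, is:

$$X_{WT}(\tau, s) = \frac{1}{\sqrt{|s|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - \tau}{s}\right) dt \qquad (2.1)$$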

The mother wavelet used to generate all the basis functions is designed based on some desired characteristics associated with that function. The translation parameter τ relates to the location of the wavelet function as it is shifted through the signal; thus, it corresponds to the time information in the Wavelet Transform. The scale parameter s is defined as |1/frequency| and corresponds to frequency information. Scaling either dilates (expands) or compresses a signal. Large scales (low frequencies) dilate the signal and provide global information about the signal, while small scales (high frequencies) compress the signal and reveal the detailed information hidden in it. Notice that the Wavelet Transform merely performs the convolution operation of the signal and the basis function. The above analysis becomes very useful in most practical applications, as high frequencies (low scales) do not last for a long duration but instead appear as short bursts, while low frequencies (high scales) usually last for the entire duration of the signal.

The Wavelet Series is obtained by discretizing CWT. This aids in computation of CWT using computers and is obtained by sampling the time-scale plane. The sampling rate can be changed accordingly with scale change without violating the Nyquist criterion. Nyquist criterion states that, the minimum sampling rate that allows reconstruction of the original signal is 2ω radians, where ω is the highest frequency in the signal. Therefore, as the scale goes higher (lower frequencies), the sampling rate can be decreased thus reducing the number of computations.

2.3 The Discrete Wavelet Transform

The Wavelet Series is just a sampled version of the CWT, and its computation may consume a significant amount of time and resources, depending on the resolution required. The Discrete Wavelet Transform (DWT), which is based on sub-band coding, is found to yield a fast computation of the Wavelet Transform. It is easy to implement and reduces the computation time and resources required. The foundations of the DWT go back to 1976, when techniques to decompose discrete-time signals were devised [5]. Similar work was done in speech signal coding, where it was named sub-band coding. In 1983, a technique similar to sub-band coding was developed and named pyramidal coding. Later, many improvements were made to these coding schemes, resulting in efficient multi-resolution analysis schemes.

In CWT, the signals are analyzed using a set of basis functions which relate to each other by simple scaling and translation. In the case of DWT, a time-scale representation of the digital signal is obtained using digital filtering techniques. The signal to be analyzed is passed through filters with different cutoff frequencies at different scales.

2.4 DWT and Filter Banks

2.4.1 Multi-Resolution Analysis using Filter Banks

Filters are one of the most widely used signal processing functions. Wavelets can be realized by iteration of filters with rescaling. The resolution of the signal, which is a measure of the amount of detail information in the signal, is determined by the filtering operations, and the scale is determined by upsampling and downsampling (subsampling) operations [5].

The DWT is computed by successive lowpass and highpass filtering of the discrete time-domain signal, as shown in figure 2.2. This is called the Mallat algorithm or Mallat-tree decomposition. Its significance is in the manner in which it connects the continuous-time multiresolution to discrete-time filters. In the figure, the signal is denoted by the sequence x[n], where n is an integer. The low pass filter is denoted by G0 while the high pass filter is denoted by H0. At each level, the high pass filter produces detail information, d[n], while the low pass filter associated with the scaling function produces coarse approximations, a[n].

Figure 2.2 Three level wavelet decomposition tree

After half-band low-pass filtering, a signal that originally had a highest frequency of ω (requiring a sampling frequency of 2ω radians) has a highest frequency of ω/2 radians. It can now be sampled at a frequency of ω radians, thus discarding half the samples with no loss of information. This decimation by 2 halves the time resolution, as the entire signal is now represented by only half the number of samples. Thus, while the half-band low-pass filtering removes half of the frequencies and thus halves the resolution, the decimation by 2 doubles the scale. With this approach, the time resolution becomes arbitrarily good at high frequencies, while the frequency resolution becomes arbitrarily good at low frequencies.

The time-frequency plane is thus resolved as shown in figure 1.1(d) of Chapter 1. The filtering and decimation process is continued until the desired level is reached. The maximum number of levels depends on the length of the signal. The DWT of the original signal is then obtained by concatenating all the coefficients, a[n] and d[n], starting from the last level of decomposition.

Figure 2.3 shows the reconstruction of the original signal from the wavelet coefficients. Basically, the reconstruction is the reverse process of decomposition. The approximation and detail coefficients at every level are upsampled by two, passed through the low pass and high pass synthesis filters and then added. This process is continued through the same number of levels as in the decomposition process to obtain the original signal.
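A minimal sketch of this decomposition and reconstruction using the PyWavelets library (the specific wavelet filters and number of levels used in this work are not stated here, so 'db4' and three levels are assumptions):

```python
import numpy as np
import pywt

# A hypothetical 1-D signal standing in for one row of image data.
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)

# Three-level Mallat decomposition: returns [a3, d3, d2, d1],
# i.e. the coarse approximation followed by the detail coefficients per level.
coeffs = pywt.wavedec(x, "db4", level=3)

# Perfect reconstruction from the concatenated coefficients.
x_rec = pywt.waverec(coeffs, "db4")
print(np.allclose(x, x_rec[: len(x)]))
```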

2.4.2 Conditions for Perfect Reconstruction

In most Wavelet Transform applications, it is required that the original signal be synthesized from the wavelet coefficients. To achieve perfect reconstruction, the analysis and synthesis filters have to satisfy certain conditions. Let G0(z) and G1(z) be the low pass analysis and synthesis filters, respectively, and H0(z) and H1(z) the high pass analysis and synthesis filters, respectively. Then the filters have to satisfy the following two conditions as given in [4]:
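The two conditions themselves are not reproduced in this copy; in the standard two-channel filter bank notation used above they can be written as:

$$G_0(-z)\,G_1(z) + H_0(-z)\,H_1(z) = 0$$
$$G_0(z)\,G_1(z) + H_0(z)\,H_1(z) = 2z^{-d}$$

where the first equation cancels aliasing and the second (a pure delay of d samples) means there is no amplitude distortion.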

The first condition implies that the reconstruction is aliasing-free, and the second condition implies that the amplitude distortion has an amplitude of one (i.e., there is no amplitude distortion). It can be observed that the perfect reconstruction condition does not change if we switch the analysis and synthesis filters.

There are a number of filters which satisfy these conditions. But not all of them give accurate Wavelet Transforms, especially when the filter coefficients are quantized. The accuracy of the Wavelet Transform can be determined after reconstruction by calculating the Signal to Noise Ratio (SNR) of the signal. Some applications like pattern recognition do not need reconstruction, and in such applications, the above conditions need not apply.

2.4.3 Classification of wavelets

We can classify wavelets into two classes: (a) orthogonal and (b) biorthogonal. Based on the application, either of them can be used. Orthogonal wavelet filter banks have the following features: the coefficients of orthogonal filters are real numbers, and the filters are of the same length and are not symmetric.

For perfect reconstruction, a biorthogonal filter bank has all odd-length or all even-length filters. The two analysis filters can be symmetric with odd length, or one symmetric and the other antisymmetric with even length. Also, the two sets of analysis and synthesis filters must be dual. The linear-phase biorthogonal filters are the most popular filters for data compression applications.

Figure showing wavelet families: (a) Haar, (b) Daubechies 4, (c) Coiflet 1, (d) Symlet 2, (e) Meyer, (f) Morlet, (g) Mexican Hat.

2.6 Applications

There is a wide range of applications for Wavelet Transforms. They are applied in different fields ranging from signal processing to biometrics, and the list is still growing. One of the prominent applications is the FBI fingerprint compression standard. Wavelet Transforms are used to compress the fingerprint pictures for storage in their databank. The previously chosen Discrete Cosine Transform (DCT) did not perform well at high compression ratios. It produced severe blocking effects which made it impossible to follow the ridge lines in the fingerprints after reconstruction. This did not happen with the Wavelet Transform, due to its property of retaining the details present in the data.

In the DWT, the most prominent information in the signal appears in high amplitudes and the less prominent information appears in very low amplitudes. Data compression can be achieved by discarding these low amplitudes. The wavelet transform enables high compression ratios with good quality of reconstruction. At present, the application of wavelets to image compression is one of the hottest areas of research. Recently, the Wavelet Transform has been chosen for the JPEG 2000 compression standard.
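A small sketch of this idea, hard-thresholding the wavelet coefficients of an arbitrary test signal (the threshold value and wavelet are illustrative assumptions):

```python
import numpy as np
import pywt

# Hypothetical signal; in practice this would be an image row or a 2-D transform.
x = np.cumsum(np.random.randn(512))

coeffs = pywt.wavedec(x, "db4", level=4)

# Discard (zero out) coefficients whose magnitude falls below a chosen threshold.
threshold = 0.5
compressed = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]

x_rec = pywt.waverec(compressed, "db4")
kept = sum(int(np.count_nonzero(c)) for c in compressed)
total = sum(c.size for c in compressed)
print(f"kept {kept}/{total} coefficients")
```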

Rigau et al. [14] proposed a two-step mutual-information-based algorithm for medical image segmentation. In the proposed method, the information channel between the histogram bins and the regions of the segmented image is optimized. In the first step, a binary space partition splits the image into relatively homogeneous regions. The second step involves clustering around the histogram bins of the partitioned image. The clustering is done by minimizing the mutual information loss of the reserved channel. The proposed algorithm preprocesses the images for multimodal image registration, which integrates the information of different images of the same or different subjects. Experimental results on different images show that the segmented images perform well in medical image registration using mutual-information-based measures.

4. SUPPORT VECTOR MACHINE

There are many pattern matching and machine learning tools and techniques for clustering and classification of linearly separable and non separable data. Support vector machine (SVM) is a relatively new classifier and it is based on strong foundations from the broad area of statistical learning theory. It is being used in many application areas such as character recognition, image classification, bioinformatics, face detection, financial time series prediction etc.

SVM offers many advantages over other classification methods such as neural networks:

• They are computationally very efficient compared with other classifiers, especially neural networks.

• They work well even with high-dimensional data and with a small number of training samples.

• They attempt to minimize the test error rather than the training error.

• They are very robust against noisy data.

• The curse of dimensionality and over-fitting problems do not occur during classification.

Fundamentally, SVM is a binary classifier, but it can be extended to multi-class problems as well. The task of binary classification can be represented as having (Xi, Yi) pairs of data, where Xi ∈ R^p, a p-dimensional input space, and Yi ∈ {−1, +1} for the two output classes. SVM finds the linear classification function g(x) = W.X + b, which corresponds to a separating hyperplane W.X + b = 0, where W is the weight vector and b is the bias.

Figure 1. One against all classification showing three support vector machines.

SVM usually incorporates kernel functions for mapping a non-linearly separable input space to a higher-dimensional, linearly separable space. Many kernel functions exist, such as radial basis functions (RBF), Gaussian, linear, sigmoid, etc. Different options exist to extend SVM to multi-class cases; these include one against all, one against one, and all at once. Figure 1 shows how one-against-all SVM can be used for grouping different classes inside an image database. Each support vector machine separates one class of images from the rest of the database.
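A brief sketch of this one-against-all grouping, assuming hypothetical pre-computed feature vectors and an RBF kernel (the actual features and kernel parameters are not given in this section):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Hypothetical feature vectors for a three-class image database
# (e.g. three anatomical regions), 30 images with 64 features each.
rng = np.random.default_rng(2)
X = rng.random((30, 64))
y = np.repeat([0, 1, 2], 10)

# One-against-all: one RBF-kernel SVM per class, each separating
# that class of images from the rest of the database.
model = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
print(model.predict(X[:5]))
```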

Support vector machine (SVM) is a linear machine which constructs a hyperplane as a decision surface. It is based on the method of structural risk minimization; the error rate is bounded by the sum of the training-error rate and a term that depends on the Vapnik-Chervonenkis (VC) dimension. The SVM provides good generalization performance on pattern classification. The principle of the SVM algorithm is based on the inner-product kernel between a "support vector" xi and a vector x drawn from the input space.

In this paper it is proposed to modify the existing poly kernel of Support Vector Machine (SVM) [15]. The proposed Gaussian poly kernel of SVM, GPK-SVM is derived as follows. The function K(x, y) is a kernel function if it satisfies (6).

∫∫ K(x, y) g(x) g(y) dx dy ≥ 0    (6)

for any square-integrable function g(x).
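The GPK-SVM derivation itself is not reproduced in this section, so the following sketch only illustrates how a custom kernel of this kind could be plugged into an SVM; the particular Gaussian-polynomial combination used here is an assumption, chosen because a product of two Mercer kernels still satisfies the condition above:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def gaussian_poly_kernel(X, Y):
    """Hypothetical Gaussian-poly kernel: elementwise product of a Gaussian (RBF)
    kernel and a polynomial kernel. This is an illustrative stand-in, not the
    paper's actual GPK-SVM formula."""
    return rbf_kernel(X, Y, gamma=0.5) * polynomial_kernel(X, Y, degree=2)

rng = np.random.default_rng(3)
X = rng.random((40, 16))
y = np.repeat([0, 1], 20)

# SVC accepts a callable kernel that returns the Gram matrix between two sets of samples.
clf = SVC(kernel=gaussian_poly_kernel).fit(X, y)
print(clf.score(X, y))
```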

Machine Learning is considered a subfield of Artificial Intelligence and is concerned with the development of techniques and methods which enable a computer to learn. In simple terms, it is the development of algorithms that enable a machine to learn and perform tasks and activities. Machine learning overlaps with statistics in many ways. Over time, many techniques and methodologies have been developed for machine learning tasks.

The Support Vector Machine (SVM) was first introduced in 1992 by Boser, Guyon, and Vapnik at COLT-92. Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression.

They belong to a family of generalized linear classifiers. In other terms, a Support Vector Machine (SVM) is a classification and regression prediction tool that uses machine learning theory to maximize predictive accuracy while automatically avoiding over-fitting to the data. Support Vector Machines can be defined as systems which use a hypothesis space of linear functions in a high-dimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. The support vector machine was initially popular with the NIPS community and is now an active part of machine learning research around the world. SVM became famous when, using pixel maps as input, it gave accuracy comparable to sophisticated neural networks with elaborate features in a handwriting recognition task.

It is also being used for many applications, such as handwriting analysis, face analysis and so forth, especially for pattern classification and regression based applications. The foundations of Support Vector Machines (SVM) were developed by Vapnik, and they gained popularity due to many promising features such as better empirical performance. The formulation uses the Structural Risk Minimization (SRM) principle, which has been shown to be superior to the traditional Empirical Risk Minimization (ERM) principle used by conventional neural networks. SRM minimizes an upper bound on the expected risk, whereas ERM minimizes the error on the training data. It is this difference which equips SVM with a greater ability to generalize, which is the goal in statistical learning. SVMs were developed to solve the classification problem, but recently they have been extended to solve regression problems [5].

Support Vector Machines (SVM) are an approximate implementation of the structural risk minimization (SRM) principle. They create a classifier with minimized Vapnik-Chervonenkis (VC) dimension. SVM minimizes an upper bound on the generalization error rate. The SVM can provide good generalization performance on pattern classification problems without incorporating problem domain knowledge. Consider the problem of separating the set of training vectors belonging to two classes:
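The training set referred to above is not written out in this copy; in the standard notation it is

$$\{(x_i, y_i)\}_{i=1}^{N}, \quad x_i \in \mathbb{R}^{p}, \; y_i \in \{-1, +1\},$$

where the $x_i$ are the input feature vectors and the $y_i$ their class labels.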

A linearly separable example in 2D is illustrated in Figure 1. If the two classes are not linearly separable, the input vectors are mapped into a higher-dimensional feature space using kernel functions [10].

Table 1 shows three typical kernel functions.
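The table itself is not reproduced here; three kernels typically listed in this context (and consistent with the kernels named earlier) are the linear, polynomial, and Gaussian (RBF) kernels:

$$K(x, y) = x \cdot y, \qquad K(x, y) = (x \cdot y + 1)^{d}, \qquad K(x, y) = \exp\!\left(-\frac{\|x - y\|^{2}}{2\sigma^{2}}\right)$$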

An optimal hyperplane is constructed for separating the data in the high-dimensional feature space. This hyperplane is optimal in the sense of being a maximal margin classifier with respect to the training data.

The distance indicates how much an example belonging to one class differs from the other one. This motivates us to use SVM for automatically generating preference weights for relevant images. Intuitively, the farther the positive examples are from the hyperplane, the more distinguishable they are from the negative examples. Thus, when we decide their preference weights, they should be assigned larger weights. Currently, we simply set the relation between the preference weights and the distance as a linear relation in the numerical calculation; it can easily be extended to a nonlinear relation. During the iterative query procedure, the positive and negative examples selected in the history are collected for learning at each query time.
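A small sketch of this weighting scheme (the feature vectors and the exact linear mapping from distance to weight are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical relevance-feedback round: positive (relevant) and negative
# (non-relevant) example feature vectors marked by the user.
rng = np.random.default_rng(4)
positives = rng.random((8, 16)) + 0.5
negatives = rng.random((8, 16))
X = np.vstack([positives, negatives])
y = np.array([1] * 8 + [0] * 8)

svm = SVC(kernel="rbf").fit(X, y)

# Signed distance of each positive example from the separating hyperplane;
# the preference weight is taken as a simple linear function of this distance.
distances = np.clip(svm.decision_function(positives), 0, None)
weights = distances / max(distances.sum(), 1e-12)
print(weights)
```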

PROPOSED SVM:


In this paper, it is proposed to modify the existing poly kernel of the Support Vector Machine (SVM) [15]. The proposed Gaussian poly kernel SVM, GPK-SVM, is derived as follows.


Figure 1 shows a GUI model for the medical image retrieval system.


Figure 2 shows removal of blur from the retrieved image.


Figure 3 shows removal of blur and noise from the retrieved image.


Figure 4 shows restoration of the blurred, noisy image using NSR = 0.


Figure 5 shows the noise-free image.


Figure 6 shows the histogram for the indexed image obtained.
