Since the advent of the Internet, many different web pages and multimedia applications have been developed. Developers use images in their designs to make them more appealing to end users, but using numerous images increases download time. This is where image compression is utilized.
In the past 15 years, numerous types of image compression have been developed. The field has attracted significant attention due to the increasing amount of data in use every day. Image compression software is widely used for storing and transmitting information: by eliminating redundant information, it minimizes the physical space required.
Previous studies have used different optimization algorithms to achieve image compression. Ant colony optimization, genetic algorithms, and particle swarm optimization have all been used and have already proved their worth in optimizing image compression.
In 2001, Geem et al. developed an optimization algorithm called Harmony Search. The Harmony Search Algorithm is a music-inspired algorithm that mimics the way musicians search for harmony when playing music. Although this optimization algorithm is relatively new, previous studies have shown its capability to solve optimization problems in different fields such as engineering, transportation, and environmental systems.
In this paper, the researcher will use the Harmony Search Algorithm to optimize lossy image compression.
An image is a two-dimensional picture that resembles the appearance of its subject. It is also referred to as the likeness seen or produced from something or somebody.
An image is composed of an array of numbers, ranging from 0 to 255, that represent light intensities at various points. These points are referred to as pixels (picture elements), and these pixels are what make up raster data. The pixels in an image determine its file size: each pixel uses 3 bytes to represent the primary colors red, green, and blue. When transmitting an image, file size becomes a major factor, and for this reason image compression is beneficial.
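The raw size described above follows directly from the pixel count and the 3 bytes per pixel. A small sketch (the 640x480 dimensions are an illustrative assumption, not from the text):

```python
# Hypothetical example: raw (uncompressed) size of a 640x480 RGB image,
# assuming 3 bytes (one each for red, green, blue) per pixel as described above.
width, height = 640, 480
bytes_per_pixel = 3

raw_size_bytes = width * height * bytes_per_pixel
print(raw_size_bytes)         # 921600 bytes
print(raw_size_bytes / 1024)  # 900.0 KiB
```

Even this modest image occupies nearly a megabyte uncompressed, which is why compression matters for transmission.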
This study focuses on digital images. The two major types of digital image are vector and raster (also called bitmap). A vector image is made up of individual objects defined by mathematical statements, with properties describing their color and fill. This type of image retains its highest quality at any scale since it is resolution independent. A raster or bitmap image, on the other hand, is composed of pixels that contain the color information to display. This type of image has a fixed resolution and loses quality when resized or displayed at another scale.
Figure 1.1: A Raster Image
Although vector graphics have similar or even better capabilities than raster graphics, they have not been as popular. One reason is their inability to produce photo-realistic imagery: solid areas of color do not properly depict the continuous subtle tones of a photograph. However, with the latest advances in image processing, vector tools are now able to apply bitmapped textures to objects.
Figure 1.2: Difference of Vector and Bitmap Images
The concept of image analysis is widely used in computer or machine vision. It is used to extract measurements and information from a digital image through image processing techniques, deriving significant information about selected image features such as area, size distribution, and length. Image analysis is the underlying concept of bar code reading and face recognition.
RGB mapping and numeric value display are just two of the tools mainly used in image analysis. Image analysis has made a significant impact in fields such as medicine, microscopy, and remote sensing.
1.1.2 Image Processing
Image processing takes a two-dimensional image as input and outputs either a modified image or a set of characteristics defining the image. It involves transformations and techniques derived from signal processing. Basic transformations in image processing include enlargement, size reduction, and rotation.
Image processing has contributed greatly to the successful implementation of computer vision, feature detection, and augmented reality. It has also contributed to the field of medicine; medical imaging and medical image analysis are just a few examples.
1.1.3 Evaluation of Image Quality
To evaluate the quality of an image against a reference image, quantitative measures are used. Two of the most widely used are the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR).
1.1.3.1 Mean Squared Error (MSE)
The Mean Squared Error quantifies the error between two images: it is the average of the squared differences between corresponding pixels. Figure 1.3 shows the formula for the MSE. A lower MSE value is better, meaning there is less difference between the two images.
Figure 1.3: Formula for MSE
1.1.3.2 Peak Signal-to-Noise Ratio (PSNR)
The Peak Signal-to-Noise Ratio estimates the quality of the compressed image compared to the original image and is measured in decibels (dB). Figure 1.4 shows the formula for the PSNR. A high PSNR value means the quality of the compressed image is less degraded relative to the original.
Figure 1.4: Formula for PSNR
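The two measures above can be computed directly from pixel values. The following is a minimal sketch for grayscale pixel lists (the sample pixel values are illustrative assumptions, not data from this study):

```python
import math

def mse(original, compressed):
    """Average of the squared pixel differences between two equal-sized images."""
    return sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)

def psnr(original, compressed, max_value=255):
    """Peak Signal-to-Noise Ratio in decibels; higher means less degradation."""
    error = mse(original, compressed)
    if error == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / error)

# Toy 4-pixel grayscale "images" (hypothetical values)
orig = [52, 55, 61, 59]
comp = [50, 55, 60, 59]
print(mse(orig, comp))   # 1.25
print(psnr(orig, comp))  # ~47.16 dB
```

Note how a tiny per-pixel error still yields a high PSNR, which matches the interpretation above: higher PSNR means less visible degradation.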
1.2 Image Compression
Image compression is very beneficial for image transmission over the Internet and for downloading from web pages. Its main purpose is to minimize the size in bytes of a graphic file without degrading its quality to an unacceptable level. Reducing the file size of an image allows more data to be stored in the same amount of memory and less time to be spent sending an image. Nowadays, many different compression applications are available, but the resulting images are often less than optimal.
There are two types of image compression: lossless and lossy. They are differentiated by how fully the image can be recovered when decompressed.
1.2.1 Lossless Image Compression
Lossless compression is a type of data compression in which no data is lost: the full data needed to reconstruct the original image is retained. Since no data is discarded, its compression is bounded and can typically reduce the original image to only about 50% of its original file size.
Graphics Interchange Format (GIF) and Portable Network Graphics (PNG) are two of the most widely used lossless image formats. Both support transparency, but only GIF supports animation. Because of their wide support and portability, their usage on the web has increased.
1.2.2 Lossy Image Compression
As the name implies, lossy compression is a type of data compression in which some of the information in the original image is permanently removed. Redundant information is eliminated in ways users may not notice. Since some data is permanently discarded during compression, there is some degradation of the visual quality of the image.
One of the most widely used lossy compression formats is JPEG, named after the Joint Photographic Experts Group. It is commonly used for storing and sending images on the Web. Its compression exploits the inability of the human eye to distinguish some delicate colors and high-frequency brightness variations in an image.
1.2.3 Compression Ratio
Compression ratio is a term in computer science that measures the reduction in data representation size produced by a data compression algorithm. It shows the compression power of an algorithm, and file size is the most commonly used parameter in measuring it. The compression ratio is defined as:

Compression Ratio = Uncompressed Size / Compressed Size
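As a quick illustration of the definition above (the byte counts are hypothetical):

```python
def compression_ratio(uncompressed_bytes, compressed_bytes):
    """Ratio of original size to compressed size; higher means stronger compression."""
    return uncompressed_bytes / compressed_bytes

# Hypothetical sizes: a 921,600-byte raw image compressed to 92,160 bytes
print(compression_ratio(921600, 92160))  # 10.0, often written as 10:1
```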
1.3 Color Space
A color space is an abstract model used to specify, create, and visualize color. It has three main attributes: saturation, brightness, and hue. A color can be identified in a color space by these attributes, which indicate its position within the space, and it is usually a conversion or combination of colors. Six different color spaces are currently used in image and video processing: RGB (Red, Green, Blue), CMYK (Cyan, Magenta, Yellow, Black), HSL (Hue, Saturation, Lightness), YUV, YCbCr, and YPbPr. Among these six, compression applications have mainly used RGB, YUV, and YCbCr.
RGB stands for the red, green, and blue components, which are the primary colors of light. This color space is widely used in both image processing and computer graphics. Although composed of only three colors, all other colors can be achieved by combining them. In image processing, RGB represents the colors that each pixel in the image displays. Each RGB component ranges from 0 to 255.
RGB is not an efficient color space for rendering on a standard video display, which is why it has to be converted. Figure 1.6 shows the equations for converting RGB to YUV, another color space.
Figure 1.5: Image split into its RGB components
RGB to YUV Conversion
Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
U = - (0.148 * R) - (0.291 * G) + (0.439 * B) + 128
V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
Figure 1.6: Formula for RGB to YUV Conversion
YUV is a color space that takes human visual perception into consideration, specifically the sensitivity of the human eye to brightness and color differences. This color space uses a luminance component (Y) and chrominance components (U and V). The luminance component ranges from 16 to 235.
YUV to RGB Conversion
B = 1.164(Y - 16) + 2.018(U - 128)
G = 1.164(Y - 16) - 0.813(V - 128) - 0.391(U - 128)
R = 1.164(Y - 16) + 1.596(V - 128)
Figure 1.7: Formula for YUV to RGB Conversion
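The conversion formulas in Figures 1.6 and 1.7 can be implemented directly. The sketch below applies them to a single pixel; the sample RGB values are an illustrative assumption, and a small round-trip error is expected from the rounded coefficients:

```python
def rgb_to_yuv(r, g, b):
    """RGB to YUV using the coefficients from Figure 1.6."""
    y = 0.257 * r + 0.504 * g + 0.098 * b + 16
    u = -0.148 * r - 0.291 * g + 0.439 * b + 128
    v = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, u, v

def yuv_to_rgb(y, u, v):
    """YUV back to RGB using the coefficients from Figure 1.7."""
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    g = 1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128)
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    return r, g, b

# Round trip on a sample pixel
y, u, v = rgb_to_yuv(200, 120, 50)
r, g, b = yuv_to_rgb(y, u, v)
print(round(r), round(g), round(b))  # 200 120 50
```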
Figure 1.8: Image split into its luminance and chrominance components
1.4 Metaheuristic algorithms
Many real-world problems focus on productivity and require optimization, and for these problems the discovery of an optimum solution cannot always be guaranteed. Instead of exact methods, a better-suited approach is heuristics, which use approximate, iterative trial-and-error methods to search for the best solution. Most heuristics are nature-inspired, and the latest development of this approach is the metaheuristic.
Metaheuristics fall under a larger class of algorithms called stochastic optimization. This type of optimization algorithm uses some degree of randomness to search for an optimal solution and is used for problems whose search space is too large for brute-force methods. A good solution is identified by testing a candidate and evaluating its performance. Metaheuristic algorithms have two basic functions, f and g: function f generates random solutions, while function g evaluates the performance of a particular solution.
Different approaches have been used in previous studies of image compression. Ant Colony Optimization and Genetic Algorithms, both metaheuristics, are just a few of those used. In this study, a relatively new metaheuristic called the Harmony Search Algorithm will be used to implement image compression.
1.5 Harmony Search Algorithm
Computer scientists have found a connection between playing music and finding optimal solutions, and this relationship led to the creation of a new algorithm called Harmony Search, first developed by Geem et al. in 2001. Though this metaheuristic is relatively new, its effectiveness and efficiency have been shown in various applications, including the design of water distribution networks, groundwater modeling, energy-saving dispatch, truss design, vehicle routing, function optimization, and engineering optimization.
Figure 1.9: Set of Jazz Instruments
Harmony Search is a music-based optimization algorithm inspired by music's aim of searching for perfect harmony. Finding harmony in music is analogous to finding an optimal solution in an optimization process. For example, in an orchestra or a band of jazz musicians, each musician, assigned to his own instrument, plays notes contributing to the overall quality of the harmony produced.
Figure 1.10: Relationship of Music Improvisation and Optimization
When searching for the best harmony, a musician uses one of three possible methods to come up with candidate elements: (1) playing from memory, (2) pitch adjustment, and (3) randomization.
1.5.1 Harmony Search: Creating Good Music
The music assigned to each instrument, for example in a jazz band, contributes to the overall harmony of the music being produced. To play or improvise music, a musician can use a combination of three methods: (1) playing music from memory, (2) playing music similar to the music in memory, and (3) creating music from random notes.
1.5.2 Harmony Search: Searching for Optimum Solution
Back in 2001, Geem et al. formalized the three elements of the newly developed optimization algorithm: (1) use of the harmony memory, (2) pitch adjustment, and (3) randomization. Each of these plays a vital role in the Harmony Search Algorithm's search for an optimal solution.
To create good music, a musician can consider existing compositions. In Harmony Search, using the harmony memory ensures that good solutions are stored and their elements can be reused in new solution vectors. Another way a musician plays good music is by playing variations of an existing composition; Harmony Search uses this concept in its pitch adjustment mechanism, which is responsible for generating solutions slightly varied from existing ones. Pitch adjustment is referred to as the exploitation mechanism of the Harmony Search Algorithm. Randomization comes into play as the third method: it ensures that the search is not limited to local optima, makes the solution set more diverse, and is referred to as the exploration mechanism of the Harmony Search Algorithm.
Figure 1.11: Harmony Memory Illustration
After initialization, the optimization process starts and repeats until a termination condition is reached. Optimization in the Harmony Search Algorithm is done per decision variable. On each pass, the value of each decision variable is chosen with respect to the harmony memory acceptance rate (rhmcr), which decides whether the value of the i-th variable will be taken from the values in the harmony memory.
Figure 1.12: Generating New Solution from Harmony Memory
1.5.3 Harmony Search Algorithm Pseudo code
Harmony Search Algorithm
Define objective function f(x), x = (x1, x2, ..., xd)^T
Define harmony memory accepting rate (raccept)
Define pitch adjusting rate (rpa) and other parameters
Generate harmony memory with random harmonies
While (t < max number of iterations)
    While (i <= number of variables)
        If (rand < raccept), choose a value for variable i from the harmony memory
            If (rand < rpa), adjust the value by adding a small random amount
        Else, choose a random value
    Accept the new harmony if better than the worst in memory
    Find the current best solution
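The pseudocode above can be turned into a short runnable program. The following is a minimal sketch, not a reference implementation: the parameter values (harmony memory size, bandwidth, iteration count) and the sphere test function are illustrative assumptions.

```python
import random

def harmony_search(f, dim, bounds, hms=10, r_accept=0.9, r_pa=0.3,
                   bandwidth=0.1, iterations=2000, seed=0):
    """Minimal Harmony Search sketch following the pseudocode above.
    Minimizes f over a box [lo, hi]^dim. Parameter names mirror the pseudocode:
    r_accept = harmony memory accepting rate, r_pa = pitch adjusting rate."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Generate harmony memory with random harmonies
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iterations):
        new = []
        for i in range(dim):
            if rng.random() < r_accept:
                # Take the i-th variable from a harmony in memory
                value = memory[rng.randrange(hms)][i]
                if rng.random() < r_pa:
                    # Pitch adjustment: perturb by a small random amount
                    value += bandwidth * rng.uniform(-1, 1)
            else:
                # Randomization: explore a fresh value
                value = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        # Accept the new harmony if it beats the worst in memory
        worst = max(range(hms), key=lambda j: scores[j])
        score = f(new)
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda j: scores[j])
    return memory[best], scores[best]

# Usage: minimize the sphere function, whose optimum is 0 at the origin
best_x, best_val = harmony_search(lambda x: sum(v * v for v in x),
                                  dim=3, bounds=(-5, 5))
print(best_x, best_val)
```

The harmony memory plays the role of exploitation, the pitch adjustment fine-tunes values near stored solutions, and the random branch provides the exploration described in Section 1.5.2.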
1.5.4 Harmony Search Algorithm Flowchart
2 Literature Review
Image compression has been very beneficial for data transmission and storage. It makes an image's file size smaller, which allows more data to be stored in the same amount of disk space and lessens the time needed to download an image. Different applications have been developed to achieve image compression, but the output image is often less than optimal.
Studies on image compression and steganography have used the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR) to evaluate the quality of the produced image compared to the original. Through this evaluation, researchers can assess the efficiency and effectiveness of their proposed methods.
In their paper, Donoho et al. discussed the relevance of harmonic analysis to data compression, especially wavelet-based compression. They identified the challenges in achieving optimal data compression: (1) obtaining accurate models for the actual data, (2) obtaining optimal representations of the model, and (3) rapidly computing the optimal representation. The paper cited two significant developments in data compression, especially image compression, through harmonic analysis: (1) the fast cosine transform for the JPEG standard and (2) the fast wavelet transform for JPEG-2000.
Merlo et al. presented a new method of image compression using a genetic algorithm, a metaheuristic, as the underlying algorithm. In this method, they treated compression as a clustering problem solved by a genetic algorithm, which searches for the optimum solution by simultaneously considering and manipulating a set of possible solutions. They defined a fitness function that minimizes the disorder of the elements being ordered, thereby avoiding confusion between the fitness function and the clustering method. Clustering is applied as the last step of the procedure, once the ordered representation has been obtained. According to their results, the genetic algorithm was an effective optimization algorithm for clustering and thus for image compression. However, their proposed method showed limitations when dealing with larger images, since longer computation time is required.
In their study, M. Mohamed Ismael et al. explored the feasibility of Particle Swarm Optimization (PSO) in clustering-based image compression. The optimization algorithm was used to improve the accuracy of the prediction functions in a lifting scheme, which was used to better identify patterns in the image and eliminate redundancy in an optimal manner. Using standard image processing data sets, the proposed approach demonstrated the feasibility of PSO with promising results in image compression.
Amiya Halder et al. used the concepts of block optimization and byte compression in their study to achieve image compression. In the block optimization phase, they divided the image into 4x4 pixel blocks. To reduce the number of distinct colors in the image, they took the average of the pixel values within each block and replaced all the pixels in the block with that value. Byte compression was then used to represent the colors in the image using fewer bytes: a color is usually represented using 3 bytes, but the proposed approach was able to represent colors using 2 bytes. Fewer bytes means a smaller file size, which is the goal of image compression. Their results showed the effectiveness and efficiency of the approach. However, since simple averaging was used in the block optimization phase, the resulting image is not guaranteed to be the optimal image that could be produced.
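The block-averaging step described above can be sketched for a grayscale image stored as a list of rows. This is a simplified illustration of the idea, not the authors' implementation, and the toy pixel values are hypothetical:

```python
def block_average(pixels, width, height, block=4):
    """Replace each block x block tile of a grayscale image with its average value,
    reducing the number of distinct values, as in the block optimization step above."""
    out = [row[:] for row in pixels]
    for by in range(0, height, block):
        for bx in range(0, width, block):
            tile = [pixels[y][x]
                    for y in range(by, min(by + block, height))
                    for x in range(bx, min(bx + block, width))]
            avg = round(sum(tile) / len(tile))
            for y in range(by, min(by + block, height)):
                for x in range(bx, min(bx + block, width)):
                    out[y][x] = avg
    return out

# A 4x4 toy image collapses to a single average value
img = [[0, 0, 0, 0],
       [0, 100, 100, 0],
       [0, 100, 100, 0],
       [0, 0, 0, 0]]
print(block_average(img, 4, 4))  # every pixel becomes 25
```

Because every tile is forced to one value, fine detail inside a block is lost, which is exactly the limitation noted above.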
The Harmony Search Algorithm of Geem et al. is a metaheuristic formally defined in 2001. Even though it is relatively new, its effectiveness and efficiency on optimization problems have already been shown in previous studies. On benchmark problems such as the Traveling Salesman Problem, Harmony Search has outperformed previously existing optimization algorithms, including Genetic Algorithms, Ant Colony Optimization, and Particle Swarm Optimization. Its performance has also been demonstrated on real-world problems: in water distribution network design, for example, Harmony Search found a lower-cost solution than a Genetic Algorithm. Having shown its potential in solving optimization problems, it would be interesting to use this metaheuristic as the underlying algorithm in other fields, especially image compression.
Geem and Williams applied Harmony Search to ecological optimization, specifically the Maximal Covering Species Problem (MCSP). This is an ecological conservation problem that tries to preserve species and their habitats: MCSP attempts to cover the maximum number of species while limiting the number of land parcels. The algorithm was tested on what they call the Oregon data set, which consists of 426 species and 441 parcels. To suit the algorithm to the problem, they modified the structure of HS: since their decision variables have only two possible values, they omitted the pitch adjustment operation. Their results showed the feasibility of HS for ecological optimization; moreover, HS outperformed Simulated Annealing, another metaheuristic, in solving this problem.
3 Statement of the Problem
Image compression has been used to make digital transmission faster and to enable more data to be stored on the same amount of disk space. Through the removal of redundant data and unnoticeable changes, it decreases the physical size of an image.
A successful image compression method must include a color reduction step and must be able to represent color using fewer bytes. Different optimization algorithms have been used to reduce the number of distinct colors in an image with minimal effect on its quality.
The Harmony Search Algorithm, a relatively new optimization algorithm, has been applied to various problems that require optimization. In this study, the researcher aims to explore its feasibility as the underlying algorithm for achieving image compression.
Nov 13 - 16, 2010
Research and reading of journals, articles and published studies about the assigned algorithm (HSA) and topic of interest.
Nov 17 - 19, 2010
Making of report about the 3 proposed topics
Nov 22 - 25, 2010
More research and reading about the 3 proposed topics and making of the presentation
Dec 1 - 3, 2010
Revision of report on the topic proposal for the chosen topic.
Dec 4 - 14, 2010
More research and reading on the chosen topic and assigned algorithm.
Making of the first part of the proposal (Introduction, Literature Review, etc.)
Dec 15, 2010 - Jan 4, 2011
Implementation of proposed approach with the assigned algorithm