# Modified Run Length And Adaptive Predictive Coding Computer Science Essay


Run length encoding is useful for coding areas of consecutive identical pixels, whereas adaptive predictive coding is used in fast-changing areas where pixels undergo vast changes in intensity. Predictive coding achieves high coding efficiency, especially along edges. An optimal mask is computed using dynamic programming and XORed with the input data. In this paper, a switching coding scheme is proposed that combines the advantages of run length encoding and adaptive predictive coding for lossless compression of images. A Canny edge detector is used to detect whether the pixels being coded lie around an edge. With the proposed switching structure, very good prediction results can be obtained, with respect to energy coefficients, both in slowly varying areas and around boundaries.

Index Terms: Linear predictive coding, Canny edge detection, modified run length encoding, image compression

## INTRODUCTION

Lossless image compression is a technique for removing redundancy in data representation in order to save disk space and use transmission bandwidth efficiently. An efficient compression technique for images that use few colors needs fewer bits per pixel for representation; such images are classified as low bit depth images. For dual-color images, which contain only two narrow bands of color intensities, compression is done using a k-means clustering method and bit map generation.

## RELATED WORK

Oyvind Ryan explains the compression of low bit depth images in [12]. Images which essentially use few colors for representation are called low bit depth images. In this type of image, a single code can represent a sequence of pixels. Numbers are used to represent the colors, and 0 always indicates the start of a line. A layered approach is followed for low bit depth images. The advantage of this work is that web map services use images with few colors, which need to be compressed efficiently; hence run length encoding is used for mapping services. The drawback is that this method is suitable only for 1-D images, and a run length based algorithm is effective only for images that involve few colors.

Mustafa Safa Al-Wahaib and KokSheik Wong suggested codeword-based run length processing for images in [13]. The codewords are based on the level of pixel intensity and its probability of occurrence. Alternating 1s and 0s in the input string represent the duplication problem, where no matching pattern can be obtained. An entropy-based approach is followed which can be applied at any pixel intensity level, and a single codeword is constructed for compression. A rule-based generative coding method is proposed which combines three groups of bits, referred to as the starting group d, the middle group x and the ending group y. The major advantage of this method is that it avoids the duplication problem from which traditional RLE usually suffers. The rule-based approach essentially uses codewords instead of bits for representation, but it is not suitable for all types of images, particularly those with drastic variations in color intensity.
Ayan Banerjee and Amiya Halder proposed a simple image compression technique for images which contain only two colors. Dual-color images are identified by the presence of two narrow bands of intensity levels in the histogram, and a bit map generation and k-means clustering approach is proposed for them. As a first step, the k centroids are determined, one for each cluster. The next step is to consider each point belonging to a given data set and associate it with the nearest centroid. The mean of each cluster is then calculated, and k new centroids are found from the result of the previous step. The major advantages of this method are simple implementation, low time consumption and suitability for dual-color images. However, it is not applicable to the 24-bit color scheme, in which each pixel is represented by three bytes, each byte holding the intensity of one of the three primary colors: red, green and blue.
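The k-means steps described above can be sketched as follows. This is a minimal illustrative implementation on 1-D pixel intensities, assuming a hypothetical dual-color histogram with two narrow bands; it is not the authors' actual implementation.

```python
# Hypothetical sketch of the k-means step for a dual-color image (k = 2),
# operating on 1-D pixel intensities rather than a real bitmap.

def kmeans_1d(values, k=2, iters=20):
    # Step 1: pick k initial centroids, spread across the value range.
    lo, hi = min(values), max(values)
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Step 2: associate each value with its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Step 3: recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two narrow intensity bands, as in a dual-color histogram.
pixels = [12, 15, 10, 13, 240, 235, 244, 238, 11, 242]
centroids, clusters = kmeans_1d(pixels)
print(sorted(centroids))  # one centroid per intensity band
```

The two returned centroids land in the middle of the two intensity bands, which is exactly the bit-map separation the dual-color scheme relies on.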

## PROPOSED COMPRESSION SYSTEM

The proposed system has two operation modes: run length mode and adaptive mode. Run length mode is activated if the current pixel lies in an area of small changes, where the color intensity does not vary much with respect to the gradient; such pixels are encoded using modified run length encoding. Otherwise, adaptive mode is triggered and the pixels are encoded using predictive coding.

## Run Length Encoding

Most bit map file formats support the run length encoding data compression method. RLE exploits repeating string patterns in order to increase the compression ratio. A repeating pattern is called a run, and it is encoded in two bytes: the first byte holds the count (the run length), and the second byte holds the value (the run value). For example, consider the eight-character input string AAABBCDD. After applying RLE, it is compressed to 3A2B1C2D, which requires four two-byte packets.
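The byte-pair encoding above can be sketched in a few lines. This is a minimal illustration of the count/value scheme, not a full bitmap codec:

```python
# Minimal RLE sketch: each maximal run becomes a (count, value) pair.

def rle_encode(s):
    if not s:
        return ""
    out, run_char, count = [], s[0], 1
    for ch in s[1:]:
        if ch == run_char:
            count += 1                      # extend the current run
        else:
            out.append(f"{count}{run_char}")  # emit finished run
            run_char, count = ch, 1
    out.append(f"{count}{run_char}")          # emit the final run
    return "".join(out)

print(rle_encode("AAABBCDD"))  # the worked example: 3A2B1C2D
```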

## Run Length Mode

Run length encoding is most efficient for encoding consecutive pixels with identical values. The run length encoding technique can be applied to any data at the bit level, and bit-level compression is independent of the type of data: text, image or video files. The modified run length encoding proposed here involves transformation and permutation.

This is based on the idea of flipping selected bits of a given bit stream. The input data is divided into windows, each of a fixed size k. A mask of size k is created and applied to each window in the input data stream; mathematically, the mask is XORed with each window of the source data. The problem is then to select the optimal mask such that, when applied to the data, the resultant number of RLE blocks is minimized. Since the mask is very small and can be appended to the RLE-compressed data, the algorithm is lossless: the original data can be recovered by applying the same mask again.

Original bit-stream: 1001011010101001100110011001100110011100111100001100000110011001

Mask size: 4

Optimal mask: 0110

Bit-stream after applying mask: 1111000011001111111111111111111111111010100101101010011111111111

RLE blocks in original: 33

RLE blocks in masked: 19
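The worked example above can be checked directly: applying the repeating 4-bit mask 0110 with XOR and counting the maximal runs reproduces the stated block counts. A small sketch:

```python
# Reproduce the worked example: XOR the repeating mask 0110 onto the
# bit-stream and count RLE blocks (maximal runs) before and after.

def count_runs(bits):
    runs = 1
    for a, b in zip(bits, bits[1:]):
        if a != b:          # a value change starts a new RLE block
            runs += 1
    return runs

def apply_mask(bits, mask):
    k = len(mask)
    return "".join(str(int(b) ^ int(mask[i % k])) for i, b in enumerate(bits))

original = ("1001011010101001100110011001100110011100"
            "111100001100000110011001")
masked = apply_mask(original, "0110")

print(count_runs(original))  # 33
print(count_runs(masked))    # 19
```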

Time complexity analysis:

Brute force approach: Since there are 2^k binary masks of size k, each of the 2^k masks can be tested to find the one that yields the minimum RLE-compressed size. However, this takes time proportional to the exponential of the window size, O(2^k).

Linear time solution: A mask either starts with 1 or starts with 0, and only one of the two needs to be considered, so a linear time solution is possible. For instance, suppose the starting bit is zero. For the second position of the mask, the comparison between cost[2][0] and cost[2][1] decides whether to follow the previous mask value or not: if cost[2][0] is less than cost[2][1], the previous mask value is retained; otherwise the mask value is flipped. The same approach is followed for the remaining positions of the mask.

## Find-Mask(data[N], k)

```
FindMask(data[N], k):
    Mask[0] = 0                              // basis: the mask starts with 0
    /* Cost calculation: cost[i][0] counts run breaks at mask position i
       when mask bits i-1 and i are equal; cost[i][1] when they differ */
    for i = 1 to k - 1
        for j = 0 to 1
            for t = 0 to N - 1 step k
                cost[i][j] += (data[t + i - 1] xor data[t + i]) xor j
    /* cost[0][j]: breaks at the boundary between consecutive windows,
       i.e. between mask bit k-1 and mask bit 0 */
    for j = 0 to 1
        for t = k to N - 1 step k
            cost[0][j] += (data[t - 1] xor data[t]) xor j
    /* Mask construction up to position k - 2 */
    for i = 1 to k - 2
        if cost[i][0] < cost[i][1]
            Mask[i] = Mask[i - 1]            // retain
        else
            Mask[i] = Mask[i - 1] xor 1      // flip
    /* The last position also pays the wrap-around boundary cost */
    if cost[k - 1][0] + cost[0][0] < cost[k - 1][1] + cost[0][1]
        Mask[k - 1] = Mask[k - 2]
    else
        Mask[k - 1] = Mask[k - 2] xor 1
    return Mask
```

## ApplyMask(data[N], Mask)

```
ApplyMask(data[N], Mask):
    for i = 0 to N - 1
        Rdata[i] = data[i] xor Mask[i mod k]
    return Rdata
```
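A runnable Python rendering of the two procedures above is sketched below; the wrap-around costs (cost[0]), which the pseudocode computes over window boundaries, are made explicit. On the worked example it recovers the stated optimal mask 0110:

```python
# Python sketch of Find-Mask / ApplyMask. cost[i][0] counts run breaks at
# mask position i when mask bits i-1 and i are equal; cost[i][1] when they
# differ. cost[0] covers the boundary between consecutive windows.

def find_mask(data, k):
    n = len(data)
    cost = [[0, 0] for _ in range(k)]
    for i in range(1, k):
        for t in range(0, n, k):
            d = data[t + i - 1] ^ data[t + i]
            cost[i][0] += d            # break if mask bits i-1, i are equal
            cost[i][1] += d ^ 1        # break if mask bits i-1, i differ
    for t in range(k, n, k):           # window-boundary (wrap-around) costs
        d = data[t - 1] ^ data[t]
        cost[0][0] += d
        cost[0][1] += d ^ 1
    mask = [0] * k                     # basis: mask starts with 0 (Lemma 2)
    for i in range(1, k - 1):
        if cost[i][0] < cost[i][1]:
            mask[i] = mask[i - 1]      # retain
        else:
            mask[i] = mask[i - 1] ^ 1  # flip
    # The last bit also pays the wrap-around cost at the window boundary.
    if cost[k - 1][0] + cost[0][0] < cost[k - 1][1] + cost[0][1]:
        mask[k - 1] = mask[k - 2]
    else:
        mask[k - 1] = mask[k - 2] ^ 1
    return mask

def apply_mask(data, mask):
    k = len(mask)
    return [b ^ mask[i % k] for i, b in enumerate(data)]

bits = [int(c) for c in
        "1001011010101001100110011001100110011100"
        "111100001100000110011001"]
print(find_mask(bits, 4))  # [0, 1, 1, 0], i.e. the mask 0110
```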


## Proof of the Algorithm using Mathematical Induction

In this section, the proof for the claim that the above algorithm solves the problem by finding the optimal mask in linear time is given.

Lemma 1:

The number of RLE Blocks in a data S and its complement S' are equal.

This lemma is intuitive because the length of a run and the number of runs are independent of the value of the run and depend only on the variations of the value.

Lemma 2:

The number of RLE Blocks obtained after applying mask M and its complement M' on a given data are equal.

This lemma can be proved from Lemma 1, by arguing that M and M' on a data S will give data T and T' which are complement of each other.

Theorem 1:

The algorithm findMask always returns the optimal mask for RLE compression.

Basis: Mask[0] = 0

By Lemma 2, a mask and its complement yield equally good solutions, so the first bit can be fixed at 0 without loss of generality; if the optimal mask starts with 1, the algorithm generates its complement instead.

Induction Hypothesis: An optimal mask is constructed for mask size p < k-2

To prove: Optimal mask is generated for mask size p + 1.

It is ensured that the bits up to position p are set in an optimal manner. Now it is enough to show that bit p + 1 is set correctly.

The algorithm decides whether 'To Flip' or 'Not to Flip' at the position 'p + 1' using precomputed switch costs. If the decision is 'To Flip', the algorithm 'Toggles' the previous decision at p, rather than deliberately deciding to flip the bit at 'p + 1'. If the decision is 'Not to Flip', the algorithm 'Retains' the previous decision at p, rather than just retaining the original bit.

The above two arguments show that the algorithm chooses the optimal decision based on the resultant data rather than the original data. Also, decision at 'p + 1' is dependent only on the decision at 'p'. Future decisions ( > p + 1 ) will not affect the decision taken at 'p + 1'. Thus, it is proved that for a mask size of 'p + 1' the constructed mask is optimal, assuming the same for size p < k - 2.

## Adaptive Mode

Pixels are encoded using predictive coding, mainly in areas where color intensity changes are vast. Edges in images are areas with strong intensity contrasts from one pixel to the next. The Canny edge detection algorithm is widely regarded as an optimal edge detector. Edge-detecting an image significantly reduces the amount of data and filters out useless information while preserving the important structural properties of the image. The first criterion is a low error rate: edges occurring in images should not be missed, and there should be no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the edge pixels found by the detector and the actual edge should be minimal. A third criterion is to have only one response to a single edge.

## Canny Edge Detection

The Canny detector proceeds as follows:

1. Filter out any noise in the original image before trying to locate and detect any edges.
2. After smoothing the image and eliminating the noise, find the edge strength by taking the gradient of the image.
3. Once the edge directions are known, apply non-maximum suppression, which traces along the edge in the edge direction.

The Canny algorithm contains adjustable parameters which affect its computation time and effectiveness:

- the size of the Gaussian filter;
- the thresholds.
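The gradient (edge strength) step can be illustrated with Sobel kernels on a toy image. This sketch covers only the gradient stage, not the full Canny pipeline (no Gaussian smoothing, non-maximum suppression or hysteresis thresholding), and the input image is hypothetical:

```python
# Illustrative gradient-magnitude step (edge strength) using Sobel kernels
# on a tiny synthetic image with a vertical step edge.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-change kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-change kernel

def gradient_magnitude(img):
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):               # skip the 1-pixel border
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

# A vertical step edge between columns 1 and 2.
img = [[0, 0, 255, 255, 255] for _ in range(5)]
mag = gradient_magnitude(img)
print(mag[2][1], mag[2][3])  # strong response at the edge, zero away from it
```

Pixels adjacent to the step respond strongly while flat regions give zero, which is exactly the property the adaptive-mode switch exploits.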

Adaptive linear predictive coding is based on adaptive linear prediction and has two main steps:

1. Pixel prediction.
2. Entropy coding of the prediction error.

An input image is encoded in a single pass, with pixels encoded in raster scan order. The luminosity of each pixel PIX(x, y) is predicted by a weighted sum of its neighbors (its context). The predictor's weights are adapted to take into account local features of the image: they are optimized during encoding and updated on a per-pixel basis. The pixels within the window are partitioned into clusters so that the distances between their contexts are minimized, and a representative centroid is determined for each cluster. One cluster is then selected according to the minimum distance between the context of its centroid and the context of the pixel being encoded; only that cluster is used to select and refine a predictor. Similarly, from a set of predictors, the one that achieves the lowest prediction error on the selected centroid is chosen.

## Algorithm for Adaptive Predictive Coding

For every pixel PIX(x, y) in the input image, do:

Step 1: Collect the pixels in W(Rp) and their contexts.

Step 2: Determine n centroids Cn on the contexts.

Step 3: Let Kn be the corresponding clusters.

Step 4: Classify each pixel and its context into one of the clusters.

Step 5: Classify the context of the current pixel PIX(x, y).

Step 6: Let k be the index of the selected cluster, and let Pi = Wn be the predictor that achieves the smallest error on Cx among the set of predictors.

Step 7: Apply gradient descent on the pixels in Ck to refine the predictor Pi.

Step 8: Use the refined predictor Pi to predict PIX(x, y).

Step 9: Generate the prediction error ERR(x, y).
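The per-pixel weight adaptation of Steps 7-9 can be sketched as follows. This is a simplified illustration that omits the clustering/centroid-selection steps and uses a hypothetical causal context of three neighbors (W, N, NW) with a normalized gradient-descent update; the window W(Rp), cluster count and learning rate are assumptions, not the paper's actual parameters:

```python
# Simplified sketch of Steps 7-9: predict each pixel as a weighted sum of
# its causal neighbors W, N, NW, emit the prediction error ERR(x, y), and
# refine the weights with a normalized gradient-descent step.

def adaptive_predict(img, lr=0.5):
    h, w = len(img), len(img[0])
    weights = [1 / 3, 1 / 3, 1 / 3]           # initial predictor weights
    errors = []
    for y in range(1, h):                     # raster scan, skipping borders
        for x in range(1, w):
            ctx = [img[y][x - 1], img[y - 1][x], img[y - 1][x - 1]]  # W, N, NW
            pred = sum(wi * ci for wi, ci in zip(weights, ctx))
            err = img[y][x] - pred            # prediction error ERR(x, y)
            errors.append(err)
            # Normalized gradient step on the squared prediction error.
            norm = sum(c * c for c in ctx) + 1e-9
            weights = [wi + lr * err * ci / norm
                       for wi, ci in zip(weights, ctx)]
    return errors

# A smooth ramp image: intensity = x + y.
ramp = [[x + y for x in range(16)] for y in range(16)]
errors = adaptive_predict(ramp)
half = len(errors) // 2
print(sum(abs(e) for e in errors[half:]) < sum(abs(e) for e in errors[:half]))
```

On a smooth ramp the weights quickly adapt so that later prediction errors are much smaller than early ones, which is what makes the residual cheap to entropy-code.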

## RESULTS AND EXPERIMENTS

In Figure 1, a random byte stream of size 5 KB has been compressed using the bit-flip RLE. The characteristics of the random byte stream have been studied based on the average run length in the generated stream. As the window size increases, the average number of RLE blocks decreases.

Fig. 1. Number of RLE blocks with bit-flip RLE

For areas having large variations between pixels, an edge is detected at the first stage, as in Figure 2, and adaptive linear predictive coding is applied.

Fig. 2. Original image and edge-detected image

The Lena image is tested with different coefficients, as shown below. A signal-to-noise ratio graph is drawn for the different compression coefficients, and a DCT coefficient is calculated for all the images.

Fig. 3. Compressed image set

Fig. 4. Comparison graph of SNR