Analysis on Lossy Compression of Encrypted Images

In recent years, compression of encrypted data has attracted considerable research interest. For image compression, the chosen transform should reduce the size of the resultant data relative to the original data set. This paper presents a comparative study of lossy compression for encrypted images. It analyzes lossy compression of encrypted images with a flexible compression ratio, comprising image encryption, compression, and iterative decompression. For the comparative work, the scheme of "Lossy Compression and Iterative Reconstruction for Encrypted Image" (LCIREI) [1] is considered. The LCIREI method removes redundant and trivial data from the encrypted image and retrieves the principal content of the original image using an iterative procedure. The compression ratio and the quality of the reconstructed image depend on the values of the compression parameters. This work also compares performance against the existing JPEG system, which performs encryption after compression.

Keywords - Lossy Compression, Flexible Compression Ratio, Encrypted Image

Introduction

The goal of data compression is to represent an information source (e.g., a data file, a speech signal, an image, or a video signal) as accurately as possible using the fewest bits [6]. In the computerized world, images need to be both compressed and protected. In the classical model, images are first compressed and then protected, which has drawbacks. Consider, for example, encrypted content that is transmitted without compression through the Internet but at a certain node must be compressed to match the characteristics of the transmission link. If the node is not trusted, it may not be given the encryption key, so compression of the encrypted content is required. To solve this, it is possible to compress the encrypted data without supplying the key to the encoder and, in theory, obtain the same results that would have been obtained by compressing the non-encrypted bitstream. This work considers the lossy compression of grey-level medical CT images. This is possible by taking care to exploit the spatial correlation between pixels and between adjacent bit planes.

This work analyzes grayscale image compression after encryption using the LCIREI algorithm, aiming for better peak signal-to-noise ratio (PSNR) and compression ratio (CR). A comparative study of the performance of the existing JPEG algorithm is made in terms of PSNR, mean square error (MSE), and overall compression ratio to illustrate the effectiveness of this method in image compression. Extensive analysis was carried out before arriving at the conclusion. LCIREI is used to encrypt the source data directly, before any compression [8].

First, the image is encrypted using a pseudorandom permutation. Next, the rigid and elastic pixels are separated using the α value. The compressed elastic data are then formed using an orthogonal transform and the calculation of the s_k values. These steps constitute the compression stage; the compressed file contains the rigid pixel data and the s_k values. The elastic data undergo lossy compression.

In the decompression stage, the rigid and elastic pixels are first extracted. The elastic pixels are then refined using the orthogonal transform and a closest-value calculation, and this elastic pixel restoration is repeated until the stop criterion is reached.

This work compares the performance metrics of the above algorithm with those of the JPEG algorithm to determine which is the more effective compression technique.

Methodology

This work analyzes the performance metrics of lossy compression on encrypted images with a flexible compression ratio, comprising image encryption, tailor-made compression, and iterative decompression phases. The network provider may remove redundant and trivial data from the encrypted image, and a receiver can retrieve the principal content of the original image using an iterative procedure. The compression ratio and the quality of the reconstructed image depend on the values of the compression parameters. The main modules of this work are:

Image reading and encryption.

Rigid and elastic pixel separation.

Orthogonal-transform-based elastic data generation.

Rigid pixel and s_k value extraction.

Elastic data estimation.

Orthogonal transform and Q value updating.

Closest-value calculation for elastic data using an iterative process.

Reconstruction of the decompressed image.

The input image sizes used in this work range from 32 × 32 to 512 × 512. Grayscale images are used as input. The user selects the input image, which is then read.

Assume the original image is in uncompressed format and that each pixel, with a gray value falling into [0, 255], is represented by 8 bits. Denote the numbers of rows and columns in the original image as r and c; the number of pixels is then N = r × c, and the amount of bits in the original image is r × c × 8. For image encryption, the data sender pseudorandomly permutes the pixels, and the permutation is determined by a secret key. The permuted pixel sequence is viewed as the encrypted data.
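
As an illustration, a minimal sketch of this permutation step in Python (assuming NumPy, with a key-seeded pseudorandom generator standing in for the paper's keyed permutation):

    import numpy as np

    def encrypt(image, key):
        # Permute pixel positions with a generator seeded by the secret key.
        rng = np.random.default_rng(key)
        perm = rng.permutation(image.size)
        return image.reshape(-1)[perm], perm   # permuted sequence = encrypted data

    def decrypt(seq, perm, shape):
        out = np.empty_like(seq)
        out[perm] = seq                        # invert the permutation
        return out.reshape(shape)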

Since only the pixel positions are permuted and the pixel values are not masked in the encryption phase, an attacker without knowledge of the secret key can still learn the original histogram from an encrypted image. However, the number of possible permutations is so large that a brute-force search is impractical when the image is fairly large. That means the attacker cannot recover the original content from an encrypted image of ordinary size and fluctuation. Although some statistical information leaks, permutation-based encryption can be used in most scenarios that do not require perfect secrecy.

A pixel is generally thought of as the smallest single component of a digital image; the word is a combination of picture and element, via pix. Given the permuted pixel sequence, the network provider divides it into two parts: the first part made up of α·N pixels and the second containing the remaining (1−α)·N pixels. The pixels in the first part are reserved as they are, while the data redundancy in the second part is reduced. We call the pixels in the first part rigid pixels and the pixels in the second part elastic pixels.
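
Continuing the sketch, the split is a simple slice of the permuted sequence (seq and alpha assumed from the previous step):

    # The first alpha*N pixels are rigid (kept losslessly); the rest are elastic.
    n_rigid = int(round(alpha * seq.size))
    rigid, elastic = seq[:n_rigid], seq[n_rigid:]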

Fig 1 Compression

An orthogonal transformation is a linear transformation T : V → V on a real inner product space V that preserves the inner product; that is, for each pair u, v of elements of V, we have

<T(u), T(v)> = <u, v>

Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve the lengths of vectors and the angles between them. A sample orthogonal matrix is shown in Fig 2.

Fig 2. Sample orthogonal matrix

Perform an orthogonal transform on the elastic pixels E_1, E_2, …, E_(1−α)·N to calculate the coefficients Q_1, Q_2, …, Q_(1−α)·N:

[Q_1, Q_2, …, Q_(1−α)·N]^T = H · [E_1, E_2, …, E_(1−α)·N]^T

Here, H is a public orthogonal matrix with a size of (1−α)·N × (1−α)·N, and it can be generated by orthogonalizing a random matrix.
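
A small sketch of generating H and applying the transform (NumPy assumed; QR decomposition orthogonalizes a random matrix, and in practice the elastic sequence would be processed in segments rather than as one huge vector):

    import numpy as np

    def make_orthogonal(n, seed=0):
        # QR decomposition of a random matrix yields an orthogonal factor H.
        rng = np.random.default_rng(seed)
        H, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return H

    H = make_orthogonal(elastic.size)
    Q = H @ elastic.astype(float)   # coefficients Q_1, ..., Q_(1-alpha)N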

For each coefficient, calculate

s_k = mod(round(Q_k / Δ), M),   k = 1, 2, …, (1−α)·N

where Δ and M are system parameters and can be, for example, 50 and 4.
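
In the running sketch, this quantization is one line (Δ = 50 and M = 4 as above):

    # Map each coefficient to an M-ary digit; only these digits are kept for
    # the elastic part, so the elastic data are compressed lossily.
    Delta, M = 50.0, 4
    s = np.mod(np.round(Q / Delta), M).astype(int)   # each s_k is in {0, ..., M-1}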

Each rigid pixel costs 8 bits and each s_k, written in binary, costs log2(M) bits, so the compression ratio (the ratio between the amounts of compressed and original data) is

CR = (8·α·N + (1−α)·N·log2(M)) / (8·N) = α + (1−α)·log2(M)/8

The rigid data and the encoded elastic data (the s_k values) are stored as compressed data in the storage media.
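
As a worked example, with α = 0.75 and M = 4 this gives CR = 0.75 + 0.25 × 2/8 ≈ 0.81, i.e., the compressed data occupy about 81% of the original bits, a saving of roughly 19%; this is of the same order as the CR(%) value of 21 reported for α = 0.75 in Table 1, if that column is read as the relative saving.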

Fig 3 Decompression

Decompose the compressed data and obtain the gray values of the rigid pixels, the values of all s_k, and the values of the parameters. Here, with the knowledge of Δ and M, the receiver may calculate L2, and then get the values of s_k by converting binary blocks with L2 bits into digit pieces in an M-ary notational system. According to the secret key, the receiver can retrieve the positions of the rigid pixels. That means the original gray values at these positions, which are distributed over the entire image, can be exactly recovered.

The rigid pixels are inverse-permuted, and the missing elastic pixels are predicted.

The elastic pixels are then refined using the orthogonal transform; this supports the closest-value calculation.

For the pixels at the other positions, i.e., the elastic pixels, the values are first estimated as the values of the rigid pixels nearest to them. That is, for each elastic pixel, we find the nearest rigid pixel and regard the value of that rigid pixel as the estimated value of the elastic pixel. If several rigid pixels are equally near, their average value is taken as the estimate. Because of the spatial correlation in natural images, the estimated values are similar to the corresponding original values.
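
As an illustration, the initial estimates can be computed with SciPy's Euclidean distance transform (a sketch; unlike the text, ties between equally near rigid pixels are not averaged here, and mask/img are assumed variable names):

    import numpy as np
    from scipy import ndimage

    # mask is True where a pixel is elastic (unknown); img holds the rigid values.
    # The transform returns, for every pixel, the coordinates of the nearest
    # rigid (False) pixel, whose value we copy as the initial estimate.
    idx = ndimage.distance_transform_edt(mask, return_distances=False,
                                         return_indices=True)
    estimate = img[tuple(idx)]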


Rearrange the estimated values of the elastic pixels using the same permutation, and denote them as Ê_1, Ê_2, …, Ê_(1−α)·N.

Calculate the coefficients of the current estimate,

[Q̂_1, Q̂_2, …, Q̂_(1−α)·N]^T = H · [Ê_1, Ê_2, …, Ê_(1−α)·N]^T

and modify each coefficient to the closest value consistent with the corresponding s_k; that is, replace Q̂_k by the nearest value Q'_k satisfying

mod(round(Q'_k / Δ), M) = s_k

Then perform an inverse transform,

[E'_1, E'_2, …, E'_(1−α)·N]^T = H^T · [Q'_1, Q'_2, …, Q'_(1−α)·N]^T

(since H is orthogonal, its inverse is its transpose). Calculate the average energy of the difference between the two versions of the elastic pixels,

D = (1 / ((1−α)·N)) · Σ_k (E'_k − Ê_k)²

If D is not less than a given threshold, then for each elastic pixel, regard the average value of its four neighbouring pixels as its new estimated value and return to the iteration step. Otherwise, terminate the iteration and output the image made up of the rigid pixels and the final version of the elastic pixels as the reconstructed result.
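
Pulling these steps together, a compact sketch of one reconstruction iteration (NumPy, continuing the earlier variable names; the neighbour-averaging update and the stop threshold are kept outside this function):

    import numpy as np

    def closest_consistent(q_hat, s, Delta, M):
        # Values consistent with s_k are approximately Delta*(M*z + s_k) for
        # integer z; snap each coefficient to the nearest such representative.
        base, step = Delta * s, Delta * M
        z = np.round((q_hat - base) / step)
        return base + step * z

    def iterate_once(est, H, s, Delta, M):
        q_hat = H @ est                       # transform the current elastic estimate
        q_mod = closest_consistent(q_hat, s, Delta, M)
        refined = H.T @ q_mod                 # inverse transform (H is orthogonal)
        D = np.mean((refined - est) ** 2)     # average energy of the difference
        return refined, D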

The rigid and elastic pixels are combined and inverse-permuted for decryption. All rows of the image data undergo the same decompression process. The decompressed image is thus generated and displayed.

JPEG Compression with Encryption

The Joint Photographic Experts Group (JPEG) format has become an international standard for image compression. The goal is to reduce memory requirements while increasing speed by avoiding decompression and space-domain operations [2]. It is the most popular and comprehensive continuous-tone still-image compression method [5].

JPEG is designed for compressing full-color or grayscale images of natural, real-world scenes. To apply this method, an image is first partitioned into non-overlapping 8×8 blocks. A discrete cosine transform (DCT) is applied to each block to convert the gray levels of pixels in the spatial domain into coefficients in the frequency domain. The coefficients are normalized by different scales according to the quantization table provided by the JPEG standard, which was derived from psychovisual evidence.

The quantized coefficients are rearranged in a zigzag scan order and further compressed by an efficient lossless coding strategy such as run-length coding, arithmetic coding, or Huffman coding. Decoding is simply the inverse of encoding, so JPEG takes about the same time for both. The encoding/decoding algorithms provided by the Independent JPEG Group are available for testing on real-world images. Information loss occurs only in the coefficient quantization step. The JPEG standard defines a standard 8×8 quantization table for all images, which may not always be appropriate; to achieve better decoding quality across various images at the same compression with the DCT approach, an adaptive quantization table may be used instead of the standard one [3].
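
A minimal sketch of the block-transform and quantization stage (using SciPy's dct; the flat quantization table here is a placeholder, not the standard JPEG luminance table):

    import numpy as np
    from scipy.fftpack import dct

    def dct2(block):
        # 2-D type-II DCT with orthonormal scaling, applied to one 8x8 block.
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def quantize_block(block, qtable):
        # Level-shift, transform, and quantize; this is where information is lost.
        return np.round(dct2(block.astype(float) - 128.0) / qtable)

    qtable = np.full((8, 8), 16.0)   # placeholder; JPEG uses a psychovisual table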

In this method, the image is compressed first and then encrypted. The quality factor (QF) is just a reference number ranging from 1 to 100, where QF = 100 means that all quantizer steps are unity, yielding the best quality JPEG can possibly achieve [4].

Computer Simulation Results

Various experiments were conducted to analyze the performance of the two image compression models.

Table 1 Performance analysis of the compression system with Δ = 50 for the 256 × 256 Lena image

α      CR(%)   CTT(sec)   DTT(sec)   PSNR(dB)   MSE
0.40   45      1.33       4.19       21.7477    434.818
0.50   39      0.33       3.64       22.9003    333.468
0.60   30      0.27       2.42       24.6847    221.113
0.75   21      0.27       1.71       26.7843    136.350
0.90    9      0.17       0.86       30.5450     57.3564

At α = 0.75 the resolution of the picture is reasonably good, the times taken for compression and decompression are nominal, and the PSNR and MSE values are desirable.

From Table 1 it is observed that for α = 0.75 and a compression ratio of 21%, the PSNR value is considerably better than for the other settings relative to the compression achieved. So α = 0.75 is chosen for further analysis; the corresponding PSNR value is 26.7843 dB. The same data are also depicted in the graph.

Figure 4 Performance analysis of the compression system with Δ = 50 for the 256 × 256 Lena image

In Fig 4, CR denotes the compression ratio, CTT the compression time taken, DTT the decompression time taken, PSNR the peak signal-to-noise ratio, and MSE the mean square error.
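
For reference, MSE and PSNR for 8-bit images can be computed as follows (the standard definitions, not code from the compared papers):

    import numpy as np

    def mse(a, b):
        # Mean square error between two images of equal size.
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def psnr(a, b, peak=255.0):
        # Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
        return 10.0 * np.log10(peak ** 2 / mse(a, b))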

Table 2 Compression ratio versus PSNR and MSE for the Lena image using the JPEG algorithm

CR(%)   PSNR(dB)   MSE
 9      27.12      112.28
12      26.34      145.27
16      26.33      152.83
18      15.99      166.61
24      15.80      173.84

Figure 5 Compression ratio versus PSNR and MSE for the Lena image using the JPEG algorithm

Table 2 gives the PSNR and MSE values for different compression ratios on the Lena image. When the compression ratio is low, the PSNR value is high, and vice versa; likewise, when the compression ratio is low, the MSE value is also low, and vice versa. The same data are also depicted in the graph.

Table 3 Performance comparison between the LCIREI method and the JPEG method

        LCIREI Algorithm       JPEG Algorithm
CR(%)   PSNR(dB)   MSE         PSNR(dB)   MSE
 9      38.4936     9.1985     27.12      21.228
12      37.4124    11.7989     26.34      24.527
16      36.4073    14.8714     26.33      27.283
18      35.5423    18.1487     15.99      34.610
24      33.9056    26.4558     15.80      46.840

Figure 6 Performance comparison between the developed compression system and the JPEG system

From Table 3, comparing the LCIREI algorithm with the JPEG algorithm, the LCIREI PSNR and MSE values are considerably better at every compression ratio, and the resolution of the reconstructed picture is noticeably better. The same data are also depicted in the graph.

Conclusion

Secure image compression is a topic of current interest in the computing field. This work compresses encrypted images in a lossy way [1]. This practical scheme is made up of image encryption, lossy compression, and iterative reconstruction. Encryption is performed by pseudorandom permutation, and decompression is done by iterative reconstruction. The rigid pixels are processed losslessly, while the elastic pixels are processed lossily. Finally, the reconstructed image is generated from the lossy compressed data. Analysis of parameters such as compression ratio, PSNR, and MSE demonstrated the efficiency and performance quality of this method. Considering the overall gain, this work concludes that this methodology is better than the existing approach of JPEG lossy compression followed by encryption [2].

In this work, the original image is encrypted and divided into rigid and elastic pixels. The rigid data do not undergo any compression, while the elastic data are compressed in the lossy model. To increase the compression ratio, the rigid data must be compressed as well, so in future work the vector quantization concept shall be integrated with this lossy compression to obtain a higher compression ratio. This work uses grayscale images up to 512 × 512; in future work, color images as well as videos shall be compressed.