Discrete Cosine Transformation (DCT) can also be used to hide data. Jonathan et al. (2004) note in their paper that the DCT is one of the main components of the JPEG compression technique. They also give the steps by which it works, as follows:
1. First, the image is split up into 8 x 8 squares.
2. Next, each of these squares is transformed via a DCT, which outputs an 8 x 8 array of 64 coefficients.
3. A quantizer rounds each of these coefficients. This is essentially the compression stage, as it is where data is lost.
4. Small, unimportant coefficients are rounded to 0, while larger ones lose some of their precision.
5. At this stage the result is an array of streamlined coefficients, which are further compressed via a Huffman encoding scheme or similar.
6. Decompression is done via an inverse DCT.
Merely looking at the values of the image's pixels does not reveal that anything is missing; this is what makes hiding by DCT useful. The hidden data can also be distributed evenly over the whole image, in a way that makes it hard to destroy. One technique hides data in the quantizer stage (Jonathan et al. 2004). To encode the bit value 0 inside a specific 8 x 8 square of pixels, the coefficients are tweaked so that they are all even; to store the bit value 1, they are tweaked so that they are all odd. This makes stored data more difficult to detect in a large image than with the Least Significant Bit (LSB) method. The method is very simple and keeps distortion down, but its disadvantage is its vulnerability to noise.
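The parity-based quantizer-stage trick described above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation; the function names are our own, and we assume the 8 x 8 block has already been DCT-transformed and quantized into a flat list of integer coefficients.

```python
def embed_bit(coeffs, bit):
    """Force the parity of every nonzero coefficient to match the
    message bit: all even encodes 0, all odd encodes 1.
    Zero coefficients are left alone so the compression gain from
    quantization is not destroyed (a coefficient nudged to 0 simply
    drops out of the extraction)."""
    out = []
    for c in coeffs:
        if c == 0:
            out.append(c)
        elif c % 2 != bit:
            # nudge toward zero by one so the distortion stays minimal
            out.append(c - 1 if c > 0 else c + 1)
        else:
            out.append(c)
    return out

def extract_bit(coeffs):
    """Read the bit back from the parity of the first nonzero coefficient."""
    for c in coeffs:
        if c != 0:
            return abs(c) % 2
    return 0  # all-zero block: no data recoverable
```

Because every nonzero coefficient in the block carries the same parity, the bit survives even if individual coefficients are slightly perturbed, which reflects the robustness claim above.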
2.4.3 Discrete Wavelet Transformation
According to Reddy and Raja (n.d.), "wavelet transform is used to convert a spatial domain into frequency domain". The use of wavelets in image steganography rests on the fact that the wavelet transform cleanly separates high-frequency and low-frequency information on a pixel-by-pixel basis. The Discrete Wavelet Transform (DWT) is an algorithm that is fast in machine computation, just like the Fast Fourier Transform (FFT); it is a linear operation that acts on a data vector, changing it into a vector that is numerically different but of the same length (Hannu 2011). However, while the basis functions of the FFT are sines and cosines, those of the DWT are a system of wavelet functions that satisfy certain mathematical criteria and are shifts and scalings of one another.
Furthermore, Reddy and Raja (n.d.) state that this technique is often preferred to the Discrete Cosine Transform (DCT), because the low-frequency content of an image at different levels offers the similar resolution that is needed. A one-dimensional DWT repeats a filter bank algorithm: the input is convolved with both a low-pass and a high-pass filter. The output of the low-pass filter is a smoothed version of the input, while the high-pass filter captures the high-frequency part. Reconstruction involves convolving with the synthesis filters and adding the results. For a two-dimensional transform, one step of the one-dimensional transform is first applied to every row and then repeated for every column, which yields four classes (bands) of coefficients (Reddy and Raja n.d.). The simplest wavelet transform is the Haar wavelet transform: to generate a low-frequency wavelet coefficient, the values of two pixels are averaged, while to generate the high-frequency coefficient, one takes half of the difference of the same two pixels. Reddy and Raja classify the four resulting bands as the approximate band, the horizontal detail band, the diagonal detail band and the vertical detail band. The approximation band holds the low-frequency wavelet coefficients, which carry the important part of the spatial-domain image, while the detail bands hold the high-frequency coefficients, which include the edge details of the spatial-domain image.
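The Haar averaging-and-differencing step described above can be sketched in a few lines. This is a minimal pure-Python illustration of a single decomposition level (real implementations iterate over multiple levels and handle normalisation differently):

```python
def haar_1d(row):
    """One Haar step on a 1-D signal: for each pixel pair, the average
    gives the low-frequency (approximation) coefficient and half the
    difference gives the high-frequency (detail) coefficient."""
    lows, highs = [], []
    for i in range(0, len(row) - 1, 2):
        lows.append((row[i] + row[i + 1]) / 2)   # approximation
        highs.append((row[i] - row[i + 1]) / 2)  # detail
    return lows + highs

def haar_2d(image):
    """Apply the 1-D step to every row, then to every column, yielding
    the four bands: approximate (top-left), horizontal, vertical and
    diagonal detail."""
    rows = [haar_1d(r) for r in image]
    cols = [haar_1d(list(col)) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

On a flat 2 x 2 image every detail coefficient is zero and only the approximation band survives, which matches the intuition that detail bands capture edges.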
According to research on human perception, the eye's retina divides an image into several frequency channels, each spanning a bandwidth of approximately one octave, and each channel is processed independently (Reddy and Raja n.d.). Likewise, in multilevel processing an image is divided into bands of almost equal bandwidth on a logarithmic scale. It is therefore assumed that the use of the DWT allows the resulting subdivisions to be processed independently without any significant interaction between them being perceived, which makes imperceptible marking very effective. This also explains why wavelet decomposition is commonly used to fuse images. Fusion methods range from the simple method of pixel averaging to more complex methods such as principal component analysis and wavelet transform fusion. Approaches to image fusion can be distinguished by whether the images are fused in the spatial domain or transformed into another domain and fused there. The process of image fusion produces a single image from a set of input images, and the information contained in the fused image is more accurate than in any individual input. Because this is a sensor-compression information problem, wavelets are useful here: they suit human visual processing, and the compression and reconstruction of data are important for such merging. Other useful applications of image fusion include remote sensing, robotics, medical imaging, computer vision and microscopic imaging.
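The simplest fusion method mentioned above, pixel averaging, reduces to a one-liner (an illustrative sketch; the function name is ours, and the images are assumed to be flat lists of equal length):

```python
def fuse_average(img_a, img_b):
    """Simplest pixel-level image fusion: average corresponding pixels
    of the two input images."""
    return [(a + b) / 2 for a, b in zip(img_a, img_b)]
```

Wavelet fusion generalises this idea by averaging (or selecting between) coefficients band by band rather than raw pixels.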
2.4.4 Masking and Filtering
The techniques of masking and filtering are mostly used on 24-bit and grey-scale images (Samir et al 2008). They are sometimes described as watermarking techniques because they embed information in the same way watermarks do on actual paper. Masking an image requires changing the luminance of the masked area; the chance of detection can be reduced if the luminance change is small. Compared to LSB (Least Significant Bit) insertion, masking is more robust with respect to cropping, some image processing, and compression. The technique embeds information in significant areas, so that the hidden message is more integral to the cover image than it would be if embedded at the noise level. It is also more suitable than LSB for lossy JPEG (Joint Photographic Experts Group) images.
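The masking idea above can be sketched as a small luminance shift applied only inside a mask. This is an illustrative fragment (function name and the delta value are our own choices, not from the cited paper); the image and mask are flat lists of the same length:

```python
def mask_watermark(pixels, mask, delta=2):
    """Raise the luminance of masked pixels by a small delta.
    Keeping delta small keeps the change hard to detect, as described;
    min() clamps to the 8-bit range."""
    return [min(255, p + delta) if m else p for p, m in zip(pixels, mask)]
```

Unlike LSB insertion, the embedded mark here is a broad luminance feature, which is why it tends to survive compression and mild image processing.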
2.5 Algorithms used in Steganography
Umamaheswari, Sivasubramanian and Pandiarajan (2010) identified five algorithms currently implemented in Steganography; each of them uses the Least Significant Bit (LSB), while some of them filter the image first. These are BlindHide, HideSeek, FilterFirst, BattleSteg, and Dynamic BattleSteg and FilterFirst. Furthermore, Juan J.R. and Jesús M.M. (n.d.) in their paper identified another Steganography algorithm, known as the Selected Least Significant Bits (SLSB) algorithm.
2.5.1 BlindHide
This algorithm is the simplest way to embed information in an image. It is said to "blindly" hide because it starts at the image's top left corner and works its way across the image (down in scan lines), pixel by pixel (Umamaheswari, Sivasubramanian and Pandiarajan 2010). As it goes, it changes the Least Significant Bits (LSB) of the pixels to match the message. To extract the hidden message, the Least Significant Bits are read off starting from the top left. This method is not very secure. It is also not very smart: when the message does not fill the available space, only the upper part of the image is degraded while the bottom is left untouched, which makes it easy to tell that something has been changed.
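A minimal sketch of the sequential LSB embedding just described (function names are illustrative; pixels are assumed to be a flat list of 8-bit values in scan-line order):

```python
def blindhide_embed(pixels, message_bits):
    """Overwrite the LSB of each pixel in scan-line order with the next
    message bit; pixels beyond the end of the message are untouched,
    which is exactly the weakness noted above."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the bit
    return stego

def blindhide_extract(pixels, n_bits):
    """Read the message back from the LSBs, starting at the top left."""
    return [p & 1 for p in pixels[:n_bits]]
```

Note that the last pixel is never modified here because the 3-bit message runs out, illustrating why an under-filled BlindHide image degrades only its upper portion.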
2.5.2 HideSeek
In this algorithm the message is distributed randomly across the image (Umamaheswari, Sivasubramanian and Pandiarajan 2010). The name comes from a Steganography tool for Windows 95 that uses the same technique. A random seed is generated from a password; this seed is then used to select the first position in which to hide. Random positions continue to be generated until the entire message has been embedded. This algorithm is slightly smarter than BlindHide because, to break it, an attacker would have to try every combination of pixels; only knowledge of the password avoids this. It is still not the best method, however, because it does not look at the pixels it is hiding in and does not leave the image in good condition: randomly placed noise is introduced, which often causes the stego image to look speckled (Kathryn 2006).
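The password-seeded position selection can be sketched as follows. This is an illustration of the idea, not the original tool's code; we assume the password itself can serve as the seed:

```python
import random

def hideseek_positions(password, n_pixels, n_bits):
    """Derive a deterministic pseudo-random generator from the password
    and pick n_bits distinct pixel positions. Because the seed depends
    only on the password, sender and receiver regenerate the identical
    sequence and so agree on where the message bits live."""
    rng = random.Random(password)          # the password acts as the seed
    return rng.sample(range(n_pixels), n_bits)
```

An attacker without the password cannot reproduce the sequence, but the positions are chosen with no regard for image content, which is why the result often looks speckled.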
2.5.3 FilterFirst
Umamaheswari, Sivasubramanian and Pandiarajan (2010) state that this algorithm filters the image using one of the inbuilt filters and then embeds in the highest filter values first. It is basically a more intricate version of BlindHide, as it does not need a password to extract the message. Kathryn (2006) adds that the algorithm uses an edge-detecting filter, such as the Laplace formula, which finds the places in the image where pixels are least like their neighbours; FilterFirst hides in the places with the highest filter values. The Least Significant Bits (LSB) are left out of the filtering, and only the most significant bits are filtered, so that changing the LSBs during embedding does not alter the filter results.
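A rough sketch of the idea (function names and the exact filter are our own simplifications, assuming a flat list of 8-bit grey-scale pixels): a Laplace-style edge measure is computed on everything except the LSB, and interior pixels are ranked strongest-edge first.

```python
def edge_strength(pixels, width, x, y):
    """Laplace-style filter computed with the LSB masked off, so that
    later LSB embedding can never change a pixel's ranking."""
    msb = lambda px: px & ~1   # drop the least significant bit
    centre = msb(pixels[y * width + x])
    neighbours = [pixels[(y + dy) * width + (x + dx)]
                  for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))]
    return abs(4 * centre - sum(msb(p) for p in neighbours))

def filterfirst_order(pixels, width, height):
    """Rank interior pixels by edge strength, strongest first - the
    order in which a FilterFirst-style scheme would embed."""
    coords = [(x, y) for y in range(1, height - 1) for x in range(1, width - 1)]
    return sorted(coords, key=lambda c: -edge_strength(pixels, width, c[0], c[1]))
```

Because the filter ignores the LSB plane, the extractor can recompute the same ranking from the stego image and read the bits back in the same order, with no password required.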
2.5.4 BattleSteg
Because the pixels are being changed, care has to be taken when filtering the picture so that information that might change is not used by the filter (Umamaheswari, Sivasubramanian and Pandiarajan 2010); otherwise it will be difficult or impossible to extract the message. Hiding in the places where pixels are least like their neighbours makes the embedding less detectable in an image.
2.5.5 Dynamic BattleSteg and FilterFirst
The two algorithms work in a similar pattern to BattleSteg and FilterFirst, though the hiding process is faster and less memory intensive because they use dynamic programming (Umamaheswari, Sivasubramanian and Pandiarajan 2010). The order in which the pixels are kept in the dynamic array is not the same, which makes them incompatible with the original algorithms.
2.6 Steganalysis
Soumyendu et al (n.d.) define Steganalysis as the process of identifying Steganography by inspecting various parameters of a stego medium; the first step in this process is identifying the suspected stego medium. Anu, Rekha and Praveen (2011) give their own definition of Steganalysis as the science of attacking Steganography, in a war that never ends. A Steganographer can also perform Steganalysis to test the strength of his or her own algorithm. Steganalysis has likewise been described as the identification and destruction of embedded messages (Swagota and Monisha 2010).
According to Zoran, Michael and Sushil (2004), the aim of digital Steganalysis is to detect image files with information hidden in them, with the possibility of that information being extracted. Once a stego medium is identified, the Steganalysis process decides whether or not it contains hidden message(s), and if it does, it then tries to retrieve the message (Soumyendu et al n.d.). Similarly, the aim of Steganalysis is to recognise any information streams that seem suspicious, determine whether or not they have a hidden message embedded in them, and, if there is one, retrieve the hidden information (Vijay and Vishal 2012).
Vijay and Vishal (2012) in their paper also outline the following challenges faced by Steganalysis:
1. The suspect information stream, such as a signal or a file, may or may not have any data embedded in it.
2. The embedded data, if any, may have been encrypted before being inserted into the signal or file.
3. Some of the suspect signals or files may have noise or irrelevant data encoded into them (which can make analysis very time consuming).
4. Unless it is possible to fully recover, decrypt and inspect the hidden data, often one has only a suspect information stream and cannot be sure that it is being used for transporting secret information.
In addition, they went on to discuss the types of attacks that stego-images can suffer. Attacks and analysis on embedded information may take different forms, including extracting (retrieving), detecting, disabling, modifying or destroying the embedded information. The approach to an attack depends on what information the Steganalyst (the person attempting to detect steganography-based information streams or files) has available. The attacks possible on a stego-image or medium can be any of the following:
1. Steganography-only attack: The only medium available for analysis is the Steganography medium.
2. Known-carrier attack: The carrier (that is, the original cover) and the steganography media are both available for analysis.
3. Known-message attack: The embedded message is known.
4. Chosen-steganography attack: The steganography medium and tool (or algorithm) are both known.
5. Chosen-message attack: A known message and steganography tool (or algorithm) are used to create steganography media for future analysis and comparison. The goal of this attack is to determine corresponding patterns in the steganography medium that may point to the use of specific steganography tools or algorithms.
6. Known-steganography attack: The carrier and steganography medium, as well as the steganography tool or algorithm, are known.
2.6.1 Classification of Steganalysis
Steganalysis can be classified into Statistical and Signature Steganalysis, depending on whether the embedded message is detected using the statistics of the image or the signature of the Steganography technique used (Arooj and Mir 2010). Arooj and Mir (2010) further divide each of these techniques into universal and specific approaches.
2.6.1.1 Signature Steganalysis
Arooj and Mir (2010) in their paper agree that degradation or unusual characteristics are introduced when the properties of an electronic medium are altered during the embedding of a secret message. This brings about the existence of signatures that announce the presence of a secret message: the secret message can be detected by finding the patterns (signatures) characteristic of a Steganography tool. In the early stages of Steganalysis, the presence of an embedded message was revealed using signatures specific to particular Steganography tools. The method basically looks at the palette tables in GIF images and the inconsistencies caused by common Steganography tools. Arooj and Mir (2010) question the reliability of such methods, even though they are simple and give promising results when a message is hidden. This method is subdivided into specific and universal signature Steganalysis.
2.6.1.2 Statistical Steganalysis
To develop this type of technique, one needs to analyse the embedding operation and determine which image statistics are changed as a result of the embedding process. Designing such a technique therefore requires a very good knowledge of the embedding process. It works best and gives the most accurate results when used against a specific Steganography technique, as is the case in this project. Arooj and Mir (2010) divide this technique into two in their paper: LSB (Least Significant Bits) embedding Steganalysis and LSB matching Steganalysis.
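As a concrete illustration of the kind of first-order statistic such techniques exploit (a sketch of the classic pairs-of-values observation, with names of our own choosing): LSB replacement only moves pixel counts within each pair of values (2k, 2k+1), so those two histogram bins drift toward equality as more data is embedded.

```python
def pov_imbalance(pixels):
    """Average relative imbalance across pairs of values (2k, 2k+1) in
    an 8-bit histogram. Values near 0 are consistent with heavy LSB
    replacement; natural images tend to show a larger imbalance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total_imbalance, pairs = 0.0, 0
    for k in range(128):
        even, odd = hist[2 * k], hist[2 * k + 1]
        if even + odd:
            total_imbalance += abs(even - odd) / (even + odd)
            pairs += 1
    return total_imbalance / pairs if pairs else 0.0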
LSB (Least Significant Bits) embedding is known to be the most popular and frequently used Steganography method, by far, compared to other techniques. It hides message bits in the LSBs of sequentially or randomly selected pixels; the selection of pixels is based on a secret stego key shared by the communicating parties. It gained popularity because it is easy to use and apply. Steganalysis of this approach deals specifically with LSB embedding and is based not on visual inspection but on powerful first-order statistical analysis. LSB matching, on the other hand, is another form of LSB embedding that is more complex and more difficult to detect than simple LSB replacement.
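The difference between the two embedding operations can be made concrete with a short sketch (illustrative names; pixels are 8-bit values). Replacement overwrites the lowest bit; matching instead adds or subtracts 1 at random when the bit disagrees, which flips the LSB without the structural asymmetry that first-order statistics detect.

```python
import random

def lsb_replace(pixel, bit):
    """LSB replacement: overwrite the lowest bit directly."""
    return (pixel & ~1) | bit

def lsb_match(pixel, bit, rng=random):
    """LSB matching (+/-1 embedding): if the LSB already equals the
    message bit, leave the pixel alone; otherwise randomly add or
    subtract 1, clamping at the 8-bit boundaries."""
    if pixel & 1 == bit:
        return pixel
    if pixel == 0:
        return 1
    if pixel == 255:
        return 254
    return pixel + rng.choice((-1, 1))
```

Extraction is identical for both schemes (read `pixel & 1`), but because matching moves pixels up or down with equal probability, it does not pull pairs of histogram values toward equality, which is why it resists the simpler replacement-targeted detectors.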