Image based steganography


ABSTRACT

Steganography is a technique used to hide a message by embedding it in vessel data. The vessel data that remains visible is known as external information, and the embedded data is called internal information. The external information is of little use to the data owner.

The techniques used in Steganography make it hard to detect a hidden message within an image file; we are not only sending a message but also hiding it. The Steganography system here is designed to encode and decode a secret file embedded in an image file with a random Least Significant Bit (LSB) insertion technique. Using this technique, the secret data are spread out among the image data in a random manner with the help of a secret key. The key generates pseudorandom numbers that identify where, and in which order, the hidden message is laid out. The advantage of this method is that it incorporates an element of cryptography: diffusion is applied to the secret message.

INTRODUCTION:

The information we communicate comes in many forms and is used in a wide variety of applications, and in a large number of these applications it is desired that the communication be done in secret. Such secret communication ranges from the obvious cases of bank transfers, corporate communications, and credit card purchases to a large percentage of everyday e-mail. Steganography is an ancient art of embedding a message in such a way that no one, except the sender and the recipient, suspects the existence of the message. Many newer applications use Steganography as a watermark to protect the copyright on information. The forms of Steganography vary, but, unsurprisingly, innocuous spam messages are turning up more often containing embedded text. A new transform-domain technique that embeds the secret information in the integer wavelet transform of a cover image is implemented here.

Cryptography is a technique used to scramble a secret or confidential message in order to make it unreadable to a third party, and it is now commonly used in Internet communications. Cryptography can hide the content of a message, but it cannot hide the location of the secret message; this is how attackers can target even an encrypted message. Watermarking is another form of information hiding in digital data such as a picture or a piece of music. The main purpose of watermarking is to protect the copyright or ownership of the data, so the robustness of the embedded evidence, which can be very small, is what matters most; in watermarking, the visible external information is the valuable information.

Steganography is a technique used to make confidential information imperceptible to the human eye by embedding the message in some dummy data such as a digital image or a speech sound. A related research topic, known as steganalysis, has as its main objective finding the stego file among a given set of files; it is a technique for detecting suspicious image or sound files embedded with crime-related information. In other words, we need a "sniffer-dog program" to break steganography. However, it is very difficult to make such a program that really works.

All the traditional steganography techniques have very limited information-hiding capacity: they can hide only about 10% (or less) of the data volume of the vessel. This is because those techniques either replace a special part of the frequency components of the vessel image or replace all the least significant bits of a multivalued image with the secret information. The new steganography used here takes an image as the vessel data, and we embed the secret information into the bit planes of the vessel. The information-hiding capacity of a true color image is around 50%. All the "noise-like" regions in the bit planes of the vessel image can be replaced with secret data without deteriorating the quality of the image. This is known as "BPCS-Steganography", which stands for Bit-Plane Complexity Segmentation Steganography.
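The bit-plane decomposition that BPCS-Steganography builds on can be illustrated in a few lines of Python. This is only a sketch of the decomposition step, with a random stand-in image and helper names of our own choosing; it stops short of the complexity segmentation that full BPCS performs:

```python
import numpy as np

def bit_planes(gray):
    """Slice an 8-bit grayscale image into its 8 bit planes (0 = LSB ... 7 = MSB)."""
    return [(gray >> k) & 1 for k in range(8)]

# Stand-in 4x4 image. Full BPCS would replace "noise-like" blocks in the
# lower planes with secret data and leave visually important planes alone.
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(img)

# The decomposition is lossless: reassembling the planes recovers the image.
rebuilt = sum(p.astype(np.uint8) << k for k, p in enumerate(planes))
assert np.array_equal(rebuilt, img)
```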

BACKGROUND HISTORY:

The word Steganography is of Greek origin and means "covered, or hidden writing". Its ancient origins can be traced back to 440 BC.

THEORY:

Steganography is a technique used nowadays to make confidential information imperceptible to the human eye by embedding it into some innocent-looking "vessel" or "dummy" data such as a digital image or a speech sound. A typical vessel is a multi-bit data structure such as a color image with red, green, and blue components. The embedded information can be extracted by using a special extraction program and a key. The technique of steganography is entirely different from "file deception" or "file camouflage" techniques.

"File deception" or "file camouflage" is a technique for hiding secret data in a computer file that almost looks like steganography, but it is actually just a trick to disguise a secret-data-added file as a normal file. The trick works because most computer file formats have some "don't-care" portion in each file. For instance, JPEG, MP3, or Word files still look like the original image, sound, or document on the computer even after extra data is appended. Some people have mistaken such a trick for a type of Steganography. However, such files can have conspicuously large file sizes and are easily detected by most computer engineers. File deception is therefore entirely different from the steganographic technique we are discussing here.

Many of the "Steganography software" which is in the market today is based on the file decepetion.If we find a steganography program that increases the output file size just by the amount we have embedded, then the program is obviously a file deception.If there is some secrete data then we should encrypt in such a way that it is not readable for the third party.A solution to Keep secrete information very safe is known as Data Encryption.It is totally based on scrambling the data by using some type of the secrete key.

However, encrypted data draws more attention than unencrypted data, because it is easy for an observer to tell whether or not data is encrypted. From this we can see that encryption alone is not enough; the other solution is steganography.

There are two types of data in steganography: the secret data, which is very valuable, and a type of media data called the "vessel", "carrier", or "dummy" data. The vessel data is essential, but it is not valuable in itself; it is the data in which the valuable data is "embedded". The vessel data with the secret embedded in it is called the "stego data". From the stego data we can extract the secret (valuable) data. For embedding and extracting the data we need a special program and a key.

A typical vessel is image data with red, green, and blue color components in a 24-bit pixel structure. The illustration below shows the general scheme of Steganography.

Steganography hides secret data by embedding it in some innocent-looking media data, such as the Mona Lisa in the picture above. The embedded data is very safe because Steganography hides both the content and the location of the secret information. There are many different methods of embedding data in the media data, and it is practically impossible to detect which method was used. Steganography can also cooperate with cryptography, in the sense that it can embed encrypted secret data and make it much safer.

The most important point of the steganography technique is that the stego data carry no evidence that extra data is embedded there. In other words, the vessel data and the stego data must be very similar. The user of steganography should discard the original vessel data after embedding, so that no one can compare the stego data with the original.

It is also important that the capacity for embedding data be large: the larger, the better. Of all the currently available steganography methods, the BPCS method offers the largest capacity.

LEAST SIGNIFICANT BIT INSERTION

One of the most common techniques used in Steganography today is called least significant bit (LSB) insertion. This method is exactly what it sounds like; the least significant bits of the cover-image are altered so that they form the embedded information. The following example shows how the letter A can be hidden in the first eight bytes of three pixels in a 24-bit image.

The three underlined bits are the only three bits that were actually altered. LSB insertion requires, on average, that only half the bits in an image be changed. Since the 8-bit letter A requires only eight bytes to hide it, the ninth byte of the three pixels can be used to begin hiding the next character of the hidden message.
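As a concrete sketch, the following Python fragment performs this sequential LSB insertion and the matching extraction. The pixel byte values are stand-ins rather than the ones in the illustration, and the helper names are ours:

```python
# A minimal sketch of sequential LSB insertion: one bit of the 8-bit
# character goes into the least significant bit of each cover byte.
def embed_char(cover_bytes, ch):
    bits = [(ord(ch) >> (7 - i)) & 1 for i in range(8)]   # MSB first
    out = bytearray(cover_bytes)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b                      # clear LSB, set bit
    return bytes(out)

def extract_char(stego_bytes):
    bits = [stego_bytes[i] & 1 for i in range(8)]
    return chr(sum(b << (7 - i) for i, b in enumerate(bits)))

pixels = bytes([0b00100111, 0b11101001, 0b11001000,       # stand-in values:
                0b00100111, 0b11001000, 0b11101001,       # three 24-bit pixels
                0b11001000, 0b00100111, 0b11101001])      # = nine bytes
stego = embed_char(pixels, 'A')
assert extract_char(stego) == 'A'
```

On average only half of the written LSBs actually differ from the cover's original bits, which is why the change is so hard to notice.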

A slight variation of this technique allows for embedding the message in two or more of the least significant bits per byte. This increases the hidden information capacity of the cover object, but the cover object is degraded more and the embedding is therefore more detectable. Other variations of this technique include ensuring that statistical changes in the image do not occur. Some intelligent software also checks for areas made up of one solid color; changes to these pixels are avoided, because even slight changes would cause noticeable variations in the area. While LSB insertion is easy to implement, it is also easily attacked: slight modifications of the color palette and simple image manipulations will destroy the entire hidden message.

Some examples of these simple image manipulations include image resizing and cropping.

Applications of Steganography:

Steganography is applicable to, but not limited to, the following areas.

  1. Confidential communication and secret data storing.
  2. Protection against data alteration.
  3. Access control for digital content distribution.
  4. Media database systems.

The areas differ in which feature of Steganography is utilized in each system.

Confidential communication and secret data storing:

The "secrecy" of the embedded data is essential in this area.

Historically, Steganography has been approached in this area. Steganography provides us with:

  1. Potential capacity to hide the existence of confidential data.
  2. Hardness of detecting the hidden (i.e., embedded ) data.
  3. Strengthening of the secrecy of the encrypted data.

In practice, when you use steganography you must first select vessel data according to the size of the data to be embedded; the vessel should be innocuous. Then you embed the confidential data by using an embedding program (one component of the steganography software) together with some key. When extracting, you (or your party) use an extraction program (another component) to recover the embedded data with the same key (a "common key" in terms of cryptography). In this case you need a "key negotiation" before you start communication.
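The role of the common key can be sketched in Python: the key seeds a pseudorandom generator that decides which cover bytes carry each message bit, and in what order. The function names and stand-in cover data are our own; a real system would operate on image pixel data:

```python
import random

def keyed_positions(key, cover_len, n_bits):
    rng = random.Random(key)                 # same key -> same positions
    return rng.sample(range(cover_len), n_bits)

def embed(cover, bits, key):
    out = bytearray(cover)
    for pos, b in zip(keyed_positions(key, len(cover), len(bits)), bits):
        out[pos] = (out[pos] & 0xFE) | b     # write bit into the byte's LSB
    return bytes(out)

def extract(stego, n_bits, key):
    return [stego[pos] & 1 for pos in keyed_positions(key, len(stego), n_bits)]

cover = bytes(range(64))                     # stand-in cover data
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits, key="shared-secret")
assert extract(stego, len(bits), key="shared-secret") == bits
```

Without the key, an attacker does not know which bytes were touched or in what order the bits should be read back.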

Protection against data alteration:

We take advantage of the fragility of the embedded data in this application area.

The embedded data should be fragile rather than robust, and in fact embedded data are fragile in most steganography programs.

However, this fragility opens a new direction toward an information-alteration protection system such as a "Digital Certificate Document System." The most novel point is that no authentication bureau is needed: if such a system were implemented, people could send their "digital certificate data" anywhere in the world through the Internet, and no one could forge, alter, or tamper with the certificate data. If it were forged, altered, or tampered with, this would easily be detected by the extraction program.

Access control system for digital content distribution:

In this area the embedded data is "hidden", but its presence is "explained" in order to publicize the content.

Today, digital content is distributed over the Internet more commonly than ever before. For example, music companies release new albums on their Web pages, either free or charged. In this case, however, all the content is distributed equally to everyone who accesses the page, so an ordinary Web distribution scheme is not suited for "case-by-case", selective distribution. It is of course always possible to attach digital content to e-mail messages and send it to customers, but that costs a great deal of time and labor.

Suppose you have some valuable content that you are willing to provide to others who really need it. It would be very convenient if you could upload that content to the Web in some covert manner and issue a special "access key" that extracts the content selectively. A steganographic scheme can help realize this type of system.

We have developed a prototype of an "Access Control System" for digital content distribution through the Internet. The following steps explain the scheme.

  1. A content owner classifies his/her digital content folder by folder, embeds the whole set of folders in some large vessel by a steganographic method using folder access keys, and uploads the embedded vessel (stego data) to his/her own Web page.
  2. On that Web page the owner explains the content in depth and publicizes it worldwide. Contact information for the owner (postal address, e-mail address, phone number, etc.) is posted there as well.
  3. The owner may receive an access request from a customer who saw the Web page. In that case, the owner may (or may not) create an access key and provide it to the customer (free or charged).

The most important point of this mechanism is whether a "selective extraction" is possible or not.

Media Database systems:

In this application area of steganography, secrecy is not important; what matters most is unifying two types of data into one.

Media data (photo pictures, movies, music, etc.) have some association with other information. A photo picture, for instance, may have the following:

  1. The title of the picture and some physical object information.
  2. The date and the time when the picture was taken.
  3. The camera and the photographer's information.

DIGITAL IMAGE PROCESSING

BACKGROUND:

Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation normally required before arriving at an acceptable solution. This implies that the ability to formulate approaches and quickly prototype candidate solutions plays a major role in reducing the cost and time required to arrive at a viable system implementation.

What is DIP?

An image is defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of the function f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When the coordinates x and y and the amplitude values of f all take finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing a digital image by means of a digital computer. A digital image is composed of a finite number of elements, each with a particular location and value; these elements are called pixels.
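Concretely, a digital image is nothing more than a finite array of samples of f(x, y). A small Python sketch with stand-in intensity values:

```python
import numpy as np

# A 3x4 digital image: f(x, y) is just an array lookup (values are stand-ins).
f = np.array([[ 12,  50,  90, 130],
              [ 14,  60, 100, 140],
              [ 16,  70, 110, 150]], dtype=np.uint8)
M, N = f.shape          # M rows, N columns
x, y = 2, 3             # 0-based spatial coordinates, as in the text
print(f[x, y])          # gray level at that point -> 150
```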

Vision is the most advanced of our senses, so images play the single most important role in human perception. However, humans are limited to the visual band of the electromagnetic (EM) spectrum, whereas imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops and related areas such as image analysis and computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and the output of a process are images; this is a limiting and somewhat artificial boundary. The area that lies between image processing and computer vision is image analysis (image understanding).

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-level, mid-level, and high-level processes. Low-level processes involve primitive operations such as noise reduction, contrast enhancement, and image sharpening; a low-level process is characterized by the fact that both its inputs and its outputs are images. Mid-level processes involve tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects; a mid-level process is characterized by the fact that its inputs are generally images, while its outputs are attributes extracted from those images. Finally, high-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

Coordinate convention:

The result generated by sampling and quantization is a matrix of real numbers. There are two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns; the size of the image is then M×N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate value along the first row of the image is (x, y) = (0, 1). It is very important to keep in mind that the notation (0, 1) signifies the second sample along the first row; it does not mean that these were the actual values of the physical coordinates when the image was sampled. The figure below shows this coordinate convention. Note that x ranges from 0 to M-1 and y ranges from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays is different from that of the preceding paragraph in two minor ways. First, instead of (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of the coordinates is the same as in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to the column and y to refer to the row. This is the opposite of our use of the variables x and y.

RGB Image:

An RGB color image is an M×N×3 array of color pixels, where each color pixel is a triplet corresponding to the red, green, and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a "stack" of three gray-scale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green, and blue component images. The data class of the component images determines their range of values. If an RGB image is of class double, the range of values is [0, 1].

Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep.

Generally, the number of bits in all component images is the same. In this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.
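This count is easy to verify; a one-line check under the stated assumption of b bits per component image:

```python
# Possible colors in an RGB image with b bits per component: (2**b)**3.
def rgb_colors(b: int) -> int:
    return (2 ** b) ** 3

print(rgb_colors(8))    # 16777216 -- the 24-bit "true color" figure above
```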

INTRODUCTION TO WAVELETS

Fourier Analysis

Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the most well known of these is Fourier analysis, which breaks down a signal into constituent sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical technique for transforming our view of a signal from time-based to frequency-based.

For many signals, Fourier analysis is extremely useful because the signal's frequency content is of great importance. So why do we need other techniques, like wavelet analysis?

Fourier analysis has a serious drawback: in transforming to the frequency domain, time information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal properties do not change much over time, that is, if it is what is called a stationary signal, this drawback isn't very important. However, most interesting signals contain numerous nonstationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics are often the most important part of the signal, and Fourier analysis is not suited to detecting them.

Short-Time Fourier analysis:

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time, a technique called windowing the signal. Gabor's adaptation, called the Short-Time Fourier Transform (STFT), maps a signal into a two-dimensional function of time and frequency.

The STFT represents a sort of compromise between the time- and frequency-based views of a signal. It provides some information about both when and at what frequencies a signal event occurs. However, you can only obtain this information with limited precision, and that precision is determined by the size of the window.
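For illustration, here is a short STFT sketch in Python using SciPy (the test signal and the window length nperseg are our own choices; the window length is exactly the precision knob described above):

```python
import numpy as np
from scipy.signal import stft

fs = 1000                                    # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)
sig = np.where(t < 0.5,
               np.sin(2 * np.pi * 50 * t),   # 50 Hz in the first half
               np.sin(2 * np.pi * 120 * t))  # 120 Hz in the second half
f, times, Zxx = stft(sig, fs=fs, nperseg=128)
print(Zxx.shape)   # one spectrum per window position: (freqs, time slices)
```

A larger nperseg sharpens the frequency axis but blurs the time axis, and vice versa.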

Wavelet Analysis

Wavelet analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information.

Here's what this looks like in contrast with the time-based, frequency-based, and STFT views of a signal:

You may have noticed that wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see "How to Connect Scale to Frequency?"

What Can Wavelet Analysis Do?

One major advantage afforded by wavelets is the ability to perform local analysis, that is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a small discontinuity - one so tiny as to be barely visible. Such a signal easily could be generated in the real world, perhaps by a power fluctuation or a noisy switch.

A plot of the Fourier coefficients (as provided by the fft command) of this signal shows nothing particularly interesting: a flat spectrum with two peaks representing a single frequency. However, a plot of wavelet coefficients clearly shows the exact location in time of the discontinuity.
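This experiment is easy to reproduce in Python with NumPy and the PyWavelets package (used here as a stand-in for the toolbox commands the text refers to; the signal parameters are our own):

```python
import numpy as np
import pywt  # PyWavelets

t = np.linspace(0, 1, 1024, endpoint=False)
sig = np.sin(2 * np.pi * 8 * t)
sig[512:] += 0.01                        # barely visible jump at t = 0.5

spectrum = np.abs(np.fft.rfft(sig))      # says nothing about *when* the jump is
cA, cD = pywt.dwt(sig, 'db2')            # single-level wavelet decomposition
print(np.argmax(np.abs(cD)))             # detail peak near index 256, i.e. t = 0.5
```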

Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques miss, aspects like trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Furthermore, because it affords a different view of data than those presented by traditional techniques, wavelet analysis can often compress or de-noise a signal without appreciable degradation. Indeed, in their brief history within the signal processing field, wavelets have already proven themselves to be an indispensable addition to the analyst's collection of tools and continue to enjoy a burgeoning popularity today.

What Is Wavelet Analysis?

Now that we know some situations when wavelet analysis is useful, it is worthwhile asking "What is wavelet analysis?" and even more fundamentally,

"What is a wavelet?"

A wavelet is a waveform of effectively limited duration that has an average value of zero.

Compare wavelets with sine waves, which are the basis of Fourier analysis.

Sinusoids do not have limited duration - they extend from minus to plus infinity. And where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.

Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet. Just looking at pictures of wavelets and sine waves, you can see intuitively that signals with sharp changes might be better analyzed with an irregular wavelet than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It also makes sense that local features can be described better with wavelets that have local extent.

Number of Dimensions:

Thus far, we've discussed only one-dimensional data, which encompasses most ordinary signals. However, wavelet analysis can be applied to two-dimensional data (images) and, in principle, to higher dimensional data. This toolbox uses only one and two-dimensional analysis techniques.

The Discrete Wavelet Transform:

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. What if we choose only a subset of scales and positions at which to make our calculations? It turns out, rather remarkably, that if we choose scales and positions based on powers of two (so-called dyadic scales and positions), then our analysis will be much more efficient and just as accurate. We obtain such an analysis from the discrete wavelet transform (DWT).

An efficient way to implement this scheme using filters was developed in 1988 by Mallat. The Mallat algorithm is in fact a classical scheme known in the signal processing community as a two-channel subband coder. This very practical filtering algorithm yields a fast wavelet transform: a box into which a signal passes, and out of which wavelet coefficients quickly emerge. Let's examine this in more depth.

One-Stage Filtering: Approximations and Details:

For many signals, the low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content on the other hand imparts flavor or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different but you can still tell what's being said. However, if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal. The details are the low-scale, high-frequency components.

The filtering process at its most basic level looks like this:

The original signal S passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. Suppose, for instance, that the original signal S consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a total of 2000.

These signals A and D are interesting, but we get 2000 values instead of the 1000 we had. There exists a more subtle way to perform the decomposition using wavelets: by looking carefully at the computation, we may keep only one point out of two in each of the two 1000-sample signals and still recover the complete information. This is the notion of downsampling. We produce two sequences called cA and cD.
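In Python, the PyWavelets package (a stand-in here for the MATLAB Wavelet Toolbox the text describes) performs this filter-and-downsample step in one call:

```python
import numpy as np
import pywt

S = np.random.randn(1000)        # stand-in signal of 1000 samples
cA, cD = pywt.dwt(S, 'db2')      # complementary filtering + downsampling
print(len(cA), len(cD))          # roughly 500 each, not 1000 + 1000
```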

Wavelet Reconstruction:

We have learned how the discrete wavelet transform can be used to analyze, or decompose, signals and images; this process is called decomposition or analysis. The other half of the story is how those components can be assembled back into the original signal without loss of information; this process is called reconstruction, or synthesis. The mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal in the Wavelet Toolbox, we reconstruct it from the wavelet coefficients:
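Continuing the PyWavelets sketch from above, the IDWT rebuilds the signal from cA and cD up to floating-point error:

```python
import numpy as np
import pywt

S = np.random.randn(1000)
cA, cD = pywt.dwt(S, 'db2')
S_rec = pywt.idwt(cA, cD, 'db2')            # synthesis from the coefficients
print(np.max(np.abs(S - S_rec[:len(S)])))   # on the order of 1e-15
```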

Reconstruction Filters:

The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The downsampling of the signal components performed during the decomposition phase introduces a distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and reconstruction phases that are closely related (but not identical), we can "cancel out" the effects of aliasing.

The low- and high-pass decomposition filters (L and H), together with their associated reconstruction filters (L' and H'), form a system of what are called quadrature mirror filters:
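In the PyWavelets sketch used above, all four filters of this quadrature mirror system are exposed on the wavelet object:

```python
import pywt

w = pywt.Wavelet('db2')
print(w.dec_lo)   # L  : low-pass decomposition filter
print(w.dec_hi)   # H  : high-pass decomposition filter
print(w.rec_lo)   # L' : low-pass reconstruction filter
print(w.rec_hi)   # H' : high-pass reconstruction filter
```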

Reconstructing Approximations and Details:

We have seen that it is possible to reconstruct our original signal from the coefficients of the approximations and details.

It is also possible to reconstruct the approximations and details themselves from their coefficient vectors.

As an example, let's consider how we would reconstruct the first-level approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we used to reconstruct the original signal; however, instead of combining it with the level-one detail cD1, we feed in a vector of zeros in place of the detail coefficients.

This process yields a reconstructed approximation A1, which has the same length as the original signal S and is a real approximation of it. Similarly, we can reconstruct the first-level detail D1 using the analogous process:
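In the PyWavelets sketch, passing None (interpreted as zeros) in place of one coefficient vector reconstructs the other component alone, and the two reconstructed components sum back to the signal:

```python
import numpy as np
import pywt

S = np.random.randn(1000)
cA1, cD1 = pywt.dwt(S, 'db2')
A1 = pywt.idwt(cA1, None, 'db2')    # details replaced by zeros
D1 = pywt.idwt(None, cD1, 'db2')    # approximation replaced by zeros
print(np.max(np.abs(S - (A1 + D1)[:len(S)])))   # A1 + D1 recovers S
```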

The reconstructed details and approximations are true constituents of the original signal. In fact, when we combine them we find that A1 + D1 = S.

Note that the coefficient vectors cA1 and cD1, because they were produced by downsampling and are only half the length of the original signal, cannot be directly combined to reproduce the signal.

It is necessary to reconstruct the approximations and details before combining them. Extending this technique to the components of a multilevel analysis, we find that similar relationships hold for all the reconstructed signal constituents.

That is, there are many different ways to reassemble the original signal:

Relationship of Filters to Wavelet Shapes:

In section of "Reconstruction Filters", we spoke about the importance of choosing the right filters. In fact, the choice of filters not only determines whether there is perfect reconstruction possible are not , it also determines the shape of the wavelet we use to perform the particular analysis. To construct a wavelet of some practical utility, we seldom start by drawing a waveform. Instead, it usually makes more sense to design an appropriate quadrature mirror filters, and then use them to create the waveform. Let's see how this is done by focusing on an example.

The curve above begins to look progressively more like the db2 wavelet. This means that a wavelet's shape is determined entirely by the coefficients of the reconstruction filters. This relationship has profound implications: it means that you cannot choose just any shape, call it a wavelet, and perform an analysis. At least, you can't choose an arbitrary wavelet waveform if you want to be able to reconstruct the original signal accurately. You are compelled to choose a shape determined by the quadrature mirror decomposition filters.

The Scaling Function:

We've seen the interrelation of wavelets and quadrature mirror filters. The wavelet function ψ is determined by the high-pass filter, which also produces the details of the wavelet decomposition.

There is an additional function associated with some, but not all, wavelets: the so-called scaling function φ. The scaling function is very similar to the wavelet function. It is determined by the low-pass quadrature mirror filters, and is thus associated with the approximations of the wavelet decomposition. In the same way that iteratively upsampling and convolving the high-pass filter produces a shape approximating the wavelet function, iteratively upsampling and convolving the low-pass filter produces a shape approximating the scaling function.
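PyWavelets exposes exactly this iteration: wavefun upsamples and convolves the reconstruction filters to approximate both functions (the refinement level is our own choice):

```python
import pywt

# Approximate the scaling function (phi) and wavelet function (psi) of db2
# by iterating its reconstruction filters for 8 refinement levels.
phi, psi, x = pywt.Wavelet('db2').wavefun(level=8)
print(len(x))    # the grid on which both functions are sampled
```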

This process involves two aspects: breaking up the signal to obtain the wavelet coefficients, and reassembling the signal from those coefficients. We have already discussed decomposition and reconstruction at some length. Of course, there is no point in breaking up a signal merely to have the satisfaction of immediately reconstructing it; we may modify the wavelet coefficients before performing the reconstruction step. We perform wavelet analysis because the coefficients thus obtained have many known uses, de-noising and compression being foremost among them. But wavelet analysis is still a new and emerging field; no doubt many uncharted uses of the wavelet coefficients lie in wait. The Wavelet Toolbox can be a means of exploring possible uses and hitherto unknown applications of wavelet analysis. Explore the toolbox functions and see what you discover.

CONCLUSIONS:

This project describes a technique to embed data in a color image.

Additional features that could be added to the project include support for file types other than bitmap and implementation of other Steganography methods.

However, this research work and software package provide a good starting point for anyone interested in learning about Steganography.

The data extracted from the cover image depends on the pixel values of the image.

The software will be further developed to hide a secret image inside a cover image.

