Detect Malicious Manipulation With Digital Computer Science Essay


Fridrich et al. proposed the first method to detect malicious manipulation of digital images, targeting copy-move forgery, where a part of an image is copied and pasted elsewhere in the same image to hide information. The need for such detection arises because editing tools for producing digital forgeries are becoming widely available. The basic algorithms for copy-move detection use either an exact match (exhaustive search or autocorrelation) or an approximate match. In the approximate-match algorithm, a B×B block slides over the image from the upper-left to the lower-right corner. For each block, the DCT transform is computed, and the quantized DCT coefficients are stored as one row of a matrix. The rows are sorted lexicographically, the quantized coefficients of adjacent rows are compared, and matching blocks are reported as copied. Because uniform areas can cause the algorithm to falsely identify some segments as copied, human interpretation is required before the output of any copy-move detection algorithm is declared a forgery.
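
The approximate-match pipeline described above can be sketched as follows. This is a rough illustration, not the authors' implementation: the 8×8 block size, the uniform quantization step, and the hand-built DCT basis are assumptions of this sketch.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: row r holds frequency-r cosine samples.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def find_duplicate_blocks(img, b=8, q=10):
    d = dct_matrix(b)
    rows, pos = [], []
    for i in range(img.shape[0] - b + 1):
        for j in range(img.shape[1] - b + 1):
            block = img[i:i+b, j:j+b]
            coeffs = d @ block @ d.T                    # 2D DCT of the block
            rows.append(np.round(coeffs / q).ravel())   # quantized coefficients, one row
            pos.append((i, j))
    order = np.lexsort(np.array(rows).T[::-1])          # lexicographic sort of rows
    matches = []
    for a, c in zip(order[:-1], order[1:]):             # compare neighbours in sorted list
        if np.array_equal(rows[a], rows[c]):
            matches.append((pos[a], pos[c]))
    return matches

# Toy forgery: copy one 8x8 region of a random image to another location.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32)).astype(float)
img[20:28, 20:28] = img[2:10, 2:10]                     # the copy-move forgery
matches = find_duplicate_blocks(img)
```

On a real image the final comparison would tolerate small coefficient differences rather than demand exact equality, and uniform areas would still need human review, as noted above.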

Popescu et al. (2004) proposed an efficient technique that automatically detects duplicated regions in a digital image. Principal component analysis is performed on small fixed-size image blocks to yield a reduced-dimension representation, which is robust to minor variations in the image due to additive noise or lossy compression. Duplicated regions are then detected by lexicographically sorting all of the image blocks. The effectiveness of the technique on plausible forgeries was demonstrated, and its sensitivity to JPEG lossy compression and additive noise was quantified: detection remains possible in the presence of significant amounts of corrupting noise, and accuracy is good except for small block sizes and low JPEG qualities.
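
A minimal sketch of the PCA-based representation, under illustrative choices of block size, number of components, and quantization step (the quantization is what absorbs minor variations such as mild noise):

```python
import numpy as np

def pca_block_representation(img, b=8, n_comp=8, q=4.0):
    blocks, pos = [], []
    for i in range(img.shape[0] - b + 1):
        for j in range(img.shape[1] - b + 1):
            blocks.append(img[i:i+b, j:j+b].ravel())
            pos.append((i, j))
    x = np.array(blocks, dtype=float)
    x -= x.mean(axis=0)                          # center the block data
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    proj = x @ vt[:n_comp].T                     # reduced-dimension representation
    return np.round(proj / q), pos               # quantized projections

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (24, 24)).astype(float)
img[12:20, 12:20] = img[1:9, 1:9]                # duplicated region
rep, pos = pca_block_representation(img)
order = np.lexsort(rep.T[::-1])                  # lexicographic sort of representations
dup = [(pos[a], pos[c]) for a, c in zip(order[:-1], order[1:])
       if np.array_equal(rep[a], rep[c])]
```

Compared with matching raw 64-dimensional blocks, the truncated projection is both smaller and less sensitive to per-pixel perturbations.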

Lowe D.G. (2004) proposed an invariant feature extraction method for matching objects in images. The features are invariant to rotation and scaling, which provides robust matching across different views of a scene. The extracted features are highly distinctive, so a single feature can be correctly matched against a large database of features from many images. The approach is named the Scale Invariant Feature Transform (SIFT) because the image is transformed into scale-invariant local features. The major stages of computing SIFT features are: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint descriptor generation. These stages yield a set of features, each with a unique fingerprint called a descriptor. For object recognition, the features are matched using a fast nearest-neighbour algorithm, and clusters of matched features are formed with the Hough transform. The features of database images are stored, and each query feature is matched against that database; the features remain distinctive even when the object changes in size or rotation.
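
The nearest-neighbour matching step can be sketched as below. Following Lowe, a match is accepted only when the closest descriptor is significantly closer than the second closest (the "ratio test"); the 0.8 threshold is Lowe's suggested value, and the 128-dimensional descriptors here are synthetic stand-ins rather than real SIFT output.

```python
import numpy as np

def ratio_test_match(query, database, ratio=0.8):
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)   # distance to every database descriptor
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:     # Lowe's ratio test
            matches.append((qi, nearest))
    return matches

rng = np.random.default_rng(2)
database = rng.normal(size=(50, 128))                  # 128-D descriptors, SIFT-sized
query = database[[3, 7]] + rng.normal(0, 0.01, (2, 128))  # slightly perturbed copies
matches = ratio_test_match(query, database)
```

A production system would use an approximate nearest-neighbour structure (e.g. a kd-tree) instead of the brute-force distance computation shown here.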

W. Luo et al. (2006) developed a robust method to detect and locate tampering. The algorithm can detect tampering even under post-processing operations such as blurring, noise addition, lossy compression, and combinations of these. The steps of the algorithm are:

Extracting characteristic vectors of blocks: Overlapping blocks are extracted from the colour input image. For each block, seven characteristics are computed: the averages of the red, green, and blue components, and four ratios obtained by dividing the luminance channel into two equal parts along four directions.

Searching for similar blocks: The feature array is sorted lexicographically, so similar pairs end up adjacent in the list; the differences between the characteristics of consecutive blocks are then compared against a few threshold conditions.

Finding correct matches: Not all similar blocks are forged. Based on a histogram of shift vectors, regions sharing the same shift vector are marked as forged. The forged region is then shown in white and the rest of the image in black.

This method detects forgery efficiently when the duplicated block is larger than 1.9% of the image, but the results are poor for smooth regions.
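
The three steps above can be sketched as follows, with a grayscale image standing in for the colour case (so only a few simplified luminance-based characteristics are used, not the seven described) and exact feature matching; the block size and count threshold are illustrative.

```python
import numpy as np
from collections import Counter

def block_features(block):
    # Simplified characteristics vector: block mean plus the fraction of the
    # block's energy in the top half and in the left half, coarsely rounded.
    top = block[:block.shape[0] // 2]
    left = block[:, :block.shape[1] // 2]
    s = block.sum() + 1e-9
    return (round(block.mean(), 1), round(top.sum() / s, 2), round(left.sum() / s, 2))

def forged_shift_vectors(img, b=8, min_count=16):
    feats, pairs = {}, []
    for i in range(img.shape[0] - b + 1):
        for j in range(img.shape[1] - b + 1):
            f = block_features(img[i:i+b, j:j+b])
            for (pi, pj) in feats.get(f, []):
                pairs.append((i - pi, j - pj))   # shift vector between similar blocks
            feats.setdefault(f, []).append((i, j))
    hist = Counter(pairs)
    # A shift vector shared by many block pairs indicates a copied region.
    return [v for v, c in hist.items() if c >= min_count]

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (40, 40)).astype(float)
img[24:36, 4:16] = img[4:16, 4:16]               # copy a 12x12 region downwards
shifts = forged_shift_vectors(img)
```

The copied region contributes many aligned block pairs with the identical shift (20, 0), while incidental feature collisions scatter across random shift vectors and fall below the count threshold.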

Min-Jen Tsai and Guan-Hui Wu (2006) used data-mining methods from digital image processing to obtain image features, which were then trained and classified to identify the source camera of an image. The colour image formation process differs among camera manufacturers; a captured image is affected mainly by two factors: the Color Filter Array (CFA) configuration together with the demosaicing algorithm, and the colour processing and transformation. To capture the characteristic differences between images from different cameras, each camera's images are processed and their features compared. A forecasting model is built for image identification by classifying these features. Using the image feature vector, one can distinguish the source cameras of images across brands, regardless of image content.

Sintayehu Dehnie et al. (2006) used digital image forensics techniques to distinguish images captured by a digital camera from computer-generated images. Image acquisition in a digital camera differs from the generative algorithms used to render images in software. This difference lies in the properties of the residual image, such as pattern noise, which can be extracted by a wavelet-based de-noising filter; it is established that each digital camera has a unique pattern noise associated with it. Tests show that residuals obtained from digital camera images and from computer-generated images each share common characteristics not present in the other type. The method proceeds as follows: generate a reference noise pattern for a class of computer-generated images produced by a given algorithm, obtaining the pattern by applying a wavelet-based de-noising filter to extract the noise from each image. Let X denote an image and X' denote its de-noised version. The pattern noise, e, is given by:

e = X - X'

The reference noise pattern is obtained by averaging over many instances of e. To identify the image type, the correlation between an image's residual and the pre-computed reference pattern associated with a generative algorithm is computed. Computer-generated images show higher correlation among themselves, while there is low correlation between the computer-generated reference pattern and test images from one or several cameras.
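
The residual-correlation test can be sketched as below. A real implementation uses a wavelet-based de-noising filter; here a simple local-mean filter stands in for it, and the "pattern noise" is simulated, so every numeric choice is an assumption of this sketch.

```python
import numpy as np

def denoise(x, k=3):
    # Crude local-mean "de-noising": average over a k x k neighbourhood.
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x)
    for di in range(k):
        for dj in range(k):
            out += xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / (k * k)

def residual(x):
    return x - denoise(x)                        # e = X - X'

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(4)
pattern = rng.normal(size=(64, 64)) * 0.5        # fixed "pattern noise" of one source
# Several images from the same source: smooth content plus the shared pattern.
same_source = [denoise(rng.normal(size=(64, 64))) + pattern for _ in range(8)]
reference = np.mean([residual(x) for x in same_source], axis=0)
in_class = correlation(residual(denoise(rng.normal(size=(64, 64))) + pattern), reference)
out_class = correlation(residual(rng.normal(size=(64, 64))), reference)
```

The residual of a new image from the same source correlates strongly with the averaged reference pattern, while an unrelated image's residual does not, mirroring the correlation behaviour reported above.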

Gou et al. (2007): to distinguish an original digital camera image from a tampered version, the integrity of the digital image must be confirmed. Existing tampering-detection methods fall into two categories: manipulation-specific methods, which detect a particular kind of tampering such as compression or filtering, and classifier-based approaches, which detect general tampering. This paper introduces a new method based on the noise features of digital images. The basic idea is that tampering with an image changes its noise statistics, which helps identify the manipulation. Image noise is characterized through de-noising, wavelet analysis, and neighbourhood prediction, and statistical features from each characterization are used to detect tampering. An image de-noising algorithm yields the first set of noise features; the second set is derived from the non-Gaussian properties of wavelet sub-band coefficients; neighbourhood prediction yields the third set, based on prediction error. Using these three feature sets, a robust classifier is built that can distinguish direct camera images from their tampered versions.

Tjoa et al. (2007): image authentication normally requires the source image to prove that an image has not been tampered with, but the source image is usually unavailable. Non-intrusive forensic analysis is therefore a research area in which information is extracted from the output signal alone, even when the input signal is unavailable. Various non-intrusive methods identify operations on digital images such as blurring, sharpening, resizing, rotation, luminance adjustment, and gamma correction. The problem addressed in this paper is identifying which transform coder was used during compression of an image. First, the sub-bands of all transform coefficients are obtained. For each sub-band, the histogram of the original, un-quantized coefficients is estimated using a nonlinear least-squares method. The relative entropy between the observed histogram and the estimated original histogram of each sub-band is calculated to give the final distance measure; if this measure is high, the tested transform is classified as the one used during compression. The method discriminates among six different transforms only. Its main goal is to identify the compression technique applied to a digital image; the benefits of such a system are significant, for example in detecting patent infringement and verifying digital image integrity.
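
The distance measure above is the relative entropy (Kullback-Leibler divergence) between two coefficient histograms. A minimal sketch with illustrative histograms: quantization concentrates coefficient mass on multiples of the step, so the quantized histogram diverges sharply from the smooth, roughly Laplacian histogram of un-quantized coefficients.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    # KL divergence between two (unnormalized) histograms.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(5)
coeffs = rng.laplace(0, 4, 10000)                 # stand-in transform coefficients
bins = np.arange(-20.5, 21.5)                     # unit-width bins centered on integers
smooth, _ = np.histogram(coeffs, bins=bins)       # un-quantized histogram
quantized, _ = np.histogram(np.round(coeffs / 5) * 5, bins=bins)  # step-5 quantization
d_quant = relative_entropy(quantized.astype(float), smooth.astype(float))
d_same = relative_entropy(smooth.astype(float), smooth.astype(float))
```

A high divergence between the observed histogram and the estimated original one is the signal that the tested transform matches the one used during compression.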

Babak Mahdian and Stanislav Saic (2010) note that when active methods such as digital watermarks or signatures are unavailable, blind approaches are used to detect image tampering. The blind methods discussed in the paper are:

Near-duplicated image regions,

Interpolation and re-sampling,

Inconsistencies in chromatic aberration,

Noise inconsistencies,

Double JPEG compression,

Inconsistencies in color filter array (CFA) interpolation,

Inconsistencies in lighting.

A. Detection of Near-Duplicated Image Regions

The most common type of digital image forgery is copy-move: a part of the image is copied and pasted into another part of the same image, typically with the intention to hide an object or a region. Copy-move forgery introduces several near-duplicated regions into the image, so detecting such regions may signify tampering.

B. Detection of Traces of Re-sampling and Interpolation

When images are spliced together to create a high-quality, consistent forgery, geometric transformations such as scaling, rotation, or skewing are almost always applied. The Expectation/Maximization (EM) algorithm is used to detect these transformations; its output is a probability map that contains periodic patterns if the investigated signal has been re-sampled. With sophisticated re-sampling/interpolation detectors, altered images containing re-sampled portions can therefore be identified. Existing detectors rely on the fact that the interpolation process introduces specific, detectable statistical changes into the signal.
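
A one-dimensional sketch of why interpolation leaves detectable periodic correlations: after 2x linear up-sampling, every other sample is exactly the average of its neighbours, so a simple linear predictor produces a perfectly periodic error pattern. (A full detector, as above, estimates such correlations with EM and inspects the probability map for periodicity.)

```python
import numpy as np

rng = np.random.default_rng(6)
orig = rng.normal(size=100)                      # "original" 1-D signal
up = np.interp(np.arange(0, 99, 0.5), np.arange(100), orig)  # 2x linear resampling
# Residual of a linear predictor: each sample vs the mean of its neighbours.
pred_error = np.abs(up[1:-1] - 0.5 * (up[:-2] + up[2:]))
# The interpolated samples (every second position) have numerically zero error,
# while the original samples have errors of ordinary magnitude.
```

This alternating zero/non-zero error map is exactly the kind of periodic pattern a re-sampling detector looks for.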

C. Detection of Inconsistencies in Chromatic Aberration

Optical imaging systems are not ideal and introduce different types of aberrations into the captured images. Chromatic aberration is caused by the failure of the optical system to perfectly focus light of all wavelengths; it can be divided into longitudinal and lateral aberration, and causes various colour imperfections in the image. When an image is altered, the lateral chromatic aberration can become inconsistent across the image, and hence the forgery can be detected.

D. Detection of Image Noise Inconsistencies

A commonly used tool to conceal traces of tampering is the addition of locally random noise to the altered image regions; such noise degradation is a main cause of failure for many active and passive forgery detection methods. Typically, the amount of noise is uniform across an entire authentic image, so locally added noise creates inconsistencies in the image's noise. The detection of varying noise levels within an image may therefore signify tampering.
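
One simple way to expose such inconsistencies is to estimate the noise level per block from a high-frequency residual; the median-absolute-deviation estimator used here is a standard robust choice, and the synthetic image, block size, and noise levels are assumptions of this sketch.

```python
import numpy as np

def block_noise_levels(img, b=16):
    # High-pass residual via horizontal first differences.
    resid = np.diff(img, axis=1)
    levels = []
    for i in range(0, resid.shape[0] - b + 1, b):
        row = []
        for j in range(0, resid.shape[1] - b + 1, b):
            block = resid[i:i+b, j:j+b]
            sigma = np.median(np.abs(block)) / 0.6745   # MAD noise estimate
            row.append(sigma)
        levels.append(row)
    return np.array(levels)

rng = np.random.default_rng(7)
img = rng.normal(0, 1.0, (64, 64))               # uniform noise floor
img[0:16, 0:16] += rng.normal(0, 5.0, (16, 16))  # locally added noise (the "tampered" region)
levels = block_noise_levels(img)
```

The block covering the tampered corner shows a noise estimate several times larger than the rest of the image, which is the inconsistency a detector would flag.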

E. Detection of Double JPEG Compression

When an image is loaded into photo-manipulation software, it is first decompressed; after the editing process is finished, it is compressed again and re-saved, which results in double JPEG compression. A neural-network-classifier-based method exists to estimate the original quantization matrix from such double-compressed images.
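
Double compression leaves characteristic periodic artifacts in the DCT-coefficient histograms. A minimal one-dimensional illustration of double quantization (step q1 followed by step q2): some histogram bins are emptied or doubled, which is the statistical trace a classifier can pick up. The Laplacian coefficient model and the particular steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
coeffs = rng.laplace(0, 10, 20000)               # stand-in DCT coefficients
q1, q2 = 3, 2
single = np.round(coeffs / q2).astype(int)                       # one quantization
double = np.round(np.round(coeffs / q1) * q1 / q2).astype(int)   # q1 then q2
bins = np.arange(-15, 16)
h_single = np.array([(single == b).sum() for b in bins])
h_double = np.array([(double == b).sum() for b in bins])
empty_single = int((h_single == 0).sum())        # singly-compressed: no gaps
empty_double = int((h_double == 0).sum())        # doubly-compressed: periodic empty bins
```

The singly-quantized histogram populates every central bin, while the doubly-quantized one has periodic empty bins, the fingerprint of double compression.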

F. Detection of Inconsistencies in Lighting

Estimating the illuminant direction is a difficult task in computer graphics. Photographs are taken under different lighting conditions, so when two or more images are spliced together to create a forgery, it is difficult to keep the lighting conditions (light sources, directions of lights, etc.) correct and consistent across the composite. Detecting lighting inconsistencies can therefore be used for image forensics.

G. Detection of Inconsistencies in Color Filter Array Interpolation

Colour images are usually acquired through a color filter array (CFA): at each pixel location, only a single colour sample is captured. The missing colours are computed by an interpolation process called CFA interpolation. This process introduces specific correlations between the pixels of the image (a subset of pixels within a colour channel is periodically correlated with its neighbouring pixels), and these correlations are disrupted when the image is tampered with.

Vivek Kumar et al. (2011) aim to detect copy-move forgery, where one portion of an image is copied and pasted elsewhere in the same image, usually to hide important features of the original. The paper proposes a method that is more efficient and reliable than earlier ones, robust to manipulations such as scaling, rotation, Gaussian noise, smoothing, and JPEG compression. The approach is fast and accurate because blocks are divided into sub-blocks and simple mathematical functions are applied to obtain feature values, which are then sorted with radix sort (which uses counting sort as a subroutine) for efficiency. The approach is shown to detect copy-move forgery even under JPEG compression, rotation (up to a limit), Gaussian noise, and smoothing. Post-processing with erosion followed by dilation is used to remove false matches. The method is not robust to rotation by an arbitrary angle or to scaling.

V. Christlein et al. (2010) analysed the features used by various copy-move forgery detection methods. A comparative study of 10 detection techniques was performed to identify the best feature-vector method, using a generated database of 48 original images to which various realistic copy-move forgeries were applied. The common detection pipeline consists of dividing the image into blocks, extracting features, and then matching and clustering similar regions; the algorithms follow the same pipeline and differ only in their feature extraction step. For block matching, lexicographic sorting and kd-tree nearest-neighbour search were used.

The evaluation showed that on large images DCT with lexicographic sorting provides the best accuracy, while for small images a kd-tree is the better option. The false positive rate measures forgeries reported where none exist; the false negative rate measures forgeries that go undetected. Both should be low for high accuracy: lexicographic sorting has a low false positive rate, and the kd-tree has a low false negative rate. FMT proved to be a good feature vector for overall performance, but not for geometric transformations; DCT and PCA showed remarkable results when the same shift vector was used as the matching condition.
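
The two error rates defined above are computed from a labelled evaluation in the usual way; the function name and the toy counts below are illustrative, not from the study.

```python
def error_rates(predicted, actual):
    # predicted/actual: 1 = forged, 0 = authentic, per image.
    fp = sum(p and not a for p, a in zip(predicted, actual))   # flagged but authentic
    fn = sum(a and not p for p, a in zip(predicted, actual))   # forged but missed
    fpr = fp / max(sum(not a for a in actual), 1)              # rate among authentic images
    fnr = fn / max(sum(actual), 1)                             # rate among forged images
    return fpr, fnr

# Toy evaluation: 10 images, 4 forged; the detector flags 5 images,
# misses 1 forged image and wrongly flags 2 authentic ones.
predicted = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
fpr, fnr = error_rates(predicted, actual)
```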

Shivakumar et al. (2011): when an image has undergone copy-move forgery, many editing operations are applied to hide the traces, making the forgery indistinguishable by the naked eye from authentic photographs and documents. In this method, the Harris interest point detector is used together with SIFT descriptors to detect copy-move forgery, with a kd-tree used for matching. The method was tested only on scaling transformations of the image, where the results were good, but it is not robust to post-processing operations such as Gaussian noise and rotation.