Blind image deconvolution is the term used to describe the restoration of images degraded by some unknown blur. Deconvolution and linear image restoration go hand in hand because the degradation itself is the result of a process known as convolution with a point spread function, while restoration seeks to reverse that process in order to recover the original input. [1,2,3]
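This linear degradation model is usually written as g = h * f + n, where f is the true image, h the point spread function, n additive noise, and * denotes convolution. A minimal NumPy sketch of the forward model (the image, PSF, and noise level here are purely illustrative):

```python
import numpy as np

def convolve2d_same(f, h):
    """Direct 'same'-size 2-D convolution with zero padding (NumPy only)."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    hk = h[::-1, ::-1]  # flip the kernel: convolution, not correlation
    out = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = np.sum(fp[i:i + kh, j:j + kw] * hk)
    return out

rng = np.random.default_rng(0)
f = rng.random((32, 32))                 # stand-in for the true image
h = np.ones((5, 5)) / 25.0               # a simple 5x5 uniform blur PSF
n = 0.01 * rng.standard_normal(f.shape)  # additive noise

g = convolve2d_same(f, h) + n            # the observed degraded image
print(g.shape)  # (32, 32)
```

Restoration, in these terms, is the attempt to recover f given only g (and, in the blind case, without h).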
1.1 Introduction and Motivation
Images are a 2D representation of a 3D scene acquired by a recording system that is bound to have imperfections. This implies that the obtained image is a degraded version of the scene one is attempting to capture. Blurring is only one of the major causes of image degradation, and its reduction is the main focus of this thesis. More specifically, this thesis focuses on blurs that can be described by a linear model, known as linear blurs. Out-of-focus blurring results when an object is outside the imaging system's depth of field during exposure. An image can therefore contain objects at different distances, some of which are in focus and others out of focus.
In the frequency domain of an image, the low-frequency components represent the smooth areas, whereas details such as edges are represented by the high frequencies. Blurred objects are characterized by a reduction of detail and a general smoothing of appearance, especially visible at the edges. This implies that blurring is due to an attenuation of the high-frequency components of the image.
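This attenuation is easy to demonstrate numerically. In the sketch below (a 1-D illustration with an arbitrary signal length and kernel size), a step edge is blurred with a moving-average kernel and the high-frequency energy of the result is compared to that of the original:

```python
import numpy as np

# A step edge carries energy at all frequencies; a blur kernel has |H(f)| <= 1,
# with the strongest attenuation at high frequencies, so blurring mainly
# suppresses the high-frequency (edge and detail) content.
N = 64
x = np.zeros(N)
x[N // 2:] = 1.0                 # a sharp edge
h = np.zeros(N)
h[:5] = 1.0 / 5.0                # 5-tap moving-average blur kernel

X = np.fft.rfft(x)
Y = X * np.fft.rfft(h)           # circular convolution, done in the frequency domain

hi = slice(N // 4, None)         # the upper frequency bins
print(np.abs(Y[hi]).sum() < np.abs(X[hi]).sum())  # True: high frequencies attenuated
```

The same comparison in 2-D underlies the frequency-domain restoration methods discussed later.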
Motion blur results when the imaging system itself moves during exposure, blurring the entire image. An object moving relative to the imaging system during exposure blurs only that particular object in the image.
Image distortions can be said to be spatially invariant or spatially variant, where spatially invariant indicates that every pixel in the image has undergone the same type of degradation regardless of its position within the image. In contrast, the amount and type of degradation that a pixel has undergone in an image blurred by a spatially variant blur depend on its position within the image.
Image restoration is required in various fields such as military reconnaissance, medicine, astronomy, criminal investigation, and conventional photography, and each field has its own reasons for performing the restoration. The motivation for the topic in general arises from the fact that, even though numerous methods for image restoration exist, the proposed method could lead to a restoration result that is better suited to a particular field, or even more visually appealing than other restoration techniques. The motivation for the use of a neural network to perform the restoration arose from sheer curiosity as to how well a neural network would be able to reverse the process of convolution by analysis of the inputs and outputs of convolution. [1,2,3]
Image restoration can be divided into two main classes. Classical restoration techniques assume that the PSF (point spread function) is already known and use simple algorithms to restore the image. The other class, blind deconvolution, is the more realistic kind, where the PSF is not known a priori and both the PSF and the restored image must be estimated from only the blurred image and, in some cases, knowledge of the imaging system. The classical methods can be further subdivided into algorithms that work in the frequency domain and those that work in the spatial domain. The blind class can similarly be subdivided: in the direct kind, which is the simplest approach, the PSF is first estimated and then used to restore the image with some classical or emerging technique, while the indirect approach generally uses iteration to simultaneously restore the image and identify the PSF. The disadvantage of indirect approaches is that they are computationally intensive and do not always converge. [2,3]
In most cases one PSF is used to restore the entire image; for images that contain different levels of out-of-focus or motion blur, this produces sub-optimal results. In the proposed approach (which is restricted to out-of-focus blurs), a separate PSF is identified for different parts of the image, segmented by the user. In some approaches, genetic algorithms and other automatic object-detection techniques are used to detect particular objects separately from the background; other techniques detect the degree of out-of-focus blur in every part of the image and use these results to segment the image automatically. These sub-segments of the image are then treated as individual images, restored, and recombined.
When a direct approach is used to restore a segment of an image, or the entire image, a method for identifying the blur function is required. Many approaches assume that the blur function can be represented in a parametric form, which is what is assumed in the proposed technique. Blur identification then consists of estimating the particular parameter or parameters. In the case of out-of-focus blur this parameter is the radius of the circle of confusion, whereas in the case of motion blur the direction and length of the blur are the two parameters describing it. Other techniques estimate the blur function by treating it as a matrix of coefficients, where identification involves estimating every coefficient within the matrix.
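For the out-of-focus case, the parametric form is commonly taken to be a uniform disk whose single parameter is its radius, so identification reduces to estimating one number. A minimal sketch of this disk model (the radius value is illustrative):

```python
import numpy as np

def disk_psf(radius):
    """Parametric out-of-focus PSF: a uniform disk of the given radius,
    normalized to sum to 1 so that mean intensity is preserved."""
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    h = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return h / h.sum()

h = disk_psf(3.0)
print(h.shape)  # (7, 7)
```

A coefficient-matrix representation, by contrast, would require estimating every entry of such an array individually.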
Neural networks are an emerging method for image restoration in which processing relies on a large number of neurons, all linked by synaptic weights, to achieve good performance. They are well suited to situations where many hypotheses need to be tested in parallel and high computational power is required. In the field of image restoration, neural networks have been implemented both as forms of classical restoration techniques, where the PSF was known prior to restoration, and as indirect blind deconvolution, where the image is restored without knowledge of the PSF.
In most cases neural networks are employed to restore the image by minimizing a cost function. This iterative form of restoration is well suited to a neural network's learning abilities, as the process is not only efficient but produces high-quality results.
The first attempt at image restoration with a neural network was by Zhou et al., who performed the classical iterative approach of constrained least-squares restoration with a Hopfield neural network. This network was optimized by Paik and Katsaggelos. These two basic approaches were criticized because the same compromise between blur removal and noise suppression was made for all areas of the image, when in reality different areas, depending on their texture, require more or less blur removal, which results in less or more noise suppression. Various techniques that extend the basic Hopfield structure to cater for these features were therefore developed. Other approaches perform restoration by minimizing different cost functions to reduce the blurring at edges that the basic approach produces. One approach makes use of multilayer neural networks to detect edges in the blurred image and incorporates the knowledge of edges into the cost function.

Multilayer neural networks have also been used to perform the actual restoration. One method trains a multilayer perceptron with supervised learning, using images of blurred concentric circles as the input images and the unblurred image as the target; this approach assumes that the images to be deblurred possess the same degradations as those present in the training images. Another technique trains the network on a percentage of all possible grey-scale images of a particular size and grey-scale range. The training images were degraded with the same degradation as the image to be restored, and due to the structure of the network and the way it was designed, training becomes infeasible for realistic images.
Recently a multilayer neural network was implemented using morphological neurons to restore color images by applying the principles of mathematical morphology for grey-level image filtering to each spectrum of the image (thus treating it as a grey-level image). Mathematical morphology enables the selection of required elements from an image, and the removal of unwanted elements such as noise, by making use of a structuring element, which is itself a simple image.
Some approaches identify the blur parameters and the restored image simultaneously. In one approach a self-organizing neural network minimizes an error function in order to identify both the blur function and the restored image. The basis of this neural network, and of others that function in a similar fashion, is the fact that an image can be modeled as an ARMA (auto-regressive moving average) process, where the auto-regressive part determines the image model and the moving-average part determines the blur. Blind image deconvolution then becomes an ARMA parameter estimation problem, which has been solved via a number of approaches that all have higher costs than neural networks.
1.3 Aims and Objectives
When a camera autofocuses, there will generally still be parts of the image lying outside the depth of field, and these will be out of focus; such areas can only be restored with what is known as digital auto-focusing. This project aims to provide the user with a method to bring back into focus those areas that were compromised when the image was taken.
This project should thus provide a system that enables the user to segment the image into areas to be restored, representing objects, or the background itself, at different distances from the lens. The system should be able to reduce the blur of the various segments, resulting in an image that is completely in focus.
The main objective of the proposed system is the investigation of a technique for the blind deconvolution of images with different levels of out of focus blur consisting of the five steps described in the method section below.
The proposed method consists of five main steps outlined below:
- Segmentation of the image by the user
- Identification of the blurring function
- Training of the neural network/s
- Restoration of each segment
- Recombination of the restored segments
Using a graphical interface, the user will be able to select regions of an image that should be treated as regions with different degrees of uniform out-of-focus blur when compared to the rest of the image.
Once segmentation is complete, a kernel (point spread function) will be estimated from every degraded segment. This kernel will be as close as possible to the kernel which, when convolved with the ideal undegraded image, yields a result equal to the observed degraded image.
A number of training images will then be degraded using the degradation function obtained from the current segment being considered. A neural network will be trained by using the degraded training images and original undegraded training images.
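The degrade-then-train step can be sketched as follows. Everything here is an illustrative stand-in rather than the final design: a box blur stands in for the estimated PSF, and a single linear layer trained on (blurred patch, original centre pixel) pairs stands in for the multilayer network:

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(img, k=3):
    """Stand-in for degradation with the estimated PSF (a k x k box blur)."""
    p = k // 2
    pad = np.pad(img, p)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

# Step 1: degrade a set of training images with the (estimated) PSF.
clean = rng.random((8, 32, 32))
blurred = np.stack([box_blur(im) for im in clean])

# Step 2: build (blurred patch -> original centre pixel) training pairs.
P, half = 5, 2
X, y = [], []
for b, c in zip(blurred, clean):
    for i in range(half, 32 - half, 4):
        for j in range(half, 32 - half, 4):
            X.append(b[i - half:i + half + 1, j - half:j + half + 1].ravel())
            y.append(c[i, j])
X, y = np.array(X), np.array(y)

# Step 3: fit a single linear layer by gradient descent, a minimal
# stand-in for training the network on degraded/original pairs.
w = np.zeros(P * P)
for _ in range(500):
    w -= 0.1 * X.T @ (X @ w - y) / len(y)

mse = np.mean((X @ w - y) ** 2)
print(mse < np.mean(y ** 2))  # True: training reduced the prediction error
```

The real system replaces the linear layer with the trained multilayer network and the box blur with the PSF identified for the current segment.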
For images blurred by an actual camera rather than synthetically, in order to cater for the noise present, the neural network will be trained on images having the same radius of out-of-focus blur taken by the same camera.
Training of the neural network can also be improved by selecting the particular category of images that the image to be deblurred falls under. For example, if the image to be restored is a text document, the user will indicate this, and blurred text documents will be used during the training of the neural network.
Through trial and error the best way to deal with the restoration of color images will be determined, possible ways include:
- Converting the color image into an HSI (hue, saturation, intensity) representation, applying the restoration technique to the intensity component, which is equivalent to the grey levels of the image, and then converting it back to RGB form.
- Considering each separate spectrum of the RGB image, treating it as a grey level image, performing the restoration on each spectrum and combining the results to produce the restored color image.
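The second option can be sketched in a few lines; here `restore_gray` is a placeholder (the identity function) standing in for the actual per-segment restoration:

```python
import numpy as np

def restore_color(rgb, restore_gray):
    """Treat each RGB channel as a grey-level image, restore it
    independently, and restack the channels into a color image."""
    return np.stack([restore_gray(rgb[..., c]) for c in range(3)], axis=-1)

img = np.random.default_rng(0).random((8, 8, 3))
out = restore_color(img, lambda ch: ch)  # identity stands in for the deblurring step
print(out.shape)  # (8, 8, 3)
```

The HSI option has the same structure but applies the restoration to the intensity plane only, after a color-space conversion.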
Once the network has converged to a reasonable degree of accuracy the network will be used to restore the current segment. This process of training and restoration will be repeated for every segment in the image.
The final restored image will be produced by combining all the restored segments into one image.
Evaluating the results obtained and assessing the quality of the restored images is an entirely separate field of image processing. It can be subdivided into quality assessment when the original unblurred image is at hand and quality assessment when no target image is available (Blind image quality assessment). 
In this case the target image is available for evaluation, and thus quality assessment is the simpler of the two areas.
The performance of the neural network for blind image deconvolution is evaluated, as is done in various works in the literature, by computing the following two meaningful measures:
- Normalized Mean Squared Error between the restored image and the undegraded image.
- Improvement in signal-to-noise ratio (ISNR) of the restored image over the degraded image, measured against the undegraded image.
Both these values will be computed and compared to the values obtained by another restoration approach.
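Under their commonly used definitions, both measures are a few lines of code; the images below are random stand-ins used only to exercise the functions:

```python
import numpy as np

def nmse(original, restored):
    """Normalized mean squared error (lower is better)."""
    return np.sum((original - restored) ** 2) / np.sum(original ** 2)

def isnr(original, degraded, restored):
    """Improvement in signal-to-noise ratio in dB (higher is better);
    positive when the restored image is closer to the original than
    the degraded image was."""
    return 10.0 * np.log10(np.sum((original - degraded) ** 2)
                           / np.sum((original - restored) ** 2))

rng = np.random.default_rng(0)
f = rng.random((16, 16))
g = f + 0.10 * rng.standard_normal(f.shape)   # degraded stand-in
fr = f + 0.02 * rng.standard_normal(f.shape)  # restored stand-in, closer to f
print(isnr(f, g, fr) > 0)  # True: the "restoration" improved on the degraded image
```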
However, as stated in , human evaluation is believed to be the best judge; thus, as further evaluation, the following technique, consisting of two general steps, will be used in order to show that restoration is in fact taking place:
1) Blurred images of faces and/or text segments are used as follows:
- A record of the number of words correctly identified by unbiased individuals from the blurred text fragment is kept.
- A record of the number of blurred faces correctly recognized by the same individuals is kept.
2) The faces and/or text segments used above are restored with the proposed technique and used as follows:
- A record of the number of words correctly identified from the restored images is kept.
- A record of the number of faces recognized correctly is kept.
A significant increase in the number of words and faces recognized is a simple way of showing that the technique does in fact remove blur from images.
When comparing the proposed technique to other methods two approaches can be taken:
- In order to compare the proposed technique, as a form of blind image deconvolution, with other standard blind deconvolution techniques, the degraded images are restored using the blind deconvolution functions available in Matlab, and the number of correctly recognized words and faces is compared to the results obtained by the proposed method.
- Similarly, in order to compare solely the restoration step of the proposed technique, the blurred images, together with the PSF obtained by the PSF identification algorithm, are deblurred using standard classical image restoration techniques available in Matlab. The same approach as in the step above is used for comparison between the two.
The image segmentation tool will be evaluated in the following way:
1) Images consisting of text fragments are blurred in such a way that different levels of out-of-focus blur (representing objects at different distances from the lens) are present in one image.
2) These images are segmented, restored, and recombined into one image.
3) The count of correctly recognized words in the restored images is recorded.
4) For every test image, every sub-image that was blurred with a different level of blur is extracted and restored individually, and the number of words correctly recognized in the restored sub-image is recorded.
5) For every test image, the total number of correctly recognized words is computed by summing the counts of the individual sub-images.
6) For the segmentation tool to have worked correctly, for every image the count computed in step 3 should equal the count computed in step 5.
Another way of confirming the relevance of the segmentation technique is to attempt to restore the images consisting of multiple regions of different out-of-focus blur with a standard blind deconvolution technique that estimates one spatially invariant blur function for the entire image, record the count of correctly recognized words, and compare it to the counts obtained for the images restored with the proposed segmentation approach.
The PSF identification tool will be evaluated as follows:
- 100 images will be synthetically blurred with out-of-focus blurs of known radii.
- The proposed technique will be used to determine the radii and the returned results recorded.
- The total percentage error between the computed radii and the actual radii will be computed and compared to other known PSF identification techniques.
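The error measure above amounts to the following computation; the radii and the identifier outputs shown here are hypothetical placeholders, not actual results:

```python
import numpy as np

true_r = np.array([2.0, 3.0, 5.0, 8.0])  # known radii of the synthetic blurs
est_r = np.array([2.1, 2.8, 5.2, 7.8])   # hypothetical identifier outputs
pct_err = 100.0 * np.abs(est_r - true_r) / true_r
print(round(pct_err.mean(), 2))  # 4.54
```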
The main deliverable will consist of a Matlab-based application which provides an interface whereby the user can perform the necessary segmentation and initialize the restoration process, leading to the restored image being displayed for analysis and comparison.
2. Work Plan
- PSF Identification tool
- Neural Network
- Testing of combined tools
- Image Segmentation tool
- Combination of sub components
- Testing of overall System
- Evaluation of proposed system
In this period, technologies dealing with blur removal, neural networks, and blur identification techniques implemented to date will be identified and examined in order to gain insight into their operation.
This phase revolves around determining which technique will be used to identify the blurring function, as well as designing the tool based upon previously implemented methods for PSF identification. Once the technique has been finalized and designed, it will be implemented and tested on a variety of artificially blurred images as well as actual out-of-focus images.
This is the main stage of the project and includes identifying the actual structure of the neural network, the learning algorithms to use, and the training data. Once a suitable network has been identified through a process of repeated implementation and testing, it will be used as the final neural network architecture and will be rigorously tested on artificially degraded images with precisely known PSFs.
The combination of the PSF identification tool and the implemented neural network will be tested on both artificially blurred images and actual blurred photos. This will be done by first identifying the PSF using the PSF identification tool and then using this PSF to train the neural network.
This phase involves designing and implementing a tool which will enable the user to segment an image into different areas depending upon their distance from the lens.
This is the phase that brings all the components together to produce the final application, where a user will be able to subdivide an image and deblur it using both the PSF identification tool and the implemented neural network.
This stage involves the writing of the documentation, which will take place in parallel with the last few stages of development.
This testing will be based upon all the prior testing of the sub-components. The system will be tested on a number of out-of-focus images as well as simpler artificial images. Any small faults found in this stage will be used to modify the existing solution, whereas larger ones will be used to identify future improvements to the system.
In this stage the entire system will be evaluated as a whole and further improvements will be proposed. The results of the system will be evaluated as described in section 1.4 and, if possible, the PSF identification and restoration will be compared to other available techniques.