CHAPTER 1

INTRODUCTION

1.1 Background

The field of computer graphics is concerned with generating and displaying three-dimensional objects in a two-dimensional space (e.g., the display screen). Whereas pixels in a 2-D graphic have the properties of position, color, and brightness, a 3-D pixel adds a depth property that indicates where the point lies on an imaginary Z-axis.

A flat 2D image does not attract much attention, but the third dimension tends to make an impression. That is why 3D images are used so successfully and effectively in advertising and entertainment to attract viewers: 3D advertisements are more memorable than ordinary flat ones. One of the most impressive features of the human visual system is our ability to infer 3D shape information from a single photograph.

Many real-world objects, such as people, have complex shapes that cannot easily be described by the polygonal representations commonly used in computer graphics. Creating 3D images used to be a tedious process that involved special equipment (a 3D camera or 3D scanner, for example).

This dissertation proposes a 2D to 3D image converter that needs only one picture as input, with no additional equipment such as a 3D camera, a 3D scanner, or anaglyph 3D glasses. It uses a threshold to distinguish the skin color of the human face in the photo and then converts the face to 3D, so that users can see their photo in 3D form easily and effectively. Some results may be incorrect owing to similarity between the colors of the face and the background: if the background color imitates the skin color, the face and the background are blended together in the output.

1.2 Problem Statement

Creating a 3D image from a 2D image is still a difficult and time-consuming process, often requiring an expensive camera, a 3D scanner, or anaglyph 3D glasses, which can cost more than a hundred thousand Malaysian Ringgit.

1.3 Project Objective

The objective of this work is to produce a converter system with which the user can convert a 2D photo into 3D form using only one image as input, without special equipment or a large number of photographs.

1.4 Project Scope

The scope of the project focuses on a frontal human face picture of 4 x 6 cm, and assumes that the picture is captured head-on (at 90 degrees) under good lighting conditions; that is to say, the light must be positioned above the person taking the picture.

1.5 Target Users

There are three target users for the application as listed below:

  • Students: Assuming this is a prototype, students can most probably use it as a reference for their own future plans.

  • Researchers: Researchers will be able to visualize the fundamental concept of this project and use it as their initial experiment.

  • Normal users: Any user can use this system for fun.

1.6 Thesis Organization

In CHAPTER 1: INTRODUCTION, a brief description of the system is presented, followed by the problem statement and the objective of the system. The outline of the rest of the thesis follows.

In CHAPTER 2: LITERATURE REVIEW, 2D to 3D image conversion research, software, and related work are critically reviewed.

In CHAPTER 3: METHODOLOGY, the system design and implementation processes used to implement this project are covered.

In CHAPTER 4: RESULTS, the output of the system is presented in detail.

In CHAPTER 5: DISCUSSION, the results are discussed with reference to the system objective and performance.

In CHAPTER 6: CONCLUSION AND FUTURE WORK, the advantages and constraints of the system are presented. Besides that, a few suggestions for further development of the system are stated, with illustrations of possible uses of the 2D to 3D converter and suggestions for future work.

CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

This chapter gives an overview of 2D to 3D image conversion research and summarizes the literature and related work of the study.

2.2 Background information

Johan [11] proposed multiview 3D displays, which require no special glasses and can be viewed by several people at the same time. What a viewer sees on such a display depends on the viewing angle; because the two eyes of a viewer have different viewing angles, each sees a different image on the display. This can be used to create a 3D experience by showing different views of a scene.

Johan [11] created these views by taking photographs of a human face, using a 2D image in combination with its depth (pixel intensity). From the image and the depth, new views can be reconstructed.

2.2.1 Stereoscopic vision

Humans and many animals have two eyes which look (roughly) in the same direction. Each of our eyes sees the world from a different point of view, and therefore the images received by the two eyes are slightly different. In the brain these two images are combined into a single impression that carries a perception of depth.

This type of vision system is called binocular (or stereoscopic) vision. In nature it is common amongst many of the hunting animals, but also amongst primates which need to navigate through complex three-dimensional environments. It is one of the methods used by humans to perceive depth and to estimate distances.

Another type of vision system is monocular vision; it is common amongst many of the hunted animals in nature. Most of them have eyes on both sides of their head to get a wider field of view. This reduces the chance that a predator can sneak up on them. The image received with each eye is processed separately in the brain [11].

2.3 Depth perception

Stereoscopic vision is the major mechanism of depth perception, but it is not the only one. Some other mechanisms are [4]:

Motion parallax: When you move your head from side to side, objects that are close to you move relatively faster than objects that are further away.

Interposition: An object that is occluding another is in front of that object.

Light and shade: Shadows give information about the three-dimensional form of an object. If the position of the light source is known the shadow can also provide information about the position of the object itself.

Relative size: Objects of the same size at different distances are perceived with different sizes by the eye. This size-distance relation gives information about the distance of objects of known size. In a flat image this effect can be recreated by using a perspective projection.
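
As a concrete sketch of this relation (standard pinhole-projection geometry, not a formula from the cited sources), an object of physical size S at distance D from a camera with focal length f projects to an image of size

    s = f * S / D

so doubling the distance halves the projected size, which is exactly the cue that lets a viewer estimate the distance of an object of known size.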

Distance fog: Objects far away appear hazier.

All these mechanisms are used together by our brain for depth perception.

2.4 3D displays

When looking at a traditional display both eyes view the same image. Although this flat image can contain almost all depth cues, it is missing the most important one: the stereoscopic effect. This effect can be recreated by providing both eyes with a different image. A display that does this is called a stereoscopic display [11].

2.4.1 Screen disparity

The distance between the left and the right corresponding image points is called screen disparity, and may be measured in millimeters or pixels. It can be classified into three categories (the geometry relating disparity to depth is sketched after the list):

  • Positive disparity - objects appear behind the screen (figure 2.2).
  • Zero disparity - objects appear on the plane of the screen.
  • Negative disparity - objects appear in front of the screen (figure 2.3).
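
The geometry behind these categories can be sketched as follows (standard stereoscopic-display geometry, not taken from [11]). For eye separation e, viewing distance D to the screen, and screen disparity d, similar triangles give the apparent depth z of the point relative to the screen plane:

    z = (d * D) / (e - d)

Positive d yields z > 0 (behind the screen), d = 0 puts the point on the screen plane, and negative d yields z < 0 (in front of the screen).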

2.5 Known Solutions

In this section, a short description of known methods for 2D to 3D conversion is presented as a way of establishing a framework for this research [15].

In general, the process of extracting 3D information from 2D images is performed in two steps. First, image registration is performed [6, 24]. This operation takes place entirely in the 2D image domain; its purpose is to extract pixel information from the images, namely the position and value of each pixel, and to relate corresponding pixels across images. In the second step, the 3D information is recovered from the locations and intensities of the matched pixels.

An example of a working system for extracting 3D information from 2D images was introduced by Philip H. S. Torr [21]. Torr's work concentrated on motion segmentation and on determining the maximum amount of information that can be gained from two or three flat images. 3D information is extracted from large sequences of images (movie frames). The differences between the images were small enough to permit image registration using the Harris corner detector and cross-correlation over 9 × 9 pixel windows. To extract 3D information, Torr estimated the fundamental matrix using an algorithm based on RANSAC [15].
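
As a rough illustration of that pipeline, the following sketch uses OpenCV as a stand-in (this is not Torr's original code; pyramidal Lucas-Kanade tracking replaces the 9 × 9 cross-correlation matcher for brevity, and all parameter values are assumptions):

    // Sketch: Harris-style corners + tracking + RANSAC fundamental matrix.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat estimateFundamental(const cv::Mat& frameA, const cv::Mat& frameB) {
        cv::Mat grayA, grayB;
        cv::cvtColor(frameA, grayA, cv::COLOR_BGR2GRAY);
        cv::cvtColor(frameB, grayB, cv::COLOR_BGR2GRAY);

        // Detect Harris corners in the first frame (parameters are assumptions).
        std::vector<cv::Point2f> cornersA;
        cv::goodFeaturesToTrack(grayA, cornersA, 500, 0.01, 8, cv::Mat(), 3,
                                /*useHarrisDetector=*/true, 0.04);

        // Track the corners into the second frame; Lucas-Kanade stands in for
        // the 9 x 9 cross-correlation matching used by Torr.
        std::vector<cv::Point2f> cornersB;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(grayA, grayB, cornersA, cornersB, status, err);

        std::vector<cv::Point2f> ptsA, ptsB;
        for (std::size_t i = 0; i < status.size(); ++i) {
            if (status[i]) { ptsA.push_back(cornersA[i]); ptsB.push_back(cornersB[i]); }
        }

        // Robust fundamental-matrix estimation with RANSAC, as in Torr's approach.
        return cv::findFundamentalMat(ptsA, ptsB, cv::FM_RANSAC, 3.0, 0.99);
    }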

2.6 Image Processing Terminology

This section gives an introduction to the basic definitions of the terms associated with this software.

Definition 1. Two-dimensional (2D) refers to objects that are constructed on two planes (X and Y, height and width, row and column, etc.). Two-dimensional structures are also used to simulate 3D images on screen [19].

Definition 2. Three-dimensional (3D) describes an image that provides the perception of depth [19, 16].

Definition 3. Pixel. A pixel (also referred to as an image element, picture element, or pel) is the smallest component (a point) of a digital image. A pixel has a fixed location and a color that can be modified [8, 9, 13].

In a digital image, a pixel is an element with a numerical value that represents a grayscale intensity (256 possible values) or an RGB intensity (256^3 = 16,777,216 possible values).
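
As a small illustration of these value ranges (a sketch using standard conventions, not code from the cited sources; the BT.601 luminance weights are one common choice for grayscale conversion):

    #include <cstdint>

    // Pack an RGB pixel into 24 bits: 256^3 distinct combinations.
    std::uint32_t packRGB(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
        return (std::uint32_t(r) << 16) | (std::uint32_t(g) << 8) | b;
    }

    // Derive one of 256 grayscale levels with the ITU-R BT.601 weighting.
    std::uint8_t toGray(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
        return std::uint8_t(0.299 * r + 0.587 * g + 0.114 * b);
    }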

2.7 Multiple Images (Manual Reconstruction)

2.7.1 Façade, introduced by Debevec et al. [3], models and renders existing architectural scenes from a sparse set of still photographs by combining geometry-based and image-based techniques. It has two components. The first is a photogrammetric modeling method that facilitates recovery of the basic geometry of the photographed scene; this approach is effective, convenient, and robust because it exploits constraints characteristic of architectural scenes. The second is a model-based stereo algorithm that recovers how the real scene deviates from the basic model. By making use of the model, the stereo technique robustly recovers accurate depth from widely spaced image pairs. Consequently, this approach can model large architectural environments with far fewer photographs than other image-based modeling approaches.

  • Not fully automatic: the user inputs a blocky 3D model; using blocks leads to fewer parameters in architectural models.
  • The user marks corresponding features on the photographs.
  • The computer solves for block size, scale, and camera rotation by minimizing the error of the corresponding features.
  • Textures from the photographs are reprojected onto the reconstructed model.

Figure 2.4: (a) A photograph of the Campanile, Berkeley's clock tower, with marked edges shown in green. (b) The model recovered by the photogrammetric modeling method. (c) A synthetic view of the Campanile generated using the view-dependent texture-mapping method.

  • Façade is one of the most successful systems for modeling and rendering architectural buildings from photographs. Unfortunately, it involves considerable time and effort from the user in decomposing the scene into prismatic blocks, followed by estimation of the pose of these primitives. However, the high quality of the results obtained with the Façade system has encouraged others to design interactive systems.

2.7.2 PhotoBuilder, introduced by Cipolla et al. [1], recovers 3D models from uncalibrated images of architectural scenes, but at the expense of considerable user interaction and a specific domain of applicability. The pipeline proceeds in four steps:

  1. The user selects a set of image edges which are either parallel or perpendicular in the world. These primitives are precisely localized in the image using image gradient information.
  2. The next step concerns the camera calibration: the intrinsic parameters of the camera are determined for each image. This is done by determining the vanishing points associated with parallel lines in the world; three mutually orthogonal directions are exploited to give three intrinsic parameters and the orientation of each viewpoint (see the note after this list).
  3. A projection matrix for each viewpoint is computed from the image edges and vanishing points. These matrices are further refined by exploiting epipolar constraints and additional matches to give the motion (a rotation and a translation) between the viewpoints.
  4. The last step consists of using these projection matrices to find more correspondences between the images and then computing 3D textured triangles that represent a model of the scene.
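
For reference, two standard projective-geometry facts behind steps 2 and 3 (textbook results, not quoted from [1]): in homogeneous coordinates, the vanishing point of a direction can be computed as the intersection of two image lines along that direction, and vanishing points of mutually orthogonal directions constrain the camera intrinsics K through the image of the absolute conic:

    v = l_1 \times l_2,  \qquad  v_i^\top \, \omega \, v_j = 0 \ (i \neq j),  \qquad  \omega = (K K^\top)^{-1}

With three mutually orthogonal vanishing points, these constraints suffice to recover the three intrinsic parameters mentioned in step 2.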

2.7.3 Ziegler et al. [23] introduced 3D reconstruction using labeled image regions, reconstructing 3D scenes from a set of images. The user defines a set of polygonal regions with corresponding labels in each image using familiar 2D photo-editing tools. The reconstruction algorithm computes the 3D model with maximum volume that is consistent with the set of regions in the input images. The algorithm is fast, uses only 2D intersection operations, and directly computes a polygonal model.

Meanwhile, the 3D model creation approach introduced by Eric M. and William B. (1995) requires only simple 2D photo-editing operations from the user. The input to the system is a set of calibrated images of a scene taken from arbitrary viewpoints. The user first identifies 2D polygonal regions in each image using simple segmentation tools, such as polylines and intelligent scissors [5]. Each region is assigned an ID, such that regions with corresponding IDs in two or more images are projections of the same object in the world. For example, images of a person could be segmented into head, torso, legs, and hands. Figure 2.6 shows an example of input images and the corresponding user segmentation.

The algorithm automatically computes the 3D geometric model that is consistent with the user's segmentation. The solution is unique in that it is the maximum-volume model that reproduces the user's input. The algorithm is fast and directly computes one or more watertight polygon meshes of the scene.

Figure 2.6: Left: example input photograph. Middle: user labeling of image regions. Right: 3D model reconstructed from the labeled images.

A fundamental problem in 3D reconstruction is assigning correspondences between points in two or more images that are projections of the same point in three dimensions. Other work uses pixels or object silhouettes to specify correspondences; this approach uses correspondences between arbitrary image regions.

2.8 Single Image (Manual Reconstruction)

2.8.1 Criminisi et al. [2] and Liebowitz et al. [12] introduced the creation of 3D graphical models of scenes from a limited number of images, i.e., one or two, in situations where no scene coordinate measurements are available. The methods employ constraints available from geometric relationships that are common in architectural scenes, such as parallelism and orthogonality, together with constraints available from the camera. In particular, by using the circular points of a plane, they offer the most accurate (but also the most labor-intensive) approach, recovering a metric reconstruction of an architectural scene from projective geometry constraints (figure 2.7).

Figure 2.7: (a) Merton College, Oxford. (b) and (c) Views of the 3D model created from the single image.

2.8.2 Zhang et al. [22] introduced a method that models free-form scenes by letting the user place constraints, such as normal directions, anywhere on the image plane and then optimizing for the best 3D model that fits these constraints.

The method reconstructs free-form, texture-mapped 3D scene models from a single painting or photograph. Given a sparse set of user-specified constraints on the local shape of the scene, a smooth 3D surface that satisfies the constraints is generated. This problem is formulated as a constrained variational optimization problem. In contrast to other work in single-view reconstruction, this technique enables high-quality reconstruction of free-form curved surfaces with arbitrary reflectance properties. A key feature of the approach is a novel hierarchical transformation technique for accelerating convergence on a non-uniform, piecewise continuous grid. The technique is interactive and updates the model in real time as constraints are added, allowing fast reconstruction of photorealistic scene models. The approach is shown to yield high-quality results on a large variety of images.

2.8.3 Takayuki et al. [20] introduced a 3D modeling system for the human face, building a 3D face from 2D facial images captured by surrounding 2D cameras, with which the color texture and surface shape information of the face are synchronously measured.

2.8.4 Frédéric et al. [7] introduced a method to synthesize realistic facial expressions from photographs, creating photorealistic textured 3D facial models from photographs of a human subject and creating smooth transitions between different facial expressions by morphing between different models. They employed a user-assisted technique to recover the camera poses corresponding to the views, as well as the 3D coordinates of a sparse set of chosen locations on the subject's face.

2.9 Summary

In this chapter, several methods were presented and critically discussed. It can be seen that most of the work done in this field has been on the architectural side. Light has been shed on the necessity of using more than one image to get the 3D form. Researchers found that the user must interact with the system to get better results. Some other research worked on human faces and encountered the same problem, namely that the user has to interact with the system.

In the following chapter, the methodology of this project is presented, shedding light on the system design, the implementation of the system, and the system framework.

CHAPTER 3

METHODOLOGY

3.1 Introduction

The methodology of this project covers the system design and implementation processes used to implement the project. The system design subsection covers the work or process flow of the project and explains the goals and function of each subsystem. The implementation processes subsection covers the methods, techniques, and approaches of the 2D to 3D image conversion process.

3.2 2D to 3D image converter methodology

The system development methodology used for the 2D to 3D image converter is the System Development Life Cycle (SDLC), which consists of specific traditional steps that contribute widely to the development of a new system. Development using the SDLC involves four phases:

  1. Planning
  2. Analyzing
  3. Designing and producing
  4. Implementing and Testing

3.2.1 First phase: Planning

There are two stages in this phase, namely project identification and project selection. After reviewing several relevant articles, the project to be carried out was identified: a 2D to 3D image converter system.

After identification of the project, a review of available software and applications was carried out, and the specification of the 2D to 3D image converter was determined. It was determined that the size of the system should be small and that the project duration would be six months. It was also decided that the user scope should be students, researchers, and normal users.

3.2.2 Second phase: Analyzing

After the planning phase, the analyzing phase was carried out. This was the phase in which the 2D to 3D image converter was understood and in which it was determined how each component of the converter would be used and integrated.

These are the stages involved in the analyzing phase:

Problem Analysis

Available techniques were studied to gain information and inspiration for developing the 2D to 3D image converter. Existing algorithms and methods for 2D to 3D conversion were studied. However, no existing 2D to 3D image converter system for the human face that uses one picture as input was available for testing.

3.2.2.1 Requirements determination

From the problem analysis, the minimum hardware and software requirements for developing the 2D to 3D image converter were determined as follows.

• Hardware used during project development:

  • CPU type : Intel Pentium 4, 2400 MHz
  • System Memory : 512 MB (PC3200 DDR SDRAM or above)
  • Monitor
  • Mouse and Keyboard
  • Webcam

• Software used during project development:

• Programming

  • Microsoft Visual Basic 6.0
  • Microsoft Visual C++ with OpenGL

• Operating System - Microsoft Windows XP Service Pack 2

3.2.2.2 Research on required knowledge

Techniques for 2D to 3D image conversion were reviewed and studied. In addition, Visual Basic and Visual C++, the main programming languages for the 2D to 3D image converter, were learned.

3.2.3 Third phase: Designing and Producing

The third phase for the development of 2D to 3D image converter is designing and producing. During this phase, real production was carried out.

3.2.3.1 Process Modeling

A Graphical User Interface (GUI) was developed. The GUI provides interactivity between users and the 2D to 3D image converter. It contains buttons through which the various functions of the project can be invoked. The menu bar contains three drop-down menus. The first is the File menu, which has only the 'Open' and 'Exit' commands. The second is the Converter menu, with the commands 'Grayscale', 'Show face', and '2D to 3D'. The 'Grayscale' command converts the color image into a grayscale image. The 'Show face' command shows the face obtained using the threshold value, i.e., the initial image of the face before it is converted into 3D form. The '2D to 3D' command is the final command, used to produce the final output of the 3D face. The third is the Help menu, through which the user can get information on how to use the software and some other information about the system.

3.2.4 Fourth phase: Implementing and Testing

The final phase for developing the 2D to 3D image converter is implementing and testing. This phase is essential, as it ensures the stability and functionality of the system. It includes four stages:

  1. Implementing
  2. Testing
  3. Applying changes
  4. Delivery

3.2.4.1 Implementing

Implementing is divided into two parts. The first part is to extract the three axes (x, y, and z); the second is to build the 3D face.

3.2.4.1.1 Extracting the three axes

To extract the value of each axis from the input image, we start by converting the input image from a color image to a grayscale image. After that, we remove the background of the image and keep only the human face by checking the whole image pixel by pixel to see whether each pixel value represents human skin or not. If a value is considered a skin tone, the position of that pixel is saved as the x and y coordinates and the intensity of the pixel is taken as the z coordinate, and the triple is written to a file (this file then contains the x, y, and z points). The threshold used to determine the skin value comes in two forms: an automatic threshold and a user-defined threshold.

The automatic threshold offers three preset ranges, as shown in Table 3.1; the Asian skin-tone range, for example, is between 140 and 210, and the remaining ranges are listed in the table.

The user-defined threshold lets the user set the lower and upper thresholds manually; both values must lie between 0, the lowest possible threshold, and 256, the maximum value for the upper threshold.
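
The following C++ sketch illustrates the extraction step described above under stated assumptions (the original system was written in Visual Basic 6; the image container, file name, and threshold values here are illustrative):

    // Illustrative sketch of the axis-extraction step, not the project's source.
    // Pixels whose grayscale value falls inside the skin-tone threshold range
    // contribute (x, y, intensity) as a 3D point; everything else is background.
    #include <cstdio>
    #include <vector>

    struct GrayImage {                       // hypothetical grayscale container
        int width, height;
        std::vector<unsigned char> pixels;   // row-major, one byte per pixel
        unsigned char at(int x, int y) const { return pixels[y * width + x]; }
    };

    // Writes one "x y z" line per skin pixel, with the intensity as z.
    void extractAxes(const GrayImage& img, int lower, int upper, const char* path) {
        std::FILE* out = std::fopen(path, "w");
        if (!out) return;
        for (int y = 0; y < img.height; ++y) {
            for (int x = 0; x < img.width; ++x) {
                unsigned char v = img.at(x, y);
                if (v >= lower && v <= upper)          // inside skin-tone range
                    std::fprintf(out, "%d %d %d\n", x, y, v);
            }
        }
        std::fclose(out);
    }

    // e.g. extractAxes(photo, 140, 210, "face_points.txt");  // Asian range, Table 3.1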

3.2.4.1.2 3D face building

This step is done using VC++ as the platform and OpenGL as the graphics library. After obtaining the data file that contains the x, y, and z points from the previous step, the 3D face can be built using the OpenGL library.
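
A minimal sketch of this step, assuming the "x y z" text file produced above and a fixed 640 x 480 input image (the file name, window size, and coordinate ranges are assumptions, not the project's actual source):

    // Minimal GLUT/OpenGL point-cloud viewer for the extracted face points.
    #include <GL/glut.h>
    #include <cstdio>
    #include <vector>

    struct Point3 { float x, y, z; };
    static std::vector<Point3> g_points;

    static void loadPoints(const char* path) {       // reads "x y z" per line
        std::FILE* in = std::fopen(path, "r");
        if (!in) return;
        Point3 p;
        while (std::fscanf(in, "%f %f %f", &p.x, &p.y, &p.z) == 3)
            g_points.push_back(p);
        std::fclose(in);
    }

    static void reshape(int w, int h) {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // Map image pixel coordinates straight onto the window; the intensity
        // values (0..255) used as depth fall inside the z clip range.
        glOrtho(0, 640, 480, 0, -256, 256);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

    static void display() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBegin(GL_POINTS);                  // draw the face as a point cloud
        for (const Point3& p : g_points) {
            float shade = p.z / 255.0f;      // reuse intensity as brightness
            glColor3f(shade, shade, shade);
            glVertex3f(p.x, p.y, p.z);
        }
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char** argv) {
        loadPoints("face_points.txt");       // hypothetical extraction output
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(640, 480);
        glutCreateWindow("3D face");
        glutReshapeFunc(reshape);
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }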

3.2.4.2 Testing

Testing plays an important role in system development: it ensures that the 2D to 3D image converter runs smoothly and is free of errors. After the system was developed, it was tested by respondents. The 15 respondents involved were postgraduate students with a computer graphics background, selected by systematic sampling: a list of postgraduate students enrolled in the Faculty of Computer Science, UPM, was obtained, and every second student on the list was selected. First the software was explained to the participants, who were then asked to use it. After using the software and applying all of its commands, the participants were asked to complete the questionnaire.

3.2.4.3 Applying changes

Changes were applied based on the testing results, in order to ensure that the 2D to 3D image converter performs better and remains free of errors.

3.2.4.4 Delivery

Once the 2D to 3D image converter was tested and able to run smoothly and free from errors, it was packed into one folder and the files were organized for easier understanding.

The 2D to 3D image converter is now ready to be delivered to the end user.

3.3 Summary

This chapter has presented the methodology of the system. We have explained the development life cycle, framework, design, and the implementation and testing of the 2D to 3D converter. In the following chapter, the results of the pilot study and questionnaire are presented.

CHAPTER 4

RESULTS

4.1 Introduction

This chapter explains the results and findings of the project in detail. The project was developed using Microsoft Visual Basic 6.0 and Microsoft Visual C++ with the OpenGL library. The result was then tested by 15 respondents, postgraduate students with a computer graphics background.

4.2 Executing the System

One of the requirements for executing the system is that the user must make sure the environment supports the required OpenGL functions. This can be done by copying the glut32.dll file into the C:\Windows\System32 folder before executing the main program of the system.

4.3 System Interface

The project interface consists of two windows: a main window and an output window.

4.3.1 Main window

The main window contains the control to open the input image; tools to draw an ellipse, paint, and fill; a grayscale button; a convert button to convert to 3D form; and a frame that displays pixel information such as the pixel position, the RGB colours and values, and the greyscale value at a given instant.

4.3.2 Output window

The output window consists of two windows: the help window and the result window. The help window's function is to guide the user in how to use the system with the keyboard or mouse.

The user can right-click on the output window to choose the rotation of the output about the x, y, or z axis and to maximize or minimize the 3D face. With the keyboard, the user can rotate the model automatically with the 'x', 'y', or 'z' key, or manually using the left and right arrows for x-axis rotation, up and down for y-axis rotation, and Page Up and Page Down for z-axis rotation. Besides that, the user can zoom the model in or out using the '+' or '-' key. Using the 'd' key, the user can switch the rotation direction between clockwise and counter-clockwise. The lowercase 's' key increases the rotation speed, and the capital 'S' key decreases it. To exit the system, the user simply presses the Escape key.
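
A sketch of how such controls map onto a GLUT keyboard callback (illustrative only, extending the viewer sketched in Chapter 3; variable names and step sizes are assumptions, and the arrow and page keys would be handled in a similar glutSpecialFunc callback):

    #include <GL/glut.h>
    #include <cstdlib>

    static float g_angleX = 0, g_angleY = 0, g_angleZ = 0;
    static float g_zoom  = 1.0f;
    static int   g_dir   = 1;        // +1 clockwise, -1 counter-clockwise
    static float g_speed = 1.0f;

    static void keyboard(unsigned char key, int /*x*/, int /*y*/) {
        switch (key) {
            case 'x': g_angleX += g_dir * g_speed; break;  // rotate about x
            case 'y': g_angleY += g_dir * g_speed; break;  // rotate about y
            case 'z': g_angleZ += g_dir * g_speed; break;  // rotate about z
            case '+': g_zoom *= 1.1f; break;               // zoom in
            case '-': g_zoom /= 1.1f; break;               // zoom out
            case 'd': g_dir = -g_dir; break;               // flip direction
            case 's': g_speed *= 1.2f; break;              // faster rotation
            case 'S': g_speed /= 1.2f; break;              // slower rotation
            case 27:  std::exit(0);                        // Escape quits
        }
        glutPostRedisplay();   // the display callback applies the rotations
    }
    // Registered with: glutKeyboardFunc(keyboard);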

The result window's function is to show the 3D face with the direction of rotation selected by the user. The background of the window is black, while the 3D face is drawn in a much brighter colour so that it stands out.

4.4 Findings

The findings were extracted by analysing the questionnaire, which consists of seven questions. It aimed at gathering users' comments regarding the application and the use of the system.

Fifteen respondents, postgraduate students with a computer graphics background, were involved in the survey and were asked for their opinion of the system.

4.4.1 Results for each question

4.4.2 Evaluation of the finding

The questionnaire was analyzed and evaluated. The results show that 100% of the respondents had a positive opinion of the system for the first four questions; it appears that all participants held the same view of the system with regard to those questions. With reference to question five, however, one of the respondents suggested that one step is enough to convert from 2D to 3D, i.e., that a single button could convert the 2D image to 3D form. Nevertheless, the reason for providing two buttons (the grayscale button and the convert button) is that some necessary processes must take place before converting. One of these is viewing the face in 2D before converting it into 3D; the two buttons are used to check whether there is a similarity between the skin-tone color and the background color, to avoid mixed colors in the output.

When the participants were asked about their satisfaction with the results in question six, only one respondent (7%) stated that the output is unrealistic. This is true, but addressing it is left to future work.

Regarding improvement of the system in question seven, most of the students stated that the software should be improved with better face detection, e.g., taking only the human face and excluding the background or other regions close in color to human skin. Another suggestion was that the conversion process should cover the whole human body. A further suggestion, for better and more realistic output, was to use surface fitting.

4.5 Summary

This chapter has presented the results of the system. We have explained how to execute the system, the main interface window, and the output window. The findings obtained from the questionnaire were also discussed and evaluated. In the following chapter, the 2D to 3D converter is critically discussed.

CHAPTER 5

DISCUSSION

The advantage of the system is that the user can easily view and use the 2D to 3D conversion process. The interface was designed with simple commands to help users operate the system.

This system was developed without a dedicated face-detection component; instead, the human face is separated from the background of the picture by the threshold value.

Besides, from the findings, all the respondents agreed that the system is user-friendly, effective, interesting, and fun to use, and that the results are good; it only needs to be improved to cover the whole body, e.g., a 3D body scan.

CHAPTER 6

CONCLUSION AND FUTURE WORKS

6.1 Conclusion

The 2D to 3D converter system was finally developed successfully. It needs only one photo as input: any working webcam or digital camera can be used to take an image of a face, and the conversion is performed by pressing the convert button. The result is then shown automatically.

This project provided a concept for converting 2D to 3D. It is well suited to the public and to limited budgets, because no expensive device is needed as part of the system; the user can take the input image with a normal webcam or digital camera.

Users can see their photo not only in 2D form but also in 3D form, although some results may be incorrect.

6.2 System Constraints

Any system designed to meet the standards of the modern world will inevitably run into constraints. The 2D to 3D converter has a few unavoidable constraints, as follows:

  • Size of image: this software only deals with 4x6 cm images.
  • A background color that resembles the skin tone may produce a faulty output, as that background area is treated as part of the face.
  • Cropping the face.

However, these constraints, such as the image size, can be addressed in further research that deals with more complicated images, such as images of the whole human body, and with input backgrounds from any environment that is not similar in color to the skin of the face in the image. The latter restriction exists because the 2D to 3D converter uses skin color as the criterion for face detection.

6.3 Future works

Due to the constraints of the system at hand, future work is needed to improve and enhance it. Future work might improve the system so that it can take the whole human body and convert it into 3D. It is also recommended to enhance the system so that it can perform a 3D body scan focused on the human face under certain conditions. Another suggestion for further work is to add surface fitting to the output to make it more realistic. The system might also be enhanced with face detection that extracts only the human face. Finally, it would be more practical to make the system work with all types of photos, e.g., indoor and outdoor images.

REFERENCES

  1. Cipolla, R., Robertson, D., & Boyer, E. (1999). Photobuilder - 3d models of architectural scenes from uncalibrated images. In IEEE Int. Conf. on Multimedia Computing and Systems, vol. I, 25-31.
  2. Criminisi, A., Reid, I., & Zisserman, A. (2000). Single view metrology. Int. Journal of Computer Vision 40, 2, 123-148.
  3. Debevec, P., Taylor, C., & Malik, J. (1996). Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In ACM SIGGRAPH 96, 11-20.
  4. Nick, H. (2003). 3D Display Systems. Retrieved February 12, 2008, from the Holliman web site: http://www.dur.ac.uk/n.s.holliman/Presentations/3dv3-0.pdf
  5. Eric, M., & William, B. (1995). Intelligent scissors for image composition. In Computer Graphics, SIGGRAPH 95 Proceedings, 191-198, August.
  6. Fitzpatrick, J., Hill, D., Calvin, R. (2000). Image Registration, Handbook of Medical Imaging, Vol. 2: Medical Image Processing and Analysis, Editors: Milan Sonka and J. Michael Fitzpatrick, SPIE PRESS Vol. PM80, Bellingham, USA, 449, 488-489, 499.
  7. Frédéric, P., Jamie, H., Dani, L., Richard, S., & David, S. (1998). Synthesizing realistic facial expressions from photographs. In SIGGRAPH 98 Conference Proceedings, 75-84. ACM SIGGRAPH, July.
  8. Gonzalez, R., Woods, R. (2002). Digital Image Processing, Second Edition, Prentice Hall, USA, 49-50, 52-54, 66, 116-118, 271-276, 289-302, 519, 539, 577-579, 587-591, 672-675.
  9. Heaton, K. (1994). Physical Pixels, M.Sc. Thesis, MIT.
  10. Jeffery, M. (2004). 2D to 3D conversion. Retrieved October 30, 2007, from the Design Presentation web site: http://www.designpresentation.com/2dto3dconversion.php3
  11. Johan, C. (2006). 3D graphics rendering for multiview displays. Unpublished master's thesis, Technische Universiteit Eindhoven, Eindhoven.
  12. Liebowitz, D., Criminisi, A., & Zisserman, A. (1999). Creating architectural models from images. In Proc. Eurographics, vol. 18, 39-50.
  13. Loog, M., Ginneken, B., Duin, R. (2004). Dimensionality reduction by canonical contextual correlation projections, T.Pajdla, J. Matas (Eds.) LNCS 3021, Springer, Berlin, 562-573.
  14. Lynch, T. (2005). 3-D graphics. Retrieved October 18, 2007, from webopedia web site : http://www.webopedia.com/TERM/3/3_D_graphics.html
  15. Maciej, B. (2007). 2D to 3D Conversion with Direct Geometrical Search and Approximation Spaces. Unpublished doctoral dissertation, University of Manitoba, Canada.
  16. Orlando, F. (2002). 3D Multimedia and Graphics. Retrieved February 18, 2008, from the Whatis web site: http://whatis.techtarget.com/definition/0,,sid9_gci211499,00.html
  17. Pardew, L., Seegmiller, D. (2004). Mastering Digital 2D and 3D Art. UK. Course Technology Ptr.
  18. Rachel, C. (2005). Vision 3D: What is Stereo Vision? Retrieved February 18, 2008, from the Optometrists Network web site: http://www.vision3d.com/stereo.html
  19. Roy, L. (1995). The Dictionary of Computer Graphics and Virtual Reality. Retrieved February 18, 2008, from the Edinburgh web site: http://homepages.inf.ed.ac.uk/rbf/GRDICT/grdict.htm
  20. Takayuki F., Hiroyasu K., Kouta F., Gorou F., Yoshiaki N., & Naoya I. (2001). 3D Modeling System of Human Face and Full 3D Facial Caricaturing. In IEEE Int. Conf. on Virtual Systems and Multimedia, vol.37, pp. 385.
  21. Torr, P. (1995). Motion Segmentation And Outlier Detection. Ph.D. Dissertation, University of Oxford, Engineering Dept., 1-7, 32-34, 242-243.
  22. Zhang, L., Dugas, G., Samson, J., & Seitz, S. (2001). Single view modeling of free-form scenes. In Computer Vision and Pattern Recognition, 990-997.
  23. Ziegler, R., Matusik, W., Pfister, H., & Mcmillan, L. (2003). 3d reconstruction using labeled image regions. In Eurographics Symposium on Geometry Processing, 248-259.
  24. Zitová, B., & Flusser, J. (2003). Image registration methods: a survey. Image and Vision Computing 21, 982.
