Three Dimensional And Virtual Reality Technology Computer Science Essay

Virtual reality (VR) is a technology that provides an interface between humans and computerized applications based on real-time, three-dimensional (3D) graphical worlds. Interaction with 3D graphics is an important element of virtual reality. There are three types of virtual reality: desktop, projection and immersive.

Firstly, desktop virtual reality allows the user to interact with and move through a virtual world via a personal computer. It displays the 3D virtual world on a personal computer without any special movement-tracking devices or equipment. Nowadays, many computer games can be used as examples of desktop virtual reality. In a game like The Sims 3 (Figure 1), the characters and objects are all 3D graphics, and various triggers and responsive characters make the user feel as though they are in a virtual world.

Figure 1: The Sims 3. The 3D human models live in the virtual world. [Source: http://www.tuaw.com/2009/02/06/the-sims-3-coming-to-mac-and-iphone-summer-2009/]


Secondly, projection virtual reality displays the virtual world as large-screen stereoscopic graphics, with the 3D image polarised from a pair of conventional video projectors. It plays a very important role in the real world, helping humans reduce dangerous work such as flight training: a pilot can use an airplane simulation system for aircraft practice. The picture below (Figure 2) shows projection virtual reality.

Figure 2: An example of projection virtual reality. [Source: http://www.converj.com/sites/converjed/2006/05/vr_rooms.html]

Lastly, immersive virtual reality is the most realistic form of virtual reality technology. It uses head-mounted displays and interactive gloves as its display and interaction equipment, as the picture (Figure 3) shows. The Polhemus Fastrak system tracks head and hand positions and feeds them into the 3D world, so the user feels more strongly that they are there, that they are part of the world. Immersive virtual reality exploits natural human skills such as limb movement, gestures and stereoscopic vision, in order to give humans a richer experience of the virtual world.

Figure 3: The user is wearing a stereoscopic head-mounted display and interactive gloves to control the objects inside the virtual world. [Source: http://medgadget.com/archives/2006/11/virtual_reality_3.html]

Augmented Reality

Augmented reality (AR) is the combination of elements of a physical, real-world environment with virtual, computer-generated imagery, and is a growing area within virtual reality. In general, it stacks graphics on a real-world environment in real time, so that the user's view of the world and the computer interface combine into one scene.

The augmented reality presented to the user strengthens that person's performance in, and perception of, the world. Its goal is a system in which the user cannot tell the difference between the real world and its virtual augmentation and sees only a single real scene. The real-world environment provides rich information that is hard to replicate in a computer: virtual worlds are either oversimplified, such as the environments created for immersive games and entertainment, or very expensive to make realistic, as in aircraft simulators. Augmented reality therefore combines the real scene viewed by the user with a computer-generated virtual scene that augments it with additional information.

Figure 4: The Reality-Virtuality Continuum.

Nowadays, marker-based augmented reality is widely used. A marker-based system uses a camera and a marker to determine the position and orientation of the models. A marker-based augmented reality system (Figure 5) can be built with a toolkit called ARToolKit, which calibrates the camera and tracks the marker in the augmented reality system; a minimal modern sketch of this workflow follows Figure 5.

Figure 5: Marker-based augmented reality. The combination of the actual human hand and the 3D model forms the augmented scene. [Source: http://www.psfk.com/2009/03/baseball-cards-add-augmented-reality.html]
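For readers who want to experiment, the following is a minimal sketch of this workflow using OpenCV's ArUco module, a modern analogue of ARToolKit rather than ARToolKit itself; the webcam index, marker dictionary and OpenCV version (4.7 or later) are assumptions.

```python
# Minimal marker-tracking sketch using OpenCV's ArUco module (an
# analogue of ARToolKit, not ARToolKit itself). Assumes OpenCV >= 4.7,
# webcam index 0 and the 4x4 dictionary; all are illustrative choices.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)   # find square markers
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("AR", frame)
    if cv2.waitKey(1) == 27:                          # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```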

Augmented Reality vs. Virtual Reality

Virtual reality is complete immersion in a digital world, whereas augmented reality is a digital overlay on the real world. Augmented reality augments the real world with digital data such as 3D graphics, which is often more interesting than a completely concocted, fake environment. The difference between augmented reality and virtual reality is that augmented reality is closer to the real world: it contains mostly real-world elements and only a minority of computer-generated images.


Both virtual reality and augmented reality face a similar problem in convincing the user. Virtual reality must build the details of the whole world, which can be expensive if the illusion is to be convincing. Augmented reality instead faces the problem that its computer-generated images must be photorealistic enough to blend perfectly with the real-world elements. Furthermore, augmented reality is a real-time system that generates its 3D graphics on the fly, so the compositing must be very precise: any lag or imprecision in the system disrupts the experience and breaks the reality.

The display devices for augmented reality and virtual reality also differ. An augmented reality display is less demanding than what a virtual reality system requires, because augmented reality does not replace the real world: a simple overlaid image may be sufficient to form part of the augmented scene. Virtual reality, however, needs complete 3D graphics.

The previous cases show that augmented reality has lower requirements than virtual reality, but that is not the case for tracking and sensing. There, the requirements for augmented reality are much more stringent than those for virtual reality systems. A major reason for this is the registration problem.

Basic Subsystem         Virtual Reality    Augmented Reality
Scene generator         More advanced      Less advanced
Display device          High quality       Low quality
Tracking and sensing    Less advanced      More advanced

Table 1: Comparison of requirements of Augmented Reality and Virtual Reality

Augmented Reality System

In augmented reality, several fields are important for creating a working system: computer graphics, computer vision and user-interface design all contribute to building an augmented reality system.

Typical Augmented Reality System

Figure 6 - Components of an Augmented Reality System

The video camera performs a perspective projection of the 3D world onto a 2D image plane; its position, pose, focal length and lens distortion determine what is projected onto that image plane. The computer graphics system completes the generation of the virtual image. Virtual objects are modelled in their own frame of reference, and the graphics system needs the imaging parameters of the real scene to render the objects correctly. This information controls the synthetic camera used to generate the virtual object image, and the virtual object image is then combined with the real-world image to achieve augmented reality.
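As a toy illustration of that final combination step, the rendered virtual image can be overlaid on the camera image wherever the virtual layer contains content; the arrays below are stand-ins for real frames, not part of any particular system.

```python
# Toy compositing sketch for the final combination step: overlay the
# rendered virtual image onto the camera frame wherever the virtual
# layer has content. The arrays are stand-ins for real images.
import numpy as np

real = np.full((480, 640, 3), 90, dtype=np.uint8)    # stand-in camera frame
virtual = np.zeros_like(real)
virtual[200:280, 280:360] = (0, 255, 0)              # stand-in rendered object

mask = virtual.any(axis=2)                           # pixels the object covers
augmented = real.copy()
augmented[mask] = virtual[mask]                      # virtual replaces real
```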

Performance Issues

An augmented reality system should allow the user to move freely in the scene and still see properly rendered graphics, so a real-time implementation is preferable. Two performance criteria must be considered: the update rate for generating the augmenting image, and the precision with which the real and virtual images are registered. The real-time constraint is that a virtual object must not visibly jump while the user views the augmented image; if the graphics system renders the virtual scene at least 10 times per second, no visible jump occurs. This works well for simple to medium-complexity graphics scenes, and the more photorealistic the rendering, the more realistic the virtual object appears in the scene.
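As a rough sketch of that first criterion, the render loop can be timed to confirm it sustains the 10 frames per second mentioned above; render_augmented_frame is a hypothetical stand-in for one frame of capture, tracking and rendering work.

```python
# Frame-rate check for the first performance criterion above.
# render_augmented_frame() is a hypothetical stand-in for one frame of
# capture, tracking and rendering work.
import time

MIN_FPS = 10.0                        # threshold quoted in the text

def render_augmented_frame():
    time.sleep(0.02)                  # pretend one frame takes 20 ms

frames, start = 0, time.perf_counter()
while frames < 100:
    render_augmented_frame()
    frames += 1

fps = frames / (time.perf_counter() - start)
print(f"{fps:.1f} fps:", "ok" if fps >= MIN_FPS else "visible jumping likely")
```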

There are two possible failure cases for the second performance criterion: misregistration of the real and virtual scenes, and time delays in the augmented reality system.

Noise in the system is the cause of misregistration of the real and virtual scenes. The system must know the position and orientation of the camera with respect to the real scene, and noise in those measurements produces registration error between the virtual and real images. Fluctuating values cause jitter in the viewed image. Augmented reality is especially sensitive to such visual errors, because the virtual object then fails to stay fixed in the real scene or is positioned wrongly; under the right conditions, even single-pixel misregistration is detectable.

Time delay in the system is the other cause of failure in the second performance criterion. As mentioned above, 10 frames per second is the minimum for suitable real-time performance; below that, the rendered object jitters. The virtual object will also jitter, or the system will lag, when the calculation of the camera position is delayed or the graphics camera is positioned incorrectly. An augmented reality system should therefore be designed to minimise delay in order to achieve good real-time performance.


Display Technologies

[Source: Alternative Augmented Reality Approaches: Concepts, Techniques, and Applications]

There are three types of display technologies for augmented reality: monitor-based displays, video see-through displays and optical see-through displays. The monitor-based display is the most commonly used today, and its equipment is the cheapest and easiest to find.

Monitor Based Display

Figure 7 - Monitor Based Augmented Reality

Monitor-based augmented reality is also called "window on the world". It composites video and displays the augmented scene on a regular monitor. Monitor-based displays are non-immersive and distortion-free, and provide a remote view of the real world.

Video See-through

Figure 8 - Video See-through Augmented Reality Display

There are two types of head-mounted display (HMD), called video see-through and optical see-through. The term "see-through" means that the user sees the real-world scene in front of them while wearing the HMD. A video see-through HMD gives the user complete visual isolation: video cameras aligned with the display capture the real-world scene. This display technology is otherwise similar to the monitor-based display.

Optical See-through

Figure 9 - Optical See-through Augmented Reality Display

Optical see-through eliminates the video channel: light from the real-world scene is viewed directly, giving a true perspective on the world. The display optically combines the real-world objects and virtual objects in front of the user. Optical see-through displays work in the same way as head-up displays (HUDs), which often appear in aircraft cockpits and experimental cars. Because the real-world scene is seen immediately, it is impossible to compensate for system delays.

Tracking Requirements

In an augmented reality system, registration is with the visual field of the user. The type of display used by the augmented reality system determines the accuracy needed for registration of the real and virtual images. The central fovea of a human eye has a resolution of about 0.5 minutes of arc, and in this area the eye is capable of differentiating alternating brightness bands that subtend one minute of arc. That capability defines the ultimate registration goal for an augmented reality system. With an optical see-through display, the virtual image is mapped directly over this full-resolution view of the real world; with a monitor or video see-through display, both the real and virtual worlds are reduced to the resolution of the display device. Current head-tracking technology specifies an orientation accuracy of 0.15 degrees, short of what is needed to maintain single-pixel alignment on augmented reality displays. Magnetic trackers also introduce errors caused by any surrounding metal objects in the environment; these appear as errors in position and orientation that cannot easily be modelled and that change if any of the interfering objects move. In addition, measurement delays of 40 to 100 msec have been found for typical position sensors, a significant part of the 100 msec cycle time needed for real-time operation.
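To see why 0.15 degrees falls short, the angular error can be converted into pixels for an assumed display; the field of view and resolution below are illustrative values, not figures from the text.

```python
# Back-of-envelope check: how many pixels does a 0.15-degree tracking
# error cover on a hypothetical HMD? The field of view and resolution
# are assumed values, not figures from the text.
fov_deg = 40.0                            # assumed horizontal field of view
width_px = 1280                           # assumed horizontal resolution
deg_per_px = fov_deg / width_px           # about 0.031 degrees per pixel

tracker_err_deg = 0.15                    # orientation accuracy from the text
err_px = tracker_err_deg / deg_per_px
print(f"{err_px:.1f} px of registration error")   # ~4.8 px, far above 1 px
```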

Augmented Reality Main Process

Augmented reality has two important processes, marker recognition and rendering, and together these make up the augmented reality system. The steps of the system are listed below, and a code skeleton of the loop follows Figure 10.

Steps of the augmented reality system:

Place the marker in front of the webcam, so that the webcam captures the marker.

The algorithm compares the detected marker with the markers in the library.

If the marker is in the library, the system renders the object on the marker.

Figure 10 - Marker-based augmented reality system process steps.
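A skeleton of this loop might look as follows; detect_marker and render_model are hypothetical placeholders for the toolkit calls (for example, ARToolKit functions), and only the control flow mirrors the steps listed above.

```python
# Skeleton of the three steps above, with the toolkit work stubbed out.
# detect_marker() and render_model() are hypothetical placeholders for
# ARToolKit-style calls; only the control flow mirrors the listed steps.
import cv2

library = {"hiro": "models/teapot.obj"}   # assumed marker-to-model table

def detect_marker(frame):
    return None        # placeholder: real code thresholds and matches here

def render_model(frame, model_path):
    pass               # placeholder: real code draws the 3D model here

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()                 # step 1: webcam captures the scene
    if not ok:
        break
    name = detect_marker(frame)            # step 2: compare against library
    if name in library:                    # step 3: render on the marker
        render_model(frame, library[name])
    cv2.imshow("AR", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```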

Marker Recognition

[Source: http://studierstube.icg.tu-graz.ac.at/thesis/marker_detection.pdf]

Marker recognition is also called marker detection. The marker-detection function of an augmented reality system requires a marker, a video camera to capture it, and an algorithm to verify it. Marker design is an important part of this detection function: each marker is surrounded by a thick black square border, and the pattern sits inside the border.

In the marker-detection function, the first step is to detect the marker's black border by finding connected groups of pixels below a certain grey-value threshold. The outline of each group is then collected, and any group whose outline can be surrounded by four straight lines is marked as a potential marker. The four corners of a potential marker are used to calculate a homography, whose purpose is to remove the perspective distortion. With the marker's internal pattern brought to a canonical front view, a grid of N x N samples (16 x 16 or 32 x 32) of grey values is taken inside the border. Those grey values build up a feature vector, which is compared by correlation against a library of feature vectors of known markers. The output of this template matching is a confidence factor; the marker counts as found when the confidence factor exceeds a threshold. Marker verification and identification by this correlation mechanism can produce high false-positive and inter-marker confusion rates, and increasing the library size reduces marker uniqueness, which raises the inter-marker confusion rate further.

Figure 11 - The matrix barcode.

Figure 12 - Threshold
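The pipeline just described can be condensed into a short sketch using OpenCV for the thresholding, contour and homography steps; the library contents, corner ordering and the 0.8 confidence threshold are assumptions for illustration.

```python
# Condensed sketch of the detection pipeline above: threshold ->
# contours -> four-corner candidates -> homography -> N x N sampling ->
# correlation against a library. The library contents, corner ordering
# and the 0.8 confidence threshold are assumptions.
import cv2
import numpy as np

N = 16                                     # sample grid size from the text

def detect(gray, library):
    _, bw = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        quad = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(quad) != 4 or cv2.contourArea(quad) < 1000:
            continue                       # keep only square-ish outlines
        src = quad.reshape(4, 2).astype(np.float32)
        dst = np.float32([[0, 0], [N - 1, 0], [N - 1, N - 1], [0, N - 1]])
        H = cv2.getPerspectiveTransform(src, dst)     # undo perspective
        patch = cv2.warpPerspective(gray, H, (N, N)).flatten().astype(float)
        patch = (patch - patch.mean()) / (patch.std() + 1e-9)
        for name, ref in library.items():  # ref: normalised feature vector
            confidence = float(np.dot(patch, ref)) / len(ref)
            if confidence > 0.8:           # confidence exceeds threshold
                return name, quad
    return None
```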

Rendering

[Source: Augmented Reality: A Practical Guide]

Rendering in augmented reality is the technique used to display an object on the correct marker. After marker detection has found the marker, the system computes the pose, sets the model-view matrix, and then performs rendering operations relative to the marker and hence to the 3D world. In other words, the markers allow the augmented reality algorithm to calculate where the virtual object should be placed, so that the virtual object displays correctly in the augmented world. The computer-vision software finds the library markers in the image, and further functions help to load and display the 3D model.

In addition, the augmented reality system is able to determine the angle and position of the marker and can use this information to determine the correct position and angle of the 3D object. When the computation is complete, the 3D object is drawn on top of the camera image and the augmented scene is formed. If the augmented reality system renders the 3D object correctly, the object appears embedded in the world scene. Once the position and angle have been calculated, the 3D object can be adjusted as the marker is moved around the 3D world.

Figure 13 - Render the object on the marker. [Source: http://www.computergraphica.com/2006/12/20/osgart-10-augmented-reality-software-development-kit/]
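A small sketch of the pose step: estimate the marker's rotation and translation from its four image corners (here with OpenCV's solvePnP, one possible tool rather than the method the text prescribes), then pack them into the 4x4 model-view matrix a renderer would use. The marker size, corner coordinates and camera intrinsics are made-up values.

```python
# Pose sketch: estimate the marker's rotation and translation from its
# four image corners (here with OpenCV's solvePnP, one possible tool),
# then pack them into the 4x4 model-view matrix a renderer would use.
# Marker size, corner pixels and intrinsics are made-up values.
import cv2
import numpy as np

marker_mm = 80.0
object_pts = np.float32([[0, 0, 0], [marker_mm, 0, 0],
                         [marker_mm, marker_mm, 0], [0, marker_mm, 0]])
image_pts = np.float32([[310, 220], [400, 225], [395, 315], [305, 310]])

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)

R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation from the rotation vector
modelview = np.eye(4)
modelview[:3, :3] = R                 # rotation part
modelview[:3, 3] = tvec.ravel()       # translation part
print(modelview)                      # the renderer draws the model with this
```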

Augmented Rendering Techniques

The virtual-world and real-world models are specified in an affine representation. Four non-coplanar points in the scene define the affine frame: given an affine frame of reference represented by four or more non-coplanar 3D points, the projection of any point in the set can be calculated as a linear combination of the projections of those four points. The affine representation therefore allows the projection of a point to be calculated without any camera position information and without the calibration parameters. An affine representation preserves only the object properties that are invariant under affine transformations: lines, intersections of lines and planes, and parallelism are retained. This geometric-invariants technique has been used in augmented reality systems, first for overlaying 2D image enhancements and later extended to the rendering of 3D virtual objects. To integrate virtual images properly into images of a real scene, the problem reduces to three steps:

Track the affine basis points used for the transformation.

Calculate the affine representation of the virtual object.

Calculate the projections of the virtual object's points as linear combinations of the projections of the affine basis points.

Camera Viewing Geometry

In augmented reality, projecting a virtual object accurately requires knowing the precise combined effect of three transformations, and using these three transformations to map points onto the respective image planes is necessary. Figure 14 below shows the three transformations:

Object-to-World

World-to-Camera

Camera-to-Image

Figure 14 - Three important transformations for the virtual objects, real world, camera, and the image it produces.

$$[u\;\; v\;\; 1]^T = P_{3\times 4}\, C_{4\times 4}\, O_{4\times 4}\, [X\;\; Y\;\; Z\;\; 1]^T$$

Equation 1 - The projection in homogeneous coordinates.

Symbol         Description
[X Y Z 1]^T    Homogeneous coordinates of a virtual point.
[u v 1]^T      Its projection in the graphics image plane.
O (4x4)        Virtual object coordinates to world coordinates (object-to-world).
C (4x4)        World coordinates to camera coordinates (world-to-camera).
P (3x4)        Projection of the synthetic graphics camera onto the graphics image plane (the matrix modelling the object's projection onto the image plane).

Table 2 - Description of the terms in Equation 1.

The same expression exists for the projection of real points onto the camera's image plane. Equation 1 assumes that the reference frames are defined independently of one another. If instead the frame is defined by four non-coplanar points that are in the camera's view and visually tracked through all the video frames, virtual points and real points in space can be handled by a single homogeneous transformation.
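A small numerical sketch of Equation 1, chaining O, C and P with made-up matrices; a real system would obtain C from tracking rather than hard-coding it.

```python
# Equation 1 sketch: image = P (3x4) @ C (4x4) @ O (4x4) @ X. All
# matrices are made-up; a real system obtains C from tracking.
import numpy as np

O = np.eye(4)                              # object-to-world (identity here)
C = np.eye(4); C[2, 3] = 5.0               # world-to-camera: 5 units away
P = np.array([[800.0, 0.0, 320.0, 0.0],    # synthetic camera projection
              [0.0, 800.0, 240.0, 0.0],
              [0.0,   0.0,   1.0, 0.0]])

X = np.array([0.2, -0.1, 0.0, 1.0])        # homogeneous virtual point
uvw = P @ C @ O @ X
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]    # divide out the homogeneous part
print(u, v)
```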

$$[u\;\; v\;\; 1]^T = \Pi_{3\times 4}\, [x\;\; y\;\; z\;\; 1]^T$$

Equation 2

Symbol         Description
[x y z 1]^T    The point [X Y Z 1]^T transformed to affine coordinates.
Π (3x4)        Projection of a 3D affine point onto the image plane (the combined effect of the change in the object's representation and the object-to-world, world-to-camera and projection transformations).

Table 3 - Description of the terms in Equation 2.

The elements of the 3x4 projection matrix Π are determined by the image coordinates of the fiducial points. The image locations of the fiducial points therefore contain all the information needed to project the 3D object.

Affine Representation

The affine representation allows a point to be reprojected without knowing the camera position and without any metric information about the points (e.g., the 3D distances between them). Consider a collection of points P1, ..., Pn ∈ R3, n ≥ 4, at least four of which are non-coplanar. The affine representation of these points does not change if the same non-singular linear transformation (e.g., translation, rotation, scaling) is applied to all of them.

Four non-coplanar points (p0 ... p3) represent the affine reference frame in the augmented reality system, with p0 as the origin and p1, p2 and p3 as the affine basis points. The point p0 is assigned homogeneous affine coordinates [0 0 0 1]^T, and p1, p2 and p3 are assigned affine coordinates [1 0 0 1]^T, [0 1 0 1]^T and [0 0 1 1]^T respectively. The associated affine basis vectors are b1 = p1 - p0 = [1 0 0 0]^T, b2 = p2 - p0 = [0 1 0 0]^T and b3 = p3 - p0 = [0 0 1 0]^T. Any point px is then represented as px = x b1 + y b2 + z b3 + p0, where [x y z 1]^T are the homogeneous affine coordinates of the point; that is, as a linear combination of the affine basis vectors.
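A quick numerical check of this representation with made-up 3D points: solve for a point's affine coordinates in the basis, then rebuild the point as the linear combination above.

```python
# Affine-coordinates check: express a 3D point in the frame defined by
# p0 (origin) and basis points p1..p3, then rebuild it as the linear
# combination above. All points are made-up values.
import numpy as np

p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.2, 0.0])
p2 = np.array([0.0, 1.0, 0.1])
p3 = np.array([0.3, 0.0, 1.0])             # non-coplanar with the others

B = np.column_stack([p1 - p0, p2 - p0, p3 - p0])   # basis vectors b1..b3

q = np.array([0.7, 0.5, 0.9])              # some point in the scene
x, y, z = np.linalg.solve(B, q - p0)       # its affine coordinates
q_rebuilt = x * (p1 - p0) + y * (p2 - p0) + z * (p3 - p0) + p0
assert np.allclose(q, q_rebuilt)           # px = x*b1 + y*b2 + z*b3 + p0
print(x, y, z)
```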

Affine Reprojection

The augmented reality system uses affine reprojection to compute the projections of further points defined in the affine coordinate frame. Reprojection is used to calculate the projection of the virtual object's points: given the projections of a 3D point at two camera positions, its projection at a third camera position can be calculated.

Equation 3 - Projection equation.

Term             Description
Average depth    The average distance from the projection centre to the object points.
Image scale      Scales the image by that average depth; the approximation is valid when the object's depth variation is small compared with its distance from the camera.

Table 4 - Description of the terms in Equation 3.

This representation is important because it can be constructed for any virtual object without any information about the three transformations (Figure 14). It requires tracking, through all frames, fiducial points of which four are non-coplanar. The camera-to-image transformation is then modelled with the weak perspective projection model, which approximates the full perspective projection process. The affine representation of the points permits reprojection: the projection of an affine point can be calculated without knowing the position of the camera or the camera calibration parameters. The projections of the four points defining the affine frame determine the matrix Π of Equation 2. The reprojection property is shown in Figure 15.

$$\begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix}_{I_m} = \begin{bmatrix} u_{p_1}-u_{p_0} & u_{p_2}-u_{p_0} & u_{p_3}-u_{p_0} & u_{p_0} \\ v_{p_1}-v_{p_0} & v_{p_2}-v_{p_0} & v_{p_3}-v_{p_0} & v_{p_0} \\ 0 & 0 & 0 & 1 \end{bmatrix}_{I_m} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

Equation 4

Symbol             Description
Im                 Image m.
[u_pi v_pi 1]^T    Projections of the four affine basis points pi, i = 0 ... 3.
[u_p v_p 1]^T      Projection of the point p.
[x y z 1]^T        Homogeneous affine coordinates of p.

Table 5 - Description of the terms in Equation 4.

Figure 15 - Affine Point Reprojection

Equation 4 shows that the projection matrix Π of Equation 2 is completely defined: the projection of a 3D point in a new image viewed by the camera is calculated as a linear combination of the projections of the affine basis points. All that is required are the image projections of the four basis points and the 3D point's affine coordinates, and visual tracking routines initialised on the feature points provide those values.
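A sketch of Equation 4 in code: Π is assembled from the tracked projections of the basis points in the current view, and reprojection is then a single matrix multiply. The pixel values are made-up.

```python
# Equation 4 sketch: build Pi from the tracked projections of the basis
# points in the current view, then reproject a point from its affine
# coordinates with one matrix multiply. Pixel values are made-up.
import numpy as np

# Tracked image projections of p0..p3 in the current frame (assumed).
up = np.array([[210, 160], [330, 170], [220, 55], [250, 185]], dtype=float)

Pi = np.array([
    [up[1, 0] - up[0, 0], up[2, 0] - up[0, 0], up[3, 0] - up[0, 0], up[0, 0]],
    [up[1, 1] - up[0, 1], up[2, 1] - up[0, 1], up[3, 1] - up[0, 1], up[0, 1]],
    [0.0, 0.0, 0.0, 1.0],
])

p_affine = np.array([0.4, 0.3, 0.2, 1.0])   # affine coordinates of a point
u, v, _ = Pi @ p_affine                     # reprojected image position
print(u, v)
```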

Affine Reconstruction

The affine coordinates of each point are calculated from Equation 4. The reconstruction requires projections from at least two views in which the projections of the affine basis points are known, and Equation 4 then yields an over-determined system of equations. Given two views, I1 and I2, of a scene in which the projections of the affine basis points (p0, ..., p3) are known, the affine coordinates [x y z 1]^T of any point p can be found from the solution of the following equation:

$$\begin{bmatrix} \Pi_{I_1} \\ \Pi_{I_2} \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} u_p^{I_1} \\ v_p^{I_1} \\ 1 \\ u_p^{I_2} \\ v_p^{I_2} \\ 1 \end{bmatrix}$$

Equation 5

Symbol             Description
Im                 Image m (m = 1, 2).
[u_pi v_pi 1]^T    Projections of the four affine basis points pi, i = 0 ... 3.
[u_p v_p 1]^T      Projections of the point p in the two views.
[x y z 1]^T        Homogeneous affine coordinates of p.

Table 6 - Description of the terms in Equation 5.
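A sketch of the reconstruction in Equation 5: stack the Π matrices of the two views and solve the over-determined system by least squares. All pixel coordinates are made-up values.

```python
# Equation 5 sketch: stack the Pi matrices of two views and solve the
# over-determined system for the affine coordinates by least squares.
# All pixel coordinates are made-up values.
import numpy as np

def make_pi(up):
    """Affine projection matrix (Equation 4) from basis projections."""
    return np.array([
        [up[1, 0] - up[0, 0], up[2, 0] - up[0, 0], up[3, 0] - up[0, 0], up[0, 0]],
        [up[1, 1] - up[0, 1], up[2, 1] - up[0, 1], up[3, 1] - up[0, 1], up[0, 1]],
        [0.0, 0.0, 0.0, 1.0],
    ])

up1 = np.array([[210, 160], [330, 170], [220, 55], [250, 185]], dtype=float)
up2 = np.array([[190, 150], [305, 175], [215, 70], [240, 190]], dtype=float)

A = np.vstack([make_pi(up1), make_pi(up2)])            # 6x4 stacked system
b = np.array([258.0, 151.0, 1.0, 243.0, 153.0, 1.0])   # observed projections

coords, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares solve
coords /= coords[3]                                    # normalise [x y z 1]
print(coords)
```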

Affine Depth

The affine formulation requires tracking the basis points and rendering the virtual object to augment the real image, which consumes computing resources. A good rendering algorithm generates the virtual object efficiently, and rendering complex graphic scenes in real time requires a computer graphics system with hardware support for the rendering operations. Z-buffering requires ordering in depth all the points that project to the same pixel of the graphics image; normally this is done by assigning each point a depth value along the optical axis of the synthetic graphics camera. Here, all points in the system are defined by the affine point representation, so a depth value must be defined that preserves the ordering of the points. Because an orthographic graphics camera renders the virtual objects, the magnitude of the z-value is unimportant: only the depth order of the points must be maintained. To obtain the depth ordering, the camera's optical axis is determined as the 3D line whose points all project to the same point in the image. The homogeneous vector [ζ^T 0]^T represents this optical axis, where ζ is given as the cross product

$$\zeta = \tilde{\pi}_1 \times \tilde{\pi}_2$$

Equation 6

in which π̃1 and π̃2 are the vectors formed by the first three elements of the first and second rows of the projection matrix Π. Translating any point p along the optical axis, p' = p + a [ζ^T 0]^T, leaves its projection in the image plane identical to that of p. Z-buffering can therefore assign each point p the depth value [ζ^T 0] · p, which allows the complete projection of an affine point to be expressed as Equation 7.

$$\begin{bmatrix} u_p \\ v_p \\ z_p \\ 1 \end{bmatrix} = \begin{bmatrix} \Pi_{(1)} \\ \Pi_{(2)} \\ \zeta^T\;\; 0 \\ 0\;\; 0\;\; 0\;\; 1 \end{bmatrix} p$$

Equation 7 - Visible surface rendering of a point p on an affine object, where Π(1) and Π(2) are the first and second rows of Π.

The first two coordinates, u_p and v_p, are p's image coordinates, and z_p is the z-value assigned to p. This 4x4 form matches the viewing matrices that computer graphics systems use to transform graphic objects. The difference between a standard graphics viewing matrix and the affine projection matrix lies in the upper-left 3x3 submatrix: in Equation 7 it is a general invertible transformation, whereas in a Euclidean frame of reference it would be a rotation matrix. Real-time rendering of objects defined in the affine frame can therefore use standard graphics hardware; a graphics processor such as the Silicon Graphics RealityEngine can perform the object rendering with z-buffering for hidden-surface removal.
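Finally, a sketch of the depth assignment in Equations 6 and 7: ζ is the cross product of the first three elements of Π's first two rows, and a point's z-buffer value is its dot product with ζ. The Π entries below are made-up values.

```python
# Equations 6 and 7 sketch: zeta is the cross product of the first three
# elements of Pi's first two rows; a point's z-buffer value is its dot
# product with zeta. The Pi entries are made-up values.
import numpy as np

Pi = np.array([[120.0,  10.0,  40.0, 210.0],
               [  5.0, -95.0,  25.0, 160.0],
               [  0.0,   0.0,   0.0,   1.0]])

zeta = np.cross(Pi[0, :3], Pi[1, :3])     # Equation 6: the optical axis

p = np.array([0.4, 0.3, 0.2, 1.0])        # affine point [x y z 1]^T
u, v, _ = Pi @ p                          # image coordinates
depth = float(np.dot(zeta, p[:3]))        # z-value for z-buffer ordering
print(u, v, depth)
```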