# Surface Rendering Techniques Computer Science Essay



Surface rendering techniques render a surface contained within volumetric data using geometric primitives. Representing a surface contained inside a volumetric data set can be useful in many applications, but it has weaknesses: the surface is only an approximation of the surfaces contained within the original data, and too much of the information contained in the data is lost during the rendering process. Volume rendering techniques were developed to solve these problems.

In volume rendering techniques, the geometric model is voxelized into a set of voxels, and each of these voxels is stored in the volume buffer. The voxelized model can be either binary or volume sampled, which generates an alias-free density voxelization of the model.

## 2.1 Surface Graphics and Volume Graphics

Vector graphics and surface graphics represent a scene as a set of geometric primitives stored in a display list. Scan-conversion algorithms convert these primitives into a discrete set of pixels. In the same way that vector graphics supports only lines, surface graphics does not support rendering the interior of objects; it generates only the surfaces of 3D objects.

In comparison with surface graphics, volume graphics uses a 3D volume buffer as a medium for the representation and manipulation of 3D scenes. A scene is discretized early in the image-generation sequence, and the resulting 3D discrete form is used as a database of the scene for manipulation and rendering. All objects are converted into voxels; each voxel is atomic and represents the information about the object that resides in it.

The representation used by volume graphics is viewpoint independent, and rendering is insensitive to scene complexity. Boolean and block operations are supported by volume graphics. It is suitable for 3D sampled or simulated data, and it is capable of representing amorphous phenomena.

Raster graphics decouples image generation from screen refresh by making the refresh insensitive to scene complexity. Moreover, the raster representation lends itself to block operations. It is also suitable for displaying 2D sampled digital images and provides an environment for mixing digital images with synthetic graphics. Unlike vector graphics, raster graphics provides the capability to display shaded and textured surfaces as well as line drawings.

Table 2.1 shows the comparison between surface graphics and volume graphics and the comparison between raster graphics and vector graphics. The main disadvantages of raster graphics are the processing power and the large memory it requires. A further weakness is the discrete nature of the image, which makes it less suitable for geometric operations such as transformations and accurate measurements.

Table.2.1. Comparison between vector graphics and raster graphics and between surface graphics and volume graphics (table is taken from [1])

Volume graphics synthesizes, models, manipulates, and renders volumetric geometric objects stored in a volume buffer of voxels. Volume graphics is concerned with modeled geometric scenes represented in a regular volume buffer. It improves the field of 3D graphics by offering an alternative to traditional surface graphics.

## 2.2 Volume rendering

Volume rendering is a technique for displaying volumetric data as a two-dimensional image [3]. It operates on three-dimensional data, processes it, and transforms it into a single two-dimensional image. It uses lighting functions, point processing for data classification, and alpha blending for image compositing.

The purpose of volume rendering is to create a two-dimensional image from numerous three-dimensional values; an X-ray machine is a familiar example. The output of an X-ray machine is a two-dimensional image, although the image represents three-dimensional data. X-rays can penetrate the body without interaction, can be absorbed, or can interact and be deflected from their original direction. The energy of the X-rays that pass through is then recorded on two-dimensional film, which thus represents a three-dimensional object.

In volume rendering, imaginary rays are passed through the three-dimensional data; each ray accounts for the intensity of each datum it passes and keeps an accumulated value. The resulting sheet of accumulated values projects the volumetric data onto a two-dimensional plane. In the classification step, certain data points of the three-dimensional object can be made transparent to view the features of interest more easily.

Volume rendering is achieved by a sequence of operations in a pipeline that includes segmentation, gradient computation, resampling, classification, shading, and compositing, as shown in figure 2.1. However, the order of these operations varies among volume rendering implementations; for example, shading can come before or after resampling.

Segmentation → Gradient Computation → Resampling → Classification → Shading → Compositing

Fig.2.1. Volume rendering operations

Segmentation is a pre-processing step performed before the actual rendering. This process separates the data set into structural units. Segmentation labels the voxels in a data set; a label can be any information to be stored with a voxel. After the features are identified and segmented, the labels are stored together with the voxels on disc.

In the classification stage of the volume rendering pipeline, the labels can be used to assign opacity values and colors to the voxels. By assigning different opacities and colors to voxels, classification separates the voxels into different feature classes. For example, voxels with the label Tissue can be assigned the color green and a very low opacity value so that they appear transparent.

The combination of segmentation and classification is a very powerful feature of the volume rendering pipeline. Segmentation is a very difficult process that is hard to capture in an algorithm, while classification can be completed automatically.

The gradient computation finds edges or boundaries between different materials by measuring how quickly the data values in the data set change; the results are used in the classification and shading stages.

Resampling generates new addresses into the voxel space and new values. Samples are taken along each ray for accumulation, and since they are not generally aligned with exact voxel locations, interpolation is used to generate the new sample values.
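A minimal sketch of the interpolation step used in resampling, here trilinear interpolation between the eight voxels surrounding a sample position (the function name and the nested-list volume layout are illustrative assumptions, not from the source):

```python
def trilerp(vol, x, y, z):
    """Trilinearly interpolate vol at the continuous position (x, y, z).

    vol is a volume indexed vol[z][y][x]; the position must lie inside
    the grid so that all eight neighboring voxels exist.
    """
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    # Fetch the 8 surrounding grid values.
    c = [[[vol[z0 + k][y0 + j][x0 + i] for i in (0, 1)]
          for j in (0, 1)] for k in (0, 1)]
    # Interpolate along x, then y, then z.
    cx = [[c[k][j][0] * (1 - fx) + c[k][j][1] * fx for j in (0, 1)]
          for k in (0, 1)]
    cy = [cx[k][0] * (1 - fy) + cx[k][1] * fy for k in (0, 1)]
    return cy[0] * (1 - fz) + cy[1] * fz
```

Because the interpolant is linear along each axis, it reproduces any linear field exactly, which is a convenient sanity check.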

Shading highlights parts of the data set by using an illumination model; the models range in complexity depending on available CPU power, the number of light sources, and the use of color.

One pixel may represent hundreds of values sampled along a ray, and these values need to be accumulated. Accumulation is accomplished via compositing functions, including front-to-back and back-to-front compositing.
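The front-to-back accumulation described above can be sketched as follows (a minimal single-channel example; the sample colors and opacities are assumed to come from an earlier classification step):

```python
def composite_front_to_back(samples, opacity_cutoff=0.99):
    """Accumulate (color, alpha) samples ordered front to back along a ray.

    Returns the final pixel color and accumulated opacity. The loop stops
    early once the ray is effectively opaque (early ray termination),
    since nothing behind an opaque sample can show through.
    """
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha in samples:
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= opacity_cutoff:
            break
    return color_acc, alpha_acc
```

A fully opaque first sample hides everything behind it, while two half-transparent samples blend; the front-to-back order is what makes the early termination possible.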

## 2.3 Volume types

There are four volume types: regular, rectilinear, curvilinear, and irregular.

- Regular: the data elements are structured and arranged in a cubic grid, although the grid can be rectangular as well.
- Rectilinear: a tessellation by rectangles or parallelepipeds that are not, in general, all congruent to each other. The cells may still be indexed by integers as above, but the mapping from indexes to vertex coordinates is less uniform than in a regular grid.
- Curvilinear: a grid with the same combinatorial structure as a regular grid, in which the cells are quadrilaterals or cuboids rather than rectangles or rectangular parallelepipeds.
- Irregular: unstructured data; data points can be anywhere within the volume.

## 2.4 Volume rendering techniques

Volume rendering can be achieved with object-order, image-order, or domain-based techniques. Object-order techniques use a forward mapping scheme, in which the volume data is mapped onto the image plane, while image-order techniques use a backward mapping scheme, in which rays are cast from each pixel in the image plane through the volume data to determine the final pixel value.

On the other hand, domain-based techniques transform the spatial volume data into an alternative domain, such as the compression, frequency, or wavelet domain, and then generate a projection directly from that domain.

## 2.4.1 Object-order techniques

Object-order techniques iterate over the data samples, projecting each sample onto the image plane. They scan the volume and determine which pixels a given voxel can affect.

A back-to-front algorithm traverses the data samples plane by plane and row by row. For arbitrary orientations of the data, some axes may be traversed in decreasing order while the rest are traversed in increasing order. Although each axis must be traversed in a fixed increasing or decreasing order, the order of the axes themselves is arbitrary.

For a front-to-back algorithm, the voxels are traversed in order of increasing distance from the image plane. When a voxel projects onto a pixel that has already been written, it is neglected, since it is hidden by the first voxel. If the axis most parallel to the viewing direction is selected as the outermost loop of the data traversal, partial image results can be displayed to the user, who can then interact with the data and end the image generation.

For partial image results, a back-to-front method modifies the value of a pixel many times during image generation, while in a front-to-back method the value of a pixel remains unmodified once it is set.

Both the back-to-front and front-to-back algorithms support clipping planes. With clipping planes orthogonal to the axes, the data traversal is simply limited to a smaller rectangular region, whereas with a cut plane parallel to the image plane, only data samples whose distance to the image plane is greater than the cut-plane distance are traversed.

With surface rendering techniques, cut planes must be realized by operating on the geometric primitive representation of the object, which is a time-consuming process. With a back-to-front method, cut planes can be achieved simply by changing the bounds of the data traversal and placing the resulting values in the image-plane pixels.

The distance of each voxel to the image plane can be stored in the pixel as well. To obtain a shaded image, a 2D discrete shading technique is applied to the image; its input is a 2D array of depth values and its output is a 2D image of intensity values.

In the simplest 2D discrete shading method, called depth shading, only the Z-buffer is used: the intensity value stored in each pixel of the output image is inversely proportional to the depth of the corresponding input pixel. This makes features far from the image plane dark and close features bright. Because surface orientation is not considered in this shading method, surface discontinuities and object boundaries are lost.

To obtain more accurately shaded images, the 2D depth image is passed to a gradient shader, which produces a shaded image that takes the orientation of the object surface into account. The gradient at each pixel (x, y) of the input image is

∇z = (∂z/∂x, ∂z/∂y, 1) (Eq.2.1)

where z = D(x, y) is the value stored in the Z-buffer at pixel (x, y). The partial derivative ∂z/∂x is approximated using a backward difference D(x, y) − D(x − 1, y), a forward difference D(x + 1, y) − D(x, y), or a central difference ½(D(x + 1, y) − D(x − 1, y)); ∂z/∂y is approximated in the same way. The central difference is generally the better approximation of the derivative.
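A sketch of such a gradient shader: central differences on the depth buffer approximate ∂z/∂x and ∂z/∂y, and a simple Lambertian term lights the recovered normal. The light direction, the fallback to one-sided differences at the border, and the function names are assumptions for illustration, not from the source:

```python
import math

def gradient_shade(depth, light=(0.0, 0.0, 1.0)):
    """Shade a 2D depth image D(x, y) using central-difference gradients.

    Returns a 2D intensity image; border pixels fall back to one-sided
    differences. `light` is a unit vector toward the light source.
    """
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            dzdx = (depth[y][x1] - depth[y][x0]) / ((x1 - x0) or 1)
            dzdy = (depth[y1][x] - depth[y0][x]) / ((y1 - y0) or 1)
            # Normal of the surface z = D(x, y) is (-dz/dx, -dz/dy, 1).
            nx, ny, nz = -dzdx, -dzdy, 1.0
            norm = math.sqrt(nx * nx + ny * ny + nz * nz)
            shade = (nx * light[0] + ny * light[1] + nz * light[2]) / norm
            out[y][x] = max(shade, 0.0)
    return out
```

A flat depth image lit head-on shades to full intensity everywhere, while tilted regions darken with their slope.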

To give more exact normal estimations by identifying image discontinuities, a context-sensitive normal estimation method was developed. In this method, two pixels belong to the same context if their values and the first derivatives of the values at those locations do not differ enormously.

A function f(x, y, z) that computes the value at any location reconstructs the original signal; this kind of reconstruction is used by both backward-mapping and forward-mapping algorithms. The splatting algorithm approximates smooth object-order volume rendering by representing the value of each data sample as a density spread over space. Each data sample s = (xs, ys, zs, ρ(s)), s ∈ S, has a function Cs defining its contribution to every point (x, y, z) in the space:

Cs(x, y, z) = hv(x − xs, y − ys, z − zs) ρ(s) (Eq.2.2)

where hv is the volume reconstruction kernel and ρ(s) is the density of sample s which is located at (xs, ys, zs). The contribution of a sample s can be computed by integration:

∫ Cs(x, y, u) du = ρ(s) ∫ hv(x − xs, y − ys, u) du (Eq.2.3)

where u is the coordinate axis parallel to the viewing direction. A footprint function F can then be defined as follows:

F(x, y) = ∫ hv(x, y, u) du (Eq.2.4)

where (x, y) is the displacement of an image sample from the projected sample position. The weight w at each pixel can then be computed as

w(x, y)s = F(x − xs, y − ys) (Eq.2.5)

where (x, y) is the pixel position and (xs, ys) is the image-plane position of sample s.

By evaluating the integral in equation 2.4 over a grid of positions, a footprint table is generated; all positions outside the footprint table have zero weight. To find the weight of the contribution of s to a pixel, the footprint table is centered on the projected image-plane position of s and sampled. Multiplying this weight by ρ(s) then gives the contribution of s to that pixel.
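A minimal sketch of this idea with a Gaussian reconstruction kernel: integrating a 3D Gaussian along the view direction yields a 2D Gaussian, so the footprint table can be filled directly, and each sample adds footprint-weighted density to the pixels around its projection. The kernel choice, table radius, and function names are illustrative assumptions:

```python
import math

def gaussian_footprint(radius, sigma=1.0):
    """Precompute a (2*radius+1)^2 footprint table for a Gaussian kernel.

    The 3D Gaussian integrated along the view direction is again a 2D
    Gaussian, so each entry is exp(-(dx^2 + dy^2) / (2 sigma^2)).
    """
    return [[math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
             for dx in range(-radius, radius + 1)]
            for dy in range(-radius, radius + 1)]

def splat(image, footprint, px, py, density):
    """Add one sample's footprint-weighted density around pixel (px, py)."""
    radius = len(footprint) // 2
    for j, row in enumerate(footprint):
        for i, w in enumerate(row):
            x, y = px + i - radius, py + j - radius
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                image[y][x] += w * density
```

In a real splatter the table would be sampled at the (generally non-integer) projected position of each sample; snapping to the nearest pixel keeps the sketch short.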

For orthographic projections, every sample has the same footprint except for an image-plane offset, so only one footprint table needs to be calculated per view. Since even this requires too much computation time, only one generic footprint table is built for the kernel; for each view, a view-transformed footprint table is created from the generic table, which can be precomputed.

To generate a view-transformed footprint table, the image-plane extent of the projection of the reconstruction kernel is determined, and a mapping is computed between this extent and the extent that contains the generic footprint table. Finally, the value for each entry in the view-transformed footprint table is obtained through this mapping.

Two parameters of this algorithm affect image quality: small footprint tables produce blocky images, while large footprint tables produce detailed images. In addition, the sampling method used to generate the view-transformed footprint table affects how well the kernel is reconstructed.

In a technique for rendering volumes that contain mixtures of materials, for example CT data containing flesh, muscle, and bone, either a low-pass filter is applied to remove high frequencies before sampling, or the scalar field is sampled above the Nyquist frequency. If the volume contains one scalar field representing the composition of several materials, it can be classified either by adding information to each volume element or by the scalar value at each point.

In material percentage volumes, each material percentage volume is a scalar field, and each material is associated with a color and an opacity. A matte volume is used to slice the volume or perform other spatial set operations, and the actual rendering of the final composite scalar field is obtained by transforming the volume. The data is then projected and composited to form the final image.

## 2.4.2 Image-order techniques

Image-order rendering techniques determine, for each pixel, how the data samples contribute to it, while object-order rendering techniques determine, for each data sample, how it affects the pixels.

One image-order volume rendering technique, called binary ray casting, generates images of surfaces contained within binary volumetric data without performing explicit boundary detection or hidden-surface removal. For each pixel, a ray is cast to determine whether it intersects the surface contained within the data. In parallel projections all rays are parallel to the view direction, while in perspective projections rays are cast from the eye point through each pixel.

If an intersection occurs, shading is performed at the intersection point and the resulting color is placed in the pixel. To determine the first intersection, a stepping method is used in which the value is determined at regular intervals along the ray until the object is intersected. Data samples with a value of 0 belong to the background, whereas data samples with a value of 1 are part of the object. A zero-order interpolation method is used, so the value at a position along the ray is either 0 or 1.
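The stepping loop of binary ray casting can be sketched as follows: march along the ray at regular intervals, read the nearest voxel (zero-order interpolation), and stop at the first sample whose value is 1. The function name, the fixed step size, and the ray parameterization are assumptions for illustration:

```python
def cast_binary_ray(vol, origin, direction, step=0.5, max_t=100.0):
    """March a ray through a binary volume vol[z][y][x] of 0s and 1s.

    Returns the ray parameter t of the first intersection with the
    object, or None if the ray misses. Zero-order interpolation: the
    value at a point is the value of the nearest voxel.
    """
    t = 0.0
    while t <= max_t:
        x = int(round(origin[0] + t * direction[0]))
        y = int(round(origin[1] + t * direction[1]))
        z = int(round(origin[2] + t * direction[2]))
        inside = (0 <= z < len(vol) and 0 <= y < len(vol[0])
                  and 0 <= x < len(vol[0][0]))
        if inside and vol[z][y][x] == 1:
            return t          # first surface hit: shading happens here
        t += step
    return None
```

The step size trades speed against the risk of stepping over thin features, which is one reason later methods discretize the ray itself instead.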

## 2.4.3 Domain volume rendering

In domain volume rendering, the spatial 3D data is first transformed into another domain and a projection is generated directly from that domain. Frequency-domain rendering applies the Fourier slice projection theorem, which obtains a projection of the 3D volume by extracting and inverse-transforming a 2D slice of its 3D frequency-domain representation.

A disadvantage of frequency-domain volume rendering is that the resulting projection is a line integral along the view direction, which does not display any occlusion or attenuation effects.

Compression-domain rendering reduces the computation, storage, and transmission overhead of large volume data by performing volume rendering directly from compressed data. Figure 2.6 displays a compression-domain rendering of a CT scan of a lobster.

The wavelet method is rooted in time-frequency analysis. A wavelet is a fast-decaying function with zero average; wavelets have good localization properties in both the spatial and frequency domains, and a volume can be described with a small number of wavelet coefficients.

Fig.2.6. Compression domain volume rendering of a CT scan of a lobster (figure is taken from [1])

## 2.5. Irregular Grid Volume Rendering

Irregular gridded data is unstructured data and no explicit connectivity is defined between cells. Irregular grids do not have uniform cubes, and irregular grid data comes in several different formats.

In comparison with regular grids, operations on irregular grids are more difficult and the effective visualization techniques are more sophisticated. Point location, interpolation, and shading, for example, are all harder for irregular grids.

The grids most suitable for rendering are tetrahedral grids and hexahedral grids. A drawback of hexahedral grids in comparison with tetrahedral grids is that the four points on a side of a cell may not lie on a plane, which the rendering algorithm must handle during rendering.

Tetrahedral grids have several advantages, such as easier interpolation, simple representation, and the fact that any other grid can be decomposed into a tetrahedral one. One disadvantage is that, because cells are decomposed into tetrahedra, the size of the data set tends to grow.

There are several different approaches to volume rendering irregular grids. One straightforward way is to resample the irregular grid to a regular grid. However, the sampling rate required to reach the necessary accuracy makes the resulting regular-grid volume too large for storage and rendering purposes.

The feed-forward technique is also used for rendering irregular grids. In this method, cells are projected onto the screen one by one, accumulating into the final image. One advantage of these techniques is the ability to compute volumetric lighting models, and projection speeds up rendering. One disadvantage is the need to generate a correct ordering for the cell projections.

The following figures show images of irregular grid rendering.

Fig.2.13. Irregular grid rendering of Blunt Fin (figure is taken from [9])

Fig.2.14. Irregular grid rendering of Combustion Chamber (figure is taken from [9])

## 2.6 Volume rendering optimizations

Volume rendering can create informative images that are helpful in data analysis, but a drawback is that a lot of time is required to generate a high-quality image.

In object-order volume rendering, the contribution of each volume sample to the image plane is calculated, which is a costly operation for large data sets and leads to render times that are non-interactive. Although intermediate output in the image plane would be useful, the partial image results are not representative of the final image. It is therefore useful to be able to create a lower-quality image in less time for interaction purposes.

To generate the full-resolution image, the data is processed bit by bit, while to generate a lower-resolution image the data is processed byte by byte; a byte is taken to represent an element of the object or the background according to whether more than four of its bits are set.

Image-order volume rendering involves sampling along each ray to determine pixel values and opacities. In discrete ray casting, it is quite expensive to discretize every ray cast, and this is unnecessary for parallel projections: when all the rays are parallel, one ray can be discretized into a 26-connected line and used as a template for all other rays. This method is called template-based volume viewing.

If this template is used to cast a ray from each pixel, some voxels in the data may contribute to the image twice. To solve this problem, the rays are instead cast from a base plane. In this way, each data sample can contribute at most once to the final image, and all data samples can contribute. As a final step, resampling with bilinear interpolation is needed to determine the pixel values from the ray values.

One obvious optimization for both discrete and continuous ray casting is to limit the sampling to the segment of the ray that actually intersects the data. To limit the ray segment further, polygon-assisted ray casting can be used, in which the objects contained within the volume are approximated by a crude polyhedral representation. The polygons are projected twice using conventional graphics hardware to create two Z-buffers: the first is the standard closest-distance Z-buffer, and the second is a farthest-distance Z-buffer. Since the object is completely enclosed within the polyhedral representation, these values can be used as the start and end points of a ray segment.
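The start and end points of the ray segment play the same role whether they come from the two Z-buffers or, more simply, from intersecting the ray with the volume's bounding box. A sketch of the latter using the standard slab method (an assumed simplification, not the polygon-assisted method itself):

```python
def ray_box_segment(origin, direction, box_min, box_max):
    """Intersect a ray with an axis-aligned box (slab method).

    Returns (t_near, t_far), the parameter interval in which sampling
    is needed, or None if the ray misses the box. t_near starts at 0
    so only the part of the ray in front of the origin is considered.
    """
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not lo <= o <= hi:
                return None       # parallel to the slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return None           # slab intervals do not overlap: miss
    return t_near, t_far
```

Sampling only between t_near and t_far skips all the empty space outside the data, which is exactly the saving the two Z-buffers provide at a finer granularity.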

## CHAPTER 3

## Techniques for ray casting regular volume and rectilinear volume

In this chapter, I would like to describe techniques for ray casting regular volumes and rectilinear volumes.

## 3.1 Techniques for rectilinear volume

## 3.1.1 Isosurfacing of rectilinear volume

The algorithm has three steps: traversing a ray through cells that do not contain an isosurface, analytically computing the isosurface when intersecting a voxel that contains it, and shading the resulting intersection point. This process is repeated for each pixel on the screen. One advantage is that adding incremental features to the renderer only incrementally increases the cost. For instance, if multiple isosurfaces are visualized with some of them rendered transparently, the correct compositing order is guaranteed, since the volume is traversed in front-to-back order along the rays. Moreover, shading techniques such as shadows and specular reflection can easily be incorporated for enhanced visual cues. Another advantage is the ability to exploit texture maps that are much larger than physical texture memory, which was available only up to 64 MB at that time.

For a rectilinear array, that is, a regular volume with even grid-point spacing, the ray-isosurface intersection is straightforward. To find an intersection (Fig. 5), the ray a + tb traverses the cells in the volume, checking each cell it passes through. If a cell's data range bounds the isovalue, an analytic computation is performed to solve for the ray parameter t at which the interpolated field value equals the isovalue.

Fig. 5. The ray traverses each cell (left) and, when a cell is encountered that has an isosurface in it (right), an analytic ray-isosurface intersection computation is performed.

When the field inside a cell is approximated by trilinear interpolation between the discrete grid points, the intersection equation expands to a cubic polynomial in t, and only the roots of this polynomial need to be examined. If there are multiple roots, corresponding to multiple intersection points, the smallest t is used. If the polynomial has no roots, the ray misses the isosurface in the cell. An example of this is shown in Fig. 6.

Fig. 6. (a) The isosurface from the marching cubes algorithm. (b) The isosurface resulting from the true cubic behavior inside the cell.
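In place of the analytic cubic solve, the same cell-level test can be illustrated numerically: if the interpolated value minus the isovalue changes sign between the ray's entry and exit points of a cell, bisection locates the intersection parameter t. This is a hedged sketch of an alternative root finder, not the method of the text; the field function f and tolerance are assumptions:

```python
def intersect_isosurface(f, iso, t_in, t_out, tol=1e-8):
    """Find t in [t_in, t_out] with f(t) == iso by bisection.

    f(t) is the (e.g. trilinearly interpolated) field value along the
    ray inside one cell. Returns None if the entry and exit values do
    not bracket the isovalue; note that, unlike the analytic cubic
    solve, this misses paired roots that enter and leave the surface
    within the cell.
    """
    a, b = f(t_in) - iso, f(t_out) - iso
    if a == 0.0:
        return t_in
    if a * b > 0.0:
        return None               # no sign change across the cell
    lo, hi = t_in, t_out
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) - iso) * a > 0.0:
            lo = mid              # crossing lies in the upper half
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The smallest bracketed root corresponds to the first visible intersection, matching the "smallest t" rule above.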

## 3.1.2 Maximum-intensity projection of rectilinear volume

The maximum-intensity projection (MIP) algorithm searches for the largest data value that intersects a particular ray. It uses the same shallow spatial hierarchy described above for isosurface extraction. In addition, a priority queue is used to track the cells or macrocells with the maximal values; the precomputed maximum data value of a region is employed as the priority value for its entry. The algorithm repeatedly pulls the entry with the largest maximum from the priority queue and splits it into smaller (lower-level) macrocells, each of which is inserted into the priority queue with the precomputed maximum data value for its region of space.

When the ray reaches an individual cell, a bilinear interpolation of the data values is computed at the intersections of the ray with the cell faces, and the maximum of these values is stored back in the priority queue. Finally, when one of these data maxima emerges at the top of the priority queue, the algorithm has found the maximum data value for the entire ray.

To reduce the length of the priority queue, the algorithm executes a single trilinear interpolation of the data at one point to create a lower bound for the maximum value along the ray; macrocells and data cells whose maxima do not exceed this lower bound are never entered into the priority queue. To choose the interpolation point, the t corresponding to the maximum value of the previous ray the processor computed is used. Normally the maximum for rays within the same block of pixels occurs at a similar depth, which exploits image-space coherence; even when it does not, the interpolated value still gives a valid lower bound on the maximum along the ray. If this t value is unavailable, the middle of the segment where the ray meets the data volume is chosen. This simple heuristic increases performance for many datasets.
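The priority-queue refinement can be sketched in one dimension: each macrocell enters the queue keyed by its maximum, the entry with the largest key is repeatedly split, and the search stops as soon as a single data value reaches the top. The 1D simplification and the on-the-fly recomputation of chunk maxima (a real implementation would precompute them) are assumptions:

```python
import heapq

def mip_max(data, cell_size=4):
    """Find max(data) by hierarchical refinement with a priority queue.

    Macrocells are index ranges of `data`; heapq is a min-heap, so the
    priorities are negated maxima. Cells whose maximum cannot win are
    never split, which is the source of the speedup on large volumes.
    """
    queue = []
    for start in range(0, len(data), cell_size):
        chunk = data[start:start + cell_size]
        heapq.heappush(queue, (-max(chunk), start, len(chunk)))
    while queue:
        neg_max, start, length = heapq.heappop(queue)
        if length == 1:
            return -neg_max          # a single value topped the queue
        half = length // 2           # split the macrocell in two
        for s, n in ((start, half), (start + half, length - half)):
            heapq.heappush(queue, (-max(data[s:s + n]), s, n))
```

Only macrocells along the "winning" branch are ever refined; the rest stay coarse in the queue.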

The MIP algorithm employs the 3D bricking memory layout to obtain efficient cache utilization when traversing the data values, in the same way as the isosurface extraction algorithm.

## 3.2 Techniques for ray casting regular volume

## 3.2.1 Image-order techniques

There are several optimizations for this algorithm. One decreases the number of steps by traversing only part of each ray, while another concerns how the data is represented in memory. Since this algorithm was developed on a machine with only 32K of RAM, the data was compressed using a scan-line representation that stores a list of end points representing the segments occupied by the object. Although this representation is compact, the intersection calculation becomes more time-consuming.

In comparison with binary ray casting, discrete ray casting produces surface and composited projections of multivalued data instead of only displaying surfaces within binary data. It traverses a discrete representation of each ray and determines the closest data sample. For each pixel, a ray is cast in the viewing direction and then discretized with a 3D line scan-conversion (voxelization) algorithm to determine which data samples contribute to the pixel.

For the 3D discrete topology of 3D paths, there are three types of connected paths, 6-, 18-, and 26-connected, produced by a voxelization algorithm that outputs a 3D discrete ray. These connected paths are built upon the three adjacency relations between consecutive voxels along the path, as shown in figure 2.2.

Fig.2.2. 6-, 18-, and 26-connected paths (figure is taken from [1])

Two voxels are 6-connected if they share a face, 18-connected if they share a face or an edge, and 26-connected if they share a face, an edge, or a vertex. A 6-connected path is a sequence of voxels in which each pair of consecutive voxels vi, vi+1 is 6-connected; 18- and 26-connected paths are defined analogously. The voxels along the path determine the final pixel value: the path is traversed until the first surface voxel is encountered, that voxel is shaded, and the resulting color value is stored in the pixel.
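The three adjacency relations can be checked directly from the coordinate differences of two voxels (a small illustrative helper; the function name is an assumption):

```python
def connectivity(v1, v2):
    """Classify the adjacency of two voxels given as (x, y, z) triples.

    Returns 6, 18, or 26 for face-, edge-, or vertex-sharing neighbors,
    and None if the voxels are not adjacent (or are identical).
    """
    d = [abs(a - b) for a, b in zip(v1, v2)]
    if max(d) != 1:
        return None                    # too far apart, or the same voxel
    nonzero = sum(1 for x in d if x)   # number of axes that differ
    # 1 differing axis -> shared face, 2 -> shared edge, 3 -> shared vertex.
    return {1: 6, 2: 18, 3: 26}[nonzero]
```

Note that 6-connected neighbors are also 18- and 26-connected; the helper returns the tightest class.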

To produce a shaded image, the value of the closest intersection is stored at each pixel and passed to a 2D discrete shader. Better results can be obtained by performing a 3D discrete shading operation instead. Normal-based contextual shading estimates the normal of a voxel from the orientation of the voxel face struck by the ray. Since a face of a voxel can have only six possible orientations, the error in the approximated normal can be considerable.

By performing gray-level shading, more accurate results can be obtained: the gray-level gradient at a position can be approximated with central differences:

Gx = (f(x + 1, y, z) − f(x − 1, y, z)) / (2 Dx)
Gy = (f(x, y + 1, z) − f(x, y − 1, z)) / (2 Dy)
Gz = (f(x, y, z + 1) − f(x, y, z − 1)) / (2 Dz) (Eq.2.6)

where (Gx, Gy, Gz) is the gradient vector and Dx, Dy, and Dz are the distances between neighboring samples in the x, y, and z directions, respectively. For the shading calculation, the gradient vector is used as the normal vector and the resulting intensity value is stored in the image. The normal estimation can be performed at each point sample, and this information, together with the distance, can be used to shade the point sample.
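The gray-level gradient computation can be sketched directly (unit sample spacing Dx = Dy = Dz = 1, the nested-list volume layout, and the function name are assumptions):

```python
def gray_level_gradient(f, x, y, z):
    """Approximate the gray-level gradient (Gx, Gy, Gz) at voxel (x, y, z)
    by central differences, assuming unit spacing between samples.

    f is a volume indexed f[z][y][x]; (x, y, z) must be an interior voxel
    so that all six neighbors exist.
    """
    gx = (f[z][y][x + 1] - f[z][y][x - 1]) / 2.0
    gy = (f[z][y + 1][x] - f[z][y - 1][x]) / 2.0
    gz = (f[z + 1][y][x] - f[z - 1][y][x]) / 2.0
    return gx, gy, gz
```

On a linear ramp the central differences recover the true slope exactly, which makes the routine easy to verify.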

Instead of stopping at a discrete or continuous surface intersection, the whole ray can be traversed, with the maximum value encountered along the ray stored in the image-plane pixel. Compared with a surface projection (figure 2.3), a maximum projection (figure 2.4) reveals some internal parts of the data; similarly, an average projection stores the average of all values along the ray. A composited projection (figure 2.5) is obtained by defining a color and an opacity for each scalar value and accumulating intensity along the ray, revealing 3D internal features and 3D structure information.

Fig.2.3. A surface projection of a nerve cell (figure is taken from [1])

Fig.2.4. A maximum projection of a nerve cell (figure is taken from [1])

Fig.2.5. A composited projection of a nerve cell (figure is taken from [1])

Zero-order interpolation is used by both binary ray casting and discrete ray casting. Its advantages are simplicity and speed, while aliasing effects are a disadvantage. Higher-order interpolation functions are used to create more accurate images.

Given an array of data samples, an image-order volume rendering algorithm defines a color and an opacity at each sample; interpolation functions then specify the color and opacity between samples.

The Phong illumination model is used in the shading operation that generates the array of color values. The unit gradient vector, which serves as the surface normal, is obtained by partially differentiating the interpolation function with respect to each axis; a central differencing method yields a smoother set of gradient vectors.

There are several ways to locate surfaces within a scalar field. By making the opacity fall off at a rate inversely proportional to the magnitude of the local gradient vector, the best results are obtained, since the depth of the transition region then stays constant. Multiple iso-surfaces can be displayed in a single image by applying several classification mappings.

Rays are cast from the pixels, and trilinear interpolation functions are used to determine the value at each location along a ray. These values are composited in a back-to-front order to produce a single color. In this image-order ray-casting technique, the scalar value is determined at each stepping point, and the ray calculations for scalar value, shading, texture mapping, opacity, and depth cueing are also performed at each stepping point.

Although it does not produce a fully realistic image, this method provides a useful representation of the volumetric data and allows certain parameters in the shading equations to be adjusted. A simplified shading equation is used; the intensity as a function of wavelength, I(λ), is defined as:

I(λ) = Ia Ka(λ) + Kd(λ, S, M) Σj Ij (N · Lj) (Eq.2.7)

where N is the normal, Kd is the diffuse coefficient, Ka is the ambient coefficient, Ij is the intensity of the jth light source, Ia is the ambient intensity, and Lj is the vector to the jth light source. The diffuse coefficient can be defined as:

Kd (λ, S, M) = K (λ) Td (λ, S(x, y, z)) M (λ, x, y, z) (Eq.2.8)

where K is the actual diffuse coefficient, Td is the color transfer function, S is the sample array, and M is the solid texture map. The color transfer function is defined for red, green, and blue. The intensity integral is approximated as:

(Eq.2.9)

where d is the atmospheric attenuation, u is a vector along the ray, O(s) is the opacity transfer function, and bc is the background color. The color transfer functions are defined analogously to the opacity transfer function. Although selecting the desired transfer functions is difficult, different color and opacity transfer functions can be chosen to highlight different features in the volume.
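
A hedged sketch of evaluating the simplified diffuse shading equation (Eq. 2.7) at a single sample and wavelength; the clamp of N · Lj to non-negative values is an added assumption, and a fixed scalar `kd` stands in for the full Kd(λ, S, M) product:

```python
def shade(normal, lights, ka, kd, ambient):
    """Simplified diffuse shading: ambient term plus one diffuse term per light.
    `lights` is a list of (unit_vector_to_light, intensity) pairs."""
    intensity = ambient * ka
    for (lx, ly, lz), lj in lights:
        ndotl = normal[0] * lx + normal[1] * ly + normal[2] * lz
        intensity += kd * lj * max(0.0, ndotl)  # clamp back-facing lights (assumption)
    return intensity
```

In practice this is evaluated once per color channel, with the gradient-derived normal from the classification step.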

In a cell-by-cell processing method, an image-order ray-casting technique is used and each cell in the volume is processed in front-to-back order. Processing starts on the plane closest to the viewpoint, with the cell closest to the viewpoint, and continues in order of increasing distance from the viewpoint. Each cell is processed by first determining, for each scan line, the pixels it affects, and then computing an integration volume. The intensity calculation simplifies Equation 2.9 as:

(Eq.2.10)

To simulate light passing through translucent objects, the volume is considered a field of density emitters, tiny particles that emit and scatter light. The number of density emitters is proportional to the scalar value; they are used to model the occlusion of deeper parts of the volume, but both shadowing and color variation are ignored. The method is similar to the V-Buffer method, and the intensity I of light is calculated as:

I = ∫_{t1}^{t2} exp(−τ ∫_{t1}^{t} ρ(λ) dλ) ρ^γ(t) dt (Eq.2.11)

For the above equation, the ray is traversed from t1 to t2, accumulating at each t the density at that position, attenuated by the probability that light from that point reaches the eye. The parameters τ and γ are modifiable and control the attenuation and the spread of density values, respectively. Low values of γ create a diffuse cloud appearance, while higher values highlight the dense portions of the data. The maximum density value encountered along the ray can also be recorded, and by mapping these values to different color parameters, interesting effects can be achieved.
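
A numeric sketch of this density-emitter traversal under the stated definitions (the density callable `rho`, the step count, and the midpoint rule are illustrative assumptions):

```python
import math

def density_emitter_intensity(rho, t1, t2, tau, gamma, steps=1000):
    """Traverse the ray from t1 to t2, accumulating rho(t)**gamma attenuated by
    the optical depth accumulated so far (midpoint-rule integration)."""
    dt = (t2 - t1) / steps
    optical_depth = 0.0  # running inner integral of density
    intensity = 0.0
    for n in range(steps):
        t = t1 + (n + 0.5) * dt
        d = rho(t)
        intensity += math.exp(-tau * optical_depth) * (d ** gamma) * dt
        optical_depth += d * dt
    return intensity
```

With τ = 0 no attenuation occurs and the result reduces to the plain integral of ρ^γ; raising τ dims contributions from deeper parts of the volume.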

The various existing volume rendering techniques can be described in terms of an underlying transport theory model, in which a set of virtual particles probes and gathers information about the data. To reveal periodicities and similar hidden symmetries of the data, the virtual particles and their interaction laws can be chosen appropriately.

The intensity of light I can be defined using Krueger's transport theory:

(ω · ∇) I(p, ω) = −(σa(p) + σsc(p)) I(p, ω) + Q(p, ω) (Eq.2.12)

where p is a point, σsc is the scattering coefficient, σa is the absorption coefficient, and Q(p) is the generalized source, defined as:

Q(p, ω) = q(p, ω) + σsc(p) ∫ ρsc(ω′ → ω) I(p, ω′) dω′ (Eq.2.13)

This generalized source includes the emission q(p) at the specified point and the incoming intensity from all directions, scaled by the scattering phase function ρsc.

## 3.2.2 Shear-Warp Volume Rendering

To simplify the transformation from object space to image space, an intermediate coordinate system called sheared object space is introduced. The volume is first transformed into this intermediate coordinate system, which is constructed so that all viewing rays are parallel to the third coordinate axis. This can be formalized as a factorization of the view transformation matrix Mview as follows:

Mview = P · S · Mwarp (Eq.3.1)

where P is a permutation matrix that makes the z-axis the principal viewing axis, S transforms the volume into sheared object space, and Mwarp transforms sheared object coordinates into image coordinates.
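
As an illustrative sketch (assuming the permutation P has already made z the principal viewing axis), the shear S reduces, for a parallel projection, to a per-slice translation derived from the view direction:

```python
def shear_for_slice(view_dir, k):
    """Translation applied to slice k so that all viewing rays become parallel
    to the z-axis in sheared object space (parallel projection case)."""
    vx, vy, vz = view_dir
    sx, sy = -vx / vz, -vy / vz  # shear coefficients from the view direction
    return (sx * k, sy * k)      # offset of slice k in the intermediate image
```

A view straight down the z-axis needs no shear, while an oblique view slides each successive slice by a constant offset, which is what lets the compositing loop walk the slices in simple scanline order.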

The shear-warp algorithm consists of three conceptual steps:

1. Shear and resample the volume slices,

2. Project resampled voxel scanlines onto intermediate image scanlines, and

3. Warp the intermediate image into the final image.

Figure 2.7 illustrates these steps.

Fig.2.7. Illustration of volume rendering algorithm (figure is taken from [8])

There are three volume rendering algorithms based on the shear-warp factorization: a parallel projection rendering algorithm, a perspective projection rendering algorithm, and a fast classification algorithm.

## 3.2.2.1 Parallel Projection Rendering Algorithm

The volume and image data structures can be traversed in scanline order, since voxel scanlines are aligned with pixel scanlines in the intermediate image. Scanline-based coherence data structures use a run-length encoding of the voxel scanlines and store, for each pixel, an offset to the next non-opaque pixel in the same scanline.
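
The voxel-scanline encoding might be sketched as follows, assuming opacity values with 0.0 meaning fully transparent (the alternating pair layout is an illustrative choice, not the paper's exact format):

```python
def run_length_encode(scanline, transparent=0.0):
    """Encode a voxel scanline as alternating (transparent_count, opaque_run)
    pairs so the renderer can skip transparent voxels without examining them."""
    runs = []
    i = 0
    while i < len(scanline):
        start = i
        while i < len(scanline) and scanline[i] == transparent:
            i += 1                        # count a run of transparent voxels
        skip = i - start
        run = []
        while i < len(scanline) and scanline[i] != transparent:
            run.append(scanline[i])       # collect the following opaque run
            i += 1
        runs.append((skip, run))
    return runs
```

During rendering the `skip` counts let the inner loop jump over transparent stretches in constant time per run.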

For a parallel projection, the volume is transformed to sheared object space by translating each slice. Figure 2.8 shows the transformation from object space to sheared object space.

Fig.2.8. Illustration of the parallel projection (figure is taken from [8])

To skip voxels that are transparent or occluded, the algorithm performs work only for voxels that are both non-transparent and visible. Resampling and compositing are executed by streaming through both the voxel and image data structures in scanline order.

For voxels that are not skipped, a tightly-coded loop performs shading, resampling, and compositing. The resampling weights are the same for every voxel in a slice, since each slice is translated by a uniform offset.

Figures 2.9 and 2.10 show images produced by parallel projection rendering.

Fig.2.9. Volume rendering of a parallel projection with the human head (figure is taken from [8])

Fig.2.10. Volume rendering of a parallel projection with an engine block (figure is taken from [8])

## 3.2.2.2 Perspective Projection Rendering Algorithm

Perspective projections extend the advantages of parallel projections: they give additional cues for resolving depth ambiguities and compute occlusions correctly.

For perspective projections, the shear-warp factorization gives a simple and efficient solution to the sampling problem. Each slice of the volume is transformed to sheared object space by a translation and a uniform scale; the slices are then resampled and composited together. Figure 2.11 illustrates the perspective projection.

Fig.2.11. Illustration of the perspective projection (figure is taken from [8])

The difference between the perspective algorithm and the parallel algorithm is that, in the perspective case, each voxel must be scaled as well as translated during resampling. The perspective algorithm therefore needs extra time to compute the resampling weights, so the parallel projection algorithm is cheaper than the perspective algorithm.

Figure 2.12 shows an image produced by perspective projection rendering.

Fig.2.12. Volume rendering with a perspective projection of an engine block (figure is taken from [8])

## 3.2.2.3 Fast Classification Algorithm

This algorithm evaluates the opacity transfer function during rendering and is moderately slower than the previous algorithms. It computes bounds on the parameters of the opacity transfer function for each block of the volume. If a region is found to be transparent, the corresponding portion of the scanline is discarded; otherwise the algorithm subdivides the scanline and recurses. Once the size of the current scanline portion falls below a threshold, it is rendered instead of subdivided.
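
The recursive subdivision could be sketched like this, with a hypothetical predicate `is_transparent_range(lo, hi)` standing in for the precomputed opacity-transfer-function bounds:

```python
def visible_portions(values, is_transparent_range, start, end, threshold=2):
    """Return [start, end) portions of a scanline that must be rendered.
    is_transparent_range(lo, hi) reports whether every value in [lo, hi]
    maps to zero opacity under the current transfer function."""
    if start >= end:
        return []
    lo, hi = min(values[start:end]), max(values[start:end])
    if is_transparent_range(lo, hi):
        return []                      # whole portion is transparent: skip it
    if end - start <= threshold:
        return [(start, end)]          # small enough: render directly
    mid = (start + end) // 2           # otherwise subdivide and recurse
    return (visible_portions(values, is_transparent_range, start, mid, threshold) +
            visible_portions(values, is_transparent_range, mid, end, threshold))
```

In the real algorithm the min-max bounds per block are precomputed rather than rescanned, so changing the transfer function requires no re-encoding of the volume.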

Unlike the algorithms that operate on a run-length encoded volume, this algorithm uses the unencoded volume; it is not practical to keep three copies of the unencoded volume, since it is much larger than a run-length encoding. It is therefore easier to use a small range of viewpoints while modifying the classification function, and to switch to one of the previous two rendering algorithms for rendering animation sequences.