In this paper we present the problems of converting a point cloud measured with a terrestrial laser scanner into a realistic 3D polygonal model that can satisfy high modeling and visualization demands. Today 3D scanners are becoming a standard source of input data in many application areas, such as architecture, archaeology and surveying. We convert a usually unstructured point cloud into a consistent polygonal model, analyze the principal problems concerning surface interpolation and the visualization modes of 3D objects, and finally present the most popular software packages, languages and libraries used for the visualization of point cloud data.
Keywords: modeling, reconstruction, triangulation, mesh simplification, 3D scanning, point cloud, visualization
Traditionally, laser scanning has been used in construction to measure distances with high accuracy. With decreasing capture times and improved mobility, laser scanners are now also used to scan architectural sites, to preserve cultural heritage and to record complex buildings, linking interior measurements to exterior measurements. High modeling and visualization demands also arise in other applications, such as video games, movies, virtual reality and other graphics applications. The reconstruction of precise surfaces from the unorganized point clouds delivered by laser scanner measurements is a hard problem, especially in the case of incomplete, noisy and sparse data.
2. 3D model reconstruction using a terrestrial laser scanner
Nowadays, the measurement phase, i.e. the recovery of the 3D shapes and models, is largely separated from the modeling and visualization part.
Laser scanning is a relatively new geodetic technique in which the geometry of a structure or an object is measured fully automatically, without reflectors, with high precision and speed. The raw result is a so-called point cloud.
Most laser scanners are based on the time-of-flight principle. This technique allows distance measurements up to several hundred metres; even ranges beyond one kilometre are achievable. The advantage of long range comes at the cost of lower accuracy in the distance measurement (approximately one centimetre).
Besides the time-of-flight principle, the phase measurement principle is the other common technique for medium ranges. The range is restricted to about one hundred metres, but in contrast to the time-of-flight principle, distance accuracies of a few millimetres are possible.
The laser scanner records the points three-dimensionally by measuring the horizontal and vertical angle, as well as the spatial distance, for each point. The distance is measured electro-optically using the pulse or phase-comparison procedure, depending on the instrument type. The coordinates of the points are obtained in a Cartesian system specific to the scanner using simple trigonometric functions. The horizontal and vertical angles are incremented automatically at pre-established intervals.
Fig. 1 Data acquisition with laser scanner
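A minimal sketch of the angle-and-distance to Cartesian conversion described above, in Python with NumPy; the angle conventions (elevation above the horizontal plane, azimuth within it) are assumptions, as they vary between instruments:

```python
import numpy as np

def polar_to_cartesian(distance, h_angle, v_angle):
    """Convert raw scanner observations (distance, horizontal and
    vertical angles in radians) to Cartesian coordinates in the
    scanner's own system. Assumed convention: v_angle is the
    elevation above the horizontal plane, h_angle the azimuth."""
    x = distance * np.cos(v_angle) * np.cos(h_angle)
    y = distance * np.cos(v_angle) * np.sin(h_angle)
    z = distance * np.sin(v_angle)
    return np.column_stack((x, y, z))
```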
A point cloud is a set of points, each usually specified by a 3-tuple [x, y, z] defined in an orthogonal coordinate system representing 3D space.
Point cloud data files are often simple text files, with each row containing a point. There are many variations; a set of header rows that describes the data, various characters separating the values (delimiters), other information (e.g. colour, surface orientation), binary encoding of the coordinates, etc.
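As a sketch of such a file, a minimal reader for a plain delimited XYZ text file; the file name is hypothetical, and the first three columns are assumed to be the coordinates:

```python
import numpy as np

def read_xyz(path, delimiter=None, skip_header=0):
    """Read a delimited point cloud text file, one point per row.
    Extra columns (intensity, colour, normals) are ignored here."""
    data = np.loadtxt(path, delimiter=delimiter, skiprows=skip_header)
    return data[:, :3]  # keep only x, y, z

points = read_xyz("scan.xyz")  # hypothetical file name
print(points.shape)            # (number_of_points, 3)
```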
Sometimes these points are referred to as vertices if they are to be used as corners of a polygonal mesh, but a mesh constructed from a point cloud does not necessarily include these points as vertices.
Besides the 3D coordinates (geometric information), the point cloud also carries the recorded intensity (radiometric information). Additionally, a photographic picture can be taken with an integrated camera.
The accuracy of the points in space is on the order of a few millimetres. Geo-referencing is possible using known points in the scan area with given 3D coordinates.
Fig. 2 Point cloud with intensity information
Mostly, an object has to be scanned from different viewpoints. The single point clouds are then merged into one common point cloud; this operation is called registration.
For registration, homologous points (tie points) or objects (e.g. spheres, cylinders) are required, and at least three tie points or spheres must be visible in each scan. The spheres can be replaced by reflectors or special targets.
The default coordinate system of the laser scanner, the Scanner Coordinate System (SCS), is centred on the scanner, which corresponds to (0, 0, 0) in a local (X, Y, Z) coordinate system. All raw measurements taken by the scanner refer to this arbitrary system. As previously mentioned, there is no need to know the X, Y, Z of the scanner centre, or to level the scanner, as the registration process transforms the SCS into the local coordinate system. Since the targets have coordinates known from surveying in control, the system performs a mathematical transformation from one system (scanner) to the other (local control); this transformation is similar to the common translation from GPS (WGS-84) to state plane coordinates.

Assuming a redundant set of known coordinates (more than three) was used to complete the registration, the system reports how well the transformation fits on a point-by-point basis. The surveyor can then eliminate or weight points based on these results or on other local planar projections. Once a registration is complete, all points from the scan(s) are viewed, stored and exported in the global coordinate system. Thus the local scanner frame can be integrated into existing reference frames, yielding a registered point cloud.
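A minimal sketch of this transformation step, estimating a least-squares rigid motion from tie-point correspondences with the Kabsch/Horn method; the function name and the SVD formulation are illustrative, not the algorithm of any particular scanner software:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping scanner-frame
    tie points `src` (n x 3, n >= 3) onto control points `dst`."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation
    t = c_dst - R @ c_src
    return R, t

# Per-point residuals indicate how well the transformation fits:
# residuals = dst - (src @ R.T + t)
```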
3. The surface interpolation problem
Surface reconstruction is a difficult problem: the measured points are usually unorganized and often noisy, and the surface can be arbitrary, with unknown topological type and sharp features. The reconstruction method must therefore infer the correct geometry, topology and features from a finite set of sample points alone.
The data can have quite different properties that must be considered when solving the surface interpolation problem. Many methods have been developed to derive a triangular mesh representation from a point cloud. Given the polygonal surface, various techniques can then be used for post-processing operations (smoothing, texturing) and for the visualization of the 3D model.
The conversion of the measured data into a consistent polygonal surface is generally based on four steps:
1. Pre-processing: erroneous data are eliminated in this phase.
2. Determination of the topology of the object's surface: the relations between adjacent parts of the surface have to be derived. This operation needs to take possible constraints (e.g. breaklines) into consideration, mainly to preserve special features (like edges).
3. Generation of the polygonal surface: triangular meshes are created satisfying certain quality requirements, e.g. a limit on the mesh element size or no intersection of breaklines.
Fig. 3 Generating a polygonal surface
4. Post-processing: when the model is created, editing operations are commonly applied to refine the polygonal surface.
In the first step the principal operations are data sampling based on the curvature of the points, noise reduction, outlier rejection and hole filling. In this step wrong correspondences can be removed automatically or manually by visual inspection.
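As a sketch of the outlier-rejection step, a simple statistical filter based on nearest-neighbour distances; the neighbourhood size and threshold are illustrative defaults, not values prescribed here:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds the global mean by more than std_ratio standard deviations."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)   # column 0 is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```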
The core part of all reconstruction programs is the triangulation. This converts the given set of points into a consistent polygonal model (mesh).
Fig. 4 Delaunay triangulation, Voronoi diagram, and their relationship

One of the most widely used methods is the Delaunay triangulation. The Delaunay triangulation is closely related geometrically to the Voronoi diagram, see Figure 4. The Voronoi diagram splits the plane into a number of polygonal regions called tiles. Each tile has one sample point in its interior, called a generating point.
All other points inside the polygonal tile are closer to the generating point than to any other. The Delaunay triangulation is created by connecting all generating points which share a common tile edge.
The most desirable feature of the Delaunay triangulation is that the triangles can be shown to be as close to equilateral as possible, so thin triangles are avoided. The method has its problems, though: the complexity of implementing it is one, and the memory required to triangulate large amounts of data is another.
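A minimal 2.5D sketch using SciPy's Delaunay triangulation, assuming a terrain-like cloud that can be triangulated in the xy-plane with z kept as height; fully 3D closed surfaces require more elaborate reconstruction methods:

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(1000, 3)   # placeholder for a measured cloud
tri = Delaunay(points[:, :2])      # Delaunay triangulation in the plane
faces = tri.simplices              # (n_triangles, 3) vertex indices
print(faces.shape)
```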
The triangulation generates vertices, edges and faces (representing the analysed surface) that meet only at shared edges. Finite element methods discretize the measured domain by dividing it into many small elements, typically triangles or quadrilaterals in two dimensions and tetrahedra in three dimensions. An optimal triangulation is defined by measuring the angles, edge lengths, heights or areas of the elements, while the error of the finite element approximation is usually related to the minimum angle of the elements.
Fig. 5 Applying corrections to the polygonal surface
The last step consists of editing operations that refine the polygonal surface, such as edge corrections, triangle insertion or surface corrections through polygon editing. The polygonal model can also be improved by adding new vertices and adjusting the coordinates of existing vertices. Moreover, spikes can be removed with smoothing functions.
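One common smoothing approach, sketched here under the assumption of a triangle mesh with 0-based vertex indices (not necessarily the operation any particular package applies), is Laplacian smoothing, which pulls every vertex toward the centroid of its neighbours and thereby flattens spikes:

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Move each vertex a fraction lam toward the centroid of its
    mesh neighbours; repeated passes remove spikes but can shrink
    the surface slightly."""
    neighbors = [set() for _ in range(len(vertices))]
    for a, b, c in faces:              # adjacency from triangle edges
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                              for i, nb in enumerate(neighbors)])
        v += lam * (centroids - v)
    return v
```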
4. The visualization of 3D models
The visualization of a 3D model is often the only product of interest for the external world and remains the only possible contact with the model. Therefore a realistic and accurate visualization is often required.
Nowadays, with increasing computer memory, shading and texture are added to all models, but in order to visualize big data sets accurately, much of the information they contain is often reduced. The consequences are that the accuracy of the data is lost, as is the geo-referencing (most software packages use their own coordinate systems), and that high-resolution textures become unusable (because of the control on the level of detail). On the other hand, low visualization accuracy does not attract end users and cannot justify the high cost of producing the photogrammetric 3D model.
After the creation of a triangular mesh, the results are usually visualized, depending on the package used and on the requirements, in the following ways:
Wireframe mode: the easiest way of representing a 3D object. It consists of points, lines and curves and describes only the edges in a transparent drawing, without texture or shading information. This technique is mainly used in computer-aided design (CAD) packages. This 3D model visualization is:
- a simple representation (points, lines, edges);
- a transparent drawing (no hidden-surface removal);
- sometimes unclear for complex objects.
Fig. 6 Visualization of the model using the wireframe mode
Shaded mode: based on the assignment of surface properties to the object (colour, normal information, reflectance, transparency).
There are many different shading algorithms; the best known are flat shading and smooth shading.
The key difference between flat and smooth shading lies in the way the normals are used. Flat shading is valid for small objects when the light source and the viewer are at infinity; for high levels of detail, a great number of flat-shaded polygons is required, so it is of little value for realism.
1. Flat shading: one normal vector and one lighting computation per face/triangle.
2. Smooth shading (Gouraud, Phong): one normal per vertex, averaged from the surrounding faces.
Fig. 7 Visualization of the model using the flat shading mode
Smooth (or interpolated) shading can be implemented with many algorithms, but the two classic approaches are Gouraud and Phong. Gouraud shading computes a colour for each vertex and then generates intermediate colours along each edge by interpolation between the vertices. Phong shading requires a normal interpolation for each pixel and is therefore quite prohibitive and time-consuming for real-time use.
The characteristics of flat shading are:
- fast and simple;
- the shading of a polygon is independent of the shading of adjacent polygons;
- gives the object a faceted appearance.
The characteristics of smooth shading (see the sketch after this list) are:
- approximate the normal to the surface at a vertex by averaging the normals of all abutting polygons;
- calculate the intensity at each vertex using illumination equations;
- use linear interpolation along a polygon edge to compute the intensity at each edge pixel;
- use linear interpolation along a scan line to compute the intensity at each interior pixel of a polygon.
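A sketch of the Gouraud interpolation just listed, collapsed into closed form via barycentric coordinates, which give the same result as interpolating first along the edges and then along the scan lines; the function is purely illustrative:

```python
import numpy as np

def gouraud_intensity(p, tri, intensities):
    """Interpolate per-vertex intensities at a point p inside a 2D
    triangle using barycentric weights -- the closed form of the
    edge/scan-line interpolation used by Gouraud shading."""
    a, b, c = tri
    m = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(m, p - a)
    w = np.array([1.0 - u - v, u, v])      # barycentric weights
    return w @ intensities

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(gouraud_intensity(np.array([0.25, 0.25]), tri,
                        np.array([1.0, 0.5, 0.0])))  # -> 0.625
```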
Textured mode: it is used for photorealistic visualization of the 3D models (image-based rendering). Texture mapping in its simplest form involves a single texture (image, orthophoto) being mapped onto the surface composed of one or more polygons.
When mapping an image onto an object, the colour of the object at each pixel is modified by the corresponding colour derived from the texture. Compared with flat shading, texture mapping can reduce the number of polygons used, but it increases the 'weight' of the model (in terms of visualization).
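A sketch of the per-pixel lookup described above, using a nearest-neighbour sampler; real renderers use bilinear or mipmapped filtering, and the image array layout is an assumption:

```python
import numpy as np

def sample_texture(texture, uv):
    """Map a (u, v) coordinate in [0, 1]^2 to a texel of an
    (height, width, 3) image array, nearest-neighbour style."""
    h, w = texture.shape[:2]
    u, v = np.clip(uv, 0.0, 1.0)
    return texture[int(v * (h - 1)), int(u * (w - 1))]
```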
Fig. 8 Visualization of the model using the textured mode
The characteristics of the textured mode are:
- combination of geometry and texture information;
- single / multi texture;
- images / orthophotos / maps;
- memory and processing power required.
Rendering a realistic textured 3D model requires a lot of memory; therefore a graphics interface called AGP (Accelerated Graphics Port) was developed. AGP allows textures to be stored in main memory, which is larger than video memory, and can speed up the transfer of large textures between memory, CPU and video adapter.
In the case of 3D terrain models (DTM), other common representations are contour maps, colour-shaded models (hypsometric shading) and slope maps. In a contour map, contour lines are generated from the terrain model by intersecting horizontal planes with the network and then displayed. In a colour-shaded model, the height information of the model is displayed in colours.
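A sketch of both representations for a gridded DTM, using matplotlib; the synthetic height field stands in for real terrain data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic height grid standing in for a measured DTM.
x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
z = np.sin(x) * np.cos(y)

plt.contourf(x, y, z, levels=50, cmap="terrain")   # hypsometric shading
plt.colorbar(label="height")
plt.contour(x, y, z, levels=10, colors="k")        # contour lines
plt.show()
```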
In general, realistic 3D models (shaded or textured) visualize the final result much better than a wireframe representation. With a wireframe, because no hidden-surface removal is performed, it is not always clear from which viewpoint we are looking at the model; shading and rendering, by contrast, can greatly enhance the realism of the model. To decide which type of model to produce, different factors have to be considered, such as time, hardware and needs. If time is limited or the hardware cannot handle big files, detailed shaded models might not be necessary.
5. Software, languages and libraries
Visualization of the data can be done using various software packages, such as:
Geomagic, PolyWorks, Rapidform
TerrainView, Skyline, ArcGIS, ArcInfo, Erdas
The most popular languages used for 3D model visualization are:
VRML (VRML1 and VRML2)
The libraries used for 3D model visualization are:
OpenGL
DirectX / Direct3D
The characteristics of OpenGL (Open Graphics Library) are:
• open standard
• available for most modern operating systems
• for writing applications that produce 2D and 3D computer graphics
• draws complex three-dimensional scenes from simple primitives
• a single, uniform API that spares the programmer the differences between 3D accelerators
• good programming skills are required
The characteristics of DirectX / Direct3D are:
• developed by Microsoft
• available only for Windows
• a collection of application programming interfaces (APIs)
• handles multimedia, game programming and video tasks
• good programming skills are required
File formats for 3D model visualization are:

DXF (Drawing Exchange Format):
• very popular (AutoCAD)
• only wireframe model (lines, edges, points)
• big file size
• no texture information
• easy to implement

OBJ (Object File) (a minimal writer is sketched after this list):
• open standard
• geometry and texture supported
• accepted by most software systems (Maya, Blender, MeshLab, 3D Studio Max)

COLLADA (COLLAborative Design Activity):
• data exchange format for 3D data (designed to transport content between applications)
• XML based
• using XML to represent structured data enables 3D tools to use the full arsenal of modern database tools
• can include geometry, material and texture
• developed by Sony for the PlayStation 3 and PlayStation Portable, now an open standard
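A minimal sketch of a writer for the geometry-only subset of OBJ, assuming triangle faces and 0-based index arrays; materials and texture coordinates are omitted:

```python
def write_obj(path, vertices, faces):
    """Write vertices and triangle faces as a Wavefront OBJ file:
    one 'v' line per vertex, one 'f' line per face (OBJ indices
    are 1-based)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")
```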
In this article we have tried to find a way to create a computer model of an object that best fits reality. Polygons are usually the ideal way to represent measurement results accurately, providing an optimal surface description. While the generation of digital terrain models has a long tradition and has found efficient solutions, the correct modeling of closed surfaces or free-form objects is more recent, not yet completely solved, and still an important issue investigated in many research activities.
3D scanning using High Density Surveying (HDS) is a rapidly growing domain, with applications spanning many industries such as aerospace, architecture, film and television, archaeology and surveying. The ability to capture millions and even billions of 3D points means that a vast amount of information can be gathered about a particular object or landscape. However, all this data comes at a price: there are relatively few software solutions that can easily handle and process billions of points on a PC.
A big difficulty at the moment is the translation and interchange of 3D data between modeling and visualization packages. Each modeling package has its own (often binary) format, and even if files can be exported in other formats, the generated 3D file often cannot be imported into other packages. Usually the only workable export format from a CAD package is DXF, which has no texture, requires large storage and often produces considerable errors.