3D Graphics Visualization for Interactive Mobile User Navigation


    Most mobile devices were originally intended for voice communication, but technological advancement has made them useful in many more ways. Among these is the mobile 3D visual navigation system. 3D visual navigation arose in response to the drawbacks of conventional navigation and tour planning based on two-dimensional map displays, which are still the most common approach. The information a 2D map provides is limited, its presentation is often insufficient, and it offers little interaction. Applying 3D techniques brings realistic visualization to navigation and is more convenient on a mobile device [1].

    What is now called a smart mobile, smartphone or pocket PC was once called a PDA (Personal Digital Assistant), a lineage that goes back to Psion's first organiser in 1984. Since then, mobile devices have made remarkable progress in input methods, computing power, memory and storage, and are now equipped with dedicated graphics hardware for display. Combined with increasing wireless networking capabilities and a Global Positioning System (GPS) receiver, a mobile device offers the opportunity to interact with a map display showing the user's current location and orientation. A "mobile 3D map" is expected to be at least electronic, navigable, interactive and rendered in real time, running on a PDA or smartphone. Other related systems may claim to be 3D maps, but their representation of the environment is restricted or has no 3D components at all [2]. For example, car navigation systems commonly support a perspective 2D projection, which creates an illusion of three-dimensionality through the perspective view while the actual data is purely two-dimensional. Mobile systems are expected to be physically small, to fit in a pocket, and to be independent of external power sources; in this sense, a device embedded permanently in a car is not considered mobile [3].

    At first glance, it may seem that a 3D map on a mobile device for navigation involves only a single context, namely the user's location. This initial perception fails to take other contexts into account, such as visualization and interactivity capabilities. There have been many previous attempts at providing reliable interactivity in mobile 3D navigation systems; unfortunately, several obstacles remain unsolved. The central aim of this paper is to address the problems of 3D visualization for mobile user navigation and of interactivity among users. This is achieved with a 3D model suited for mobile devices that matches users' realistic perception.

    The remainder of this paper is organized as follows: Section 2 discusses related work; Section 3 presents the framework of the 3D virtual environment; Section 4 describes the proposed architecture of the framework; Section 5 describes the implementation of the framework; Section 6 presents the discussion; and Section 7 concludes the work.

2. Related work

    Mobile devices with a 3D map model allow the user to understand a new environment by following a computed path. The main interaction is between the mobile device and the user; however, the development of mobile 3D applications was long hindered by the lack of efficient mobile platforms, and even more by the lack of 3D rendering interfaces. Modern graphics processing units (GPUs) perform floating-point calculations much faster than most CPUs thanks to their highly parallel structure, which makes them more effective than general-purpose CPUs [4, 5, 6, 7, 8, 9, 10, 11], and they have found their way into various fields. Mobile devices have now developed to the point where direct 3D rendering at interactive rates is feasible for viewing 3D models.

    The problems within visualization come from viewing the scene. [12] considers a direction a good point of view if it minimizes the number of degenerated images, where a degenerated image is one in which more than one edge belongs to the same straight line; our design adopts this consideration. [13] has proposed a method for direct approximate viewpoint calculation, initially developed for scenes modeled by octrees; this paper adopts that implementation in order to ascertain the appropriate viewpoint of the entire environment.

    Early work on visual navigation started with the use of 3D graphs: [14] surveyed graph visualization and navigation techniques as used in information visualization. [15] performed an early experiment that visualized 3D vector graphics, small VRML animations and other multimedia on mobile data terminals for transmission over a GSM network. [16] established an approach to 3D modeling in which a full 3D model is created to generate the illusion of movement in 3D space; in our design, only the potentially visible set of the scene is rendered for navigation. [17] evaluated the use of smooth animated transitions between directories in a 3D treemap visualization.

    [18] presented spatial indoor location-sensing information designed with 3D perception in mind. [19] provides an approach for a 3D navigation system that utilizes the user's PDA with a built-in GPS receiver, which is in line with our design. [20] reports most of the work already done on viewpoint manipulation and describes that navigation tools can be classified as egocentric (moving a viewpoint through the world) or exocentric (moving the world in front of a viewpoint). These attributes were also classified in terms of general (exploratory) movement, targeted movement, specified-coordinate movement and specified-trajectory movement. The main idea is to provide visual feedback of the user's position [21]. The simplest feedback scheme is to permanently display the user's 3D coordinate position. This solution is not of great help, mainly because the position only has meaning if the user already has in-depth knowledge of the environment. More elaborate solutions are based on displaying a global, simplified view of the world added to the user's field of view [22]. Our design is aligned with this idea.

    A* pathfinding expands the open node with the lowest estimated cost to reach the target. [23] presents three methods, and the associated data structures, for finding the best open node. The first maintains a plain list of open nodes; the second, which is widely used, implements the open list as a priority queue; the third, which is faster than the other two, uses an array of stacks. We adopt the third method in our implementation of bi-A* pathfinding to ensure responsive interactive mobile user navigation.
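The array-of-stacks open list can be sketched as a bucket queue: one stack per integer f-cost, with a cursor tracking the smallest non-empty bucket. This is an illustrative interpretation of the third method, not code from [23]; `MAX_COST` and the class name are our own assumptions.

```python
# Sketch of an "array of stacks" open list for A*, assuming integer
# f-costs bounded above by MAX_COST. Push and pop are O(1) amortised,
# which is why it outperforms a list scan or a heap-based priority queue.
MAX_COST = 1024

class BucketOpenList:
    def __init__(self):
        self.buckets = [[] for _ in range(MAX_COST)]  # one stack per f-cost
        self.lowest = MAX_COST                        # smallest non-empty bucket

    def push(self, f_cost, node):
        self.buckets[f_cost].append(node)             # O(1) stack push
        self.lowest = min(self.lowest, f_cost)

    def pop_best(self):
        # Advance to the first non-empty bucket; amortised O(1) because
        # A* pops f-costs in non-decreasing order.
        while not self.buckets[self.lowest]:
            self.lowest += 1
        return self.buckets[self.lowest].pop()
```

Ties within one bucket are broken last-in-first-out, which tends to keep the search focused along the current path.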

3. The System Structure

    The system structure presents the 3D mobile user navigation resources and activities in a 3D space, which encompasses: satellites (the GPS signal source); the interconnection of GPS receiver nodes and the links between the nodes as the client part; and the server side that hosts the 3D model, as shown in Fig 1. The Global Positioning System (GPS) is a satellite-based radio-navigation system which provides specially coded satellite signals that can be processed in a GPS receiver, enabling the receiver to compute position, velocity and time [24]. The system utilizes the concept of one-way time-of-arrival (TOA) ranging. Satellite transmissions are referenced to highly accurate atomic frequency standards onboard the satellites, which are synchronized with a GPS time base. There are three major components of the GPS system: the space segment, the control segment and the user segment. The space segment consists of the GPS satellites, which transmit signals on two phase-modulated frequencies. The nominal constellation comprises 24 satellites (32 are now in orbit) distributed in 6 equally spaced circular orbital planes with an inclination of 55° relative to the equator and an altitude of approximately 20,200 kilometers [24, 25]. The control/monitoring segment monitors the health and status of the satellites; it includes 1 master control station, 6 monitor stations and 4 ground antennas spread over the globe [26]. The master control station, located at Colorado Springs, collects the tracking data of the monitor stations and calculates the satellites' orbit and clock parameters using a Kalman estimator. The user segment simply stands for the total user community. A user will typically observe and record the transmissions of several satellites and apply solution algorithms to obtain position, velocity and time.

Fig. 1, The Structure of the System.

    The interconnection of GPS receiver devices at different nodes and the links between the nodes form the client part, while the server side hosts the 3D model. The system is thus based on a client/server architecture and is designed for wireless remote rendering of the 3D model over low-bandwidth networks, particularly GPRS, since most mobile devices available today are GPRS-enabled.

    The web application server and the database server interact at the server processing section, as shown in Fig 1. While the web application server presents and processes the information on the real course of action captured from the interconnection section's requests, the database server responds to those requests in real time; the web application server thus keeps regenerating up-to-date responses to the information received from the interconnection section. The client device, a mobile device with a GPS receiver, receives the GPS signal that is assigned as the user's identification. Using this signal information (the user's location expressed in the NMEA 0183 protocol), the user's location and request are determined on the server side instantaneously; the web application server then processes the received information and sends feedback to the user's mobile device, and the cycle continues. Since many users may make requests and must be answered at the same time, we implement a bi-A* pathfinding algorithm to navigate users in a 3D walk-space while at the same time showing their whereabouts on the projected 3D map.

4. Architectural Framework

    The architectural framework shown in Fig 2 is adopted from [21], with modifications to a few processes in the visualization application and the addition of a number of processes in 3D workspace processing. The framework is divided into three layers: visualization application, 3D workspace processing and user interaction, where the user interaction layer implements the whole process. Certain processes take place in the pre-processing stage and others in the runtime processing stage.

Fig. 2, Architectural framework

    The 3D workspace processing layer describes the main structure of the controlled processes in the 3D engine for the visualization application. Within the 3D visualization application there are sub-processes, such as the Navigation object, Maintenance object and Animation object, meant to provide a prompt channel in the 3D walk-space using the 3D map of the 3D workspace. To ensure onward processing of the navigation object on mobile devices that contain the 3D map, Iterative Animation is linked with iterative objects, namely the Task Iterative Object Queue and the Acknowledge Iterative Object Queue, which describe the sequence of navigation events. Although the system structure is client-server, there are other processes on the mobile device, dependent on its platform, that influence the user interface for user interaction.

5. Modelling

    This paper uses a 3D polygonal model for modeling the scene. A polygon is a closed region bounded by straight lines; the points where the lines intersect are called vertices, and the lines forming the polygon's border are called edges. The interior region is called a face: a polygon has two faces, often called the front and back face, and it also determines an inside and an outside region. Objects in a polygonal model are collections of such primitives, and adjacent polygons give rise to polygonal meshes. In computer graphics, the most popular method for representing an object is the polygon mesh model [27]. An object can be represented as a list of points in three-dimensional space: the surface of the object is a set of connected planar polygons, where each polygon is a list of points and consists of a structure of vertices, each vertex being a three-dimensional point in so-called world coordinate space. With this model, the object is represented by its boundary surface, which is composed of a set of planar faces [28]. Polygonal surface approximation is thus an essential preprocessing step in applications such as scientific visualization [29, 30], digital terrain modeling [31] and 3D model-based video coding [32]. Polygon mesh representation is formally called a boundary representation, or B-rep, because it is a geometric and topological description of the boundary or surface of the object. Polygonal representations are ubiquitous in computer graphics because modeling or creating polygonal objects is straightforward; however, there are certain practical difficulties. The accuracy of the model, or the difference between the faceted representation and the curved surface of the object, is usually arbitrary [27].
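The B-rep idea above can be sketched in a few lines: a shared vertex list plus faces stored as lists of vertex indices. The shapes and names here are purely illustrative.

```python
# Minimal boundary-representation (B-rep) mesh: a unit square split into
# two triangles. Each face is a tuple of indices into the vertex list.
vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]
faces = [
    (0, 1, 2),  # counter-clockwise winding defines the front face
    (0, 2, 3),
]

def edges_of(face):
    """Yield the boundary edges of a face as (start, end) vertex indices."""
    for i in range(len(face)):
        yield face[i], face[(i + 1) % len(face)]

# The shared edge appears once per adjacent face, as (2, 0) and (0, 2);
# this duplication is how a mesh encodes face adjacency.
all_edges = [e for f in faces for e in edges_of(f)]
```

Because vertices are shared rather than repeated per face, moving one vertex deforms every face that references it, which is what makes the refinement stage described later practical.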

5.1 Designing the 3D Model

    Modelling a complex 3D structure like the IIUM Gombak Campus has serious repercussions for modelling cost, storage, rendering cost and quality. The IIUM Gombak campus is nestled in a valley in the rustic district of Gombak, a suburb of the capital city of Kuala Lumpur. It covers 700 acres (2.8 km²), with elegant Islamic-style buildings surrounded by green-forested limestone hills. The campus houses all the facilities that a modern community needs, including a mosque, sports complexes, a library, a clinic, banks, a post office, restaurants, bookshops and grocery stores.

    The design of the 3D model was carried out through the following stages: preparation, use of reference descriptions (images, video or sketches), initial modelling, refinement of the model, and final smoothing. At the preparation stage, we planned to manually design a simplified, lightweight 3D model based on the pre-processed visibility information acquired from the implemented visibility algorithm, considering the lowest level of detail for the potentially visible sets, so that it can be used more intuitively on mobile devices. The 3D application used for the design is Autodesk 3ds Max 2010, and the final 3D model is exported to VRML 2.0 through the VRML exporter.

    The reference description of the model is a layout map and a video file of the sample area, zones A to D of the IIUM Gombak campus, which lies within the administrative and academic area of the campus, as enclosed by the red arrows in Fig. 3.

Fig.3, Zone A to D of IIUM Gombak campus.

    The layout image was then imported into Autodesk 3ds Max and placed as a flat plane texture aligned with the pre-existing grid, serving as the reference description image. The initial modelling was carried out using polygonal modelling, as described in Section 5. The reference layout was extruded while consulting the reference video, so as to construct an accurate design of the scene at the lowest level of detail. After the basic shapes of the initial model were completed, the model was refined by adjusting its points and edges to make it smoother and to ensure that it works well when the shape needs to move.

    About seventy percent of the model comes from spline lines that were extruded to obtain the buildings. Splines are flexible curves whose construction eases accurate evaluation and whose capacity to approximate complex shapes supports curve fitting and interactive curve design. Because the buildings are complex in structure, that is, not regular or straight shapes, we use the spline approach for the construction of the model: lines are drawn from the top view and, from time to time, a Boolean operation is applied to cut an object into the required shape. The model built is shown in Fig. 4 and comprises the following components: Mesh totals (vertices = 94645, faces = 147568); Scene totals = 379 (objects = 338, shapes = 11, lights = 5, cameras = 6, helpers = 16, external dependencies = 6 jpg and 2 tif); Animation end = 200; Render flags = 416; Scene flags = 57032; Render elements = 1. The object components are: Line, NGon (polygon), Rectangle, Sphere, Box, StraightSt, Circle, Plane, Foliage, Sun and Biped (Pelvis, Spline, Footsteps).

Fig. 4, 3D model of Zone A to Zone D

6. Pre-processing

    The main purpose of pre-processing is to approximate visibility by identifying and marking out subsets of the 3D model (spatial data) as potentially visible sets (PVS), in order to reduce data redundancy and enhance rendering efficiency at runtime. In our approach, we transform the 3D map model into a regular spatial grid partition, subdividing it into a regular grid of 2^n + 1 (n = 1, 2, …) vertices on each side, as shown in Fig. 5. As a result, a quad-tree of axis-aligned bounding boxes is formed. Although the dataset itself is not stored as a quad-tree data structure, its spatial partitioning representation becomes quad-tree in nature. The aim is to partition the 3D dataset so that it is represented in a regular grid in which each cell is independent of the others, so that for each cell we can determine the visibility complexity, the potentially visible set and the level of detail required for a realistic 3D representation of the scene on mobile devices.
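The recursive subdivision described above can be sketched as follows; a hedged illustration in 2D, where splitting a region to depth n yields the 2^n × 2^n cells of a (2^n + 1)-vertex grid, arranged as a quad-tree of axis-aligned boxes.

```python
# Recursively split an axis-aligned box into a quad-tree and return its
# leaf cells. box = (xmin, ymin, xmax, ymax).
def subdivide(box, depth):
    if depth == 0:
        return [box]
    xmin, ymin, xmax, ymax = box
    xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
    children = [
        (xmin, ymin, xm, ym), (xm, ymin, xmax, ym),   # bottom pair
        (xmin, ym, xm, ymax), (xm, ym, xmax, ymax),   # top pair
    ]
    leaves = []
    for child in children:
        leaves.extend(subdivide(child, depth - 1))
    return leaves

# depth 2 gives a 4 x 4 grid of independent cells, each of which can carry
# its own potentially visible set (PVS) and level-of-detail choice.
cells = subdivide((0.0, 0.0, 8.0, 8.0), 2)
```

Each leaf is independent of its siblings, which is the property the pre-processing stage relies on when it computes a PVS per cell.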

Fig. 5, Spatial subdivision

The regular grid cells containing the actual 3D scene at the pre-computed resolution are stored in grid buffers and pre-computed into index buffers. During runtime, each cell containing 3D scene data is traversed based on the user's dynamic location within the view frustum, as shown in Fig. 6. If an adjacent cell is found to lie completely outside the frustum, it is discarded.
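The discard test can be sketched as a conservative plane test; a 2D illustration under our own conventions, where the frustum is a set of half-planes and a cell is culled only when all of its corners fall outside the same plane.

```python
# Conservative 2D frustum cull: each plane is (a, b, c), with
# a*x + b*y + c >= 0 meaning "inside". A cell is discarded only if all
# four corners are outside one plane; otherwise it is kept (possibly
# over-conservatively, which is safe for rendering).
def cell_visible(cell, planes):
    xmin, ymin, xmax, ymax = cell
    corners = [(xmin, ymin), (xmax, ymin), (xmin, ymax), (xmax, ymax)]
    for a, b, c in planes:
        if all(a * x + b * y + c < 0 for x, y in corners):
            return False  # cell entirely behind this plane: discard
    return True

# Frustum looking along +x from the origin with a 90-degree field of view.
frustum = [
    (1.0, 0.0, 0.0),   # near side:  x >= 0
    (1.0, 1.0, 0.0),   # left edge:  x + y >= 0
    (1.0, -1.0, 0.0),  # right edge: x - y >= 0
]
visible = [c for c in [(1, -1, 3, 1), (-5, -1, -3, 1)] if cell_visible(c, frustum)]
```

In 3D the same test runs against the six frustum planes and the eight corners of each cell's bounding box.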

    [33] coined the term 'remote rendering' to describe remote out-of-core rendering. The term has later been used to describe a situation where the rendering is performed remotely and the final frames are sent to the clients. By implementing server-side visibility culling, network usage can be strongly decreased, since only visible objects are sent to the client. The server-side computation of the visibility set can be pre-computed for all view cells that form a partition of the viewpoint space. When the potentially visible sets (PVS) have been pre-computed for all view cells, two methods of client-server cooperation are possible: first, the client sends its viewpoint changes to the server, which locates the corresponding view cell; second, the server updates the client data according to the associated PVS [34]. The visual complexity of a scene from a given point of view is a quantity which depends on:

The number of surfaces visible from the point of view;

The area of the visible part of each surface of the scene from the point of view;

The orientation of each (partially) visible surface with respect to the point of view;

The distance of each (partially) visible surface from the point of view.

The visual complexity of a scene from a given viewpoint can then be computed by a formula like the following:

C(V) = (1/n) Σ_{i=1..n} ⌈Pi(V) / (Pi(V) + 1)⌉ + (1/r) Σ_{i=1..n} Pi(V)

where:

C(V) is the visual complexity of the scene from the viewpoint V;

Pi(V) is the number of pixels corresponding to polygon number i in the image obtained from the viewpoint V;

r is the total number of pixels of the image (the resolution of the image);

n is the total number of polygons of the scene.

Here ⌈Pi(V)/(Pi(V)+1)⌉ equals 1 when polygon i is visible (Pi(V) > 0) and 0 otherwise, so the first term is the fraction of visible polygons and the second is the ratio of total projected area to image area.
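As an illustration, this measure can be evaluated from per-polygon pixel counts, for example read back from an item buffer. The sketch assumes the common form of the measure, the fraction of visible polygons plus the projected-area ratio; the function name is our own.

```python
import math

# P[i] is the number of pixels polygon i covers in the image rendered
# from viewpoint V; r is the total pixel count of that image.
def visual_complexity(P, r):
    n = len(P)
    # ceil(p / (p + 1)) is 1 for any visible polygon (p > 0), 0 otherwise,
    # so this term counts the fraction of polygons that are visible at all.
    visible_fraction = sum(math.ceil(p / (p + 1)) for p in P) / n
    # Total projected area of the scene relative to the image resolution.
    projected_area = sum(P) / r
    return visible_fraction + projected_area
```

A viewpoint from which many polygons are visible and well exposed scores high, which is why the measure can rank candidate viewpoints during pre-processing.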

     Most pre-process visibility culling methods calculate visibility independently of the possible runtime viewing direction [2, 3]. View frustum culling is then performed on all potentially visible sets of nodes, as shown in Fig. 6.

Fig. 6, View frustum

     In addition, view frustum culling further excludes polygons not directly in the view, as shown in Fig 6. Level-of-detail selection is performed depending on a given polygon budget and data availability, which allows for constant frame rate control [2]. The quad-tree and view-frustum tests are computed for each tile in the buffer. Mipmapping requires all levels of detail of a texture to be held in memory [2, 12, 14].

7. Runtime Processing

     The runtime processing is implemented on the client-server architecture through a 3D graphics pipeline to the mobile devices' 3D API. The server is responsible for handling requests sent from the clients through a long series of 3D transformation calculations on the frames. After the server processes a client request, the result is sent through the 3D graphics pipeline for rendering on the client mobile device via an NMEA sentence; the graphics pipeline acts as a state machine and needs to be restarted when a state change is issued. The mobile device must contain the 3D application (3D engine) in order for the mobile user to be able to make requests and receive messages. At runtime, all potentially visible sets within the 3D dataset are resolved and computed using the pre-computed pre-processing information; only the current view cell needs to have its visibility list open in memory, that is, only the currently needed level-of-detail (LOD) scene is held in memory, at the lowest level of detail. During initialization, the set of cells within the potentially visible set of the view frustum is loaded first, as stated before, followed by the other sets within the next view frustum. This sequence determines the number of rendered frames, while all the entries are independent and associated with the 3D map model so that transmission is queued.

7.1. Bi-A* Pathfinding Algorithm

     The bidirectional pathfinding algorithm (Dantzig 1962; Dreyfus 1967; Goldberg and Harrelson 2005) works as follows. It alternates between running the forward and the reverse version of Dijkstra's algorithm, referred to as the forward search and the reverse search, respectively.

    During initialization, the forward search scans the source s and the reverse search scans the target t. In addition, the algorithm maintains the length μ of the shortest s-t path observed so far, together with the corresponding path; initially μ = ∞. When an arc (v, w) is scanned by the forward search and w has already been scanned in the reverse direction, we know the shortest paths from s to v and from w to t and their lengths df(v) and dr(w), respectively. If df(v) + ℓ(v, w) + dr(w) < μ, we have found a path shorter than those seen before, so we update μ and its path accordingly; similar updates are performed during the reverse search.

    The algorithm terminates when the search in one direction selects a vertex that has already been scanned in the other direction. We use an alternation strategy that balances the work of the forward and reverse searches. If the target is reachable from the source, the bidirectional algorithm finds an optimal path, and it is the path stored along with μ.
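The alternation scheme above can be sketched for the Dijkstra case as follows; a hedged illustration, not our production engine code, with the graph given as adjacency dicts (and its reverse supplied explicitly).

```python
import heapq

# Bidirectional Dijkstra: alternate forward and reverse searches, keep the
# best s-t length mu seen so far, and stop when one search settles a vertex
# already settled by the other. graph: node -> [(neighbour, weight), ...].
def bidirectional_dijkstra(graph, reverse_graph, s, t):
    dist = {0: {s: 0}, 1: {t: 0}}        # 0 = forward, 1 = reverse
    heap = {0: [(0, s)], 1: [(0, t)]}
    scanned = {0: set(), 1: set()}
    adj = {0: graph, 1: reverse_graph}
    mu = float('inf')                    # best s-t path length seen so far
    side = 0
    while heap[0] and heap[1]:
        d, v = heapq.heappop(heap[side])
        if v in scanned[side]:
            continue                     # stale heap entry
        scanned[side].add(v)
        if v in scanned[1 - side]:       # searches met: mu is optimal
            return mu
        for w, weight in adj[side].get(v, []):
            nd = d + weight
            if nd < dist[side].get(w, float('inf')):
                dist[side][w] = nd
                heapq.heappush(heap[side], (nd, w))
            # Arc (v, w) with w reached by the other search: candidate
            # s-t path of length d(v) + l(v, w) + d_other(w).
            if w in dist[1 - side]:
                mu = min(mu, nd + dist[1 - side][w])
        side = 1 - side                  # alternate the two searches
    return mu
```

This early-stop rule is valid for plain Dijkstra; as noted later, a bi-A* variant may only stop this early when its potential functions are consistent.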

Given a particular area, to determine the bi-A* path from one location to another we proceed as follows:

Step 1. Assume the cost of movement from one node to an adjacent node is 2. Each node carries an estimated movement cost from the source to the final destination, obtained by adding 2 for each successive node along the movement; the latter estimate is referred to as the heuristic, which is a presumption.

Step 2. The entire search area is divided into a square grid, as shown in Fig. 7. This simplifies the search area to a simple two-dimensional array. Each item in the array represents one of the squares on the grid, and its status is recorded as walkable or unwalkable. The path is found by figuring out which squares it takes to get from A to B.

Fig. 7, Bi-A* Pathfinding algorithm

Step 3. The movement starts at the square of the source node A by examining all of its neighbouring squares. At most eight squares surround each square in the grid, four across the sides and four across the diagonals, and there are fewer when the source location is at an edge or corner. The cost of moving from one square to another through a side is lower than the cost of moving through a diagonal by a factor of the square root of 2; that is, a diagonal move costs roughly 1.414 times a side move.

Step 4. Once a side square is found open, with no obstacle, it becomes the first candidate on the shortest path, since the cost of moving through a side square is less than the cost of moving through a diagonal square; otherwise the diagonal square is the last option to follow as the shortest path, and failing that there is no path from the source to the destination and the algorithm terminates.

Step 5. Save the cost of the first move from node A, 2, in a look-up table, as shown in Table 1. Moving to the next node through a side then gives an accumulated cost of 4, while moving to it through a diagonal gives 2 + 2√2.

Step 6. The costs of the movement from the source to the destination are accumulated in the look-up table, as shown in Table 1. The important point is the path followed through the successive nodes. The way to approach this is through the symmetry and consistency of the paths followed, which determine the bi-A* pathfinding: the symmetric approach can use the best available potential functions but cannot terminate as soon as the two searches meet, whereas the consistent approach can stop as soon as the searches meet, but the consistency requirement restricts the choice of potential function.
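The look-up table of Steps 1-6 can be built programmatically; a sketch, assuming side moves cost 2 and diagonal moves cost 2√2, that expands squares in order of accumulated cost and records the cheapest cost found for each walkable square.

```python
import heapq, math

SIDE, DIAG = 2.0, 2.0 * math.sqrt(2.0)   # costs per Step 1 and Step 3

# grid[r][c] is True where the square is walkable. Returns the per-square
# movement cost from the source, i.e. the look-up table of Steps 5 and 6.
def cost_table(grid, source):
    rows, cols = len(grid), len(grid[0])
    table = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        g, (r, c) = heapq.heappop(heap)
        if g > table.get((r, c), float('inf')):
            continue                      # stale entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not grid[nr][nc]:
                    continue              # off the grid or unwalkable
                step = DIAG if dr and dc else SIDE
                if g + step < table.get((nr, nc), float('inf')):
                    table[(nr, nc)] = g + step
                    heapq.heappush(heap, (g + step, (nr, nc)))
    return table

grid = [[True, True, True],
        [True, False, True],   # centre square is an obstacle
        [True, True, True]]
table = cost_table(grid, (0, 0))
```

With the obstacle in the centre, the cheapest route to the opposite corner mixes side and diagonal moves, totalling 4 + 2√2 ≈ 6.83.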


Table 1. Look-up table for the movement

8. Results

    The experiment carried out to determine efficient interactive navigation among users, based on the client-server implementation and GPS signals over network transmission, faced major difficulty when the 3D model was complex, that is, when it contained many details of the scene. However, transferring the datasets with the lightweight 3D model and the lowest level of detail improves the rendering speed without hurting the download time; the scene is remotely rendered in sequence to the mobile clients, and the frame rate was sufficient for conducting navigation within the environment. Moreover, more than two users in a 3D walk-space were able to navigate using the shortest path to meet at a certain point while at the same time seeing their whereabouts on the 3D projection mapped on their mobile devices' screens.

    The scenario for two-way user navigation is as follows: three users are involved, user blue, user green and user red, as shown in Fig. 8. They are all at different positions within the environment and make an appointment to meet face to face at a single location. Each user, aware of his own position, also sees his current front view together with the current location of every other user (as a coloured dot) on the 3D projection mapped on the same mobile device screen. The green, red and blue users are shown in Figs. 9, 10 and 11, respectively. Based on the algorithms explained, the application was designed and installed on each user's mobile device.

Fig. 8, Pathfinding Scenario.

    The application is designed so that the coloured dots in the scenario represent the users and, at the same time, their Global Positioning System information, and also implements the two methods of client-server cooperation: first, the client sends its viewpoint changes to the server, which locates the corresponding view node; second, the server updates the client data according to the associated potentially visible sets.

Fig. 9, Green user

    The mobile devices' GPS receivers support a protocol called NMEA 0183, the standard protocol for transferring geographical location information from GPS to GPS receiver devices. Our application is configured to connect through the connecting port to the protocol library. The location information, referred to in NMEA as a sentence, is transferred to the mobile device through the 3D application connected to the protocol. The protocol consists of several sentence types; in this work only the $GPGGA sentence is required, which stands for Global Positioning System Fix Data.
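Extracting a position from a $GPGGA sentence can be sketched as below; an illustrative parser (the sample sentence is a standard textbook example, not data from our system) that converts the ddmm.mmmm / dddmm.mmmm fields to decimal degrees and ignores the checksum and remaining fields for brevity.

```python
# Minimal $GPGGA parser. Field layout: $GPGGA, UTC time, latitude (ddmm.mmmm),
# N/S, longitude (dddmm.mmmm), E/W, fix quality, satellites in use, HDOP,
# altitude, ... *checksum.
def parse_gpgga(sentence):
    fields = sentence.split('*')[0].split(',')
    if fields[0] != '$GPGGA':
        raise ValueError('not a GPGGA sentence')

    def to_degrees(value, hemisphere):
        # Latitude uses 2 degree digits, longitude 3; minutes follow.
        degrees_len = 2 if hemisphere in 'NS' else 3
        deg = float(value[:degrees_len]) + float(value[degrees_len:]) / 60.0
        return -deg if hemisphere in 'SW' else deg

    return {
        'time_utc': fields[1],
        'latitude': to_degrees(fields[2], fields[3]),
        'longitude': to_degrees(fields[4], fields[5]),
        'fix_quality': int(fields[6]),
        'satellites': int(fields[7]),
        'altitude_m': float(fields[9]),
    }

fix = parse_gpgga('$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47')
```

The resulting decimal-degree coordinates are what the server would map onto a node of the 3D model for pathfinding.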

    The bi-A* pathfinding algorithm adopted and implemented in the 3D application integrates with the $GPGGA sentence to determine the starting and dynamic nodes for navigation within the environment. Navigation within the environment was carried out with wireless remote rendering over low-bandwidth networks in mind, particularly GPRS, since most mobile devices available today are GPRS-enabled. The entire system works by remote rendering from the server to the client, based on client requests. Hesina and Schmalstieg (1998) coined the term 'remote rendering' to describe remote out-of-core rendering; the term has later been used to describe a situation where the rendering is performed remotely and the final frames are sent to the clients.

  Fig. 10, Red user

Each mobile user is required to enrol for access to the navigation information in order to obtain a user name and ID. The mobile device, whose GPS receiver supports the NMEA 0183 protocol, sends requests to the server, and the server identifies each request through the $GPGGA sentence and the user's ID. The calculation of potentially visible sets, the lowest level of detail and visibility culling is undertaken on the server side. By implementing server-side visibility culling, network usage can be strongly decreased, since only visible objects are sent to the client. The server-side computation of the visibility set can be pre-computed for all view sets that form a partition of the viewpoint space; this is possible when the potentially visible sets have been pre-computed for all view sets.

    At the initial navigation orientation, all users see their viewpoints and each other's locations on the projected 3D map. As the users constantly move and manoeuvre through the environment along the shortest path, they also see the coloured dots move on the projected 3D map on the same mobile device screen; at the same time, the values of their GPS coordinates change, and the rendering remains highly sufficient, with a good visible scene.

Fig. 11, Blue user

9. Discussion

Anyone who has ever experienced three-dimensional (3D) interfaces will agree that navigating in a 3D world is not a trivial task. The user interface of a traditional 3D browser provides simple navigation tools that allow the user to modify camera parameters such as orientation, position and focal length. Using these tools, it frequently happens that, after some movements, the user is lost in the virtual 3D space and tries to restart from the beginning. When interacting with a 3D virtual world, one of the first requirements is being able to navigate the world in order to easily access and explore information, allowing judicious decisions for solving eventual problems. Basic navigation requires being able to modify the viewpoint parameters (position, orientation and focal length); for the user's movements to be efficient, it is important for the user to have spatial knowledge of the environment and a clear understanding of his location. To enhance navigation, navigation tools have to take the user's goals into account and help the user accomplish specific tasks. We believe that the built-in navigation schemes available in most current 3D browsers are too generic. Navigation can be improved by adapting the navigation schemes to the virtual world and to the user's tasks. This belief led us to the concept of metaphor-aware navigation, in which navigation is tightly bound to the visual metaphor used, and the way the user moves in the virtual world is determined by the metaphor upon which that world is based. We also believe that the way a user navigates in a 3D world is intimately related to the task that he intends to accomplish.

10. Conclusion

Interactive mobile-device visualization and navigation using 3D maps of a 3D workspace environment is an ongoing project with more pieces to come. So far, we have attempted to solve the problem of interactive navigation using the bi-A* pathfinding algorithm and have built a mobile 3D engine that applies visualization optimization techniques and a shortest-pathfinding algorithm in a 3D application, even though, given the very limited resources, it may exhibit some weaknesses. The 3D engine's role in this research is to implement the proposed algorithms and to maintain output sensitivity. The problem of the GPS signal remains significant, because GPS signals are often blocked or reflected; this is a common problem which can dramatically reduce accuracy. To some degree it can be improved with radio differential correction or with learning and prediction methods, but the blocking and reflection problem remains. Given a precise model of the 3D workspace on the campus, the effects of blocking and reflection can be estimated roughly: when the real coordinates of the GPS receivers and the satellites are known, it can be determined which satellites have straight-line visibility to the receiver and which walls and planes reflect signals towards it. In this way the GPS accuracy in the 3D workspace can be improved. Mobile phones embedded with GPS and online maps have already emerged in the market to navigate users on the road; some devices also contain a digital compass, inertial sensors and miniature video cameras for position and orientation tracking, location-context awareness, time and visualization of information. The prototype of the design implementation is presented and tested; it shows the navigation orientation of three users in a 3D walk-space while showing their whereabouts on the projected 3D map. The map shows each user's location in the scene so as to navigate from source to target, while the target also moves towards the source so that they meet at the same physical location.