OVS architecture

Haptic assisted Approach to Open Architecture Virtual Simulations design (OVSA) philosophy

Abstract:

Virtual reality is a technology often regarded as a natural extension of 3D computer graphics with advanced input and output devices. Since each virtual environment is an aggregation of various computer hardware and software, it is necessary to develop a system architecture that defines the functional components required for modeling and simulation, the interfaces between components, augmented interactions, and the system infrastructure. This paper deals with a modeling and simulation architecture for virtual training systems based on the OVS architecture and describes a method of modeling and simulation for a distributed Common Virtual Environment (CVE).

1.0 Introduction

Virtual reality (VR) is gaining popularity as an engineering design tool because of its intuitive interaction with computer-generated models. The immersive aspects of virtual reality offer more intuitive methods of interacting with 3D data than the conventional 2D mouse and keyboard input devices [1]. This technology has only recently matured enough to warrant serious engineering applications. Several companies and government agencies are currently applying virtual reality techniques to their design and manufacturing processes. The state of the technology is appropriate for undertaking projects which demonstrate the feasibility and usefulness of using virtual reality to facilitate the design of a product. In very simple terms, virtual reality can be defined as a synthetic or virtual environment which gives a person a sense of reality. This definition would include any synthetic environment which gives a person a feeling of "being there". The term virtual environment generally refers to environments which are computer generated and can simulate real-world problems, although there are several immersive environments which are not entirely synthesized by computer. By defining the interfaces to these virtual environments and always accessing the environment through these interfaces, the developed program becomes hardware independent. By constructing a development environment that can simulate this interface, one can develop and test programs on a host computer, and then run them on the actual device upon completion [2].

In addition, virtual simulation (VS) is becoming an indispensable tool in streamlining the design-to-implementation process. Engineers are creating "virtual" factories, true three-dimensional environments that allow visualization of the manufacturing process at a much earlier stage [3]. In contrast to physical simulations, virtual simulations with many intuitive features are now attracting considerable attention in academic research. Among VS systems built with VR technologies, a ship simulator [4] has been presented and modeled using virtual reality techniques; the ship maneuvering simulator uses computer virtual reality techniques and crafts the 3D virtual environment. Berta [5] discussed the functionality required to enable immersive visualization and hands-on interactivity for automobile VS using a commercial CAD application (CATIA). VS toolkits provide many computer graphics and distributed computing techniques [6]. Designing and implementing the software for VS is becoming increasingly difficult as problem complexity grows and the expectation for presence realism increases. Fast computer processors are needed to meet user requirements. This is typically achieved through proprietary parallel machines (high-end workstations) or through computer clusters (i.e., coordinated sets of computers) interconnected by Fast Ethernet operating at 100 Mbit/s or Gigabit Ethernet operating at 1000 Mbit/s. Computer clusters are essential when the controllers for the peripherals cannot all reside in a single computer. For example, some peripherals depend on a specific operating system or use a new interface standard, thus requiring another application-specific computer to support them. Furthermore, computer clusters can be a good choice because they allow for incremental enhancement of the VS.

With the rapid development of new input and output devices, it is becoming more certain that no single computer can meet the demands of future VS systems. As with most technologies undergoing rapid growth, supporting technologies, infrastructures, and standards are not keeping pace and, as a result, problems are being encountered.

To achieve an immersive visual experience, one needs to provide from two to twelve visual displays. Two displays are needed for head-mounted displays and twelve displays are needed for six-walled screens such as the CAVE (CAVE Automatic Virtual Environment) [7]. The graphics cards that generate these displays can reside in one proprietary computer or can be distributed within a computer cluster interconnected by a dedicated network. Yet cluster programming introduces new issues such as the synchronized management of distributed data and processes [8]. Furthermore, the data from various input devices need to be propagated to other devices and systems, and the video retraces for the different video outputs must be synchronized [9].

2.0 Existing Virtual Simulators

Numerous simulators have been developed over the years to assist in the development, testing, and evaluation of products and processes. These simulators have provided a cost-effective solution to product development by alleviating the cost of maintaining an actual product, a test course, and safety facilities in which to experiment with the systems. When humans have to perform a real-time simulation based on the direct manual interaction approach, a number of factors determine the usability of the VS system and, consequently, the reliability of the results. Aspects such as the accuracy of the virtual models and the overall visual quality affect how realistically the user perceives the simulated environment, and may eventually affect the ability to interact directly with objects. With regard to the realism of a VS simulation, there are basically three relevant aspects: the visual realism, the behaviour of the simulated world and the conveying of sensations. Simulation has benefits that include reduced competition for scarce resources, no risk of harm to personnel or equipment, the ability to add as yet undeveloped capabilities to subsystems, and the ability to perform repeated tests over vast and varied terrains from the comfort of one's desk. As a result, an individual code module can be thoroughly tested and understood before moving to real hardware.

The development of a haptics-assisted VS environment is quite complex because of the difficulty of simulating realistic physical processes and because of limitations of the currently available VR interface devices. The research activities carried out in this field have addressed particular tasks of maintenance activity such as assembly/disassembly, and accessibility and manipulability assessment of geometrically complex mechanical systems. A VR system for maintainability simulations in aeronautics with haptic force feedback is introduced in [10], where a haptic device is used to track hand movements and to return force feedback, providing the sensation of working with a physical mock-up. Similarly, Jayaram [11] developed a well-known VR assembly application called the Virtual Assembly Design Environment (VADE) at Washington State University. VADE models part behaviors by importing constraints and model data from CAD packages; one- or two-handed assembly operations are performed using position tracking and a CyberGlove. Similar research by Wan [12] has been conducted at Zhejiang University in creating MIVAS (a Multi-Modal Immersive Virtual Assembly System), and in grasp identification and multi-finger haptic feedback for virtual assembly [13]. From the viewpoint of human-computer interaction, visualized 3D images for such VS environments can provide operators with a precise understanding of the simulated model. In addition, operators can virtually create models in 3D virtual space, where prototype tests or the behavior of the models can be analysed as in the real world. Although haptic-assisted VE programming is difficult, there are fortunately many software components, commercially available or in the public domain, that greatly reduce the development effort. Some of the commercial toolkits are CAVELib [14], WorldToolKit [15] and Open Inventor [16]. Some of the public domain toolkits are VR Juggler [17], GNU/MAVERIK [18], MR Toolkit [19] and Chai3d [20]. All of these toolkits provide fairly comprehensive functionality, from low-level device handling to sophisticated distributed process and data management. Comprehensive VE toolkits are essential for rapid program development. Yet if a user wants to use only parts of several VE toolkits, implementing the VE becomes very difficult. This difficulty arises because most toolkits are frameworks that constrain the application programming to follow predefined rules. Furthermore, the applications are not open source and may lack device driver handling for advanced devices. This makes it difficult to use a part without the whole application.

VS, although defined as a technology, is actually a combination of several technologies such as advanced visualization, simulation, decision theory, virtual environments, and interfaces for augmented devices. Since VS technologies cross multiple domains and organizational structures, there is a need to maintain an awareness of each of the technologies that support VS and to promote lagging technologies in order to keep all of them in synchronization.

As shown above, this work is one part of a larger effort that has the goal of identifying and addressing fundamental measurement, control and standards issues. The use of a closed platform imposes limitations on the VS, such as VR toolkit restrictions or operating system dependencies, that stem from the platform rather than from the environment itself. The reason for this is that such simulation environments are typically composed of predefined patterns of interface and display engines, and they lack an open interface that would allow further enhancement.

3.0 Framework of the Open Virtual Simulator

Recently, simulation environments have attempted to leverage existing technologies to achieve a general-purpose or 'open' environment. The impact that 'open' systems have had on the computer science culture has been remarkable. From a general computing point of view, the term 'open architecture' has been attributed the following definition:

“An architecture whose specifications are public; this includes officially approved standards as well as privately designed architectures whose specifications are made public by the designers. The opposite of open is closed or proprietary”

This definition is applicable to the general computer science community as a whole; the need for open architecture simulation has arisen from a pressing need for simulation environments more flexible than those that predefine the simulation architecture model.

Therefore, in this paper a three-layered architecture is proposed, each layer of which can be decomposed into further functional sub-modules. Each layer of the architecture has a general descriptive name. Enhancements such as a simulation-system architecture, a real-hardware combination interface, and a substantial code and command interface are made in the proposed OVSA. These changes mean that the system becomes independent, or 'open', from the simulation environment point of view. Fig. 1 shows the high-level definition of the proposed architecture for open architecture virtual simulations. The top layer is the functional layer, which is responsible for handling the user task. This layer passes the user's requested tasks to the component control layer. The component control layer, in processing the user's request, requires the execution of the devices involved in manipulating the task; for this, the layer beneath it, called the device layer, supports the control layer in performing the operation. Each layer and its operations are discussed further in detail below.
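
To make the layering concrete, the sketch below outlines how the three layers might be expressed as C++ interfaces. It is only a minimal illustration of the relationships described above: the class and method names (FunctionalLayer, ComponentControlLayer, DeviceInterfaceLayer, submitTask, ioRequest, and so on) are assumptions for this sketch and are not part of the OVSA specification.

    // Minimal sketch of the three OVSA layers as C++ interfaces.
    // All names are illustrative; the paper does not prescribe an API.
    #include <string>

    class DeviceInterfaceLayer {            // bottom layer: talks to hardware
    public:
        virtual void calibrateDevice(const std::string& deviceId) = 0;
        virtual double ioRequest(const std::string& deviceId) = 0;  // read device status
        virtual ~DeviceInterfaceLayer() = default;
    };

    class ComponentControlLayer {           // middle layer: runtime states
    public:
        explicit ComponentControlLayer(DeviceInterfaceLayer& dev) : dev_(dev) {}
        void initialize() { /* load models, match scripts to the scripting API */ }
        void execute()    { /* run the simulation loop, query dev_ for device status */ }
        void terminate()  { /* cleanup routines, unload modules and scripts */ }
    private:
        DeviceInterfaceLayer& dev_;
    };

    class FunctionalLayer {                 // top layer: user-facing tasks
    public:
        explicit FunctionalLayer(ComponentControlLayer& ctrl) : ctrl_(ctrl) {}
        void submitTask(const std::string& /*task*/) {
            // forward the user's requested task to the component control layer
            ctrl_.initialize();
            ctrl_.execute();
            ctrl_.terminate();
        }
    private:
        ComponentControlLayer& ctrl_;
    };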

In further considering the design and implementation of the OVSA, the following issues must be addressed:

3.1 High-Performance Computing (HPC)

With the emergence of high-performance computing and visualization, new avenues for improvement are opening up to the design engineering community. This improved computing has lessened the impact of complex modeling simulations on design tasks and has prompted the reconsideration of design projects in which design tasks are developed within a more complex frame. Computationally intensive analysis and visualization models that were previously considered unwieldy and too complex to be used are being re-examined. The rapid evolution of high-performance computing clearly puts increasing pressure on design application developers and users to maintain pace to ensure the greatest positive impact on the product design process.

3.2 Facilitating Tools

When considering the implementation of an open architecture VS, a particular hardware architecture must be committed to a high-level abstract specification. When going for an open architecture VS system, the following enabling tools should be chosen:

A. Standard operating system (OS): like DOS or Windows.

B. Non-proprietary hardware: such as PCs or SUN workstations.

C. Standard bus systems: such as PCI or VME.

D. Use of standard control languages: such as C or C++ or Java.

3.3 OS for Open Simulations

The operating system provides a software interface that enables the user to run application programs, and it performs tasks such as port input/output (I/O), updating the screen display and communicating with peripheral devices. In general, the tasks that an open architecture simulator has to manage can be split into two different categories:

A. Direct machine control. This encompasses device interfacing, I/O, the graphics engine, managing the VR environment and coordinating asynchronous events.

B. Non-machine control. Reading scripts and running the simulation loops, higher-level communications to other systems and providing user interfaces.

We can also classify these two sets of tasks into real-time and non-real-time. A definition of real time, as it relates to computing control systems, is given by Microsoft Corporation [21]:

“A real time system is one in which the correctness of the computation not only depends upon the logical correctness of the computation, but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred”.
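
As a rough illustration of this split, the sketch below runs the direct machine-control tasks (category A) in a high-rate loop on a dedicated thread while non-machine-control work (category B: scripts, user interface, external communication) stays in the main thread. The thread layout, loop rate and function names are assumptions made for illustration only, not part of the OVSA definition.

    // Sketch: separating real-time machine control from non-real-time work.
    // Rates and function names are illustrative assumptions.
    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> running{true};

    void machineControlLoop() {
        using namespace std::chrono;
        const auto period = milliseconds(1);    // roughly 1 kHz servo-style loop
        auto next = steady_clock::now();
        while (running) {
            // poll devices, update the VR scene state, handle asynchronous events
            next += period;
            std::this_thread::sleep_until(next);
        }
    }

    int main() {
        std::thread rt(machineControlLoop);     // real-time category (A)
        // non-real-time category (B): read scripts, run the UI, talk to other systems
        std::this_thread::sleep_for(std::chrono::seconds(1));
        running = false;
        rt.join();
        return 0;
    }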

3.4 Mapping Haptic-Assistance

The haptic-rendering process consists of using information received from the virtual environment to evaluate the force and torque to be generated, for a given position, velocity, etc., at the operational joint of a haptic interface. The operational joint can be defined as the location on the haptic interface where position, velocity, acceleration, and sometimes forces and torques, are measured.

In order to map OVSA with Haptics, the following problems must be addressed [22]:

A. Finding the point of contact: This is the problem of collision detection (CD), which becomes more complex and computationally expensive as the virtual environment becomes more complex.

B. Generation of contact forces and torques: This creates the "feel" of the object. Contact forces can represent the stiffness of the object, damping, friction, surface texture, etc.; a minimal force-computation sketch is given after this list.

C. Dynamics of the virtual environment: Objects manipulated in a virtual environment can perform complex moves and may collide with each other.

D. Computational rate: The computational rate must be high, around 1 kHz or higher, and the latency must be low. Inappropriate values of either of these variables can cause hard surfaces in the virtual environment to feel soft, as well as making the system unstable.
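
As referenced in item B above, the sketch below computes a simple penetration-based contact force with a spring-damper model evaluated at the haptic rate. The stiffness and damping constants, the function names and the sign conventions are illustrative assumptions for this sketch, not values prescribed by the paper.

    // Sketch: spring-damper contact force from a given penetration depth.
    // Constants and names are illustrative assumptions.
    struct Vec3 { double x, y, z; };

    Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // penetrationDepth: how far the haptic interface point has penetrated the surface
    // n: outward surface normal at the contact point
    Vec3 contactForce(double penetrationDepth, Vec3 hipVelocity, Vec3 n) {
        const double k = 800.0;   // stiffness (N/m), assumed value
        const double b = 2.0;     // damping (N·s/m), assumed value
        if (penetrationDepth <= 0.0) return {0.0, 0.0, 0.0};   // no contact, no force
        // push the probe back out along the normal and damp motion along the normal
        return (k * penetrationDepth - b * dot(hipVelocity, n)) * n;
    }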

Many virtual reality systems have enabled force feedback by adding the sense of touch, haptics, as one of the interaction methods. One of the greatest advantages of the haptic interface is that it has a wide variety of applications. One of the first broad applications is in training people to perform real-world tasks. In the field of medicine, touch is an important sense and has been one of the most researched topics in haptics. Haptics gives surgical simulators the ability to provide hands-on training. This reduces the training time of students and allows them to train on more complex operations before actually operating. The simulation can be recorded and later observed for evaluation or skill-level verification of the procedure. The surgery can also be recorded so that the student can feel the doctor's pre-recorded procedure. The ultimate goal is to enhance the realism of the environment, as various researchers have shown that adding haptic feedback to virtual simulations increases task efficiency.

4.0 Functional Layer

Two of the main features of the functional layer, shown in Fig. 2, are handling a graphical interface (the virtual environment) for the user and providing a set of services that enable the application to communicate with third-party software and hardware, and that allow third-party software access to, and control of, its own features.

The user interface must be intuitive and allow the user to access all of the simulation functionality and parameters. It must provide features that enable the creation and testing of mission programs. In addition, it must allow access to design-time tools for software module creation.

5.0 Components Control Layer

The request is received by the component control layer through the user request interface, as shown in Fig. 3. The layer handles the task across three common runtime states: initialization, execution and termination. In the initialization state the models are loaded and their geometry is evaluated. The scripts are then matched against the scripting API and translated into command tasks; these tasks define the actions to be taken by the simulation manager. Furthermore, the configuration manager analyzes the whole situation and prepares anything required for the next state. The execution state is the actual execution of the required control method in real time until termination occurs. In this state the simulation block is executed, provided with information concerning the scene rendering by the collision detection algorithm, the dynamics calculations and the virtual scenes. The simulation scene is rendered with the help of the scheduler, which is responsible for scheduling the simulation tasks; the scheduler's principal assignment is to schedule and process task execution using a scheduling algorithm. Upon entering the termination state, the required cleanup routines execute and unload all the modules and scripts. This layer is responsible for providing the application with any desired information regarding initialization, execution and termination. When the control information requires interaction with the hardware or devices attached to the application, the layer issues service requests to the device interface layer.
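
A compact way to read this flow is as a three-state loop. The sketch below is a minimal, assumed rendering of that loop in C++; the enum values and method names are illustrative and not part of the paper's API.

    // Sketch: the component control layer's three runtime states.
    // Names are illustrative assumptions.
    enum class RuntimeState { Initialization, Execution, Termination };

    class ComponentControl {
    public:
        void run() {
            state_ = RuntimeState::Initialization;
            loadModelsAndScripts();                 // load models, translate scripts into command tasks
            state_ = RuntimeState::Execution;
            while (!terminationRequested_) {
                scheduleSimulationTasks();          // scheduler picks and orders the next tasks
                stepCollisionDetectionAndDynamics();
                renderVirtualScene();
            }
            state_ = RuntimeState::Termination;
            cleanupAndUnload();                     // cleanup routines, unload modules and scripts
        }
        void requestTermination() { terminationRequested_ = true; }
    private:
        RuntimeState state_ = RuntimeState::Initialization;
        bool terminationRequested_ = false;
        void loadModelsAndScripts() {}
        void scheduleSimulationTasks() {}
        void stepCollisionDetectionAndDynamics() {}
        void renderVirtualScene() {}
        void cleanupAndUnload() {}
    };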

6.0 Device Interface Layer

Essentially, the device interface layer is headed by the simulation engine, as shown in Fig. 4. This engine receives two different types of request from the component control layer interface. One is to calibrate a new device attached to the application. The second is an I/O (input/output) request to read or write a device status. For the first task, this layer is responsible for initializing the device, reading its device drivers and defining a communication interface for the application. Furthermore, it maps the device onto the virtual environment and defines a protocol library for smooth communication between the application and the device interface. As initialization is done only once, when the application is loaded, the device interface layer mostly deals with the second task, which is more common. In this case the layer is responsible for reading the virtual communication ports using the protocols defined by the communication infrastructure. The communication infrastructure and protocols of the device layer perform these functions and pass the measured data/status back to the component control layer.
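
The two request types could be dispatched as in the sketch below. The request structure and handler names are assumptions used only to illustrate the calibrate-versus-I/O split described above; they are not an API defined by the paper.

    // Sketch: dispatching the two request types handled by the device interface layer.
    // Types and names are illustrative assumptions.
    #include <map>
    #include <string>

    enum class RequestType { Calibrate, IoReadStatus, IoWriteStatus };

    struct DeviceRequest {
        RequestType type;
        std::string deviceId;
        double value = 0.0;        // used only for writes
    };

    class DeviceInterface {
    public:
        double handle(const DeviceRequest& req) {
            switch (req.type) {
            case RequestType::Calibrate:
                // load the driver, define the communication protocol, map the device to the VE
                calibrated_[req.deviceId] = true;
                return 0.0;
            case RequestType::IoReadStatus:
                // read the virtual communication port using the defined protocol
                return status_[req.deviceId];
            case RequestType::IoWriteStatus:
                status_[req.deviceId] = req.value;
                return req.value;
            }
            return 0.0;
        }
    private:
        std::map<std::string, bool> calibrated_;
        std::map<std::string, double> status_;
    };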

7.0 Haptic mapping with OVSA

Using a haptic interface with a simulation environment, users can be trained within a highly interactive, realistic virtual simulator, thus combining the advantages of a safe environment with the value of "learning by doing". To perform haptic-aided simulation, mapping of the haptic device onto the virtual simulator is necessary. Fig. 5a shows the haptic device connection with the device interface of the OVSA. Fig. 5b shows the open transformation process used for mapping haptic device coordinates onto the simulated virtual environment coordinates. The coordinates of the virtual model are mapped to the haptic coordinates and the HIP (Haptic Interface Point) is calculated. The endpoint of the probe is called the HIP. It plays an important role in collision detection and in producing the sense of touch and feel. A vector from the HIP to the model surface is needed to determine whether the HIP is still inside the object. If the dot product of this vector and the normal vector is positive then the HIP is outside the object, and if the value is negative the HIP is still inside the object. The magnitude of the vector is used to calculate the forces and torque applied to the object by the probe.
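
A minimal sketch of this kind of coordinate mapping and inside/outside test is given below. The uniform scale-and-offset transform is an assumption about one simple way to realize the mapping, not the transform actually used in Fig. 5b, and the dot-product test simply follows the sign convention stated in the text above.

    // Sketch: mapping haptic device coordinates into virtual environment
    // coordinates and testing whether the HIP is inside an object.
    // The scale/offset transform and all names are illustrative assumptions.
    struct Vec3d { double x, y, z; };

    Vec3d mapDeviceToWorld(Vec3d devicePos, double scale, Vec3d worldOffset) {
        // uniform scale followed by a translation into the virtual scene
        return { devicePos.x * scale + worldOffset.x,
                 devicePos.y * scale + worldOffset.y,
                 devicePos.z * scale + worldOffset.z };
    }

    double dot3(Vec3d a, Vec3d b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // v: vector from the HIP to the closest point on the model surface
    // n: surface normal (sign convention as stated in the text above)
    bool hipIsInside(Vec3d v, Vec3d n) {
        return dot3(v, n) < 0.0;   // negative dot product: HIP still inside the object
    }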

8.0 Implementation of the model

From the reference architecture described in Section 3, a suitable implementation architecture must be chosen from the multitude of enabling technologies before a prototype design can be created and tested. The implementation of the proposed architecture is shown in Fig. 6. This architecture is based on an array of Intel processors running Windows XP with a real-time extension designed as a multi-microprocessor OS, so inter-process communication can be done easily. The application layer is coded to execute on the processor array, with the real-time extension to the haptic device interface using a dedicated thread to handle time-critical data. C++ was chosen as the programming language, using the GUI version of VC++. ActiveX technology was used for information exchange, which gave the advantage that any ActiveX-enabled module or interface can be installed. A PHANTOM haptic device was used, supported by the OpenHaptics API. The PHANTOM setup allows the stylus or thimble to function over a wide range as an operating tool, paint brush or other tool depending on the application.
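
The dedicated time-critical thread mentioned above is commonly realized in OpenHaptics by registering a servo-loop callback that the scheduler runs at the haptic rate. The sketch below follows that general pattern; the force law (a flat virtual floor) and its constants are illustrative assumptions and are not the paper's actual implementation.

    // Sketch: registering a high-rate haptic callback with the OpenHaptics HD API.
    // The force law here (a flat virtual floor) is an illustrative assumption.
    #include <HD/hd.h>

    HDCallbackCode HDCALLBACK servoLoop(void* /*userData*/)
    {
        hdBeginFrame(hdGetCurrentDevice());

        HDdouble pos[3];
        hdGetDoublev(HD_CURRENT_POSITION, pos);

        HDdouble force[3] = {0.0, 0.0, 0.0};
        if (pos[1] < 0.0)                  // stylus below the assumed floor plane
            force[1] = -0.5 * pos[1];      // simple spring pushing back up (assumed gain)

        hdSetDoublev(HD_CURRENT_FORCE, force);
        hdEndFrame(hdGetCurrentDevice());
        return HD_CALLBACK_CONTINUE;       // keep running on every servo tick (~1 kHz)
    }

    int main()
    {
        HHD device = hdInitDevice(HD_DEFAULT_DEVICE);
        hdEnable(HD_FORCE_OUTPUT);
        hdScheduleAsynchronous(servoLoop, nullptr, HD_DEFAULT_SCHEDULER_PRIORITY);
        hdStartScheduler();                // spawns the dedicated servo thread
        // ... application and rendering loop would run here ...
        hdStopScheduler();
        hdDisableDevice(device);
        return 0;
    }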

9.0 Conclusion

This paper has presented and discussed an architecture model for the OVSA (open virtual simulation architecture). A suitable implementation architecture that can satisfactorily realize the 'open' simulation functionality has been chosen and described. Haptic assistance and the transformation from the haptic interface to the virtual simulations are shown. Since the model possesses an extended hybrid architecture, in its unconfigured state there are many 'open' slots, or modules, in the lower layers that need to be defined based on the expected scope of actions the simulation performs. Hence, it has been shown that OVSA enables integration with the flexibility to communicate with other systems. The architecture allows hardware to be interchanged under the common communication protocol rule sets of ActiveX, showing it to be truly open.

References

[1] Peng Gaoliang, Gao fun and He Xu. (2009), "Towards the development of a desktop virtual reality-based system for modular fixture configuration design", Emerald Journal of Assembly Automation, Vol. 29 No. 1, pp. 19-31.

[2] D. Pape, Hardware independent virtual reality development system, IEEE Computer Graphics Appl. 16 (4), 44-47 (1996)

[3] Donald, D. L. (1998). “A tutorial on ergonomics and process modeling using QUEST and IGRIP”. In Proceedings of the 30th Conference on Winter Simulation, Washington, D.C., United States.

[4] Xiufeng, Z., Yicheng, J., Yong, Y., and Zhihua, L. 2004. “Ship simulation using virtual reality technique”, In Proceedings of the 2004 ACM SIGGRAPH international Conference on Virtual Reality Continuum and Its Applications in industry, Singapore.

[5] Julien Berta (1999). “Integrating VR and CAD”. IEEE Computer Graphics and Applications, Vol. 19, No. 5, pp. 14-19.

[6] R. Hubbold, J. Cook, M. Keates, S. Gibson, T. Howard, A. Murta, A. West, and S. Pettifer, GNU/MAVERIK: A microkernel for large-scale virtual environments, Presence 10 (1), 22-34 (2001).

[7] C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, Surround-screen projection-based virtual reality, the design and implementation of the CAVE, in Computer Graphics Proceedings (1993) pp. 135-142.

[8] H. Tramberend, Avocado, a distributed virtual reality framework, in Proceedings of the IEEE Virtual Reality (1999) pp. 8-13.

[9] J. Allard, V. Gouranton, L. Lecointre, and E. Melin, Net Juggler and SoftGenLock: Running VR Juggler and active stereo and multiple displays on a commodity component cluster, in Proceedings of the IEEE Virtual Reality (2002) pp. 273-274.

[10] Diego B, Joan S, Aiert A, Jorge JG, Alejandro G-A, Luis M. (2004). “A large haptic device for aircraft engine maintainability”. IEEE Computer Graphics and Applications, Vol. 24, No. 6, pp. 70-74.

[11] Jayaram S, Jayaram U, Wang Y, Tirumali H, Lyons K, Hart P. (1999). “VADE: A Virtual Assembly Design Environment”. IEEE Comput. Graph Appl 19:44-50.

[12] Wan H, Gao S, Peng Q, Dai G, Zhang F (2004). MIVAS: a multi-modal immersive virtual assembly system. In: Proceedings of the ASME design engineering technical conference, Salt Lake City, UT.

[13] Zhu Z, Gao S, Wan H, Luo Y, Yang W (2004). Grasp identification and multi-finger haptic feedback for virtual assembly. In: Proceedings of the ASME design engineering technical conference, Salt Lake City, UT.

[14] CAVELib user's manual version 3.1.1, VRCO, Inc. (2004).

[15] WorldToolKit documentation - Release 10, Sense8, Inc. (2004).

[16] Open Inventor, http://www.vsg3d.com. Accessed 15 March 2010.

[17] A. Bierbaum, C. Just, P. Hartling, K. Meinert, A. Baker, and C. Cruz-Neira, VR Juggler, A virtual platform for virtual reality application development, in Proceedings of the IEEE Virtual Reality (2001) pp. 89-96.

[18] R. Hubbold, J. Cook, M. Keates, S. Gibson, T. Howard, A. Murta, A. West, and S. Pettifer, GNU/MAVERIK: A microkernel for large-scale virtual environments, Presence 10 (1), 22-34 (2001).

[19] C. Shaw, M. Green, J. Liang, and Y. Sun, Decoupled simulation in virtual reality with the MR Toolkit, ACM Trans. Inform. Syst. 11 (3), 287-317 (1993).

[20] Chai3d, http://www.chai3d.org. Accessed 15 March 2010.

[21] Microsoft Corp. (1995) “Real Time systems and Microsoft Windows NT”, Microsoft MSDN Library.

[22] Fu KS, Gonzalez RC, Lee CSG. Robotics: control, sensing, vision and intelligence. McGraw-Hill International Edition; 1987 ISBN: 0070226256.