Virtual Reality Applications and Universal Accessibility
The concept of Virtual Reality, an immersive three-dimensional, computer-generated environment that allows one or more users to interact, navigate, react, and experience a synthesized world, has driven social, scientific, economic and technological change since its origin in the early 1960s. The environment does not necessarily need the same properties as the real world. Most present virtual reality environments are primarily visual experiences, displayed either on a computer desktop or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Virtual reality is a technology that allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Virtual reality brings the imagined scene as close and realistic as reality itself. Today virtual reality is useful in a variety of fields, such as information systems, the military, medicine, mathematics, entertainment, education, and simulation techniques. Most virtual reality systems allow the user to travel through the virtual environment, manipulate objects, and experience the outcomes. The supreme promise of virtual reality is universal accessibility for everyone. In this project, everyone will benefit: people across all fields. The challenge is to develop well-designed virtual reality systems, built on sound common-sense rules, that are useful to people and that provide great value and real improvements to the quality of life. If this can be accomplished, tomorrow's information society technology could offer greater inclusivity through ambience, intelligence and universal accessibility.
Virtual reality may have broken into the headlines only in the past few years, but its roots reach back four decades. It was in the late 1950s, when the nation was still shaking off the last traces of McCarthyism and gyrating to the sounds of Elvis, that an idea arose that would change the way people interacted with computers and make VR possible.
At the time, computers were looming colossi locked in air-conditioned rooms and used only by those fluent in abstruse programming languages. Few people considered them more than glorified adding machines. But a former naval radar technician and young electrical engineer named Douglas Engelbart viewed them differently. Rather than limit computers to number crunching, Engelbart envisioned them as tools for digital display. He knew from his experience with radar that any digital information could be viewed on a screen. He therefore reasoned that a computer could be connected to a screen, and both used together to solve problems. At first his ideas were disregarded, but by the early 1960s other people were thinking the same way, and the time was right for his vision of computing. Communications technology was intersecting with computing and graphics technology, and computers based on transistors rather than vacuum tubes were becoming available. This synergy yielded more user-friendly computers, which laid the foundation for personal computers, computer graphics, and later, the emergence of virtual reality. Fear of nuclear attack motivated the U.S. military to commission a new radar system that would process large amounts of information and immediately display it in a form that humans could promptly understand. The resulting radar defense system was the first "real time," or instantaneous, simulation of data. Aircraft designers began experimenting with ways for computers to graphically display, or model, air flow data. Computer experts began building new computer architectures so the machines could display these models as well as compute them. The designers' work paved the way for scientific visualization, an advanced form of computer modeling that expresses multiple sets of data as images, and for the technique of representing the real world by a computer program.
Massachusetts Institute of Technology
At MIT, self-styled computer wizards strove to ease human interaction with the computer by replacing keyboards with interactive devices that rely on images and hand gestures to manipulate data. The idea of virtual reality has existed since 1965, when Ivan Sutherland expressed his ideas of creating virtual or imaginary worlds. He conducted experiments with three-dimensional displays at MIT. Having developed the light pen in 1962, Sutherland could outline images directly on the computer. Sketchpad, Sutherland's first computer-aided design program, opened the way for designers to create blueprints of automobiles, cities, and industrial products with the aid of computers. By the end of the decade, such designs were operating in real time. By 1970, Sutherland had also produced an early head-mounted display, and Engelbart had unveiled his crude pointing device for moving text around on a computer screen: the first "mouse."
The flight simulator is one of the most influential antecedents of virtual reality. Following World War II and through the 1990s, the military and industrial complex pumped millions of dollars into technology to simulate flying airplanes (and later driving tanks and steering ships). It was safer, and cheaper, to train pilots on the ground before subjecting them to the hazards of flight. Early flight simulators consisted of mock cockpits, in which the pilot sat while "flying" the aircraft, built on motion platforms that pitched and rolled. However, they had a limitation: they lacked visual feedback. This changed when video displays were coupled with model cockpits.
By the 1970s, computer-generated graphics had replaced videos and models. These flight simulations operated in real time, though the graphics were still primitive. In 1979, the military experimented with head-mounted displays. These innovations were driven by the greater dangers associated with training on and flying the jet fighters being built in the 1970s. By the early 1980s, better software, hardware, and motion-control platforms enabled pilots to navigate through highly detailed virtual worlds.
Virtual video games, movies and animation
The entertainment industry was a natural consumer of computer graphics and, like the military and industry, the source of many valuable spin-offs in virtual reality. In the 1970s, some of Hollywood's most dazzling special effects were computer generated, such as the battle scenes in the big-budget, blockbuster science fiction movie Star Wars, released in 1977. Later movies such as Terminator and Jurassic Park came onto the scene, and the video game business boomed in the early 1980s.
The data glove, a computer interface device that detects hand movements, is one direct spin-off of entertainment's venture into computer graphics. It was invented to produce music by linking hand movements, as prearranged signals, to a music synthesizer. NASA Ames was one of the first customers for this new computer input device, using it for its experiments with virtual environments. The Mattel Company was the biggest consumer of the data glove, which it adapted into the Power Glove, the sprawling mitt with which children vanquished adversaries in popular Nintendo games. As pinball machines gave way to video games, the field of scientific visualization underwent its own striking transformation, from bar charts and line drawings to dynamic images.
Scientific visualization uses computer graphics to transform columns of data into images. Such imagery enables scientists to absorb the enormous amounts of data produced in some scientific investigations. Imagine trying to understand DNA sequences, molecular models, brain maps, fluid flows, or cosmic explosions from columns of numbers.
A goal of scientific visualization is to capture the dynamic qualities of systems or processes in its images. Borrowing, as well as creating, many of Hollywood's special effects techniques, scientific visualization moved into animation in the 1980s. In 1990, NCSA's award-winning animation of smog descending upon Los Angeles influenced air pollution legislation in the state. This animation was a persuasive testament to the value of this kind of imagery.
Animation had severe limitations. First, it was costly. After the computer simulations had been developed in rich detail, the smog animation itself took six months to produce from the resulting data; individual frames took from several minutes to an hour. Second, it did not allow for interactivity, that is, changes in the data or governing conditions of an experiment that produce immediate responses in the imagery. Once the animation was completed, it could not be altered. Interactivity would have remained wishful thinking if not for the development of high-performance computers in the mid-1980s. These machines provided the speed and memory for programmers and scientists to begin developing advanced visualization software. By the end of the 1980s, low-cost, high-resolution graphics workstations were linked to high-speed computers, which made visualization technology more accessible.
The basic elements of virtual reality had existed since 1980, but it took high-performance computers, with their powerful image-rendering capabilities, to make it work. Demand was rising for visualization environments to help scientists comprehend the vast amounts of data pouring out of their computers daily. As drivers of both computation and VR, high-performance computers no longer served as mere number crunchers, but became exciting vehicles for exploration and discovery.
3. Introduction to Virtual Reality
Virtual reality is a computer-generated stereoscopic environment. It contributes to interactive learning environments by combining accepted facts of life with a manipulable reality, as in simulation programs. Most virtual reality systems allow the user to travel through the virtual environment, manipulate objects, and experience the outcome of an event. Virtual reality brings the imagination as close and realistic as reality itself. This environment does not necessarily need the same properties as the real world: there can be different forces, gravity, magnetic fields and so on, in contrast to real solid objects. It is the technique of representing a real or imagined environment by a computer program so that it can be experienced visually in the three dimensions of width, height, and depth. It involves the use of advanced technologies, including computers and various multimedia peripherals, to produce a simulated (i.e., virtual) environment that users perceive through their senses as comparable to real-world objects and events. Virtual reality can be delivered using a variety of systems. Full immersion in a virtual world, manipulating things in that world and experiencing their effects as in the real world, requires further development of devices and complex simulation programs. In virtual systems, movement is simulated by shifting the optics in the field of vision in direct response to movement of certain body parts, such as the head or hand. Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use, and with the study of the major phenomena surrounding them. Many users have physical or cognitive limitations that make it difficult to handle several different devices at the same time.
Virtual reality is a new medium brought about by technological advances in which much experimentation is now taking place to find practical applications and more effective ways to communicate.
A virtual world is everything contained in the content of a given medium. It may exist only in the mind of its originator, or be broadcast in such a way that it can be shared with others. The key elements in experiencing virtual reality, or any reality for that matter, are a virtual world, immersion, sensory feedback (responding to user input), and interactivity. In virtual reality, the effect of entering the world begins with physical, rather than merely mental, immersion, because immersion is a necessary component of virtual reality. Virtual reality is closely associated with the ability of the participant to move physically within the world. Telepresence, augmented reality, and cyberspace are closely related to virtual reality. The participant accesses the content of the virtual world through the interface associated with it; at the boundary between the self and the medium, the participant interacts with the virtual world. Much effort has been put into the study of good user interface design for many media, and virtual reality will require no less.
4. Applications of Virtual reality
Virtual reality demonstrated its applicability in the early 1990s, and its exposure went beyond expectations, even though it started with blocky images. In entertainment, applications include games, theatre experiences and much more. Virtual reality also comes into the picture in architecture, where virtual models of buildings are created so that users can visualise a building and even walk into it. This makes it possible to see the structure of a building before the foundation is laid. In this way clients can inspect the whole building and even change the design if there are alterations to the plan, which makes planning and modification very realistic and easy. Virtual reality is also applicable in medicine, information systems, the military and many other areas. The following discussion gives a detailed explanation of these applications.
4.1 Virtual Reality in Information Systems:
Augmented Reality (AR) is used to generate a direct or indirect view of the physical, real-world environment. In AR, elements of the real environment are mixed with computer-generated imagery to create a mixed reality. Consider, for example, a sports channel on TV, where the overlaid scores are real-time elements added in the semantic context of the environment. With advances in augmented reality, real-world entities can be digitized, and the user can interact with the surroundings in the digital world itself. This can be achieved by adding computer vision and object recognition to AR technology. Through this technology, information about the surroundings and the different objects in them can be obtained that is similar to real-world information. The information is retrieved in the form of an information layer.
At present, Augmented Reality (AR) research is being driven by applications of computer-generated imagery that replicate the real world using live video streams. Different displays are used for visualising the real world, such as head-mounted displays and virtual retinal displays. Beyond displays, research also constructs controlled environments that replicate the real world; for this, many sensors and actuators are used.
The two definitions of Augmented Reality (AR) that are widely accepted at present are:
- Augmented Reality (AR) combines the real and the virtual, is interactive in real time, and is registered in 3D. This definition was given by Ronald Azuma in 1997.
- Paul Milgram and Fumio Kishino define Augmented Reality (AR) as lying on a continuum that stretches from the purely real environment to a purely virtual or digital environment.
Due to developments in Augmented Reality (AR), the general public is becoming attracted to it, and interest in it has increased.
The main hardware components used in Augmented Reality (AR) are as follows:
- Input Devices
- A powerful CPU
- A solid-state compass
- Smartphones
Augmented Reality (AR) uses different display techniques to visualise real-world entities:
- Head Mounted Displays
- Handheld Displays
- Spatial Displays
Head Mounted Displays:
The Head-Mounted Display (HMD) is one of the display techniques used to visualise both physical entities and virtual graphical objects; the key point is that all the entities and objects must replicate the real world. HMDs work in two ways: optical see-through and video see-through. Optical see-through uses half-silvered mirror technology: light from the physical world first passes through the half-silvered optics, and the graphical overlay information is then reflected onto that view, so the physical entities are visualised together with the virtual world. For sensing, the HMD uses tracking with six-degree-of-freedom sensors. Tracking allows the physical information to be registered in the computer system, where it is combined with the virtual world's information. The experience a user gets is very impressive and effective. Products of this kind include the MicroVision Nomad, Sony Glasstron, and I/O Displays.
Handheld augmented reality is another display technique used to visualise virtual entities from the physical world. A handheld AR device is a small computing device, small enough to fit in the user's hand. Handheld AR uses video see-through techniques to convert physical entities or information into virtual, graphical information. The devices used include digital compasses and GPS units with six-degree-of-freedom sensors. ARToolKit has emerged as a tracking toolkit in this area.
Instead of the user wearing or carrying a display such as a head-mounted display or handheld device, Spatial Augmented Reality (SAR) uses digital projectors to display graphical information on physical objects. The key difference in spatial augmented reality is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally to groups of users, allowing collaboration between them. It has several advantages over traditional head-mounted displays and handheld devices. The user is not required to carry equipment or wear a display over their eyes, which makes spatial AR a good candidate for working together on a common project, as collaborators can see each other's faces. A system can be used by multiple people at the same time, without every individual needing to wear a head-mounted display. Spatial AR does not suffer from the limited display resolution of current head-mounted displays and portable devices; to expand the display area, a projector-based display system can simply incorporate more projectors. Whereas portable devices offer only a small window onto the world, in an indoor setting a SAR system can display on any number of surfaces at once. The persistent nature of SAR makes it an ideal technology to support design, and for end users SAR supports both graphical visualisation and passive haptic sensation: people are able to touch physical objects, which is what provides the passive haptic feedback.
Modern augmented reality systems use the following tracking technologies: digital cameras, optical sensors, accelerometers, GPS, gyroscopes, solid-state compasses, RFID, and wireless sensors. These technologies have different levels of precision and accuracy. The most important task in such a system is to track the pose and position of the user's head.
Virtual Reality Tracking Systems
Tracking devices are intrinsic components of a VR system. These tracking devices communicate with the system's processing unit, telling it the orientation of the user's point of view.
The system allows the user to move around within a physical space, and the trackers detect where the user is moving, in which direction, and at what speed.
There are various kinds of tracking systems in use in VR, but a few things are common to all of them: they can detect six degrees of freedom (6-DOF), that is, the object's position in space (its x, y and z coordinates) together with its orientation (yaw, pitch, and roll).
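As an illustrative sketch (not taken from any particular VR product), a 6-DOF pose can be represented as three position coordinates plus three orientation angles. The `Pose6DOF` name and the axis conventions below are assumptions made for the example:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Position in space (e.g. metres)
    x: float
    y: float
    z: float
    # Orientation (radians)
    yaw: float    # turning left/right about the vertical axis
    pitch: float  # looking up/down about the side-to-side axis
    roll: float   # tilting the head about the forward axis

    def gaze_vector(self):
        """Unit vector in the direction the user is looking.

        Roll spins the image about the gaze direction, so it does not
        change the gaze vector itself.
        """
        cp = math.cos(self.pitch)
        return (math.cos(self.yaw) * cp,
                math.sin(self.pitch),
                math.sin(self.yaw) * cp)

# Looking straight ahead along the x axis:
straight = Pose6DOF(0.0, 0.0, 0.0, yaw=0.0, pitch=0.0, roll=0.0)
```

A renderer would query such a pose every frame to decide which view of the virtual world to draw.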
From the user's point of view, when wearing the HMD, the view changes as you look up, down, left and right. The position also changes when you tilt your head, or move it forward or backward at an angle without changing the angle of your gaze. The trackers on the HMD tell the CPU where you are looking, and the CPU sends the right images to the HMD screens.
Every virtual tracking system has a device that generates a signal, a sensor that detects the signal, and a control unit that processes the signal and transfers the information to the CPU.
Some tracking systems require the sensor components to be attached to the user. In such systems, the signal emitters are placed at fixed points in the surrounding environment.
The signals sent from emitters to sensors can take many forms, including electromagnetic, acoustic, optical and mechanical signals. Each technology has its own set of advantages and disadvantages.
Electromagnetic tracking systems
Electromagnetic tracking systems measure magnetic fields generated by running an electric current continuously through three coiled wires arranged perpendicular to one another. Each coil becomes an electromagnet, and the system's sensors measure how the magnetic field affects the other coils. This measurement tells the system the direction and orientation of the emitter. An efficient electromagnetic tracking system is very responsive, with low levels of latency. One disadvantage of this approach is that anything that can generate a magnetic field can interfere with the signals sent to the sensors.
Acoustic tracking systems
Acoustic tracking systems emit and sense ultrasonic sound waves to ascertain the position and orientation of a target. Most such systems measure the time it takes for the ultrasonic sound to reach a sensor. Generally the sensors are fixed in the environment and the user wears the ultrasonic emitters. The system estimates the position and orientation of the target based on the time it took the sound to reach the sensors. A main disadvantage of acoustic tracking systems is that the update rate on a target's position is relatively slow, because sound travels relatively slowly. The speed of sound through air also changes with the temperature, humidity and barometric pressure of the environment, which adversely affects the system's accuracy.
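The time-of-flight calculation described above can be sketched as follows. The function names and the linear temperature approximation for the speed of sound are illustrative assumptions; the point is that a measured flight time becomes a distance, and that distance depends on an environmental quantity the system cannot fully control:

```python
def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s) at temp_c degrees Celsius."""
    return 331.3 + 0.606 * temp_c

def distance_from_tof(tof_s: float, temp_c: float = 20.0) -> float:
    """Distance travelled by an ultrasonic pulse in tof_s seconds."""
    return speed_of_sound(temp_c) * tof_s

# The same 10 ms flight time reads as a different distance at 20 C and
# at 35 C, which is why temperature changes degrade accuracy.
d_cool = distance_from_tof(0.010, temp_c=20.0)
d_warm = distance_from_tof(0.010, temp_c=35.0)
```

With distances from three or more fixed sensors, the target position can then be recovered by trilateration.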
Optical tracking devices
As the name indicates, optical tracking devices use light to measure a target's position and orientation. The signal emitter in an optical tracking device typically consists of a set of infrared LEDs, and the sensors are cameras that can sense the emitted infrared light. The LEDs light up in sequenced pulses; the cameras record the pulsed signals and send the information to the system's processing unit, which can then extrapolate from the data to determine the position and orientation of the target. Optical systems have a fast update rate, which minimises latency. The disadvantages are that the line of sight between a camera and an LED can be obscured, interfering with the tracking process, and that ambient light or infrared radiation can make the system less effective.
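One way such a system can distinguish its markers is to pulse each LED in a unique on/off pattern over successive camera frames. The codes and marker names below are purely illustrative assumptions, not any particular product's scheme:

```python
# Hypothetical 4-frame blink codes for three head-mounted LED markers.
LED_CODES = {
    (1, 0, 1, 1): "head_left",
    (1, 1, 0, 1): "head_right",
    (0, 1, 1, 1): "head_top",
}

def identify_led(observed_frames):
    """Match an observed on/off sequence against the known LED codes.

    Returns the marker name, or None when the pattern is unknown,
    e.g. when the line of sight was blocked mid-sequence.
    """
    return LED_CODES.get(tuple(observed_frames))
```

Once each marker in a camera image is identified, its pixel position in two or more cameras can be triangulated into a 3D position.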
Mechanical tracking systems
Mechanical tracking systems rely on a physical connection between a fixed reference point and the target. The BOOM display is a very common example of a mechanical tracking system in the VR field. A BOOM display is an HMD mounted on the end of a mechanical arm with two points of articulation; the system detects position and orientation through the arm. Mechanical tracking systems have a very high update rate, but their disadvantage is that they limit the user's range of motion.
4.2 Virtual Reality in Military Simulations:
VR technology offers a potentially economical and efficient tool for military forces to better deal with dynamic or potentially dangerous situations. In the late 1920s and 1930s, almost all simulation in a military setting took the form of the flight trainers built by the Link Company. At the time, the trainers looked like cut-off caskets mounted on a pedestal, and were used to teach instrument flying. The darkness inside the trainer cockpit, the realistic readings on the instrument panel, and the movement of the trainer on the pedestal combined to produce a sensation similar to actually flying on instruments at night. The Link trainers were very effective tools for their intended purpose, teaching thousands of pilots the night-flying skills they needed before and during World War II.
To move beyond the instrument-flying domain, simulator designers needed a way to provide a view of the outside world. The first example of a simulator with an outside view appeared in the 1950s, when television and video cameras became commercially available. With this equipment, a video camera could be "flown" above a scale model of the terrain around an airport, and the resulting image was sent to a television monitor placed in front of the pilot in the simulator. His movements of the control stick and other flight controls produced corresponding movement of the camera over the terrain board. Now the pilot could receive visual feedback both inside and outside the cockpit.
In transport aircraft simulators, the logical extension of the video camera/television monitor approach was to use multiple monitors to simulate the total field of view from the airplane cockpit, where the field of view needs to be only about 60 degrees vertically and 180 degrees horizontally. For fighter aircraft simulators, the field of view must be at least 180 degrees both horizontally and vertically. For these applications, the simulator consists of a cockpit placed at the centre of a domed room, and the virtual images are projected onto the inside surface of the dome. These types of simulators have proved to be very effective training devices by themselves, and the newest development is a project called SIMNET, which electronically links two or more simulators to produce a distributed simulation environment. [McCarty, 1993] Distributed simulations can be used not only for training, but to develop and test new combat strategies and manoeuvres. A significant development in this area is an IEEE data protocol standard for distributed interactive simulations. This standard allows a distributed simulation to include not only aircraft, but also land-based vehicles and ships. Another recent development is the use of head-mounted displays (HMDs) to decrease the cost of wide field-of-view simulations.
Group of technologies for military missions:
One military application of virtual reality is information enhancement. In an active combat environment, it is imperative to provide the pilot or tank commander with as much of the needed information as possible while cutting down the amount of distracting information. This aim led the Air Force to develop the head-up display (HUD), which optically merges important information like altitude, airspeed, and heading with a clear view through the forward windscreen of a fighter aircraft. With the HUD, the pilot no longer has to look down at his instruments. When the HUD is paired with the aircraft's radar and other sensors, a synthetic image of an enemy aircraft can be shown on the HUD to indicate where that aircraft is, even though the pilot may not be able to see the actual aircraft with his unaided eyes. This combination of real and virtual views of the outside world can be extended to night-time operations. Using an infrared camera mounted in the nose of the aircraft, an enhanced view of the terrain ahead of the aircraft can be drawn on the HUD. The effect is for the pilot to have a "daylight" window through which he has both a real and an enhanced view of the night-time terrain and sky. In some situations, the pilot may need to concentrate fully on the virtual information and completely ignore the actual view. Work in this field was started by Thomas Furness III and others at Wright Laboratories, Wright-Patterson Air Force Base, Ohio. This work, dubbed the Super Cockpit, provided a virtual view not only of the outside world, but also of the cockpit itself, in which the pilot could select and manipulate virtual controls.
Automobile companies have used VR technology to build virtual prototypes of new vehicles, testing them virtually before developing a single physical part. Designers can make changes without having to scrap the entire model, as they often would with physical ones. The development process becomes more efficient and less expensive as a result.
Smart weapons and remotely- piloted vehicles (RPVs):
Many combat operations are very risky, and they become even more dangerous when the combatant attempts to improve performance. Two clear reasons have driven the military to explore and employ these technologies in their operations: to cut down vulnerability to risk and to increase stealth.
Prime instances of this principle are attacking with weapons and performing reconnaissance. Executing either of these tasks well takes time, and this is normally the time when the combatant is exposed to enemy attack. For these reasons, smart weapons and remotely piloted vehicles (RPVs) were developed to address these problems. Some smart weapons are autonomous, while others are remotely controlled after they are launched. This allows the shooter or weapon controller to set up the weapon and immediately seek cover, thus minimising exposure to return fire. In the case of RPVs, the person who controls the vehicle not only has the advantage of being in a safer place, but the RPV can be made smaller than a vehicle that would carry a man, making it more difficult for the enemy to detect.
4.3 Virtual Reality in Medicine
Virtual reality is being used today in many ways; one of its important applications is in the field of medicine, for example paranasal surgery, psychiatry, and virtual surgery. Mark Billinghurst developed a prototype surgical assistant for simulation of paranasal surgery at the HIT Lab in Washington. During the simulated operation the system provides vocal and visual feedback to the user, and warns the surgeon when a dangerous action is about to take place. In addition to training, the expert assistant can be used during the actual operation to provide feedback and guidance. It is especially practical when the surgeon's awareness of the situation is limited due to complex anatomy.
Finally, the author and his associates are developing a toolkit for physicians that will help them create their own expert assistants for different types of surgery.
Virtual reality must render the environment with sufficient realism to be applicable in the following areas, especially in the medical field:
Acrophobia belongs to the category of specific phobias: an extreme and irrational fear of heights. In its treatment, taking a patient to the edge of a virtual high building in a nonthreatening environment offers distinct advantages. The brainstem cues involving vestibulo-ocular mismatch, which produce physical symptoms when the individual with acrophobia is placed in the offending environment (for example, a ledge high above the street), can be reproduced with sufficient fidelity in a known nonthreatening environment, the virtual reality laboratory. This provokes nausea and vertigo and elicits sympathetic responses. At the same time, the patient is aware that he or she is in fact in a safe environment. Thus a cognitive dissonance is evoked, i.e. the sensorineural perception of height juxtaposed with knowledge of the actually safe environment. Neuroplastic mechanisms then can come into play to begin resetting the brainstem-visual interaction.
The symptoms remain overpowering if the cognitive deadening effect of knowledge of the actually safe environment is removed; in that situation, the patient is unable to sustain the exposure to the height necessary for the neuroplastic response to develop. Here the patient can be exposed to a step-by-step increasing level of stimulation by raising the perceived height of the building or decreasing the distance to the ledge, and can also control the intensity of the environment by walking nearer to the virtual edge or by looking up or down. In each case, the virtual environment is recalculated in essentially real time to provide the needed environmental consistency. Desensitization in such environments has been quite effective. Similar approaches have been used for the treatment of arachnophobia and fear of flying. The realism of the stimulus can be graded, for example from a plainly artificial stick spider to a quite realistic, animated representation with spider-like texture and movement patterns. The motion patterns can be made reactive to the patient's movements, and tactile input can be added.
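The graded-exposure loop described above can be sketched as a simple controller that raises the virtual height only while the patient's self-reported distress stays below a threshold. This is a minimal illustration, not a clinical protocol: the function name, the 0-10 distress scale, and all thresholds are assumptions for the sketch.

```python
def graded_exposure(distress_ratings, start_height=2.0, step=2.0, threshold=6):
    """Return the sequence of virtual ledge heights (in metres)
    presented to the patient, one per trial. The height increases
    after each trial in which the reported distress (0-10 scale)
    stays below `threshold`; otherwise it is held constant so
    habituation can occur. All parameters are illustrative."""
    height = start_height
    schedule = []
    for rating in distress_ratings:
        schedule.append(height)
        if rating < threshold:
            height += step  # patient tolerated this level: go higher next trial
    return schedule
```

For example, with ratings of 2, 7, and 3 over three trials, the virtual height would rise after the first trial but hold steady after the second.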
Traditional surgical training requires direct exposure of the physician to actual patients. The mechanical aspects of surgical technique, including recognition of anatomic landmarks, instrument handling, and reaction to changes in the surgical field, require live patients and interaction with an experienced surgeon-educator. With VR training paradigms, the surgeon in training handles instruments that are connected to force transducers.
A visual environment displaying anatomic structures is presented, and it changes in accordance with the actions taken using the virtual instruments and with changes in the field of view. Such an approach is very useful for learning basic surgical skills. This environment permits unlimited practice, limited only by the realism of the virtual surgical field, until the trainee surgeon demonstrates sufficient manual and visuospatial adaptation to justify handling actual patients. Similar techniques allow the precision of surgical technique to be raised above human levels. For example, when virtual instruments are used to drive microrobotic devices, contributions to microsurgery have been developed that suppress the effects of natural human tremor and motor fatigue while maintaining realistic interaction between the surgeon and the field. Further elaboration of this technique ultimately will allow the performance of procedures currently too delicate for the human hand.
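The tremor suppression mentioned above amounts, in its simplest form, to low-pass filtering the surgeon's hand motion and scaling it down before it drives the microrobot. The sketch below uses a plain moving average and a fixed motion-scaling factor; both the window length and the scale are illustrative assumptions, not values from any actual surgical system.

```python
import numpy as np

def steady_hand(positions, window=5, scale=0.2):
    """Condition a 1-D hand-position trace for a microrobotic tool.
    A moving average removes high-frequency tremor, and the result
    is scaled down so large hand motions map to small tool motions
    (motion scaling). Window and scale are illustrative choices."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(positions, kernel, mode="valid")
    return scale * smoothed
```

A rapidly alternating input (pure tremor) is flattened toward its mean and then shrunk by the scale factor, while a slow deliberate motion passes through largely unchanged apart from the scaling.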
Alternatively, combining all these virtual techniques with improved computer-integrated imaging, such as MRI and CT, allows much more accurate approaches to biopsy and surgical procedures. Virtual reality techniques are quietly transforming the teaching and understanding of anatomic relationships.
Dynamic imaging is, basically, the merging of workflow automation, image editing, and digital imagery. In other words, the processing of image information passing between medical imaging systems and visual anatomic data sets has allowed virtual "fly-through" examinations of patients' internal organs. Moreover, computational systems have been developed to allow tactile interaction with these surfaces, greatly expanding the potential of this kind of interaction in presurgical evaluations and fundamentally changing the process of non-invasive organ examination.
A well-developed technology of the present generation is neurologic investigation with the use of virtual reality (VR) applications, considered one of the crucial therapies. If the development of these medical VR applications proceeds as expected, one can move from an essentially open-loop condition to a closed-loop condition. That is, the computer generates many of the characteristics of the virtual environments previously described, and the presentation is varied according to the demonstrated response, but the patient remains in an artificially generated milieu. If the operating feedback loops between human and machine are to be closed, and thereby become more functional and clinically applicable, virtual surgery and dynamic imaging must be developed together. This operational enhancement will be gained for systems demanding both microsurgical and imaging responses to the patient's space, whether using actual or computed images: the computer must alter images acquired from an actual physical environment rather than images generated by its internal program.
Virtual reality motor assistance
For the system to assist the patient in reaching a target, the computer must sense that goal as a distinct localized entity in the patient's space. The tactile input to the patient must use the target, as well as the patient's responses, in formulating the restorative forces that guide the patient to the target. Only by generating a real-time response tuned to the features of the patient's environment, and to the patient's own responses to that environment, can the VR system give an appropriate response.
Advances in training for movement disorders will rest on these design rules and will capitalize on the previously untapped potential of neuroplasticity.
Moreover, we must consider the visual-tactile assistance paradigm, in which the patient reaches for a target.
In some situations, the patient-computer loop also must be closed, but the emphasis is directed toward control of autonomic responses rather than cognitive interactions. Here it is the computer that must sense whether the presented inputs are effective in diluting pain responses, and it should modify the stimulus procedures accordingly.
VR systems can help in pain management by providing sufficient immersion to distract the patient from the pain, by replacing the noxious stimuli with pleasant sensory stimuli, and by engaging pain-limiting systems. In the context of real-time analysis of patient space, with intelligent quantification of that space and monitoring of brain disorders, the VR computer also can contribute intelligence to the processing of patient data. Consider the analysis of motor behavior in complex partial seizures. In these situations, the analysis is concentrated on the patient's video space in the video electroencephalographic (EEG) monitoring environment. Using extensions of the analytic techniques developed for assessing video space in the visual-haptic environment, the computer can extract relevant movement data for presentation to the clinician. This approach remains open-loop. However, by directing the computer to traverse regions of interest according to the significance of the body part and the movement patterns generated, a map of these patterns can be presented to the practitioner. This reduces the need for manual processing to track relevant movements and delivers such data to the clinician in virtual reality form.
Analysis of information processing in neuropil
Just as VR extends and augments the understanding of anatomic relationships, so the next applications will permit a rendering of physiology in terms that will increase understanding far beyond present techniques. For example, VR demonstration of information flow in neuropil can be anticipated to permit meaningful dialogue between areas of neuropil and the individual. Analysis of such activity must take into account the holographic nature of central nervous system (CNS) processing in order to transcend the methods currently in use. Such methods could provide original insight into the nature of processes such as speech recognition and production, visual pattern recognition, spatial perception, and integrated (praxic) motor reactions. The output should be a richer understanding of CNS function at a systems level, one likely to lead to a new family of clinical diagnostic instruments and to rational, less empirical methods of neuropharmaceutical development.
4.4 Virtual reality in Mathematics:
Traditionally we recognize two kinds of laws when we consider the obstacles they place in the way of certain actions. These are described below:
i. physical laws
ii. social laws.
Physical laws prevent us from pitching a one-ton boulder into the air unaided, and social laws prevent us from going around murdering people. In the second case, we may be able to execute the act physically, but we are restrained by our respect for the law which prohibits it. One tradition, whose history is far too complex to begin to describe here, has sought to make of these two classes one, arguing that some social laws, such as that prohibiting murder, are natural laws; that is, they are drawn up in accordance with the nature of humans and, for many in that tradition, in accordance with the nature of his Creator.
When we come to work through mathematics, we cannot help but feel the power of the impediments put in our way to prevent us from doing what we wish.
At first, these may seem to be arbitrarily imposed on us by our teachers. We wanted to add 23 to 34 and make 58, but the spoilsports put a big red cross by it. If we are lucky, we will come to understand why rules, such as those for the addition of fractions, are being enforced. Inspired by the beautiful coherence of these rules, we may finally turn our hand to research. Now, instead of forever coming up against the displeasure of our teachers, we encounter some freedom at last. Our supervisor proposes we look at this paper and that, to see if we can generalise their results in a certain sort of direction. Freedom of a sort, then, but clearly we can't just define things however we like. Not only must we reason logically, but we feel ourselves tightly constrained in how we define our concepts. When we get it right, unhoped-for consequences should ensue.
Indeed, a researcher's explorations in one domain will lead them, apparently inexorably, to the concepts of a very different field. When we come to explain the experience of this reality we are coming up against, the options are rather slender. As a first choice, we might equate it to physical reality. But the disanalogies here seem too great. We might then liken working in mathematics to playing according to the rules of the game of chess. This takes us beyond the realm of physical possibility: we could move our bishop for our first move, but then we simply wouldn't be playing chess. Also, like mathematics, we can play chess in our heads without a physical chess set. However, the analogy doesn't take us very far. The rules of chess seem far too arbitrary. Certainly, they have been finely fashioned to make for a game that has engaged people for their whole lives. But working in mathematics is more like being able to change some of the rules of the game. If we took all such card and board games together, the analogy would be nearer. But where then is the analogue of the discovery of surprising links? It would be as though someone could discover a new strategy in bridge, and a chess player could then see how it might help her game.
Mathematics is an interactive multiplayer game. Its virtual reality is constantly shaped by your actions and by the actions of other players.
- What makes the game stable?
- Why doesn't it crash?
- What is the nature of the shared game space for all players?
Massively-Multiplayer Online Role-Playing Game
Borovik compared this technology to a Massively-Multiplayer Online Role-Playing Game, or MMORPG for short. He goes on to pose the questions:
What are the characteristics of the intrinsic and unplanned laws of MMORPGs? Why do the virtual economies of MMORPGs obey the same laws as real-world economies? In particular, why do many virtual worlds suffer from inflation?
This brings us back to the older classes of law and the question of their arbitrariness. For some, the appearance of the same economic laws in the virtual world will reflect the inevitability of the truths of economics. But another reaction would be to argue that one should expect common phenomena to appear in the virtual world and in ours, because these MMORPGs' economies are modelled so closely on our economy, and this certainly isn't the only possible economy. A further debate might then ensue about whether there is an ideal economy. Someone in the Natural Law tradition, for example, would invoke the concept of a 'fair price', that is, one which does justice to all parties, understanding the virtue 'justice' in a specific way.
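A toy model, not taken from the source, illustrates why MMORPG economies so often inflate: monsters mint new gold every period (a "faucet") while vendors and repair fees destroy only a fraction of the money stock (a "sink"); with a roughly fixed supply of goods, the quantity-theory price level, money divided by goods, rises steadily. All the numbers below are illustrative assumptions.

```python
def price_level(periods, money=1000.0, goods=500.0,
                faucet=50.0, sink_rate=0.02):
    """Track a game economy's price level (money supply / goods)
    over time. Each period, mobs drop `faucet` gold while sinks
    destroy a fraction `sink_rate` of the money stock. Inflation
    continues while the faucet outpaces the sink."""
    prices = []
    for _ in range(periods):
        money += faucet - sink_rate * money
        prices.append(money / goods)
    return prices
```

With these parameters the faucet (50 gold per period) initially exceeds the sink (2% of 1000 = 20), so prices climb period after period until the money stock approaches the steady state where the two balance.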
4.5 Virtual reality in Education & Entertainment:
Our imagination is captured by the promise of virtual reality and of the networks that will render it accessible. Networked immersion environments, cyberspace, artificial or virtual reality, or whatever we may call it, may evolve into one of the greatest adventures ever to come forward. Virtual reality will draw from the entities of the real world and affect the whole ambit of culture, science, and commerce, including education, entertainment, and industry. It introduces new compositions of experience for which descriptors do not presently exist, and it will be multi-national.
This vision was articulated early on by Gibson, Benedikt, and Tokoro. At first reading these might appear to be different tracks, but joining them together contributes importantly toward the casting of a 'matrix,' a 'computer-sustained, computer-generated, multi-dimensional, artificial, or virtual reality,' which is 'widely distributed, omnipresent, open-ended, and ever changing.' They also suggest three important areas of recent cultural and technical development:
- The establishment of a cyberculture, which includes everyone who chooses to inhabit the realm of distributed digital media: electronic bulletin boards, databases, and multi-user simulation environments, including virtual reality. Some of these inhabitants will effectively live in such domains, with the bulk of their time engaged within them. There they can change their identities, their manner of social interaction, and their relationship with society. By living in such domains they become virtual beings in a virtual place, a society becomes established, and an ethics may emerge. What kind of ethics will this be? Will it be governed? By whom, and for what? This line of questioning becomes even more compelling as distributed virtual reality develops into a three-dimensional environment, accessible to all, that may contain private spaces or residences holding personal objects and controls.
- The extension of the physical world: private spaces may have doors, closets, and windows that look out onto multi-dimensional vistas. Toolkits allow for qualitative change of the world, and extensions of it are constituted from a never-ending field of pure data. The field of data can include all manner of commodities and produce worlds which do not fit our present descriptors. Some experiences will be familiar, like going shopping or going to a concert. Others will be unusual, like visiting an ancient place or another planet.
- The pervasiveness of the data field: it is everywhere, and people move about with computing devices. Interfaces become intuitive. Guides or agents co-inhabit the domains. Agents acquire knowledge, become familiar, and grow old with us.
While this may read as science fiction, extensive research is already being conducted in networked or distributed virtual reality. It currently comprises a very small industry, but one with great potential for growth. Our research in virtual reality at the STUDIO for Creative Inquiry at Carnegie Mellon University (CMU) investigates this field and pertinent applications within it.
The Networked Virtual Art Museum
Perhaps it is useful to report on one project at the STUDIO in greater detail. The project is the Networked Virtual Art Museum, which joins telecommunications and virtual reality through the design and development of multiple-user immersion environments networked over long distances. The essential areas of inquiry in the project include world-building software, visual art and architecture, telecommunications, computer programming, human interface design, artificial intelligence, communication protocols, and cost analysis.
Visual art and architecture
The fusion of disciplines is the basis for cooperative authorship of virtual worlds. The construction of the virtual museum implies the participation of visual artists, architects, computer-aided design teams, computer programmers, musicians, and recording specialists, as well as other disciplines.
The project serves as a testing site for world-building software and affiliated hardware. The programming teams have added substantially to the functions of the software tested. Public releases are in planning.
Vital to the project is the development and implementation of networking approaches, including modem-to-modem, server, and high-bandwidth connectivity. Telecommunications specialists cooperate with the design team to resolve problems of connectivity for immersion environments. Project achievements in this area are discussed in greater detail below.
The application of artificial intelligence, in the form of agents (or guides) and smart objects, is a necessary area of development. Inquiry in the areas of interface design, smart objects, and artificial intelligence is a major component.
Groupware and communication protocol
The project documents multi-user interaction and groupware performance, sets up protocols within networked immersion environments, and suggests standards. The contribution of communication specialists addresses aspects of documentation and standardization.
The cost analysis addresses the practical nature of networked immersion environments, investigates the potential of information access for the end user, and profiles the end-user experience. The project involves the participation of cost-analysis specialists and develops a practical cost basis for networked immersion environments.
The project team has designed and built a multi-national art museum in immersive virtual reality. The building of the museum implies a growing grid of participants located in remote geographic locations. Nodes are networked using modem-to-modem telephone lines, the Internet, and eventually high-bandwidth telecommunications.
Each participating node will have the option to move within the virtual environment and contribute to its shape and content. Participants are invited to create additions or galleries, install works, or commission researchers and artists to develop new works for the museum. Tool rooms will be available, so one can add objects and functions to existing worlds, or build entirely new ones. Further, guest curators will have the opportunity to organize special exhibitions, research advanced concepts, and investigate critical theory pertaining to virtual reality and cultural expression.
The design of the museum centers on a main entrance hall from which one can access bordering wings or galleries. Several exhibitions are completed, while others are under construction. The first exhibition to be conceptualized and completed is the Fun House, based on the traditional fun house found in amusement parks. The museum also contains the Archaeopteryx, conceived by Fred Truck and based on the Ornithopter, a flying machine designed by Leonardo da Vinci. Imagine flying a machine designed by one of the world's greatest inventors. The team is also collaborating with Lynn Holden, a specialist in Egyptian culture, to complete Virtual Ancient Egypt, an educational application based on classic temples mapped to scale. The gallery exhibitions mentioned are being constructed at CMU. However, we anticipate other additions conceived and constructed by participating nodes in Australia, Canada, Japan, and Scandinavia.
Now that the framework of the museum project has been described, perhaps it is useful to discuss the essential points of one application.
5. Cave Automatic Virtual Environment
CAVE is an acronym for Cave Automatic Virtual Environment. It is an immersive virtual reality environment in which projectors are directed at three, four, five, or six of the walls of a room-sized cube.
In the two years we spent considering the implementation of the CAVE (Cave Automatic Virtual Environment), we found many basic problems with head-mounted virtual reality technology, among them:
- Simplistic real-time walk-around imagery
- Unacceptable resolution
- Difficulty sharing experiences between two or more people
- Primary colours only, and simplistic lighting models
- No ability to perform successive refinement of images
- Too sensitive to quick head movement
- No easy integration with real control devices
- Disorientation, a common problem
- Poor multi-sensory integration, including sound and touch
The first CAVE
The first CAVE was developed in the Electronic Visualization Laboratory at the University of Illinois at Chicago. It was announced and demonstrated at SIGGRAPH 1992.
General characteristics of the CAVE
The CAVE is a 10' x 10' x 9' theatre that sits in a larger room measuring around 35' x 25' x 13'. The walls of the CAVE are rear-projection screens, and the floor is a down-projection screen. High-resolution projectors display images on each of the screens by projecting onto mirrors, which reflect the images onto the projection screens. The CAVE generates 3-D graphics, which the user views through special glasses. With these glasses, people using the CAVE can see objects apparently floating in the air, and can walk around them, getting a proper sense of what each object would look like as they move around it. All of this is made possible by electromagnetic sensors. The frame of the CAVE is made of non-magnetic stainless steel in order to interfere as little as possible with those sensors. As a person walks around in the CAVE, their movements are tracked by the sensors and the video adjusts accordingly. Computers control this aspect of the CAVE as well as the audio. The CAVE provides not only 3-D video but also 3-D audio; to produce it, multiple speakers are placed at multiple angles in the CAVE.
The visual display is created by projectors placed outside the CAVE, while the user inside controls the viewpoint by moving. Stereoscopic LCD (liquid crystal display) shutter glasses convey the 3-D image: the computers rapidly produce a pair of images, one for each of the user's eyes, and the glasses are synchronized with the projectors so that each eye sees only the correct image. Since the projectors are placed outside the cube, mirrors often fold the distance needed between the projectors and the screens. SGI workstations drive the projectors; clusters of desktop PCs (personal computers) are also popular for implementing CAVEs, because they cost less and run quickly.
The CAVE has an inside-out viewing model: the observer is inside looking out, as opposed to outside looking in. It uses window projection, that is, an off-axis perspective projection in which the center of projection relative to the projection plane is determined separately for each eye. Each screen updates at 96 Hz or 120 Hz with a resolution of 1024x768 or 1280x492 pixels per screen, as appropriate. Two off-axis stereo projections are displayed on each wall. To give the illusion of 3-D, the viewer wears stereo shutter glasses that present a distinct image to each eye by synchronizing the alternating shutter openings with the screen update rate. When producing a stereo image, the effective refresh rate is cut in half by the need to display two images per 3-D frame; thus, with a 96 Hz screen refresh rate, each eye sees at most a 48 Hz refresh rate. The CAVE has a field of view that ranges from 90° to greater than 180°, depending on the distance of the viewer from the projection screens. The reductions in resolution and refresh rate could be overcome with design changes to the CAVE's current display system or with future projector systems.
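The off-axis projection described above can be computed from the tracked eye position and the known corners of a wall. The sketch below follows the standard generalized perspective projection construction, returning the asymmetric frustum extents at the near plane; the function and parameter names are illustrative, not from any CAVE library.

```python
import numpy as np

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near):
    """Asymmetric frustum extents (l, r, b, t) at the near plane for
    a planar screen given by three corners, as seen from `eye`.
    These extents could feed an OpenGL glFrustum-style projection."""
    # Orthonormal screen basis: right, up, normal.
    vr = lower_right - lower_left
    vu = upper_left - lower_left
    vr = vr / np.linalg.norm(vr)
    vu = vu / np.linalg.norm(vu)
    vn = np.cross(vr, vu)
    vn = vn / np.linalg.norm(vn)

    # Vectors from the eye to the screen corners.
    va = lower_left - eye
    vb = lower_right - eye
    vc = upper_left - eye

    # Perpendicular distance from the eye to the screen plane.
    d = -np.dot(va, vn)

    # Project the corner vectors onto the screen basis, scaled to the near plane.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return l, r, b, t
```

With the eye centered in front of the wall the frustum comes out symmetric; as the tracked head moves sideways, the extents become asymmetric, which is exactly why each eye and each wall needs its own projection.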
Current VE applications are in some ways more ambitious than current flight-simulation applications, yet run on systems with less computational power. An essential feature provided by most VE applications, and one that can affect system performance, is user interaction with nearby virtual objects. To work efficiently with objects at close range, a user needs the VE to provide stereo vision. This one need alone creates a series of constraints on the virtual environment. To produce a correct perspective view for each eye, stereo vision requires the user's orientation and current head position in the space; without this information the 3-D world looks distorted. Consequently, the requirement to know head location forces the use of head-tracking equipment, which can compromise overall system performance in areas such as image refresh rate and lag.
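Deriving the two per-eye viewpoints from the tracked head pose is a small but essential step: each eye sits half an interpupillary distance to either side of the head along the head's right vector. The helper below is an illustrative sketch; the 0.064 m default is a typical adult interpupillary distance, assumed here rather than taken from the source.

```python
import numpy as np

def eye_positions(head_pos, head_right, ipd=0.064):
    """Left and right eye positions from a tracked head pose.
    `head_right` points to the user's right; `ipd` is the
    interpupillary distance in metres (assumed default)."""
    half = 0.5 * ipd * head_right / np.linalg.norm(head_right)
    return head_pos - half, head_pos + half
```

Each returned position would then be fed to the off-axis projection for every wall, producing the two slightly different images the shutter glasses alternate between.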
At present, only directional sound is generated by the CAVE audio system. The future plan is to produce 3-D audio using head-related transfer functions (HRTFs). A MIDI (Musical Instrument Digital Interface) synthesizer is connected via Ethernet to a PC so that, for example, sounds may be produced to alert the user or to carry information in the frequency domain. New systems are being introduced to make the calculation of individual HRTFs more tractable, so that a 3-D audio system can be applied to the CAVE as soon as possible. However, at this time only one person can be tracked, and therefore the 3-D sound can only be correct for that person. This is an important problem for systems that accommodate multiple users of the same environment, such as the CAVE.
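HRTF-based 3-D audio, at its core, convolves a mono source with a measured left-ear and right-ear impulse response (HRIR) for the source's direction. The sketch below fakes the HRIRs with a simple interaural time and level difference, since real HRIRs come from measurement; all names and numbers are illustrative.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal to two ear signals by convolving it with
    head-related impulse responses. With measured HRIRs this places
    the source at the direction the HRIRs were recorded for."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Toy HRIRs for a source to the listener's right (assumed values):
# the right ear hears the sound earlier and louder than the left.
itd_samples = 30               # ~0.7 ms delay at 44.1 kHz
hrir_right = np.array([1.0])
hrir_left = np.zeros(itd_samples + 1)
hrir_left[itd_samples] = 0.5   # delayed and attenuated
```

Feeding an impulse through this renderer shows the effect directly: the right channel fires immediately at full level, while the left channel fires 30 samples later at half amplitude.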
Magnetic Tracking System
Hand and head positions are measured with the Ascension Flock of Birds six-degree-of-freedom electromagnetic tracker, operating at a 60 Hz sampling frequency in a dual-sensor configuration. The transmitter is placed above the center of the CAVE (Cave Automatic Virtual Environment) and has a useful operating range of 6 feet. The head position is used to place the eyes for the correct stereo computation for the observer. The second sensor is used to allow the viewer to interact with the virtual environment. Since this system is not linear, and such nonlinearities can significantly compromise the user's sense of immersion, a calibration of the tracker system is needed.
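One common way to calibrate such a tracker is to survey a grid of known positions, record what the tracker reports at each, and correct live readings with the stored offsets. The crude sketch below applies the offset of the nearest table entry; a real calibration would interpolate between entries. The function and data layout are illustrative assumptions.

```python
import numpy as np

def calibrate(reported, grid_reported, grid_true):
    """Correct a tracker reading using a measured calibration table.
    `grid_reported` holds the positions the tracker reported at the
    surveyed points `grid_true` (both shape (N, 3)). The offset of
    the nearest table entry is applied to the live reading."""
    dists = np.linalg.norm(grid_reported - reported, axis=1)
    i = int(np.argmin(dists))
    offset = grid_true[i] - grid_reported[i]
    return reported + offset
```

Near a surveyed point where the tracker under-reported x by 0.1 m, a live reading gets shifted by that same 0.1 m, straightening out the local field distortion.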
There are many libraries and software packages designed specifically for CAVE applications, and several techniques are available for scene modification. Three popular scene graphs are on the market: OpenSG, OpenSceneGraph, and OpenGL Performer.
OpenSG: available as open source.
OpenSceneGraph: also available as open source.
OpenGL Performer: a commercial product from SGI, better suited to simpler simulations than to large scenes.
The API (Application Programming Interface) most widely used for developing CAVE (Cave Automatic Virtual Environment) software is CAVELib, created at the Electronic Visualization Laboratory at the University of Illinois at Chicago. In 1996 the software was commercialized, and its further development is handled by VRCO Inc. CAVELib is a VR (virtual reality) software package that operates at a low level, abstracting window and viewport creation away from the developer. An important feature of this API is platform independence, enabling developers to create high-end virtual reality applications on Linux and Windows operating systems (Solaris, IRIX, and HP-UX are no longer supported). CAVELib-based applications are configurable at run time, making an application executable independent of the display system.
VR Juggler is a suite of APIs designed to simplify the VR application development process. A virtual platform for the creation and execution of immersive applications, it provides a system-independent operating environment. VR Juggler allows the programmer to write an application that will run with any VR display and input devices without changing any code or having to recompile the application. Worldwide, about one hundred CAVEs use VR Juggler.
CaveUT: Developed by PublicVR, CaveUT leverages existing gaming technology to create a CAVE environment. It is an open-source modification for Unreal Tournament. Using the game's spectator function, CaveUT can place virtual viewpoints around the player's "head". Each viewpoint is a distinct client that, when projected on a wall, gives the appearance of a 3-D environment.
Quest3D: A development platform used to create real-time 3-D applications.