Virtual Reality Applications and Universal Accessibility
Published: Fri, 02 Mar 2018
The concept of Virtual Reality, a visionary three-dimensional, computer-generated environment that allows a single user or multiple users to interact with, navigate, react to, and feel a synthesized world modeled on the real one, has driven social, scientific, economic, and technological change since its origin in the early 1960s. The environment does not necessarily need the same properties as the real world. Most present virtual reality environments are primarily visual experiences, displayed either on a computer desktop or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Virtual reality is a technology that allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Virtual reality brings the vision as close to, and as realistic as, reality itself. Today virtual reality is useful in a variety of fields such as information systems, the military, medicine, mathematics, entertainment, education, and simulation techniques. Most virtual reality systems allow the user to travel through the virtual environment, manipulate objects, and experience the outcomes. The supreme promise of virtual reality is universal accessibility for one and all. Everyone stands to benefit, people across all fields. The challenge is to develop well-designed virtual reality systems, guided by sound design and common-sense rules, that are useful to people and that provide great value and real improvements to quality of life. If this can be accomplished, tomorrow's information society technology could offer greater inclusivity through ambience, intelligence, and universal accessibility.
Virtual reality may have broken into the headlines only in the past few years, but its roots reach back four decades. It was in the late 1950s, as the nation was shaking off the lingering traces of McCarthyism and rocking to the sounds of Elvis, that an idea arose that would change the way people interacted with computers and make VR possible.
At the time, computers were looming colossi locked in air-conditioned rooms and used only by those fluent in abstruse programming languages. Few people considered them more than glorified adding machines. But a former naval radar technician and young electrical engineer named Douglas Engelbart viewed them differently. Rather than limit computers to number crunching, Engelbart envisioned them as tools for digital display. He knew from his experience with radar that any digital information could be viewed on a screen. He reasoned that a computer could be connected to a screen and that the two together could be used to solve problems. At first, his ideas were disregarded, but by the early 1960s other people were thinking the same way, and the time was right for his vision of computing. Communications technology was intersecting with computing and graphics technology, and computers based on transistors rather than vacuum tubes were becoming available. This synergy yielded more user-friendly computers, which laid the foundation for personal computers, computer graphics, and later the emergence of virtual reality. Fear of nuclear attack motivated the U.S. military to commission a new radar system that would process large amounts of information and immediately display it in a form that humans could promptly understand. The resulting radar defense system was the first "real time," or instantaneous, simulation of data. Aircraft designers began experimenting with ways for computers to graphically display, or model, air-flow data, and computer experts began building new computers that could display these models as well as compute them. The designers' work paved the way for scientific visualization, an advanced form of computer modeling that expresses multiple sets of data as images, and for the technique of representing the real world by a computer program.
Massachusetts Institute of Technology
At MIT, self-styled computer wizards strove to ease human interaction with the computer by replacing keyboards with interactive devices that rely on images and hand gestures to manipulate data. The idea of virtual reality has existed since 1965, when Ivan Sutherland expressed his ideas of creating virtual or imaginary worlds. He conducted experiments with three-dimensional displays at MIT, and outlined images on the computer with the light pen he developed in 1962. Sketchpad, Sutherland's first computer-aided design program, opened the way for designers to create blueprints of automobiles, cities, and industrial products with the aid of computers. By the end of the decade these designs operated in real time. By 1970, Sutherland had also produced an early head-mounted display, and Engelbart had unveiled his crude pointing device for moving text around on a computer screen, the first "mouse."
The flight simulator is one of the most influential antecedents of virtual reality. Following World War II and through the 1990s, the military-industrial complex pumped millions of dollars into technology to simulate flying airplanes (and later driving tanks and steering ships). It was safer, and cheaper, to train pilots on the ground before subjecting them to the hazards of flight. Early flight simulators consisted of mock cockpits, built on motion platforms that pitched and rolled, in which the pilot sat while "flying" the aircraft. However, they lacked visual feedback, a serious limitation. This changed when video displays were coupled with model cockpits.
Computer-generated graphics had replaced videos and models by the 1970s. These flight simulators operated in real time, though the graphics were still primitive. The military experimented with head-mounted displays in 1979. These developments were driven by the greater dangers associated with training on and flying the jet fighters being built in the 1970s. By the early 1980s, better software, hardware, and motion-control platforms enabled pilots to navigate through highly detailed virtual worlds.
Virtual video games, Movies and animation
The entertainment industry was a natural consumer of computer graphics and, like the military and industry, the source of many valuable spin-offs in virtual reality. Some of Hollywood's most dazzling special effects of the 1970s were computer generated, such as the battle scenes in the big-budget, blockbuster science fiction movie Star Wars, released in 1977. Later movies such as Terminator and Jurassic Park came onto the scene, and the video game business boomed in the early 1980s.
The data glove, a computer interface device that detects hand movements, is one direct spin-off of entertainment's venture into computer graphics. It was invented to produce music by linking hand movements to a music synthesizer. NASA Ames was one of the first customers for this new computer input device, using it for its experiments with virtual environments. The biggest consumer of the data glove was the Mattel Company, which adapted it into the Power Glove, the sprawling mitt with which children vanquished adversaries in popular Nintendo games. As pinball machines gave way to video games, the field of scientific visualization underwent its own striking transformation, from bar charts and line drawings to dynamic images.
Scientific visualization uses computer graphics to transform columns of data into images. Such imagery enables scientists to absorb the enormous amounts of data involved in some scientific investigations. Imagine trying to understand DNA sequences, molecular models, brain maps, fluid flows, or cosmic explosions from columns of numbers.
A goal of scientific visualization is to capture the dynamic qualities of systems or processes in its images. Borrowing, as well as creating, many of Hollywood's special effects techniques, scientific visualization moved into animation in the 1980s. NCSA's award-winning animation of smog descending upon Los Angeles influenced air pollution legislation in the state in 1990. This animation was a persuasive testament to the value of this kind of imagery.
Animation had severe limitations. First, it was costly. After the computer simulations were developed in full detail, the smog animation itself took six months to produce from the resulting data; individual frames took from several minutes to an hour. Second, it did not allow for interactivity, changes to the data or governing conditions of an experiment that produce immediate responses in the imagery. Once the animation was completed, it could not be altered. Interactivity would have remained wishful thinking if not for the development of high-performance computers in the mid-1980s. These machines provided the speed and memory for programmers and scientists to begin developing advanced visualization software. By the end of the 1980s, low-cost, high-resolution graphics workstations were linked to high-speed computers, which made visualization technology more accessible.
The basic elements of virtual reality had existed since 1980, but it took high-performance computers, with their powerful image-rendering capabilities, to make it work. Demand was rising for visualization environments to help scientists comprehend the vast amounts of data pouring out of their computers daily. As drivers for both computation and VR, high-performance computers no longer served as mere number crunchers, but became exciting vehicles for exploration and discovery.
3. Introduction to Virtual reality
Virtual Reality is a computer-generated stereoscopic environment. By combining a believable world with the ability to manipulate it, as in simulation programs, it contributes to interactive learning environments. Most virtual reality systems allow the user to travel through the virtual environment, manipulate objects, and experience the outcome of an event. Virtual reality brings the imagination as close to, and as realistic as, reality itself. This environment does not necessarily need the same properties as the real world: there can be different forces, gravity, magnetic fields, and so on, in contrast to real solid objects. It is the technique of representing the real world, or an imagined environment, by a computer program so that it can be experienced visually in the three dimensions of width, height, and depth. It involves the use of advanced technologies, including computers and various multimedia peripherals, to produce a simulated (i.e., virtual) environment that users perceive through their senses as comparable to real-world objects and events. Virtual reality can be delivered using a variety of systems. Full immersion in a virtual world, manipulating things in that world and facing consequences as one would in the real world, requires further development of devices and complex simulation programs. In virtual systems, movement is simulated by shifting the optics in the field of vision in direct response to the movement of certain body parts, such as the head or hand. Human-computer interaction is a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of the major phenomena surrounding them. Many users have physical or cognitive limitations that make it difficult to handle several different devices at the same time.
Virtual reality is a new medium brought about by technological advances in which much experimentation is now taking place to find practical applications and more effective ways to communicate.
A virtual world is everything included in the content of a given medium. It may exist solely in the mind of its originator, or be broadcast in such a way that it can be shared with others. The key elements in experiencing virtual reality, or any reality for that matter, are a virtual world, immersion, sensory feedback (responding to user input), and interactivity. In virtual reality, the effect of entering the world begins with physical, rather than merely mental, immersion, because immersion is a necessary component of virtual reality. Virtual reality is closely associated with the ability of the participant to move physically within the world. Telepresence, augmented reality, and cyberspace are closely related to virtual reality. The participant accesses the content of the virtual world through the interface associated with it, and interacts with the virtual world at the boundary between the self and the medium. Much effort has been put into the study of good user interface design for many media, and virtual reality will require no less effort.
4. Applications of Virtual reality
Virtual reality showed its applicability in the early 1990s, and its exposure went beyond expectations, even though it started with blocky images. In entertainment, applications include games, theatre experiences, and much more. Virtual reality also comes into the picture in architecture, where virtual models of buildings are created so that users can visualize a building and even walk through it. This can help clients see the structure of a building even before the foundation is laid; they can check out the whole building and change the design if there are alterations to the plan, making planning and modification very realistic and easy. Virtual reality is also applicable in medicine, information systems, the military, and many other fields. The discussion that follows gives a detailed explanation of these applications.
4.1 Virtual Reality in information system:
Augmented reality (AR) generates a direct or indirect view of the physical, real-world environment in which the elements of that view are mixed with virtual, computer-generated imagery to create a mixed reality. Consider, for example, a sports channel on TV, where the on-screen scores are real-time elements overlaid in the semantic context of the environment. With advances in AR, real-world entities can be digitized, and the user can interact with the surroundings in the digital world itself. This can be achieved by adding computer vision and object recognition to AR technology. Through this technology, information about the surroundings and the different objects present in them can be obtained, comparable to real-world information. The information is retrieved in the form of an information layer.
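As a concrete illustration of such an information layer, a minimal sketch (with hypothetical coordinates, field of view, and screen width, not drawn from any particular AR product) might decide where on screen to draw a label for a nearby point of interest, given the user's position and compass heading:

```python
import math

def overlay_x(user_e, user_n, heading_deg, poi_e, poi_n,
              fov_deg=60.0, screen_w=800):
    """Horizontal pixel position for a point-of-interest label, or None
    if the point lies outside the camera's field of view.

    Coordinates are metres east/north of a local origin; heading is
    degrees clockwise from north (an illustrative convention)."""
    bearing = math.degrees(math.atan2(poi_e - user_e, poi_n - user_n))
    # Signed angle between camera heading and the POI, wrapped to [-180, 180)
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    half_fov = fov_deg / 2.0
    if abs(rel) > half_fov:
        return None  # POI is off-screen
    # Linear mapping: -half_fov -> pixel 0, +half_fov -> pixel screen_w
    return (rel + half_fov) / fov_deg * screen_w
```

A real AR browser would also use the distance to the point and the device's pitch, but the same bearing-versus-heading comparison is at the core of placing labels over a live camera view.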
At present, AR research is dominated by applications of computer-generated imagery that replicate the real world using live video streams. Different displays are used to visualize the real world, such as head-mounted displays and virtual retinal displays. Beyond displays, research also constructs controlled environments that replicate the real world; for this, many sensors and actuators are used.
The two definitions of Augmented Reality (AR) that are widely accepted today are:
- Augmented Reality (AR) combines the real and the virtual, is interactive in real time (i.e., in the real world), and is registered in 3D. This definition was given by Ronald Azuma in 1997.
- Paul Milgram and Fumio Kishino define AR in terms of a continuum spanning from the real-world environment to a pure virtual or digital environment.
Due to these developments in Augmented Reality (AR), the general public is becoming attracted to it, and interest in it is increasing.
The main hardware components used in Augmented Reality (AR) are as follows:
- Input Devices
- Combination of powerful CPU
- solid state compass
- Smart Phones.
Augmented Reality (AR) uses different display techniques to visualize real-world entities:
- Head Mounted Displays
- Handheld Displays
- Spatial Displays
Head Mounted Displays:
A Head Mounted Display (HMD) is one of the display techniques used for visualizing both physical entities and virtual graphical objects; the key requirement is that all the entities and objects must replicate the real world. HMDs work in two ways: optical see-through and video see-through. Optical see-through uses half-silvered mirror technology, which first lets the physical world pass through the optics and then reflects the graphical overlay information onto that view, visualizing the physical entities within the virtual world. For this, the HMD uses tracking with six-degree-of-freedom sensors. Tracking allows the physical information to be registered in the computer system, where it is used as the virtual world's information. The experience a user gets is very impressive and effective. Products in this category include the Microvision Nomad, Sony Glasstron, and I/O Displays.
Handheld augmented reality is another display technique for visualizing virtual entities in the physical world. It uses a small computing device, small enough to fit in the user's hand. Handheld AR uses video see-through techniques to convert physical entities and information into virtual, graphical information. The devices used include digital compasses and GPS with six-degree-of-freedom sensors. ARToolKit has emerged as a tracking option for such devices.
Instead of the user wearing or carrying a display, as with head-mounted displays or handheld devices, spatial augmented reality (SAR) uses digital projectors to display graphical information onto physical objects. The key difference in spatial augmented reality is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, allowing for collaboration between them. It has several advantages over traditional head-mounted displays and handheld devices. The user is not required to carry equipment or wear a display over their eyes, which makes spatial AR a good candidate for collaboration on a common project, as participants can see each other's faces. A system can be used by multiple people at the same time without every individual wearing a head-mounted display. Spatial AR does not suffer from the limited display resolution of current head-mounted displays and portable devices; to expand the display area, a projector-based display system can simply incorporate more projectors. Whereas portable devices offer only a small window into the world, an indoor SAR system can display on any number of surfaces at once. The persistent nature of SAR makes it an ideal technology to support design, since it provides end users with both graphical visualisation and passive haptic sensation: people are able to touch physical objects, and that touch is what provides the passive haptic sensation.
Modern augmented reality systems use a range of tracking technologies, including digital cameras, optical sensors, accelerometers, GPS, gyroscopes, solid-state compasses, RFID, and wireless sensors. These technologies offer different levels of precision and accuracy. The most important task in such a system is to track the pose and position of the user's head.
Virtual Reality Tracking Systems
Tracking devices are intrinsic components of a VR system. They communicate with the system's processing unit, telling it the orientation of the user's point of view.
The system allows the user to move around within a physical space, and the trackers detect where the user is moving, along with direction and speed.
There are various kinds of tracking systems in use in VR, but a few things are common to all of them: they detect six degrees of freedom (6-DOF), the object's position along the x, y, and z coordinates in space, together with the object's orientation, its yaw, pitch, and roll.
From the user's point of view, when you wear the HMD, the view changes as you look up, down, left, and right. The position also changes when you tilt your head or move it forward or backward at an angle without changing the angle of your gaze. The trackers on the HMD tell the CPU where you are looking, and the CPU sends the right images to the HMD screens.
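The six tracked values, and the gaze direction an HMD renderer would derive from them, can be sketched as follows. The axis conventions here (yaw 0 looking along +z, pitch raising the gaze) are illustrative assumptions, not a description of any particular headset:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """One tracker sample: position (x, y, z) plus orientation
    (yaw, pitch, roll) in degrees -- the six degrees of freedom."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def view_direction(p: Pose6DOF):
    """Unit gaze vector from yaw and pitch. Roll spins the view about
    the gaze axis, so it does not change the direction itself."""
    cy, sy = math.cos(math.radians(p.yaw)), math.sin(math.radians(p.yaw))
    cp, sp = math.cos(math.radians(p.pitch)), math.sin(math.radians(p.pitch))
    return (sy * cp, sp, cy * cp)
```

For example, a pose with zero yaw and pitch looks straight ahead along +z, while yawing 90 degrees turns the gaze along +x, which is exactly the information the CPU needs to pick the right images for the screens.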
Every tracking system has a device that generates a signal, a sensor that detects the signal, and a control unit that processes the signal and transfers the information to the CPU.
Some tracking systems require sensor components to be attached to the user. In such systems, the signal emitters are placed at fixed points in the surrounding environment.
The signals sent from emitters to sensors can take many forms, including electromagnetic, acoustic, optical, and mechanical signals. Each technology has its own set of advantages and disadvantages.
Electromagnetic tracking systems
These systems measure magnetic fields generated by running an electric current through three coiled wires arranged perpendicular to one another. Each coil becomes an electromagnet, and the system's sensors measure how the magnetic field affects the other coils. This measurement tells the system the direction and orientation of the emitter. An efficient electromagnetic tracking system is very responsive, with low levels of latency. One disadvantage is that anything that can generate a magnetic field can interfere with the signals sent to the sensors.
Acoustic tracking systems
Acoustic tracking systems emit and sense ultrasonic sound waves to ascertain the position and orientation of a target. Most measure the time it takes for an ultrasonic pulse to reach a sensor. Generally, the sensors are fixed in the environment and the user wears the ultrasonic emitters. The system calculates the position and orientation of the target from the time the sound took to reach the sensors. Because sound travels relatively slowly, the rate at which a target's position can be updated is correspondingly slow, which is the main disadvantage of acoustic tracking. The speed of sound through air also changes with the temperature, humidity, and barometric pressure of the environment, which adversely affects the system's accuracy.
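The time-of-flight principle can be sketched in a few lines: distance is the speed of sound times the travel time, and with three fixed sensors a position can be recovered by linearizing the circle equations. This is a simplified 2D sketch; real systems work in 3D, use more sensors, and also estimate orientation:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C; varies with conditions

def tof_distance(t):
    """Distance implied by a time-of-flight measurement in seconds."""
    return SPEED_OF_SOUND * t

def trilaterate_2d(sensors, times):
    """Estimate the (x, y) position of an emitter from times-of-flight
    to three fixed sensors.

    Each sensor i gives a circle (x-xi)^2 + (y-yi)^2 = ri^2; subtracting
    the first equation from the other two leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = sensors
    r1, r2, r3 = (tof_distance(t) for t in times)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

The dependence on SPEED_OF_SOUND makes the environmental sensitivity concrete: if the true speed of sound drifts with temperature while the constant does not, every recovered distance, and hence the position, is biased.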
Optical tracking devices
As the name indicates, these devices use light to measure a target's position and orientation. The signal emitter in an optical tracking device typically consists of a set of infrared LEDs, and the sensors are cameras that can detect the emitted infrared light. The LEDs light up in sequenced pulses. The cameras record the pulsed signals and send the information to the system's processing unit, which can then extrapolate from the data to determine the position and orientation of the target. Optical systems have a fast update rate, which minimises latency. The disadvantages are that the line of sight between a camera and an LED can be blocked, interfering with the tracking process, and that ambient light or infrared radiation can make the system less effective.
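One simple way pulsed signals can distinguish individual LEDs is to give each LED a unique on/off code across camera frames. The sketch below is an illustrative scheme, not any specific product's protocol: it matches observed blink patterns against a codebook of known LED codes:

```python
def identify_leds(observed, codebook):
    """Map each observed blink pattern (a sequence of 0/1 per frame) to
    an LED id from a codebook of unique pulse codes.

    Patterns not present in the codebook (noise, reflections) map to
    None so they can be discarded before pose estimation."""
    inverse = {code: led_id for led_id, code in codebook.items()}
    return [inverse.get(tuple(pattern)) for pattern in observed]
```

Once each bright spot in the image is labelled with an LED id, the processing unit knows which physical emitter it is seeing and can solve for the target's position and orientation.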
Mechanical tracking systems
Mechanical tracking systems rely on a physical connection between a fixed reference point and the target. A common example in the VR field is the BOOM display: an HMD mounted on the end of a mechanical arm with two points of articulation. The system determines position and orientation through the arm. Mechanical tracking systems have a very high update rate; their main disadvantage is that they limit the user's range of motion.
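The arm's geometry is what lets the system compute the display's position directly from its joint angles. A planar forward-kinematics sketch for a two-link arm follows; the link lengths and the relative-angle convention are illustrative assumptions:

```python
import math

def boom_position(l1, l2, theta1_deg, theta2_deg):
    """Planar forward kinematics for a two-link arm, as in a BOOM-style
    display: link lengths l1 and l2 plus the two joint angles give the
    position of the head-mounted display at the arm's end.

    theta2 is measured relative to the first link (assumed convention)."""
    t1 = math.radians(theta1_deg)
    t2 = t1 + math.radians(theta2_deg)
    x = l1 * math.cos(t1) + l2 * math.cos(t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t2)
    return x, y
```

Because the joint encoders are read directly, there is no signal to occlude or interfere with, which is why update rates are high, but the reachable workspace is bounded by the arm itself.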
4.2 Virtual Reality in military simulations:
VR technology offers a potentially economical and efficient tool for military forces preparing to deal with dynamic or potentially dangerous situations. In the late 1920s and 1930s, almost all simulation in a military setting meant the flight trainers built by the Link Company. At the time, the trainers looked like cut-off caskets mounted on a pedestal, and were used to teach instrument flying. The darkness inside the trainer cockpit, the realistic readings on the instrument panel, and the movement of the trainer on the pedestal combined to produce a sensation similar to actually flying on instruments at night. The Link trainers were very effective tools for their intended purpose, teaching thousands of pilots the night-flying skills they needed before and during World War II.
To move beyond the instrument-flying domain, simulator designers needed a way to provide a view of the outside world. The first example of a simulator with an outside view appeared in the 1950s, when television and video cameras became commercially available. With this equipment, a video camera could be "flown" above a scale model of the terrain around an airport, and the resulting image was sent to a television monitor positioned in front of the pilot in the simulator. The pilot's movement of the control stick and rudder produced corresponding movement of the camera over the terrain board. Now the pilot could receive visual feedback both inside and outside the cockpit.
In transport aircraft simulators, the logical extension of the video camera/television monitor approach was to use multiple monitors to simulate the total field of view from the airplane cockpit, where the field of view needs to be only about 60 degrees vertically and 180 degrees horizontally. For fighter aircraft simulators, the field of view must be at least 180 degrees both horizontally and vertically. For these applications, the simulator consists of a cockpit positioned at the centre of a domed room, and the virtual images are projected onto the inside surface of the dome. These kinds of simulators have proved to be very effective training aids in themselves, and the newest innovation is a project called SIMNET that electronically links two or more simulators to produce a distributed simulation environment. [McCarty, 1993] Distributed simulations can be used not only for training, but also to develop and test new combat strategies and manoeuvres. A significant advance in this area is an IEEE data protocol standard for distributed interactive simulation. This standard allows a distributed simulation to include not only aircraft, but also land-based vehicles and ships. Another recent development is the use of head-mounted displays (HMDs) to decrease the cost of wide-field-of-view simulations.
Group of technologies for military missions:
Among the applications of virtual reality cited by the military is information enhancement. In an active combat environment, it is imperative to provide the pilot or tank commander with as much of the needed information as possible while cutting the amount of distracting information. This goal led the Air Force to develop the head-up display (HUD), which optically merges important information such as altitude, airspeed, and heading with the pilot's view through the forward windscreen of a fighter aircraft. With the HUD, the pilot no longer has to look down at his instruments. When the HUD is paired with the aircraft's radar and other sensors, a synthetic image of an enemy aircraft can be displayed on the HUD to show the pilot where that aircraft is, even if the pilot cannot see the actual aircraft with his unaided eyes. This combination of real and virtual views of the outside world can be extended to night-time operations. Using an infrared camera mounted in the nose of the aircraft, an enhanced view of the terrain ahead of the aircraft can be projected on the HUD. The effect is to give the pilot a "daylight" window through which he has both a real and an enhanced view of the night-time terrain and sky. In some situations, the pilot may need to concentrate fully on the virtual information and ignore the actual view entirely. Work in this field was begun by Thomas Furness III and others at Wright Laboratories, Wright-Patterson Air Force Base, Ohio. This work, dubbed the Super Cockpit, envisioned not only a virtual view of the outside world, but also of the cockpit itself, in which the pilot would select and manipulate virtual controls.
Automobile companies have used VR technology to build virtual prototypes of new vehicles, testing them virtually before developing a single physical part. Designers can make changes without having to scrap the entire model, as they often would with physical ones. The development process becomes more efficient and less expensive as a result.
Smart weapons and remotely- piloted vehicles (RPVs):
Many combat operations are very risky, and they become even more dangerous when the combatant attempts to improve their performance. Two clear reasons have driven the military to explore and employ these technologies in their operations: to reduce vulnerability to risk and to increase stealth.
Prime instances of this principle are delivering weapons and performing reconnaissance. Executing either of these tasks well takes time, and this is normally the time when the combatant is exposed to hostile attack. For these reasons, smart weapons and remotely piloted vehicles (RPVs) were developed to address these problems. Some smart weapons are autonomous, while others are remotely controlled after they are launched. This allows the shooter or weapon controller to set up the weapon and immediately take cover, minimizing exposure to return fire. In the case of RPVs, the person who controls the vehicle not only has the advantage of being in a safer place, but the RPV can also be made smaller than a vehicle that would carry a person, making it more difficult for the enemy to detect.
4.3 Virtual Reality in Medicine
Virtual reality is being used today in many ways, one of the importa