Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UKEssays.com.
The hardware components for augmented reality are a processor, a display, sensors, and input devices. Modern mobile computing devices such as smartphones and tablet computers contain these elements, often including a camera and MEMS sensors such as an accelerometer, GPS, and solid-state compass, making them suitable AR platforms.
Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the body.
A head-mounted display (HMD) is a display device worn on the head or as part of a harness or helmet. HMDs place images of both the physical world and virtual objects over the user’s field of view. Modern HMDs often employ sensors for six-degrees-of-freedom monitoring that allow the system to align virtual information to the physical world and adjust it with the user’s head movements. HMDs can provide users with immersive, mobile and collaborative AR experiences.
AR displays can be rendered on devices resembling eyeglasses. Versions include eye wear that employ cameras to intercept the real world view and re-display its augmented view through the eye pieces and devices in which the AR imagery is projected through or reflected off the surfaces of the eye wear lens pieces.
Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens including integrated circuitry, LEDs and an antenna for wireless communication. Another version of contact lenses, in development for the U.S. Military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real world objects at the same time.
Virtual retinal display
A virtual retinal display (VRD) is a personal display device under development at the University of Washington’s Human Interface Technology Laboratory. With this technology, a display is scanned directly onto the retina of a viewer’s eye. The viewer sees what appears to be a conventional display floating in space in front of them.
The EyeTap (also known as Generation-2 Glass) captures rays of light that would otherwise pass through the center of the lens of the wearer’s eye, and substitutes each ray of light with synthetic computer-controlled light. The Generation-4 Glass (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source), except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display, by way of exact alignment with the eye and resynthesis (in laser light) of the rays of light entering the eye.
Handheld displays employ a small display that fits in a user’s hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiduciary markers, and later GPS units and MEMS sensors such as digital compasses and six-degrees-of-freedom accelerometer-gyroscope units. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones. The disadvantages are the physical constraint of the user having to hold the handheld device out in front of them at all times, as well as the distorting effect of classically wide-angled mobile phone cameras compared to the real world as viewed through the eye.
Spatial Augmented Reality (SAR) augments real world objects and scenes without the use of special displays such as monitors, head mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users. SAR has several advantages over traditional head-mounted displays and handheld devices. The user is not required to carry equipment or wear the display over their eyes. This makes spatial AR a good candidate for collaborative work, as the users can see each other’s faces. A system can be used by multiple people at the same time without each having to wear a head-mounted display.
Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance an object’s appearance with a simple unit: a projector, camera, and sensor. Handheld projectors further this goal by enabling cluster configurations of environment sensing, reducing the need for additional peripheral sensing.
Other tangible applications include table and wall projections. One such innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle. Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Altogether, current augmented reality display technology can be applied to improve design and visualization, or function as scientific simulations and tools for education or entertainment. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.
Spatial AR does not suffer from the limited display resolution of current head-mounted displays and portable devices. A projector based display system can simply incorporate more projectors to expand the display area. Where portable devices have a small window into the world for drawing, a SAR system can display on any number of surfaces of an indoor setting at once. The drawbacks, however, are that SAR systems of projectors do not work so well in sunlight and also require a surface on which to project the computer-generated graphics. Augmentations cannot simply hang in the air as they do with handheld and HMD-based AR. The tangible nature of SAR, though, makes this an ideal technology to support design, as SAR supports both a graphical visualisation and passive haptic sensation for the end users. People are able to touch physical objects, and it is this process that provides the passive haptic sensation.
Modern mobile augmented reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user’s head. Tracking the user’s hand(s) or a handheld input device can provide a 6DOF interaction technique.
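To give a rough sense of what six-degrees-of-freedom head tracking feeds into: once a tracker reports the head’s rotation (three angles) and translation (three offsets), each virtual object’s world position must be re-expressed in head/camera coordinates on every frame. A minimal numpy sketch, not from the source; the function names and angle convention are my own:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose a 3x3 rotation matrix from yaw, pitch, roll (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx

def world_to_head(point_world, R, t):
    """Re-express a world-space point in head coordinates, given the
    head's rotation matrix R and position t reported by the tracker."""
    return R.T @ (np.asarray(point_world, dtype=float) - np.asarray(t, dtype=float))

# A virtual label anchored 2 m ahead of the origin; head turned 90 degrees:
R = rotation_matrix(np.pi / 2, 0.0, 0.0)
print(world_to_head([2.0, 0.0, 0.0], R, np.zeros(3)))  # ~ [0., -2., 0.]
```

Running this transform per frame is what keeps an augmentation “pinned” to a fixed spot in the world as the head moves.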
Techniques include speech recognition systems that translate a user’s spoken words into computer instructions and gesture recognition systems that can interpret a user’s body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear.
The computer analyzes the sensed visual and other data to synthesize and position augmentations.
Software and algorithms
A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real-world coordinates, independent of the camera, from camera images. That process, called image registration, uses different methods of computer vision, mostly related to video tracking. Many computer vision methods of augmented reality are inherited from visual odometry. Usually these methods consist of two stages.
The first stage detects interest points, fiduciary markers, or optical flow in the camera images, using feature detection methods such as corner detection, blob detection, edge detection, thresholding, and other image processing methods. The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiduciary markers) are present in the scene; in some of those cases the scene’s 3D structure must be precalculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure-from-motion methods such as bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.
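The second stage can be illustrated for the simplest case, a planar scene: given four or more point pairs matched between the camera image and a known reference (e.g. a fiduciary marker), a homography relating the two can be recovered with the standard direct linear transform (DLT). This is a simplified numpy sketch, not a full robust pipeline (no RANSAC, no coordinate normalization):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: fit H so that dst ~ H @ src (homogeneous).

    src, dst: (N, 2) arrays of matched image points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right singular vector.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H and dehomogenize."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = (H @ pts_h.T).T
    return out[:, :2] / out[:, 2:3]

# Four marker corners as detected in the image vs. their known reference layout:
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[10, 20], [110, 25], [105, 130], [5, 125]], dtype=float)
H = estimate_homography(src, dst)
print(np.round(apply_homography(H, src)))  # maps back onto dst
```

With the homography (or, in the general 3D case, the full camera pose) in hand, the renderer can draw virtual content so it appears attached to the marker.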
Augmented reality has many applications, and many areas can benefit from the usage of AR technology. AR was initially used for military, industrial, and medical applications, but was soon applied to commercial and entertainment areas as well.
AR can be used to aid archaeological research, by augmenting archaeological features onto the modern landscape, enabling archaeologists to formulate conclusions about site placement and configuration.
Another application given to AR in this field is the possibility for users to rebuild ruins, buildings, or even landscapes as they formerly existed.
AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed into a real life local view of a property before the physical building is constructed there. AR can also be employed within an architect’s work space, rendering into their view animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications allowing users viewing a building’s exterior to virtually see through its walls, viewing its interior objects and layout.
AR technology has helped disabled individuals create art by using eye tracking to translate a user’s eye movements into drawings on a screen. An item such as a commemorative coin can be designed so that when scanned by an AR-enabled device it displays additional objects and layers of information that were not visible in a real world view of it.
ViewAR BUTLERS App – Placing furniture using AR
AR can enhance product previews such as allowing a customer to view what’s inside a product’s packaging without opening it. AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use. AR is used to integrate print and video marketing. Printed marketing material can be designed with certain “trigger” images that, when scanned by an AR enabled device using image recognition, activate a video version of the promotional material.
Augmented reality applications can complement a standard curriculum. Text, graphics, video and audio can be superimposed into a student’s real-time environment. Textbooks, flashcards and other educational reading material can contain embedded “markers” that, when scanned by an AR device, produce supplementary information rendered for the student in a multimedia format. Students can participate interactively with computer-generated simulations of historical events, exploring and learning details of each significant area of the event site. AR can aid students in understanding chemistry by allowing them to visualize the spatial structure of a molecule and interact with a virtual model of it that appears, in a camera image, positioned at a marker held in their hand. Augmented reality technology also permits learning via remote collaboration, in which students and instructors not at the same physical location can share a common virtual learning environment populated by virtual objects and learning materials and interact with one another within that setting.
30 years of Augmediated Reality in everyday life.
Since the 1970s and early 1980s, Steve Mann has been developing technologies meant for everyday use, i.e. “horizontal” across all applications rather than for a specific “vertical” market. Examples include Mann’s “EyeTap Digital Eye Glass”, a general-purpose seeing aid that does dynamic-range management (HDR vision) and overlays, underlays, and simultaneous augmentation and diminishment (e.g. diminishing the electric arc while looking at a welding torch).
AR can help industrial designers experience a product’s design and operation before completion. Volkswagen uses AR for comparing calculated and actual crash test imagery. AR can be used to visualize and modify a car body structure and engine layout. AR can also be used to compare digital mock-ups with physical mock-ups for efficiently finding discrepancies between them.
Augmented reality can provide the surgeon with information that is otherwise hidden, such as the heart rate, the blood pressure, and the state of the patient’s organs. In particular, AR can let the doctor look inside the patient by combining one source of images, such as an X-ray, with another, such as video. This helps the doctor identify the problem more intuitively than looking at only one type of image data. The approach works in a similar way to technicians performing maintenance work.
Examples include a virtual X-ray view based on prior tomography or on real time images from ultrasound and confocal microscopy probes or visualizing the position of a tumor in the video of an endoscope. AR can enhance viewing a fetus inside a mother’s womb. See also Mixed reality.
In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier’s goggles in real time. From the soldier’s viewpoint, people and various objects can be marked with special indicators to warn of potential dangers. Virtual maps and 360° view camera imaging can also be rendered to aid a soldier’s navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.
Augmented reality map on iPhone
AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile’s windshield indicating destination directions, speed, weather, terrain, road conditions and traffic information, as well as alerts to potential hazards in the driver’s path. Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship’s heading and speed while moving throughout the bridge or performing other tasks.
AR can help facilitate collaboration among distributed team members in a work force via conferences with real and virtual participants. AR tasks can include brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.
Sports and entertainment
AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow “first down” line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. AR is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker ball trajectories. 
AR can enhance concert and theater performances. For example, artists can allow listeners to augment their listening experience by adding their performance to that of other bands/groups of users.
The gaming industry has benefited greatly from the development of this technology. A number of games have been developed for prepared indoor environments. Early AR games include AR air hockey, collaborative combat against virtual enemies, and AR-enhanced pool games. A significant number of games now incorporate AR, and the introduction of the smartphone has broadened its impact.
Complex tasks such as assembly, maintenance, and surgery can be simplified by inserting additional information into the field of view. For example, labels can be displayed on parts of a system to clarify operating instructions for a mechanic who is performing maintenance on the system. Assembly lines gain many benefits from the usage of AR. In addition to Boeing, BMW and Volkswagen are known for incorporating this technology into their assembly lines to improve their manufacturing and assembly processes. Big machines are difficult to maintain because of their multiple layers and internal structures. With the use of AR, workers can complete their jobs much more easily because AR lets them look through the machine as if with X-ray vision, pointing them to the problem right away.
Tourism and sightseeing
Augmented reality applications can enhance a user’s experience when traveling by providing real time informational displays regarding a location and its features, including comments made by previous visitors of the site. AR applications allow tourists to experience simulations of historical events, places and objects by rendering them into their current view of a landscape. AR applications can also present location information by audio, announcing features of interest at a particular site as they become visible to the user.
AR systems can interpret foreign text on signs and menus and, in a user’s augmented view, re-display the text in the user’s language. Spoken words of a foreign language can be translated and displayed in a user’s view as printed subtitles.
How Augmented Reality Works
Video games have been entertaining us for nearly 30 years, ever since Pong was introduced to arcades in the early 1970s. Computer graphics have become much more sophisticated since then, and game graphics are pushing the barriers of photorealism. Now, researchers and engineers are pulling graphics out of your television screen or computer display and integrating them into real-world environments. This new technology, called augmented reality, blurs the line between what’s real and what’s computer-generated by enhancing what we see, hear, feel and smell.
On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world. Augmented reality adds graphics, sounds, haptic feedback and smell to the natural world as it exists. Both video games and cell phones are driving the development of augmented reality. Everyone from tourists to soldiers to someone looking for the closest subway stop can now benefit from the ability to place computer-generated graphics in their field of vision.
Augmented reality is changing the way we view the world — or at least the way its users see the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. Similar devices and applications already exist, particularly on smartphones like the iPhone.
In this article, we’ll take a look at where augmented reality is now and where it may be headed soon.
Augmenting Our World
The basic idea of augmented reality is to superimpose graphics, audio and other sensory enhancements over a real-world environment in real time. Sounds pretty simple. Besides, haven’t television networks been doing that with graphics for decades? However, augmented reality is more advanced than any technology you’ve seen in television broadcasts, although some new TV effects come close, such as RACEf/x and the super-imposed first down line on televised U.S. football games, both created by Sportvision. But these systems display graphics for only one point of view. Next-generation augmented-reality systems will display graphics for each viewer’s perspective.
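The superimposing step itself can be as simple as per-pixel alpha compositing: blend the rendered graphic over the camera frame at the position the tracker reports, once per video frame. A toy numpy sketch, not from the source; the array shapes and names are illustrative:

```python
import numpy as np

def composite(frame, overlay, alpha, top_left):
    """Alpha-blend `overlay` onto `frame` at `top_left` = (row, col).

    frame:   (H, W, 3) background image (the camera view)
    overlay: (h, w, 3) computer-generated graphic
    alpha:   (h, w) opacity mask in [0, 1]
    """
    r, c = top_left
    h, w = overlay.shape[:2]
    region = frame[r:r + h, c:c + w].astype(float)
    a = alpha[..., None]                      # broadcast the mask over RGB
    frame[r:r + h, c:c + w] = (a * overlay + (1 - a) * region).astype(frame.dtype)
    return frame

# A 4x4 gray "camera frame" with a fully opaque 2x2 white label at (1, 1):
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
label = np.full((2, 2, 3), 255, dtype=np.uint8)
out = composite(frame, label, np.ones((2, 2)), (1, 1))
print(out[1, 1])  # [255 255 255]
```

A per-viewer system does exactly this, but with the `top_left` position recomputed every frame from that viewer’s tracked pose.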
Some of the most exciting augmented-reality work is taking place in research labs at universities around the world. In February 2009, at the TED conference, Pattie Maes and Pranav Mistry presented their augmented-reality system, which they developed as part of MIT Media Lab’s Fluid Interfaces Group. They call it SixthSense, and it relies on some basic components that are found in many augmented reality systems: a camera, a small projector, a smartphone and a mirror.
These components are strung together in a lanyardlike apparatus that the user wears around his neck. The user also wears four colored caps on the fingers, and these caps are used to manipulate the images that the projector emits.
SixthSense is remarkable because it uses these simple, off-the-shelf components that cost around $350. It is also notable because the projector essentially turns any surface into an interactive screen. Essentially, the device works by using the camera and mirror to examine the surrounding world, feeding that image to the phone (which processes the image, gathers GPS coordinates and pulls data from the Internet), and then projecting information from the projector onto the surface in front of the user, whether it’s a wrist, a wall, or even a person. Because the user is wearing the camera on his chest, SixthSense will augment whatever he looks at; for example, if he picks up a can of soup in a grocery store, SixthSense can find and project onto the soup information about its ingredients, price, nutritional value — even customer reviews.
By using his capped fingers — Pattie Maes says even fingers with different colors of nail polish would work — a user can perform actions on the projected information, which are then picked up by the camera and processed by the phone. If he wants to know more about that can of soup than is projected on it, he can use his fingers to interact with the projected image and learn about, say, competing brands. SixthSense can also recognize complex gestures — draw a circle on your wrist and SixthSense projects a watch with the current time.
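The colored-cap tracking that drives these gestures can be approximated with simple color segmentation: threshold the frame for the cap’s color and take the centroid of the matching pixels. A toy numpy sketch of the idea, not SixthSense’s actual implementation; a real system would typically work in HSV space and filter noise:

```python
import numpy as np

def track_color(frame, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB value lies in
    [lower, upper], or None when no pixel matches."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

# A mostly dark frame with a red "fingertip cap" blob centered at (5, 7):
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:7, 6:9] = [220, 30, 30]
print(track_color(frame, [180, 0, 0], [255, 80, 80]))  # (5.0, 7.0)
```

Tracking one centroid per cap color, frame over frame, yields the fingertip trajectories from which gestures like the drawn circle are recognized.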
Mistry demonstrates SixthSense
Photo courtesy Sam Ogden, Pranav Mistry, MIT Media Lab
The SixthSense augmented reality system lets you project a phone pad onto your hand and phone a friend — without removing the phone from your pocket. See more gadget pictures.
Photo courtesy Lynn Barry, Pranav Mistry, MIT Media Lab
Augmented Reality on Cell Phones
While it may be some time before you buy a device like SixthSense, more primitive versions of augmented reality are already here on some cell phones, particularly in applications for the iPhone and phones with the Android operating system. In the Netherlands, cell phone owners can download an application called Layar that uses the phone’s camera and GPS capabilities to gather information about the surrounding area. Layar then shows information about restaurants or other sites in the area, overlaying this information on the phone’s screen. You can even point the phone at a building, and Layar will tell you if any companies in that building are hiring, or it might be able to find photos of the building on Flickr or to locate its history on Wikipedia.
Layar isn’t the only application of its type. In August 2009, some iPhone users were surprised to find an augmented-reality “easter egg” hidden within the Yelp application. Yelp is known for its user reviews of restaurants and other businesses, but its hidden augmented-reality component, called Monocle, takes things one step further. Just start up the Yelp app, shake your iPhone 3GS three times and Monocle activates. Using your phone’s GPS and compass, Monocle will display information about local restaurants, including ratings and reviews, on your cell phone screen. You can touch one of the listings to find out more about a particular restaurant.
There are other augmented reality apps out there for the iPhone and other similar phones — and many more in development. Urbanspoon has much of the same functionality as Yelp’s Monocle. Then there’s Wikitude, which finds information from Wikipedia about sites in the area. Underlying most of these applications are a phone’s GPS and compass; by knowing where you are, these applications can make sure to offer information relevant to you. We’re still not quite at the stage of full-on image recognition, but trust us, people are working on it.
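The GPS-and-compass logic these apps share reduces to two standard formulas: the haversine great-circle distance and the initial bearing from the user to each point of interest, plus a test of whether that bearing falls inside the camera’s horizontal field of view. A sketch under those assumptions; the 60-degree field of view and the sample coordinates are illustrative:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees, 0 = north)
    from point 1 to point 2, using the standard haversine formulas."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def in_view(bearing, heading, fov=60.0):
    """True when `bearing` lies within the camera's horizontal field of view,
    given the compass `heading` the phone is pointed at."""
    diff = (bearing - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2

# A restaurant due east of the user; phone pointed east:
dist, bearing = distance_and_bearing(48.8584, 2.2945, 48.8584, 2.2990)
print(round(dist), round(bearing), in_view(bearing, heading=90.0))
```

Points of interest that pass the `in_view` test are then drawn on screen, with their horizontal position derived from the bearing difference and their label annotated with the computed distance.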
We’ve looked at some of the existing forms of augmented reality. Next, we’ll examine some of the other applications of the technology, such as in video games and military hardware.
An iPhone user displays the augmented reality app Monocle, which combines the phone’s camera view with tiny tags indicating the names, distances and user ratings of nearby bars, restaurants and more.
AP Photo/Marcio Jose Sanchez
Augmented Reality in Video Games and the Military
Video game companies are quickly hopping aboard the augmented-reality locomotive. A company called Total Immersion makes software that applies augmented reality to baseball cards. Simply go online, download the Total Immersion software and then hold up your baseball card to a webcam. The software recognizes the card (and the player on it) and then displays related video on your computer screen. Move the card in your hands — make sure to keep it in view of the camera — and the 3-D figure on your screen will perform actions, such as throwing a ball at a target.
Total Immersion’s efforts are just the beginning. In the next couple of years, we’ll see games that take augmented reality out into the streets. Consider a scavenger-hunt game that uses virtual objects. You could use your phone to “place” tokens around town, and participants would then use their phones (or augmented-reality enabled goggles) to find these invisible objects.
Demos of many games of this order already exist. There’s a “human Pac-Man” game that allows users to chase after each other in real life while wearing goggles that make them look like characters in Pac-Man.
Arcane Technologies, a Canadian company, has sold augmented-reality devices to the U.S. military. The company produces a head-mounted display — the sort of device that was supposed to bring us virtual reality — that superimposes information on your world. Consider a squad of soldiers in Afghanistan, performing reconnaissance on an opposition hideout. An AR-enabled head-mounted display could overlay blueprints or a view from a satellite or overhead drone directly onto the soldiers’ field of vision.
Now that we’ve established some of the many current and burgeoning uses of augmented reality, let’s take a look at the technology’s limitations and what the future holds.
Augmented reality can breathe a little life into your sports trading cards.
Photo courtesy Total Immersion
Limitations and the Future of Augmented Reality
Augmented reality still has some challenges to overcome. For example, GPS is only accurate to within 30 feet (9 meters) and doesn’t work as well indoors, although improved image recognition technology may be able to help [source: Metz].
People may not want to rely on their cell phones, which have small screens on which to superimpose information. For that reason, wearable devices like SixthSense or augmented-reality capable contact lenses and glasses will provide users with more convenient, expansive views of the world around them. Screen real estate will no longer be an issue. In the near future, you may be able to play a real-time strategy game on your computer, or you can invite a friend over, put on your AR glasses, and play on the tabletop in front of you.
There is such a thing as too much information. Just as the “CrackBerry” phenomenon and Internet addiction are concerns, an overreliance on augmented reality could mean that people are missing out on what’s right in front of them. Some people may prefer to use their AR iPhone applications rather than an experienced tour guide, even though a tour guide may be able to offer a level of interaction, an experience and a personal touch unavailable in a computer program. And there are times when a real plaque on a building is preferable to a virtual one, which would be accessible only by people with certain technologies.
There are also privacy concerns. Image-recognition software coupled with AR will, quite soon, allow us to point our phones at people, even strangers, and instantly see information from their Facebook, Twitter, Amazon, LinkedIn or other online profiles. With most of these services people willingly put information about themselves online, but it may be an unwelcome shock to meet someone, only to have him instantly know so much about your life and background.
Despite these concerns, imagine the possibilities: you may learn things about the city you’ve lived in for years just by pointing your AR-enabled phone at a nearby park or building. If you work in construction, you can save on materials by using virtual markers to designate where a beam should go or which structural support to inspect. Paleontologists working in shifts to assemble a dinosaur skeleton could leave virtual “notes” to team members on the bones themselves, artists could produce virtual graffiti and doctors could overlay a digital image of a patient’s X-rays onto a mannequin for added realism.
The future of augmented reality is clearly bright, even as it has already found its way into our cell phones and video game systems.
Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.