
Multi-Touch Interfaces: The Future of Human-Computer Interaction

Over the last number of years, multi-touch-sensitive interactive surfaces have evolved from research prototypes into commercial products with mainstream adoption. The focal point of this research is the idea of ubiquitous computing, where everyday surfaces in our environment are made interactive. With the recent release of the Microsoft Surface and Apple’s iPhone, interest in multi-touch and gestural technology has grown considerably. In the film Minority Report (2002), Tom Cruise donned a pair of gloves and interacted with a translucent computer screen that responded to the gestures made by his movements instead of to the traditional keyboard, mouse and speech. Minority Report gave the mainstream public its first taste of multi-touch interaction and the possibilities it presented, and it has since influenced a new wave of interaction and multimedia designers to develop new and exciting ways to interact with these multi-sensory systems.

But the reality is that multi-touch technologies have been around for a long time and have a long and varied history. To put this into perspective, a group at the University of Toronto was working on multi-touch as far back as 1984 (Lee, Buxton & Smith, 1985), the same year that the first Macintosh computer was released. During the development of the iPhone, Apple was very much aware of the history of multi-touch, dating at least back to 1982 (Mehta 1982), and of the pinch gesture, dating back to 1983 (Buxton 2005). It is important for those wanting to understand the process of innovation to realise that “new” technologies like multi-touch are not developed in a relatively short space of time. While a company may hope to present the “great invention” in a successful marketing campaign, real innovation rarely works that way. The evolution of multi-touch technology is a textbook example of what Bill Buxton calls “the long nose of innovation” (Buxton 2005).

An example of this phenomenon is the mouse, first built in 1965 by William English and Doug Engelbart (English 1965). By 1968 it had been copied (with the originators' consent) for use in a music and animation system at the National Research Council of Canada. Around 1973, Xerox PARC adopted a version as the graphical input device for the Alto computer (Edwards 2008). Then, in January 1984, the first Macintosh was released, and it was this Macintosh that brought the mouse to the attention of the general public (Edwards 2008). However, it was not until 1995, with the release of Windows 95, that the mouse became ubiquitous; it is now an essential device sold with every desktop PC worldwide (Buxton 2005).

Since the mouse was first developed, computer interaction has relied on the same interaction techniques for so long that we as users do not even think twice about using them in our everyday interaction with multiple systems. As new technologies continue to develop, newly developed computer systems will add a host of interaction techniques that take advantage of the whole human body, using gestures to control the system. A gesture can be defined as

“Any physical movement that a digital system can sense and respond to without the aid of a traditional pointing device such as a mouse or stylus. A wave, a head nod, a touch, a toe tap and even a raised eyebrow can be a gesture” (Saffer 2009)

The point of contact between the user and the system is the User Interface (UI). The user interface is the part of the computer and its software that people can see, hear, touch and direct to execute their commands. Proper interface design should provide a mix of well-designed input and output mechanisms that satisfies the user’s needs, capabilities and limitations in the most effective way possible. The best interface is one that is not noticed, one that allows the user to focus on the information and the task at hand rather than on the mechanisms used to present the information or perform the task.

Many ideas for re-designing computer workspaces beyond the computer screen have been developed (Rekimoto 2002). As mentioned earlier, the main goal of this type of research has been to turn everyday surfaces, such as tabletops or walls, into interactive surfaces. The users of these systems can manipulate, share and transfer digital information in situations very different from those of a standard desktop PC. For these systems to work, the user’s hand, finger or gesture movement must be tracked and recognisable to the system. Hand- and finger-based interaction offers several advantages over traditional mouse-based interfaces, especially when it is used in conjunction with physical interactive surfaces.

At the forefront of the interactive multi-touch and gestural revolution is Jeff Han, whose refined FTIR (frustrated total internal reflection) sensing technique (Han 2005) has made it possible to design and build a much cheaper alternative for interactive multi-touch surfaces. Since his now-famous TED presentation in 2006, where he unveiled his multi-touch, pressure-sensitive display surface, a new wave of designers and programmers has begun to develop low-cost DIY approaches to multi-touch construction and implementation. There are currently three main methods of creating low-cost multi-touch interactive surfaces: Frustrated Total Internal Reflection (FTIR), Diffused Illumination (DI) and Laser Light Plane (LLP).

Frustrated Total Internal Reflection (FTIR)

Infrared light is shone into the side of an acrylic panel (most often by shining IR LEDs at the edges of the acrylic). The light is trapped inside the acrylic by total internal reflection. When a finger touches the acrylic surface, this light is “frustrated”, causing it to scatter downwards, where it is picked up by an infrared camera.

(Nuigroup 2009)

Image here

Diffused Illumination Rear and Front (DI)

Diffused Illumination comes in two main forms: Front Diffused Illumination and Rear Diffused Illumination. Both techniques use the same basic principles, but rear DI is favoured over front DI; the Microsoft Surface configuration uses a rear DI set-up. Infrared light is shone at the screen from below the touch surface, and a diffuser is placed on top of or beneath the touch surface. When an object touches the surface, it reflects more light than the diffuser or objects in the background, and the extra light is sensed by a camera. Depending on the diffuser, this method can also detect hover and objects placed on the surface.

(Nuigroup 2009)

Image here

Laser Light Plane (LLP)

Infrared light from one or more lasers is shone just above the surface. The laser light plane is about 1 mm thick and is positioned right above the surface; when a finger touches the surface it breaks the plane, the light hits the tip of the finger, and this registers as an IR blob.

(Nuigroup 2009)

Image here
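Whichever of the three techniques is used, the tracking software receives the same raw signal: bright infrared blobs in the camera image wherever the surface is touched. The Python/OpenCV sketch below is a rough, hypothetical illustration of how such blobs might be extracted from a camera frame; the camera index and threshold value are assumptions that would need tuning for a real FTIR, DI or LLP rig.

```python
import cv2  # OpenCV: camera capture and image processing

# Assumed set-up: the IR camera appears as video device 0.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Convert to grayscale; touches show up as bright IR spots.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Threshold away the background (200 is an assumed starting value).
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

    # Treat each remaining connected bright region as a finger "blob"
    # (OpenCV 4.x returns contours and hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 25:  # ignore tiny specks of noise
            print(f"touch blob at ({x + w // 2}, {y + h // 2}), size {w}x{h}")

    cv2.imshow("IR blobs", binary)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```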

A sensor of some kind is used in all multi-touch and gestural interfaces to detect changes in the environment. The most common types of sensor used are:

Pressure: detects whether something is being touched or pressure is being applied.

Light: usually infrared, to detect changes in light levels in the surrounding environment.

Proximity: to detect spatial awareness, such as objects in the area.

Acoustic: to detect the presence of sound.

Tilt: to detect changes in vertical or horizontal angle.

Motion: to detect the movement and speed of any object.

Chapter 2

To design and develop a multi-touch gestural interface, each system has to be able to recognise and track each gesture and touch event. NUI and specifically multi-touch based systems use tracking software such as Community Core Vision (CCV) (Nuigroup 2009) to track multiple fingers at once. CCV is an open-source, cross-platform solution for computer vision and machine sensing. It takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and events (e.g. finger down, moved and released) that are used in building multi-touch applications. This tracking is very important to multi-touch technology: it is what enables multiple fingers to perform various actions without interrupting each other. The use of this software creates a system in which multiple users can interact at any given time, which is what we call "multi-user". In the real world there is already multi-touch and multi-user interaction, and we use gestures all of the time; if done right, these systems should be as easy to interact with as picking up a pencil, drawing a picture and showing it to someone. Innovation should not make things more complicated for the user (Norman 2003). One of the main goals in designing these new types of systems should be no different from the usual goal of user experience design, which is to make complicated things easy to understand and use. (A minimal sketch of how an application might receive this tracking data is given after the three touch-input areas below.) When we discuss touch input, the three main areas are as follows:

Basic touch

In a basic touch system, users rely on standard interaction devices such as the mouse, keyboard, joystick, trackball or data glove.

Multi-touch

Multi-touch applications involve multiple touch points and multiple users interacting at any one time, and such applications require a different design approach. In the paper “Beyond flat surface computing: challenges of depth-aware and curved interfaces”, Benko (2009) identifies one of the main drawbacks of multi-touch:

“The design of most interactive touch-sensitive surface systems can restrict user interaction to a 2D plane of the surface and disregard the interactions that happen above it.”

Gestures

Gesture interaction generally involves no touch input or wearable gear and, in direct contrast to multi-touch, most gestures can take place some distance from the surface. A gesture causes an event to execute when it is recognised. Gesture movements can be intuitive and range from simple hand and body movements to more subtle, complex movements such as facial expressions, where sometimes the slightest involuntary movement can be classed as a gesture.
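As noted in the discussion of CCV above, trackers of this kind typically publish their finger data as TUIO messages over OSC, by default on UDP port 3333. The sketch below is a minimal, assumed example of how an application might listen for those cursor updates using the third-party python-osc library; the handler simply prints each finger's normalised position, and the message layout shown is the standard TUIO 2Dcur profile rather than anything specific to a particular rig.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_cursor(address, *args):
    """Handle TUIO 2Dcur messages sent by a tracker such as CCV.

    A "set" message carries: session id, x, y (normalised 0..1),
    velocity X, velocity Y and motion acceleration.
    """
    if args and args[0] == "set":
        session_id, x, y = args[1], args[2], args[3]
        print(f"finger {session_id}: x={x:.3f}, y={y:.3f}")
    elif args and args[0] == "alive":
        print(f"active fingers: {args[1:]}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dcur", on_cursor)   # cursor (finger) profile

# TUIO trackers send to UDP port 3333 by default.
server = BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher)
print("listening for TUIO cursors on port 3333 ...")
server.serve_forever()
```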

Multi-touch and Natural User Interface designs will require applying new ideas and techniques from the cutting edge of Interaction Design and Human-Computer Interaction (HCI). Users who share the same physical space and co-interact create new challenges for software designers, beyond those of designing systems for standalone single users; these users are said to be co-present.

"co-present ,are users who share the same physical space and situations differ from the challenges software designers have long dealt with when needing to support physically distributed users. “ (Baecker M, et al 1995.11)

Designers must consider the appropriate method of input for any given task; it is important for designers to take the use and practicality of these systems into account. For example, an interactive wall surface should be designed to require as little sustained user interaction as possible, otherwise the user’s arms may become tired. Similar care is needed when designing for a touch point driven by a finger instead of a cursor. When a user uses a finger rather than a mouse, the hand covers up a portion of the screen, which can cause important information to be missed. The mouse is a very precise, small pointer, while the finger is attached to the hand, and the hand in turn to the arm, which can cast shadows and obstruct the user’s view. Fingers can also be a problem for designers in another way: they secrete natural oils that can make the surface greasy and difficult to use. Fingerprints leave the surface covered in smudges and give it a dirty, used look; bright colours in the design can help hide them, but dark or black backgrounds can even highlight the problem.

With pen or mouse gestures, people can now author musical scores, create interactive mathematical demonstrations and make architectural drawings, to name a few examples. Novel interfaces such as the Nintendo Wii (ref) make use of freeform hand gestures for a more physical and engaging interactive experience. Multiple approaches to incorporating touch and gesture into different systems have also arisen. Some systems, such as the iPhone, use touch as the only input method. Others, such as Windows 7, allow for multiple input methods such as touch, stylus, and keyboard and mouse. Free-form gestural interfaces do not require the user to touch the device at all; a glove can be used, but as gestural interfaces develop, the goal of this technology is to use only the body as the sole source of interaction and input. For these systems to function correctly, designers must understand the basics of how the body moves. The study of the anatomy, physiology and mechanics of human movement is called kinesiology, and it is as important as HCI when designing natural user interfaces. So far there is little in the way of standard hardware for these interfaces. Most systems are based on CCV-style camera set-ups, while others use various capacitive technologies, which can lead to multiple design approaches depending on which system set-up and configuration is being used at the time.
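One practical consequence of designing for the finger rather than the cursor is that on-screen targets have to be physically larger. A commonly quoted rule of thumb is a touch target of roughly 9 to 10 mm on a side; the short sketch below, an illustrative calculation rather than anything drawn from the sources cited here, converts such a physical size into pixels for a given display density.

```python
def touch_target_px(target_mm: float, display_ppi: float) -> int:
    """Convert a physical touch-target size (mm) to pixels for a display.

    1 inch = 25.4 mm, so pixels = mm / 25.4 * pixels-per-inch.
    """
    return round(target_mm / 25.4 * display_ppi)

# Example: an assumed 10 mm fingertip target on two assumed displays.
for name, ppi in [("projected tabletop (~40 PPI)", 40),
                  ("phone screen (~160 PPI)", 160)]:
    print(f"{name}: {touch_target_px(10, ppi)} px per side")
```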


Chapter 3 HCI

When discussing gestural and multi-touch technologies, designers draw on some of the more traditional, established techniques from Human-Computer Interaction, or HCI. The term human-computer interaction has only been widely used since the early 1980s, but its roots can be traced back to more established disciplines. The goal of good HCI design is to make technology easier for the user to learn and use. Human-computer interaction is, put simply,

“the study of people, computer technology and the way these influence each other. We study HCI to determine how we can make this computer technology more usable by people.” (Dix et al. 1993)

HCI designers must consider a variety of factors when designing, such as what people want and expect, what physical abilities or limitations the user possesses, and even what people find enjoyable and attractive. The computer is a complex tool with the potential to expand our knowledge and improve the way we function in all aspects of everyday life. For decades, people have invented, designed and created objects, some having a basic, simple use while others are more multi-purpose and complex. When a user interacts with a computer system, there are many examples of good and bad design. In the first chapter, “The Psychology of Everyday Things”, of the book The Design of Everyday Things (Norman 1998), Norman introduces six concepts that can be used to evaluate good and bad design: affordances, constraints, conceptual models, mapping, visibility and feedback. Of these, Norman identifies two key principles that help ensure good HCI: visibility and affordances.

“Controls need to be visible, with good mapping with their effects, and their design should also suggest (that is, afford) their functionality” (Norman, 1992)

Although HCI covers both computer system input and output, for the purpose of this dissertation on gestural and multi-touch interaction the author covers input only. Input is how a person communicates their needs or actions to the computer. As computer design and systems evolved over the years, a number of input devices also developed, most of which we as users have become so familiar with that they have become universal in our everyday interaction.

“Input is concerned with recording and entering data into the computer system and issuing instructions to the computer. In order to interact with computer systems effectively, users must be able to communicate their instructions in such a way that the machine can interpret them” (Preece et al. 1994, p. 212)

These are the basic touch devices mentioned earlier in Chapter 2 (mouse, keyboard, etc.). One of the key aims in selecting an input device, and in deciding how it will be used to control actions, is to ensure that input devices:

match the physiological and psychological characteristics of users, their training and their experience;

are appropriate for the tasks to be performed;

are suitable for the intended work and environment.

(Preece et al. 1994, p. 212)

When designing systems, whether traditional or new cutting-edge systems such as multi-touch, designers must consider, study and incorporate the disciplines that feed into HCI, which include psychology, ergonomics, linguistics, sociology, anthropology, and graphic design and typography (Hill 1995). The design and implementation of these new systems must adhere to standard design principles while also developing new interaction principles and techniques. If we look at some of the key principles of usability, and of user experience in general, there are aspects of having multiple users and inputs that can be an advantage (Norman, 1990; Gibson, 1979). Most software developed today must accommodate both novice and expert users. Novice users can discover features and applications through visible clues such as drop-down menus that list all potential actions, buttons, icons and so on; accommodating them is achieved through visual aids that show what actions and features are possible. Accommodating expert users, on the other hand, is a little more complex. A frequent user can quickly become tired and frustrated with having to use the same icons and visual aids that help novice users. If expert users already know a specific action to take, they are usually offered keyboard shortcuts, from simple actions like copy (Ctrl+C) and paste (Ctrl+V) to more advanced actions found in most released software. Even if it is time-consuming to learn the many shortcuts at first, with practice they can speed up and enhance user interaction and experience.

Natural intuition also plays a role in how we perceive physical objects in the real world and what we can actually do with them. We can start by looking at how people interact with objects in the real world, and at the idea of affordances (Norman, 1990; Gibson, 1979). Norman states:

“The term affordance refers to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used.” (Norman 1998, Chapter 1, pp. 1-33)

In context, this implies that we interact with objects in a natural way every day: when a phone rings we answer it, converse with the person on the other end, or perhaps pass it on to somebody else. We can pick up a postcard and turn it over to see what the message on the back is, or who sent it. In doing so, we interact with real-world objects in a natural way. The main aim of Natural User Interface and gesture-based systems is to re-create this natural feeling of interacting with everyday objects in how we interact with digital objects. So rather than trying to develop new and complicated ways to interact with digital objects, the aim when designing gesture-based systems is to re-create how people naturally interact with objects and with each other.


Chapter 4

So will NUI become a standard set of principles and rules governing the design of new interactive surfaces and gestural interfaces, as HCI governs traditional user interface design? When we discuss multi-touch and the idea of a Natural User Interface, we can compare how a user's interaction with multi-touch and gestural interfaces differs greatly from standard interaction. Based on the findings of “Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation” (Kin, Agrawala & DeRose, 2009), the authors suggest design guidelines for developing applications for multi-touch systems. These guidelines compare single-touch versus multi-touch and single-finger versus multi-finger interaction; a short illustrative sketch of a two-finger gesture follows the guidelines below.

Design Guidelines:

A one-finger direct-touch device delivers a large performance gain over a mouse-based device. For multi-target selection tasks, even devices that detect only one point of touch contact can be effective.

Support for detecting two fingers will further improve performance, but support for detecting more than two fingers is unnecessary for improving multi-target selection performance.

Reserve same-hand multi-finger usage for controlling multiple degrees of freedom or for disambiguating gestures, rather than for independent target selections.

Uniformly scaling up interfaces originally designed for desktop workstations for use with large-display direct-touch devices is a viable strategy as long as targets are at least the size of a fingertip.
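The second and third guidelines above reflect the fact that two touch points from the same hand are usually reserved for gestures such as pinch-to-zoom, where the change in distance between the fingers maps onto a scale factor. The sketch below is a generic, assumed illustration of that mapping, not code from the cited study.

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the zoom factor implied by two moving touch points.

    Each point is an (x, y) tuple; the factor is the ratio between the
    current finger separation and the separation when the gesture began.
    """
    start_dist = math.dist(p1_start, p2_start)
    now_dist = math.dist(p1_now, p2_now)
    return now_dist / start_dist if start_dist else 1.0

# Example: fingers start 100 px apart and spread to 150 px -> zoom in 1.5x.
print(pinch_scale((100, 100), (200, 100), (75, 100), (225, 100)))
```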

As discussed in the previous chapter, designers must incorporate HCI principles and rules when designing the user interface for gestural and multi-touch systems. As these technologies evolve, the user interface will evolve too, and users will begin to expect a more natural, user-friendly, perhaps even intuitive and responsive interface.

So what types of applications might be viable options for touch-based user interfaces? Below are some of these new application areas:

Phicons

Augmented reality

Mobile

Sixth-Sense Technology

(1) Natural gesture-based systems replace mouse clicks with finger taps, which has led some designers to profess that the tap is the new click. Current multi-touch systems recognise multiple fingers and objects at once; for example, Microsoft Surface can currently recognise and track 52 fingers, objects or tags, which are bar-code-type identifiers (Derene 2009), such as phicons, which can be used to indicate possible actions. A phicon, or physical icon, is

“The tangible computing equivalent of an icon in a traditional graphical user interface, or GUI” (Fidalgo et al. 2006)

Phicons are small physical objects (e.g. a pawn or queen on a chessboard) that can be identified by a system through a bar code or marker that cameras can pick up and identify. Phicons can be used as images that allow a user to gain valuable information, such as what action or idea they represent or can perform.
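In practice, a tracker reports such a tagged object with a numeric fiducial or tag ID, and the application maps each known ID onto an action or piece of information. The mapping below is purely hypothetical; the IDs and actions are invented for illustration.

```python
# Hypothetical mapping from recognised fiducial/tag IDs to application actions.
PHICON_ACTIONS = {
    12: "open colour palette",
    27: "rotate the map layer",
    40: "show product details",
}

def on_object(tag_id: int, x: float, y: float) -> None:
    """Called when the tracker reports a tagged phicon placed on the surface."""
    action = PHICON_ACTIONS.get(tag_id, "unknown object")
    print(f"phicon {tag_id} at ({x:.2f}, {y:.2f}): {action}")

on_object(27, 0.42, 0.67)   # example: the 'rotate' phicon placed on the table
```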

(2) Augmented reality is also a key new technology in this new and exciting digital revolution. The iPhone is at the forefront of most new emerging technology, and augmented reality is no exception, with numerous augmented reality iPhone apps available to download, offering functionality ranging from layered reality browsers to navigation and real-time information in most major cities. Augmented reality (AR) is defined as

“A term for a live direct or indirect view of a physical real-world environment whose elements are merged with (or augmented by) virtual computer-generated imagery - creating a mixed reality. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real world view.” (Azuma 1997)

Another practical and interesting use of this technology is installed in Lego stores throughout the United States (ref). The idea behind this system is that the customer chooses a packaged box of Lego from the shelf; once at the AR kiosk, the box is scanned and the kiosk displays a real-time 3D render with characters walking around the scene as in a live-action piece. This enhances the user interaction and experience by mixing the real world with the digital. [Image]

(3) As more interactive surfaces emerge, casual interactions from passers-by are a new and exciting way for users to interact with these systems. Today, most people who use an interactive surface can also be expected to carry a mobile phone with them. Current models have many advanced features, such as Bluetooth, a camera or 3D graphics, and are able to run custom applications. Therefore, it is reasonable to assume that the majority of users who interact with an interactive surface would also like to use their mobile phone as a tool for that interaction. This offers new possibilities for interaction between users’ mobile phones and large public interactive displays. The Microsoft Surface (Microsoft 2008) can be deployed in, say, a mobile operator’s store to provide information about mobile phones placed on top of it. Interactive games can be played in which the mobile phone acts as a controller, as shown by “Poker Surface: Combining a Multi-Touch Table and Mobile Phones in Interactive Card Games” (Shirazi 2009); the phone provides a tangible feeling and allows the users to hold their “cards” in the hand. By applying gestural actions to the mobile device, such as vertical and horizontal tilt, shake or even flick, the user can control the game, which in poker terms translates to hold, fold, raise, deal and shuffle the cards, all done with gestural movements of the phone (a simplified sketch of such a tilt/shake classifier is given at the end of this section). Even further development is taking place, and the lines of standard interaction between mobile technology and multi-touch are blurring: where does one stop and the other begin? Shen (2009) presents a new mobile interaction model, called double-sided multi-touch, based on a mobile device that receives

“Simultaneous multi-touch input from both the front and the back of the device. This new double-sided multi-touch mobile interaction model enables intuitive finger gestures for manipulating 3D objects and user interfaces on a 2D screen.”

(Shen 2009)

{Image}
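Returning to the Poker Surface example above, the tilt, shake and flick gestures applied to the phone can be detected from its accelerometer. The classifier below is a deliberately simplified, hypothetical sketch, assuming gravity-normalised readings in g and thresholds chosen purely for illustration; an application such as the poker table would then map these coarse classes onto game actions like fold, deal or shuffle.

```python
def classify_motion(ax: float, ay: float, az: float) -> str:
    """Classify one accelerometer sample (in g) into a coarse gesture class.

    Thresholds are illustrative only; a real recogniser would look at a
    window of samples over time, not a single reading.
    """
    magnitude = (ax ** 2 + ay ** 2 + az ** 2) ** 0.5
    if magnitude > 2.0:          # sudden jolt well above 1 g
        return "shake/flick"
    if ax > 0.5:
        return "tilt right"
    if ax < -0.5:
        return "tilt left"
    if ay > 0.5:
        return "tilt forward"
    if ay < -0.5:
        return "tilt back"
    return "flat"

# Example readings: resting flat, tilted to the right, and a sharp flick.
for sample in [(0.0, 0.0, 1.0), (0.7, 0.1, 0.7), (1.8, 1.2, 1.5)]:
    print(sample, "->", classify_motion(*sample))
```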

(4) Sixth-Sense Technology:

“SixthSense: is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.”

( Mistry & Maes 2009.35)

We as humans have five basic senses: sight, touch, hearing, smell and taste. These natural senses help us to process information, which helps us make decisions and choose the right course of action to take. The creator of SixthSense, Pranav Mistry, argues that the most useful information that can help us make these decisions is not perceived by our five senses: it is the data, information and knowledge that has been created about everything and which today is increasingly available. Traditionally, this information is confined to paper or presented digitally on a screen.

“SixthSense bridges this gap, bringing intangible, digital information out into the tangible world, and allowing us to interact with this information via natural hand gestures.”

(Mistry 2009)

This new technology aims to access all this relevant information from its digital home and apply it to real-world situations in real time. The SixthSense system comprises a mirror, a camera and a small projector that can all be worn by the user. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information onto any surface, such as walls and physical objects around us, so that they can be used as interfaces, while the camera tracks the user's hand gestures and physical objects. The software tracks coloured visual fiducials (tracking markers) that the user wears on the tip of each finger. The movement of each fiducial is processed as a gesture and can execute a programmed set of instructions on the application surface. There is no limit on the number of fiducials as long as each one is unique, which enables multi-touch and multi-user interaction. Features of SixthSense include a map application that lets the user navigate a map displayed on a nearby surface using hand gestures similar to those supported by multi-touch systems, letting the user zoom in, zoom out or pan with hand movements. A drawing application lets the user draw on any surface by tracking the fingertip movements of the user’s index finger. SixthSense also recognises the user’s freehand gestures. For example, the system implements a gestural camera that takes photos of the scene the user is looking at by detecting the ‘framing’ gesture, and the user can then stop at any surface or wall and flick through the photos they have taken. The SixthSense system also augments the physical objects the user is interacting with by projecting more information about them onto their surfaces. For example, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper, and the gesture of drawing a circle on the user’s wrist projects an analogue watch.
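Because the SixthSense prototype tracks coloured markers worn on the fingertips, the colour-segmentation step at its core can be illustrated with a short sketch. The HSV range and camera index below are assumed values for a single red marker; Mistry's actual implementation is not reproduced here, so this is only a sketch of the general technique.

```python
import cv2
import numpy as np

# Assumed HSV range for one coloured fingertip marker (here: a red cap).
LOWER = np.array([0, 120, 120])
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # assumed: the wearable camera is video device 0
while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)      # keep only the marker colour

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Track the largest coloured region as the fingertip position.
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 5:
            print(f"fingertip marker at ({int(x)}, {int(y)})")

    cv2.imshow("marker mask", mask)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```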

Chapter 6: Conclusion

Even today, most IT products, both hardware and software, are marketed and sold on their included features rather than on the benefits to the user. Only after purchasing a product based on its features may the user find that it is difficult to use and navigate.

However, most current interfaces remain firmly tied to the traditional flat, rectangular displays of today’s computers, and while they benefit from directness and ease of use, they are often not much more than touch-enabled standard desktop interfaces. In fact, it is hardly surprising that most current applications mimic the characteristics of the flat display, with two-dimensional (2D or 2.5D) rectilinear user interface elements and concepts such as rectilinear buttons, windows and scrollbars.

SixthSense is to be released as free, open-source code and software, with instructions on how to build and use it, pointing to where this technology may go in the future.
