Emerging technologies are beginning to use multi-touch and gestural interfaces as the sole method of user interaction. Traditional human-computer interaction methods of user interface design have not changed much in the last thirty years, and the established guidelines and principles still cover the same basic input devices such as the mouse, keyboard and joystick. However, with multi-touch and gestural technology allowing user interaction to incorporate touch and gestures as a more natural, sole interaction method, can the same human computer interaction (HCI) principles and guidelines be used to design this natural user interface? Are there other design-related fields of study that can inform the design of these new systems? As these systems develop, the human body will become the sole method of navigating these natural user interfaces.
For my final year major project in BA (Hons) Interactive Digital Media I propose to design and build an interactive multi-touch surface system using the FTIR (frustrated total internal reflection) technique. The project will be completed in two stages running simultaneously: the first is the planning, design and construction of the physical components of the table, and the second is the design and coding of the user interface (UI). The user interface is the most important aspect of the project, as it is the sole point of interaction between the user and the software, which will be developed incorporating touch, sound and visual elements.
The topic of this dissertation is the emerging use of multi-touch and gestural technology and the development of the natural user interface (NUI): how designing for these new systems compares to designing for traditional computer systems, and how human computer interaction principles apply to the natural user interface. The dissertation traces the progress that has been made in the design and implementation of multi-touch and gestural technology, and explains the more traditional design principles of human computer interaction (HCI) and how they can still shape the design of a new natural user interface.
The overall aim of this dissertation is to discuss the different system configurations and interaction methods, focusing mainly on multi-touch and gestural technology, its applications and uses, and the different types of touch input that are available.
Natural user interface design will transform the user experience from using physical input devices to gestures and touch, with the ultimate goal of using the entire range of body movements to navigate each system. Will the traditional HCI guidelines become obsolete, or be adapted to create new NUI guidelines and principles?
Chapter 1: Multi What?
History of Multi-touch
Over the last number of years, multi-touch-sensitive interactive surfaces have evolved from research prototypes to commercial products with mainstream adoption. The focal point of this research is the idea of ubiquitous computing, where everyday surfaces in our environment are made interactive. With the recent release of the Microsoft Surface and Apple's iPhone, a greater interest in multi-touch and gestural technology has emerged. In the movie Minority Report (2002), Tom Cruise dons a pair of gloves and interacts with a translucent computer screen that responds to the gestures made by his movements instead of the traditional keyboard and mouse. Minority Report gave the mainstream public its first taste of multi-touch interaction and the possibilities it presented, and it has since influenced a new wave of interaction and multimedia designers to develop new and exciting ways to interact with these multi-sensory systems.
But the reality is that multi-touch technologies have been around for a long time and have a long and varied history. To put this into perspective, a group at the University of Toronto was working on multi-touch back in 1984 (Lee et al. 1985), the same year that the first Macintosh computer was released. During the development of the iPhone, Apple was very much aware of the history of multi-touch, dating back at least to 1982 (Mehta 1982), and of the pinch gesture, dating back to 1983 (Buxton 2005). It is important for those wanting to understand the process of innovation to realize that new technologies like multi-touch are not developed in a short space of time. While a company may hope to capitalize on the idea of the "great invention" for a successful marketing campaign, real innovation rarely works that way. The evolution of multi-touch technology is a textbook example of what Bill Buxton calls "the long nose of innovation" (Buxton 2005).
A very good example of this phenomenon is the mouse, first built in 1965 by William English and Doug Engelbart (English 1965). By 1968 it was copied (with the originators' consent) for use in a music and animation system at the National Research Council of Canada. Around 1973 Xerox PARC adopted a version as the graphical input device for the Alto computer (Edwards 2008). Then, in January 1984, the first Macintosh was released, and it was this computer that brought the mouse to the attention of the general public (Edwards 2008). However, it was not until 1995, with the release of Windows 95, that the mouse became ubiquitous; it is now an essential device sold with every desktop PC (Buxton 2005).
Since the mouse was first developed, computer interaction has relied on the same techniques that we now use ubiquitously in our everyday interaction with a vast array of systems. As technology continues to develop, new computer systems will create interaction techniques that take advantage of the whole human body, using gestures to control the system. In this instance a gesture can be defined as:
"Any physical movement that a digital system can sense and respond to without the aid of a traditional pointing device such as a mouse or stylus. A wave, a head nod, a touch, a tap and even a raised eyebrow can be a gesture" (Saffer 2009 p.2)
User Interface (UI) & Natural User Interface (NUI)
The point of this interaction between user and system is called the user interface (UI). The user interface is the part of the computer and its software that people can see, hear, touch and direct commands to. Proper interface design should provide a mix of well-designed input and output methods that satisfies the user's needs, capabilities and limitations in the most effective way possible. The best interface is one that is not noticed, one that allows the user to focus on the information and task at hand.
Many ideas for re-designing computer workspaces beyond the computer screen have been developed. As mentioned earlier, the main goal of this type of research has been to turn everyday surfaces, such as tabletops or walls, into interactive surfaces (Rekimoto 2002). The users of these systems can manipulate, share and transfer digital information in situations quite unlike using a standard desktop PC. For these systems to work, the user's hand, finger or movement must be tracked and recognizable to the system. Hand- and finger-based interaction offers several advantages over traditional mouse-based interfaces, especially when it is used in conjunction with physical interactive surfaces.
1.4 Multi-touch Configurations
At the forefront of the interactive multi-touch and gestural revolution is Jeff Han, whose refined frustrated total internal reflection (FTIR) sensing technique (Han 2002) has made it possible to design and build much cheaper interactive multi-touch surfaces. Since his now famous TED presentation in 2006, where he unveiled his multi-touch pressure-sensitive display surface, a new wave of designers and programmers has begun to develop low-cost, DIY approaches to multi-touch construction and implementation. This DIY approach allows the user to construct an interactive surface from just a few components, such as IR LEDs, a sheet of acrylic, a standard web camera, a projector and, if the configuration needs to be enclosed, some material like MDF or plywood. A sensor of some sort is used in all multi-touch and gestural interfaces to detect changes in the environment. The most common types of sensor used are:
Pressure: detects whether something is being touched or pressure is being applied.
Light: usually infrared, detects changes in light frequency and the surrounding environment.
Proximity: detects spatial awareness, such as objects in the area.
Acoustic: detects the presence of sound.
Tilt: detects a change in angle, vertical or horizontal.
Motion: detects the movement and speed of any object.
There are currently three main methods of creating low-cost multi-touch interactive surfaces: Frustrated Total Internal Reflection (FTIR), Diffused Illumination (DI) and Laser Light Plane (LLP).
Frustrated Total Internal Reflection (FTIR)
Infrared light is shone into the side of an acrylic panel (most often by shining IR LEDs at the edges of the acrylic). The light is trapped inside the acrylic by total internal reflection. When a finger touches the acrylic surface this light is "frustrated", causing it to scatter downwards, where it is picked up by an infrared camera.
(Teiche et al. 2009)
Figure 1: Shows the frustrated total internal reflection (FTIR) technique for constructing a multi-touch interactive surface.
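To make the FTIR technique concrete: the infrared camera's frame is typically thresholded, and the bright "frustrated" spots are grouped into blobs, one per fingertip. The sketch below is a hypothetical, simplified pure-Python illustration of that grouping step; real trackers such as CCV operate on live video and are far more robust, and the `find_blobs` function and sample frame here are invented for illustration only.

```python
# Minimal sketch: find bright "blobs" (touch points) in a thresholded
# IR camera frame using 4-connected flood fill. Hypothetical example,
# not the actual CCV pipeline.

def find_blobs(frame, threshold=128):
    """Return a list of blobs; each blob is (centroid_x, centroid_y, size)."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # flood-fill one connected bright region
                stack, pixels = [(x, y)], []
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h \
                           and frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                sx = sum(p[0] for p in pixels) / len(pixels)
                sy = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((sx, sy, len(pixels)))
    return blobs

# Two fingertips appear as two bright regions in an otherwise dark frame.
frame = [
    [0,   0,   0,   0,   0,   0],
    [0, 200, 210,   0,   0,   0],
    [0, 205, 220,   0, 190,   0],
    [0,   0,   0,   0, 195,   0],
]
print(find_blobs(frame))  # two blobs, sizes 4 and 2
```

Each blob's centroid then becomes one touch point that the tracking software can follow from frame to frame.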
Diffused Illumination Rear and Front (DI)
Diffused Illumination comes in two main forms: front diffused illumination and rear diffused illumination. Both techniques use the same basic principles, but rear DI is favoured over front DI; the Microsoft Surface uses a rear DI set-up. Infrared light is shone at the screen from below the touch surface, and a diffuser is placed on top of or beneath the touch surface. When an object touches the surface it reflects more light than the diffuser or objects in the background, and this extra light is sensed by a camera. Depending on the diffuser, this method can also detect hover and objects placed on the surface.
(Teiche et al. 2009)
Figure 2: Shows rear diffused illumination (DI) configuration for construction of a multi-touch interactive surface.
Laser Light Plane (LLP)
Infrared light from one or more lasers is shone just above the surface. The plane of laser light is about 1 mm thick; when a finger touches the surface it breaks the plane, and the light striking the fingertip registers as an IR blob.
(Teiche et al. 2009)
Figure 3: Shows rear Laser Light Plane (LLP) configuration for construction of a multi-touch interactive surface.
Chapter 2: Tracking Touch
2.1 Multi-touch Tracking Software
To design and develop a multi-touch or gestural interface, each system has to be able to recognize and track each gesture and touch event. Natural user interface systems, and multi-touch systems in particular, use tracking software such as Community Core Vision (CCV) (Nuigroup 2009) to track multiple fingers at once. CCV is an open-source, cross-platform solution for computer vision and machine sensing. It takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and events (e.g. finger down, moved and released) that are used in building multi-touch applications.
Figure 4: Shows the Community Core Vision (CCV) tracking software. It shows the source image of contact with the interactive surface and then the blobs that are tracked via the IR camera.
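The finger down, moved and released events that CCV reports can be illustrated with a toy tracker that matches blob positions between consecutive frames by nearest neighbour: a blob with no earlier match is a "down", a matched blob is a "moved", and a vanished blob is a "released". This is a hypothetical sketch, not CCV's actual algorithm; the class name, distance threshold and event format are invented for illustration.

```python
# Sketch of how a tracker turns per-frame blob positions into
# "down / moved / released" events. IDs are matched by nearest
# neighbour between frames; a real tracker is more robust.

class BlobTracker:
    def __init__(self, max_dist=20.0):
        self.max_dist = max_dist
        self.next_id = 0
        self.active = {}          # blob id -> (x, y)

    def update(self, positions):
        """Feed one frame of (x, y) blob centroids; return event list."""
        events, unmatched = [], dict(self.active)
        new_active = {}
        for (x, y) in positions:
            # match to the closest previously seen blob, if close enough
            best = None
            for bid, (px, py) in unmatched.items():
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d <= self.max_dist and (best is None or d < best[1]):
                    best = (bid, d)
            if best:
                bid = best[0]
                del unmatched[bid]
                events.append(("moved", bid, x, y))
            else:
                bid = self.next_id
                self.next_id += 1
                events.append(("down", bid, x, y))
            new_active[bid] = (x, y)
        for bid in unmatched:     # blobs that vanished this frame
            events.append(("released", bid))
        self.active = new_active
        return events

t = BlobTracker()
print(t.update([(10, 10)]))            # [('down', 0, 10, 10)]
print(t.update([(12, 11), (50, 50)]))  # finger 0 moved, finger 1 down
print(t.update([(50, 50)]))            # finger 1 moved, finger 0 released
```

An application layered on top of such events can then interpret sequences of them as taps, drags or multi-finger gestures.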
2.2 Touch Input
It is this tracking software that enables multiple fingers to perform various actions without interrupting each other, and it creates a system in which a number of users can interact at any given time, which is what we call "multi-user". The real world is already multi-touch and multi-user, and we use gestures all of the time; done right, these systems should be as easy to interact with as picking up a pencil, drawing a picture, and showing it to someone, since "innovation should not make things more complicated for the user" (Norman 2003). One of the main goals in designing these new types of system should be no different from the main goal of user interface design in general: to make complicated things easy to understand and use. When discussing touch input there are three main areas:
In a basic touch system the user relies on standard interaction devices such as the mouse, keyboard, joystick, trackball, and a standard single-touch-event touch-screen.
Multi-touch applications involve multiple touch points and multiple users interacting at any one time, and such applications require a different design approach. In the paper Beyond flat surface computing: challenges of depth-aware and curved interfaces, Benko (2009) identifies one of the main drawbacks of multi-touch:
"The design of most interactive touch-sensitive surface systems can restrict user interaction to a 2D plane of the surface and disregard the interactions that happen above it."
(Benko 2009 p. 936)
Gesture input generally involves no touch event; in contrast to multi-touch, most gestures take place some distance from the surface, causing an event to execute when each gesture is recognized. Gesture movements can be intuitive and range from simple hand and body movements to more subtle, complex movements such as facial expressions, where sometimes the slightest involuntary movement can be classed as a gesture. Using pen or mouse gestures, applications can enable the user to author musical scores, create interactive mathematical demonstrations, and make architectural drawings similar to CAD.
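The idea of an event executing when a gesture is recognized can be sketched with a minimal swipe classifier that maps a recorded stroke to a gesture name from its net displacement. This is a hypothetical simplification: real recognizers for handwriting, musical notation or CAD-style gestures compare the whole path against stored templates rather than just the endpoints, and the function name and thresholds here are invented.

```python
# Sketch: classifying a recorded stroke as a directional swipe.
# The system records the path of a touch or pen and, on release,
# maps the net displacement to a gesture name. Hypothetical thresholds.

def classify_swipe(path, min_dist=30):
    """path: list of (x, y) points from touch-down to release."""
    if len(path) < 2:
        return None
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    if max(abs(dx), abs(dy)) < min_dist:
        return None                    # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"

print(classify_swipe([(0, 0), (20, 5), (80, 10)]))  # swipe-right
print(classify_swipe([(50, 50), (48, 10)]))         # swipe-up
print(classify_swipe([(5, 5), (10, 8)]))            # None (too short)
```

The returned gesture name would then be dispatched to whatever action the interface binds to it, such as turning a page or dismissing a window.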
Multi-touch and natural user interface design will require applying new ideas and techniques from the cutting edge of interaction design and human computer interaction (HCI). New challenges arise when users share the same physical space and co-interact; these situations pose problems for software designers beyond those of designing systems for standalone single users. Such users are said to be co-present:
"co-present ,are users who share the same physical space and situations differ from the challenges software designers have long dealt with when needing to support physically distributed users. "
(Baecker et al. 1995)
2.3 Natural Interaction
New interfaces such as the Nintendo Wii also make use of freeform hand gestures to create a more physical and engaging interactive experience. Numerous ways of incorporating touch and gesture into different systems have developed. Some systems, such as the iPhone, use touch as the only input method, while systems such as Windows 7 allow multiple input methods such as touch, stylus, keyboard and mouse. Some free-form gestural interfaces do not require the user to touch the device at all; a glove can be used to track more pronounced movements. As gestural interfaces develop, the goal of this technology is to use the body as the sole source of interaction and input. For these systems to be designed and to function correctly, designers must include the study of anatomy, physiology and the mechanics of human movement, which is called kinesiology (Saffer 2009) and is just as important as HCI when designing natural user interfaces. So far there is little in the way of standard hardware for these interfaces to be developed on. Many systems use camera set-ups tracked with CCV while others use various capacitive technologies, which can lead to multiple design approaches depending on which system set-up and configuration is being used at the time.
2.4 Multi-touch Limitations
When designing the user interface, designers must consider the appropriate method of input for any given task; it is important to take the use and practicality of these systems into account. For example, an interactive wall surface should be designed to require as little sustained interaction as possible, otherwise the user's arms may become tired. Designers must also consider touch targets: the cursor is a very precise, small target, while the finger is attached to a hand, and the hand to an arm, which can cast shadows and obstruct the user's view. Fingers present further problems: they secrete natural oils which can make the touch surface greasy and difficult to use, and fingerprints can leave the surface covered in smudges, giving it a dirty, used look. Bright colours in the design can help hide this problem, while dark black backgrounds can sometimes highlight it.
Chapter 3: Human Computer Interaction
3.1 Human Computer Interaction (HCI)
All new multi-touch and gesture interface design will draw on some of the more traditional established techniques of Human Computer Interaction, or HCI. The term human-computer interaction has been widely used since the early 1980s, but its roots can be traced back to more established disciplines including psychology, ergonomics, linguistics, sociology, anthropology, graphic design and typography (Hill 1995). The goal of good HCI design is to make technology easier for the user to learn and use. Human-computer interaction is defined as:
"The study of people, computer technology and the way these influence each other. We study HCI to determine how we can make this computer technology more usable by people."
(Dix et al. 1993, p. 5.12)
Natural intuition can also play a role in how we perceive and what we can actually do with physical objects in the real world. We can start by looking at how people interact with objects in the real world, and the idea of affordances (Norman, 1990; Gibson, 1979). Norman states
"The term affordances refer to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could be possibly be used."
(Norman 1998, ch. 1, pp. 1-33)
In context this implies that we interact with objects in a natural way every day: when a phone rings we answer it, converse with the person on the other end, or perhaps pass it on to somebody else. We can pick up a postcard and turn it over to see what is written on the back and who sent it. In doing so we interact with real-world objects in a natural way. The main aim of a natural user interface in gesture-based systems is to re-create this natural feeling of interaction with everyday objects in how we interact with digital objects. So rather than trying to develop new and complicated ways to interact with digital objects, the aim when designing gesture-based systems is to re-create how people naturally interact with objects and with each other.
HCI design must weigh a variety of factors when designing for user interaction, such as what people want and expect, what physical abilities or limitations each user possesses, and what people find enjoyable and attractive. The computer is a complex tool created with the potential to expand our knowledge and improve the way we function in all aspects of everyday life (Baecker et al. 1995). For decades people have invented, designed and created objects, some having a basic, simple use while others are more multi-purpose and complex. In the first chapter, "The Psychology of Everyday Things", of The Design of Everyday Things (Norman 1998), Norman introduces six concepts that we can use to evaluate good and bad design: affordances, constraints, conceptual models, mapping, visibility and feedback. Of these, Norman identifies two key principles that help ensure good HCI: visibility and affordances.
"Controls need to be visible, with good mapping with their effects, and there design and should also suggest (that is afford) their functionality "(Norman, 2002)
Although HCI covers both system input and output, for the purposes of this dissertation concerning gestural and multi-touch interaction the author covers input only. Input is how a person communicates their needs or actions to the computer. As computer design and systems evolved over the years, a number of input devices were also developed, most of which have not changed much in design or functionality over the last thirty years.
"Input is concerned with recording and entering data into the computer system and issuing instructions to the computer. In order to interact with computer systems effectively, user must be able to communicate their instructions in such a way that the machine can interpret them"
(Preece et al. 1994, p. 212)
These basic touch devices, as mentioned earlier in chapter 2, are the mouse, keyboard, joystick, trackball and standard touch-screen (single point of contact only). In selecting an input device and deciding how it will be used to control each interface, there are three key aims; the device must:
Match the physiological and psychological characteristics of users, their training and their experience
Be appropriate for the tasks to be performed
Be suitable for the intended work and environment
(Preece et al. 1994, p. 212)
The design and implementation of these new systems must adhere to these standard design principles while also developing new interaction principles and techniques. If we look at some of the key principles of usability, and user experience in general, there are aspects of having multiple users and inputs that can be an advantage (Norman 1990; Gibson 1979). Most software developed today must accommodate both novice and expert users. Novice users can discover features and applications through visible clues such as drop-down menus that list all potential actions, buttons, icons and so on; accommodating them is achieved through visual aids showing what actions and features are possible. Accommodating expert users is a little more complex. A frequent user can quickly become tired and frustrated with the same icons and visual aids that help novice users, so if an expert already knows a specific action to take, software usually offers keyboard shortcuts, from simple actions like copy (Ctrl+C) and paste (Ctrl+V) to the more advanced actions found in most software released today. Although learning multiple shortcuts can be time-consuming, with practice they can speed up and enhance user interaction and experience.
3.3 Universal design
Most objects and systems have a user interface; some are relatively simple and need little or no instruction to use, such as a door with a handle to facilitate opening and closing. There are universal design principles that can be incorporated into any user interface to transcend barriers such as language, culture and disability, and multi-touch and natural user interface design can draw on some of these principles. Universal design principles can be applied to any interactive system by allowing any user, with any range of disabilities, to engage with it using any technology platform. This can be achieved by designing systems that make use of several senses and modalities: sight, touch, sound and speech. Vanderheiden (2000) defines universal design as:
"A focus on designing products so that they are usable by the widest range of people operating in the widest range of situations as is commercially practical"
(Vanderheiden 2000 p.32)
In 1998 a group of researchers at North Carolina State University proposed seven principles of universal design. Their paper, "The Universal Design File: Designing for People of All Ages and Abilities", discussed the challenges of the universal design approach, which should be taken as an inspiration for good design (Story et al. 1998, ch. 3, pp. 3-6). The project was funded by the National Institute on Disability and Rehabilitation Research of the U.S. Department of Education. The seven principles are:
Principle one is equitable use: the design is useful and marketable to people with diverse abilities. Designs are usable by people with a range of different abilities; no user is excluded or stigmatized, and the design should appeal to all.
Principle two is flexibility in use: the design accommodates a wide range of individual preferences and abilities. Designs should provide a choice of interaction methods and allow for ease of access.
Principle three is simple and intuitive use: the design is easy to understand, regardless of the user's experience, knowledge or language skills. Designs should support the user's expectations without being unnecessarily complex.
Principle four is perceptible information: the design communicates necessary information effectively to the user regardless of the environment or the user's abilities. Designs should present information in different forms such as graphics, speech, text, sound and touch.
Principle five is tolerance for error: the design minimizes hazards and the adverse consequences of accidental or unintended actions. Designs should minimize the impact of user error, and dangerous actions that can damage the system should be hard to reach.
Principle six is low physical effort: the design can be used efficiently and comfortably with a minimum of fatigue. Designs should allow the user to maintain a natural posture while interacting with the system.
Principle seven is size and space for approach and use: appropriate size and space is provided for approach, reach, manipulation and use regardless of the user's body size, posture or mobility. Designs should provide for different hand sizes and allow room for assistive devices to be used.
These universal design principles were developed primarily for people with disabilities, allowing each design to be used by any conceivable user. However, if these principles are applied to the design of the natural user interface they can only serve to enrich and enhance the user experience.
Chapter 4: Multi-Touch Application and Functionality
4.1 Multi-touch Design Guidelines
So could these design principles, combined with some of the HCI disciplines, create a standard set of principles and rules governing the design of new interactive surfaces and gestural interfaces, as HCI governs traditional user interface design? When we discuss multi-touch and the idea of a natural user interface, we can compare how a user's interaction differs greatly between multi-touch and gestural interfaces and standard interaction. Based on the findings from Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation, Kin et al. (2009) suggest design guidelines for developing applications for multi-touch systems. These guidelines compare single-touch versus multi-touch and single-finger versus multi-finger interaction.
A one-finger direct-touch device delivers a large performance gain over a mouse-based device. For multi-target selection tasks, even devices that detect only one point of touch contact can be effective.
Support for detecting two fingers will further improve performance, but support for detecting more than two fingers is unnecessary for improving multi-target selection performance.
Reserve same-hand multi-finger usage for controlling multiple degrees of freedom or disambiguating gestures rather than for independent target selections.
Uniformly scaling up interfaces originally designed for desktop workstations for use with large-display direct-touch devices is a viable strategy as long as targets are at least the size of a fingertip.
(Kin et al. 2009)
As these technologies evolve, so will the user interface, and users will begin to expect a more natural, user-friendly, perhaps even intuitive and responsive interface. This leads to the question: what types of application might be viable for touch-based user interfaces? Below are four examples of cutting-edge technologies that allow multi-touch and gesture-based applications to be developed. Some of these can act as stand-alone features; combined, they can create new functionality.
4.2 Multi-touch and Gesture Systems
Multi-touch and gesture systems replace mouse clicks with finger taps, and standard keyboard shortcuts and actions with two-finger gestures. Current multi-touch systems recognize multiple fingers and objects at once; for example, the Microsoft Surface can currently recognize and track fifty-two fingers, objects, or tags, which are barcode-type identifiers (Derene 2009) such as phicons and can be used to show possible actions. A phicon, or physical icon, is:
"the tangible computing equivalent of an icon in a traditional graphical user interface, or GUI"
(Fidalgo et al. 2006, p. 201)
Phicons are small physical objects (e.g. a pawn or queen on a chess board) that can be identified by a system through a barcode that cameras can pick up and identify. Phicons can act as images that give the user valuable information, such as what action or idea they represent or perform.
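At the application level, handling a phicon ultimately reduces to a lookup: the camera decodes a tag ID from the barcode, and the software maps it to an action. The sketch below is purely illustrative; the tag IDs, action names and function are invented for this example and do not correspond to any real system.

```python
# Sketch: mapping decoded fiducial/tag IDs to actions, as an interactive
# surface might do when a tagged object (phicon) is placed on it.
# Tag IDs and action names are invented for illustration.

TAG_ACTIONS = {
    0x2A: "show_drinks_menu",    # e.g. a tagged drinking glass
    0x17: "start_chess_game",    # e.g. a tagged chess pawn
}

def on_tag_detected(tag_id):
    """Return the action bound to a recognized tag, if any."""
    return TAG_ACTIONS.get(tag_id, "unknown_tag")

print(on_tag_detected(0x2A))   # show_drinks_menu
print(on_tag_detected(0x99))   # unknown_tag
```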
Figure 5: Shows a drinking glass that has a specific barcode tag, which the interactive surface can recognize, allowing the user to access information such as ordering another drink, ordering food, or information relating to live events on the premises.
4.3 Augmented reality
Augmented reality is also a key technology in this new and exciting digital revolution. The iPhone is at the forefront of most new emerging technology, and augmented reality is no exception: numerous augmented reality iPhone apps are available to download, with functionality ranging from layered-reality browsers to navigation and live real-time information in most major cities. Augmented reality (AR) is defined as:
"A term for a live direct or indirect view of a physical real-world environment whose elements are merged with (or augmented by) virtual computer-generated imagery - creating a mixed reality. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real world view."
(Azuma 1997, pp. 355-385)
An interesting use of this technology is installed in Lego stores throughout the United States. The user selects a product they wish to purchase; once it is scanned at a digital AR kiosk, the kiosk displays a real-time 3D-rendered version of the model, with characters and sound, similar to a live-action piece. This enhances user interaction and experience by mixing the real world with the digital.
Figure 6: Shows the Lego digital box. Also displayed is the augmented reality 3D render of the chosen product.
4.4 Mobile Technology
As more interactive surfaces emerge, casual interactions from passers-by are a new and exciting way for users to engage with these systems. Today most people who use an interactive surface can also be expected to carry a mobile phone. Current models have many advanced features, such as Bluetooth, a camera and 3D graphics, and most can run custom applications. It is therefore reasonable to assume that many users who interact with an interactive surface would also like to use their mobile phone as a tool for that interaction. This offers new possibilities for interaction between users and, say, a large public interactive display surface.
Figure 7: Shows two passers-by interacting with a large public display that was placed in the window of a store in the United States.
The Microsoft Surface (Microsoft 2008) can be deployed in a mobile operator's store so that customers can place their own phone (or a new phone model) on the surface, where the phone can act as a controller, as shown in Poker Surface: Combining a Multi-Touch Table and Mobile Phones in Interactive Card Games (Shirazi 2009). This method provides a "tangible feeling" and allows users to hold their cards in their hands. By applying gestural actions to the mobile device, such as vertical and horizontal tilt, shake, or even flick, the user can control the game; in poker these actions translate to looking at your cards, folding, checking and dealing.
Figure 8: Natural gesture interactions using a mobile phone: (a) look into cards, (b) fold with cards open, (c) fold with cards closed, and (d) check.
Further development is taking place where mobile technology and multi-touch meet; a possible successor to the iPhone is the "LucidTouch". Shen et al. (2009) present LucidTouch, a new mobile interaction model based on the idea of double-sided multi-touch: a mobile device that receives:
"Simultaneous multi-touch input from both the front and the back of the device. This double-sided multi-touch mobile interaction model enables intuitive finger gestures
for manipulating 3D objects and user interfaces on a 2D screen."
(Shen 2009, pp. 4339-4344)
Figure 9: Shows the "LucidTouch" prototype, allowing double-sided multi-touch input and manipulation.
4.5 SixthSense Technology
"SixthSense is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information."
(Mistry & Maes 2009)
We have five basic senses: sight, touch, hearing, smell and taste. These natural senses help us to process information, make decisions and choose the right course of action. The developer of the SixthSense technology, Pranav Mistry, argues that the most useful information for making such decisions is not perceived by our five senses: the meta-data, information and knowledge that has been created about everything and that today is increasingly available in the digital world. Traditionally, most of this data and information is confined to paper or to a digital screen.
"SixthSense bridges this gap, bringing intangible, digital information out into the tangible world, and allowing us to interact with this information via natural hand gestures."
(Mistry & Maes 2009)
The SixthSense system consists of a mirror, a camera and a small projector that can be worn by the user. Both the projector and the camera are connected to a small mobile computing device, usually carried in the user's pocket. The projector projects visual information onto any surface, such as the walls and physical objects around us, to be used as an interface (see Fig. 10), while the camera tracks the user's hand gestures and physical objects in the environment. The software tracks coloured fiducials (tracking markers) that the user wears on the tip of each finger. The movements of each fiducial are processed as gestures and can execute a pre-programmed set of instructions that are displayed on the chosen application surface. There is no limit on the number of fiducials provided each one is unique, which enables multi-touch and multi-user interaction.
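The fiducial-tracking step described above can be sketched in outline: the camera reports a fingertip position per coloured marker each frame, and the change between frames is interpreted as a gesture. The structure, colour keys and thresholds below are assumptions for illustration, not Mistry's implementation.

```python
# Illustrative sketch of fiducial-based gesture processing: coloured
# fingertip markers are tracked frame by frame, and the change in their
# positions is read as a gesture. Names and thresholds are assumptions.
import math

def distance(p, q):
    """Euclidean distance between two (x, y) fiducial positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def interpret(prev_frame, curr_frame):
    """Each frame maps a fiducial colour to its (x, y) pixel position.
    Growing separation between two fingertips reads as zoom; a common
    sideways translation reads as pan."""
    d_prev = distance(prev_frame["red"], prev_frame["blue"])
    d_curr = distance(curr_frame["red"], curr_frame["blue"])
    if d_curr > d_prev * 1.2:        # fingertips moved apart
        return "zoom_in"
    if d_curr < d_prev * 0.8:        # fingertips moved together
        return "zoom_out"
    dx = curr_frame["red"][0] - prev_frame["red"][0]
    if abs(dx) > 10:                 # whole hand shifted sideways
        return "pan"
    return "idle"

prev = {"red": (100, 100), "blue": (200, 100)}
curr = {"red": (80, 100), "blue": (220, 100)}
print(interpret(prev, curr))  # zoom_in
```

A real system would first segment each marker colour from the camera image (for example with OpenCV) and smooth the positions over several frames before classifying.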
Mistry (2009) also demonstrates some of the features and applications of SixthSense technology. These include a map application that lets the user navigate a map displayed on a nearby surface using hand gestures similar to those supported by multi-touch systems, allowing the user to zoom in, zoom out or pan with hand movements, and a drawing application that lets the user draw on any surface by tracking the movements of the user's fingertips. Mistry (2009) also shows that SixthSense can recognise the user's freehand gestures. For example, when the user forms the shape of a picture frame with their hands, the system captures this gesture, processes it and takes a photograph that is stored in the system (see Fig. 11). The user can then stop by any surface, project the captured images onto it and flick through the photos (see Fig. 12). Augmented reality can also be implemented with this technology; for example, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper (see Fig. 13).
Figure 10: Shows Pranav Mistry wearing the "SixthSense" portable device
Figure 11: Demonstrates the image gesture capture technology, by gesturing the shape of a picture frame the SixthSense system snaps a digital photograph.
Figure 12: Shows a demonstration of using any surface, in this case a standard wall, as an interactive display.
Figure 13: Shows an augmented reality live news feed video projected directly on to the newspaper surface.
Chapter 6: Conclusion
HCI and related disciplines grew out of collaboration between psychology and computer science and are directed towards functionality between humans and computers rather than towards the user's experience.
Still today, most IT products on the market, both hardware and software, are marketed and sold on their included features rather than on their benefits to the user. Only after purchasing a product based on its features does the user discover that it can be difficult to use and navigate.
However, most current interfaces remain firmly tied to the traditional flat rectangular displays of today's computers, and while they benefit from directness and ease of use, they are often not much more than touch-enabled standard desktop interfaces.
SixthSense is to be released as free, open-source code and software, with instructions on how to use it, pointing towards the future of natural user interfaces.
References are arranged in the order in which they appear in the main body of the text.
Spielberg, S. (Director) (2002). Minority Report. [DVD]. 20th Century Fox.
Lee, SK. Buxton, W. and Smith, K.C. (1985). Multi-touch: a three dimensional touch-sensitive tablet. Conference on Human Factors in Computing Systems: Proceedings of the SIGCHI conference on Human factors in computing systems. New York: ACM
Mehta, N. (1982). A flexible machine interface. M.A.Sc. thesis. Department of Electrical Engineering, University of Toronto.
Buxton, B. (2005). Multi-touch systems that I have known and loved (online). Microsoft Research: Available from: http://www.billbuxton.com/multitouchOverview.html
English, W. K., Engelbart, D. C. and Huddart, B. (1965). Computer-aided display control - final report. Stanford Research Institute, Augmentation Research Center.
Edwards, B. (2008). The computer mouse turns 40 (online). Macworld.com. Available from: http://www.macworld.com/article/137400/2008/12/mouse40.html
Saffer, D. (2009). Designing gestural interfaces. 1st edition. p. 2. California: O'Reilly
Rekimoto, J. (2002). SmartSkin: an infrastructure for freehand manipulation on interactive surfaces. Conference on Human Factors in Computing Systems: Proceedings of the SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves. New York: ACM
Han, J. (2005). Low-cost multi-touch sensing through frustrated total internal reflection. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST). p. 115-118. New York: ACM
TED (2006). (online). Jeff Han demos his breakthrough touch screen. Available from: http://www.ted.com/talks/lang/eng/jeff_han_demos_his_breakthrough_touchscreen.html
Teiche, A. et al. (2009). Multi-touch technologies. 1st edition. NUI Group Authors. Community release: Nuigroup.com.
Community Core Vision (formerly known as tbeta). (Online). Available at: http://ccv.nuigroup.com/
Norman, D. (2003). Emotional design: why we love (or hate) everyday things. 1st edition. New York: Basic Books
Benko, H. (2009). Beyond flat surface computing: challenges of depth-aware and curved interfaces. International Multimedia Conference: Proceedings of the seventeenth ACM International Conference on Multimedia. p. 936. New York: ACM.
Baecker, R. M. (1995). Human-computer interaction: toward the year 2000. 2nd edition. p. 1. San Francisco, California: Morgan Kaufmann Publishers, Inc.
Dix, A., Finlay, J., Abowd, G. and Beale, R. (1993). Human Computer Interaction. 3rd edition. Essex, England: Prentice-Hall.
Norman, D. (2002). The psychology of everyday things. 2nd edition. New York: Basic Books.
Preece, J. et al. (1994). Human computer interaction. 1st edition. p. 212. Wokingham, England: Addison-Wesley.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. 1st edition. Boston: Psychology Press
Norman, D. (1998). The psychology of everyday things. 1st edition. Chapter 1, p. 33. New York: Basic Books.
Vanderheiden, G. (2000). Fundamental principles and priority setting for universal usability. Proceedings of the Conference on Universal Usability, November 16-17 2000. p. 32. New York: ACM Press.
Story, M. F., Mueller, J. L. and Mace, R. L. (1998). The Universal Design File: designing for people of all ages and abilities. The Center for Universal Design, North Carolina State University. Chapter 3, p. 4-6
Kin, K. et al. (2009). Determining the benefits of direct-touch, bimanual, and multifinger input on a multi-touch workstation. ACM International Conference Proceeding Series; Vol. 324 Proceedings of Graphics Interface. p. 119-124. New York: ACM Press.
Fidalgo, F., Silva, P. and Realinho, V. Ubiquitous computing and organizations. Current Developments in Technology-Assisted Education. Lisbon, Portugal. p. 201. Available from: http://www.formatex.org/micte009
Azuma, R. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4), August 1997, p. 355-385. Malibu, California: http://www.cs.unc.edu/.
Dietz, P. H. and Eidelson, B. N. (2009). SurfaceWare: dynamic tagging for Microsoft Surface. Proceedings of the 3rd International Conference on Tangible and Embedded Interaction. p. 249-254. Cambridge, United Kingdom: ACM.
Shirazi, A.S. (2009). Poker surface: combining a multi-touch table and mobile phones in interactive card games. MobileHCI '09: Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. New York: ACM Press
Shen, E-I. E. (2009). Double-side multi-touch input for mobile devices. Conference on Human Factors in Computing Systems: Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. p. 4339-4344. Boston, MA: ACM Press.
Mistry, P. and Maes, P. (2009). SixthSense: a wearable gestural interface. International Conference on Computer Graphics and Interactive Techniques. Session: Haptic, gestural, hybrid interfaces. Article no. 11. Yokohama, Japan: ACM