Touchscreen applications have spread widely and successfully into public information kiosks, bank teller machines, and ticketing machines, and they continue to expand with the technology. For this thesis, an interactive application called Wayang Kulit was developed using a multi-touch technique known as Frustrated Total Internal Reflection. Wayang Kulit allows users to interact with objects by touch gestures, following the concept of direct manipulation. Interaction on touch-sensitive screens is literally the most "direct" form of human-computer interaction: information display and control share a single surface.
Direct manipulation can be seen in distinctive systems such as the Apple Macintosh and in many application software products such as spreadsheets, word processors and drawing tools. In this paper, I present a study that explores the advantages and disadvantages of a touch interface, specifically in relation to direct manipulation, and discuss the techniques and results.
Chapter 1: Introduction
As touch screen technology becomes more available at a lower price, multi-touch systems have been implemented in many different interfaces. Compared to traditional touch techniques, the multi-touch technique allows the user to perform complex manipulations using two fingers, or even ten fingers simultaneously.
In this paper I will discuss the Wayang Kulit project: an interactive application displaying virtual puppets for users to interact with. Wayang Kulit combines the Frustrated Total Internal Reflection multi-touch technique with Flash ActionScript and software such as the TUIO protocol (a protocol for tangible user interfaces) and Community Core Vision.
Wayang Kulit is also known as shadow puppets and is a unique form of theatre employing light and shadow. In the real world a puppeteer controls the shadow puppets behind a stage canvas. The puppets are crafted from buffalo hide and mounted on bamboo sticks. When held up behind a piece of white cloth, with an electric bulb or an oil lamp as the light source, shadows are cast on the screen.
In discussing the application I will focus on direct manipulation and examine the advantages and disadvantages of direct manipulation. I will also explain the building and design process of the interactive application.
The interactive installation invites users to interact with the puppets using touch gestures, creating an experience of direct manipulation of objects. The project also aims to let the user experience a theatrical performance, learn the story being told, and interact with the movements of the puppets. This is a form of digital storytelling (Miller 2004) which has emerged in the past few years.
This paper discusses only multi-touch displays, that is, computer displays on which the display surface itself also acts as a multi-touch sensing input device.
This paper is organized as follows. Chapter 1 gives a working definition and surveys the existing multi-touch technologies that have emerged. Because of its affordability, flexibility, and scalability, I focus especially on Frustrated Total Internal Reflection technology.
Chapter 2 reviews the literature on previous work and projects that have been carried out.
Chapter 3 explains the design of the multi-touch device and the Wayang Kulit application that was developed for the practical project.
Chapter 4 discusses the implementation of the multi-touch application built on Frustrated Total Internal Reflection, and explains the techniques used to interact with the Wayang Kulit application.
Chapter 5 discusses the concept of direct manipulation. With a better understanding of how multi-touch displays operate, I present the challenges involved in interpreting the data produced by these input devices. The advantages and disadvantages of direct manipulation are discussed, and I also present the performance evaluation experiments and their results.
Finally, the conclusion and future work are presented, covering the various approaches, challenges, and research findings involved in the implementation of multi-touch displays and their user interfaces.
A video of the Wayang Kulit is available online should the reader need to review the application. The video can be found at the link below:
A touchscreen enables a single finger to interact directly with what is displayed, while multi-touch adds the ability to use multiple finger gestures simultaneously on the visual display. A single touch creates a touch event on the surface of a touch sensor; the event is detected by the touch controller, and the application can determine the X and Y coordinates of the touch. Touchscreens are considered easy to use because of their directness. The control surface of a touchscreen is overlaid on the display, so no extra input device or space is necessary for touchscreen interaction. Pointing at a touchscreen is achieved by moving a finger to the corresponding target on the screen.
The importance of touch displays is that users directly interact with, or touch, the visual representation of the graphical user interface. This extends the idea of direct manipulation introduced by Ben Shneiderman (1983). Shneiderman's term generally referred to users pointing to objects on the screen and manipulating them using a mouse and keyboard; when using a touchscreen, the user really touches the visual representation.
1.1 Existing Technology
Multi-touch technology is not entirely new; the first human-input multi-touch system was developed by Nimish Mehta (1982) at the University of Toronto. Mehta's work consisted of a frosted-glass panel with a camera placed behind the glass. When a finger or several fingers pressed on the glass, the camera would detect the action as one or more black spots on an otherwise white background, allowing it to be registered as an input. The size of a spot depended on how hard the person was pressing on the glass, making the system pressure-sensitive.
Myron Krueger (1983) introduced VideoPlace, a vision-based system that combines live video images of visitors with graphic images. Two people situated in different rooms, each containing a projection screen and a video camera, were able to communicate through their projected images in a shared space on the screen. Visitors could interact with 25 different programs or interaction patterns; a switch from one program to another usually took place when a new person stepped in front of the camera. This system, however, never achieved its ultimate aim of developing a program capable of learning independently.
Bell Labs at Murray Hill (1983) published a comprehensive discussion of touchscreen-based interfaces, and in 1984 Bob Boie developed the first multi-touch touchscreen. The device used a transparent capacitive array of touch sensors overlaid on a CRT. The success of the touchscreen was that it could manipulate graphical objects with the fingers with a good response time.
A few years later, Matsushita and Rekimoto (1997) presented the HoloWall, a finger-, hand-, body- and object-sensitive wall. Multiple users can interact with the front side of the wall. The wall is made of glass with a rear-projection material attached; behind the wall a digital projector provides the display, and on the same side a camera and an infrared illuminator are placed. When a user or object touches the wall, it reflects infrared light, which is captured by the camera.
The DiamondTouch table was created in 2001 by Mitsubishi Electric Research Labs; it is a multi-touch, multi-user sensitive input device. The table works by transmitting an electrical signal to an array of rows and columns embedded in the surface. DiamondTouch supports small-group collaboration by providing a display interface that allows users to maintain eye contact while interacting with the display simultaneously. This unique touch technology supports multiple touches by a single user and distinguishes between simultaneous inputs from multiple users.
In 2002, SmartSkin was created to investigate a new sensor architecture for making interactive surfaces that are sensitive to human hand and finger gestures.
This sensor recognizes multiple hand positions and their shape as well as calculates the distances between the hands and the surface by using capacitive sensing and a mesh-shaped antenna. In contrast to camera-based gesture recognition systems, all sensing elements can be integrated within the surface, and this method does not suffer from lighting and occlusion problems.
In 2005, Jeff Han presented a low-cost, camera-based multi-touch sensing technique called Frustrated Total Internal Reflection. FTIR exploits the internal reflection of light inside a material (a piece of acrylic); infrared light is injected so that it reflects internally within the acrylic. The technique is force-sensitive and provides unprecedented resolution and scalability. Han presented elegant implementations of a number of techniques and applications on a table-format rear-projection surface.
Early in 2007, Apple presented the iPhone, a mobile phone with a user interface built around the device's multi-touch screen, including a virtual keyboard rather than a physical one. The iPhone senses touch using electrical fields: on touch, the electrical field changes value, and this change is measured, allowing the iPhone to detect which part of the screen is touched.
In 2007, Microsoft presented its version of a multi-touch table, Microsoft Surface. The table looks like a coffee table with an interactive surface, and the technique used is similar to the HoloWall (1997): the table is illuminated with infrared light from the back, and when a user touches it, the touch reflects infrared light, which is captured by cameras inside the table. This technology made the transition from research and development to commercial application.
Chapter 2: Literature Review
In 2009, Han et al. wrote a paper on a general method for performing direct manipulation of 2D as well as 3D objects on a multi-touch surface.
The paper describes how, since objects move in a predictable and realistic fashion, users are given the impression of "gripping" real objects. Direct manipulation essentially provides an intuitive and controllable mapping between points in local space and points in screen space, without the need for any explicit gesture processing.
The method minimizes a quadratic energy function that attempts to match points in the object's local space to pixels in screen space. The results were two interesting and useful bimanual interactions. In the first, two fingers are used to define an axis while a third finger controls how much to swing the object about that axis; this three-point rotation is an easy way for users to manipulate objects.
The second result is a four-point perspective rotation, where the user places four fingers on the object in a roughly rectangular configuration, then decreases the distance between two of the contact points while simultaneously increasing the distance between the other two. The object rotates to best achieve the perspective described by the new hand configuration. They discovered that their method produces valid solutions, but energy minimization may result in the object rotating out of the screen when the user expects it to rotate into the screen (or vice versa).
In the same year, DeRose et al. (2009) presented a user study comparing direct-touch, bimanual, and multi-finger interaction on a single multi-target selection task. DeRose et al. created an experiment with several scenarios: users either used a mouse-based workstation equipped with one mouse, or a multi-touch workstation using either one finger, two fingers (one from each hand), or multiple fingers. The results showed that direct touch with one finger accounts for an average of 83% of the reduction in selection time.
For bimanual interaction, using at least two fingers, one on each hand, accounts for the remaining reduction in selection time. For novice multi-touch users there is no significant difference in selection time between using one finger on each hand and using any number of fingers for this task.
Both these studies have shown...
Chapter 3: Design of a multi-touch device
This section discusses building the Wayang Kulit project. I decided to explore multi-touch technology and build a project for learning and entertainment purposes. The Wayang Kulit application, named after the shadow-puppet theatre, is an interactive application suitable for placement in a museum, where users can explore the narrative of the system.
The requirements based on the thesis subject were to construct a multi-touch visual display allowing users to collaboratively interact with puppets and view visual feedback. Based on these requirements, it was decided to create a horizontal, table-based multi-touch panel. A table-shaped system would allow users to interact with the multi-touch display and hence directly manipulate the objects on screen.
3.2 Frustrated Total Internal Reflection
A Frustrated Total Internal Reflection setup involves three components: a sheet of transparent acrylic, a chain of infrared LEDs (light-emitting diodes), and a camera with an infrared filter. The LEDs are arranged around the outside of the acrylic sheet so that they shine directly into its thin side surfaces.
Once the infrared light is inside the acrylic, it strikes the top and bottom surfaces at a near-parallel angle and is subject to the effect known as total internal reflection, which keeps it wholly contained within the acrylic. When the user touches the surface, the light rays are said to be frustrated, since they can now pass into the contact material (usually skin), and the reflection is no longer total at that point. The frustrated light is scattered downwards towards an infrared webcam, which picks up these 'blobs' and relays them to the tracking software. Figure 1 is a diagram of the Frustrated Total Internal Reflection technique.
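The camera-side half of this pipeline, turning bright 'blobs' in an infrared frame into touch coordinates, can be sketched as follows. This is a minimal illustration in Python (the project itself relied on tracking software for this step); the function name and the threshold value are assumptions, and the camera frame is modelled as a 2D grayscale array.

```python
import numpy as np

def find_blobs(frame, threshold=128):
    """Return (x, y) centroids of bright 'blobs' (frustrated-light touch
    points) in a grayscale IR frame, via thresholding plus flood fill."""
    mask = frame >= threshold
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                # flood-fill one connected bright region
                stack, pixels = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

Tracking software such as Community Core Vision performs essentially this thresholding and connected-component step before forwarding blob centroids to the application over the TUIO protocol.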
Figure 1 Frustrated Total Internal Reflection © Jeff Han
Image taken from: Frustrated Total Internal Reflection multi-touch display how to guide
Chapter 4: Multi-touch application
4.2 Implementation of the application
In order to demonstrate the input capabilities and benefits of a touch device, an application was developed. It demonstrates examples of touch-based interaction on the device, which will be used to analyze the concept of direct manipulation.
The Wayang Kulit application was created using Flash ActionScript 3, Community Core Vision, and the TUIO libraries (a protocol for tangible user interfaces).
Figure 2 shows an example of the multi-touch application. The program adds circles to the screen display when a user places a finger on the screen. This example was the first code executed during development of the Wayang Kulit project. Refer to Appendix B for the program code.
Figure 2 Circles are created on the touchscreen when user places their finger on the screen.
Circles, Sharifah Noorazel 2010
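The Appendix B listing is written in ActionScript 3; the same touch-to-circle logic can be sketched in Python as a stand-in (the class and method names are illustrative, not taken from the project code):

```python
class CircleCanvas:
    """Minimal model of the first demo: each touch-down adds a circle
    at the touch point, and the matching touch-up removes it."""

    def __init__(self):
        self.circles = {}  # touch id -> (x, y) position of its circle

    def on_touch_down(self, touch_id, x, y):
        # A finger landed: draw a circle under it.
        self.circles[touch_id] = (x, y)

    def on_touch_up(self, touch_id):
        # The finger lifted: remove its circle.
        self.circles.pop(touch_id, None)
```

Because every touch carries its own id, several fingers can each have their own circle simultaneously, which is exactly what distinguishes the multi-touch demo from a single mouse cursor.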
This section describes the Wayang Kulit application and explains how a user interacts with the application.
Wayang Kulit was developed to allow users to interact with puppets in a way similar to a real-world puppeteer manipulating puppets on sticks. The application was designed to give users an entertaining and educational experience, with an interface that is simple and straightforward to interact with.
To use this interactive application, the user stands facing the screen and starts interaction by placing a finger on the Start button (see Figure 2). Users directly engage with the objects in the domain. Once the Start button has been activated, the screen shows a curtain-opening scene and displays a puppet figure with a background shadow and items on stage.
The gestures used in this application are the 'touch down', 'touch up' and 'tap' gestures (Figure 3). The 'touch down' event is the equivalent of a 'mouse down' event: when the program detects a finger being placed on the screen, the event takes place. The 'touch up' event takes place when a finger is removed from the screen.
The 'one finger single tap' event is the equivalent of 'mouse click' event. When a finger is placed on the screen and then removed within a certain time interval, the event takes place.
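The tap classification can be sketched as below. This is an illustrative Python model, not the project's ActionScript code, and the 0.25-second interval is an assumed value; the thesis does not specify the threshold.

```python
TAP_INTERVAL = 0.25  # seconds; assumed threshold, not taken from the thesis

class TapDetector:
    """Classify a touch-down/touch-up pair as a 'tap' (the mouse-click
    equivalent) when the finger lifts within TAP_INTERVAL seconds."""

    def __init__(self):
        self.down_times = {}  # touch id -> time the finger landed

    def touch_down(self, touch_id, t):
        self.down_times[touch_id] = t

    def touch_up(self, touch_id, t):
        # Returns True if this down/up pair counts as a tap.
        t0 = self.down_times.pop(touch_id, None)
        return t0 is not None and (t - t0) <= TAP_INTERVAL
```

A longer press is simply a 'touch down' followed by a late 'touch up', so the same two events serve both the tap and the drag interactions.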
Figure 3 Touch gestures
Image taken from http://gestureworks.com/about/supported-gestures
For example, in the first screen display refer to Figure 4, it is natural and convenient to point and touch the Start button with a finger. This is as if the button were a real button on a table that has been pressed to activate. There is no intermediary in between the user and the objects. This is discussed further in Section...
Figure 4 User starts an action
Wayang Kulit, Sharifah Noorazel 2010
Figure 5 shows how a user initiates an action on the puppets by placing two fingers on the stick visual object. The user first touches with one finger, then a second, and can place more fingers on the display.
When the user touches a stick attached to a puppet joint (for example, the stick attached to the puppet's hand or leg), the joint will move. Simultaneously, the shadow puppet moves to correspond with the main puppet's movements.
Figure 5 User performs actions
Wayang Kulit, Sharifah Noorazel 2010
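One simple way to make a joint follow the finger on its stick is to rotate the limb toward the touch point. The sketch below is an illustrative Python stand-in for the ActionScript implementation, assuming mathematical (y-up) coordinates; screen coordinates with y pointing down would flip the sign.

```python
import math

def joint_angle(joint_x, joint_y, touch_x, touch_y):
    """Rotation (in degrees) that points a puppet limb at the finger
    touching its control stick. Assumes y increases upwards."""
    return math.degrees(math.atan2(touch_y - joint_y,
                                   touch_x - joint_x))
```

Applying the same angle to the shadow layer is what keeps the shadow puppet moving in correspondence with the main puppet.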
The Wayang Kulit application is also designed to handle mouse events so that it can be executed as a desktop application. To operate the application on a desktop, a device such as a mouse is used to move a cursor to its destination. Please refer to Section for the mouse....
Please see Appendix A for the Project Documentation.
Direct manipulation is a term coined by Ben Shneiderman in his paper "Direct Manipulation: A Step Beyond Programming Languages" (1983). The paper describes how, in a direct manipulation interface, task objects are presented on a screen and the user has a repertoire of manipulations that can be performed on any of them. The user has no command language to remember beyond a standard set of manipulations, few cognitive changes of mode, and a continuous reminder on the display of the available objects and their states. Direct manipulation referred to highly usable and attractive systems such as display editors, early desktop office systems, spreadsheets, CAD systems and video games. These systems had graphical interfaces which allowed them to be operated 'directly' using manual actions rather than typed instructions. A direct manipulation interface provides:
1. Continuous representation of the object of interest.
2. Physical actions (movement and selection by mouse, joystick, touchscreen) or labeled button presses instead of complex syntax.
3. Rapid incremental reversible operations whose impact on the object of interest is immediately visible.
When using the Wayang Kulit application, the user performs the physical action of touching the visual representation. The aim is for users to feel in control and follow their own chosen path, encouraging a sense of engagement in a natural and convenient environment.
In comparison, if a user selects a mouse as their means of interaction, the mouse becomes an intermediary device between the user and the application. The user has to press down the mouse button while the cursor is over the Start icon in Figure 4, then move the mouse while holding the button down. With a touch gesture, a single finger acts in the same way a mouse cursor does. Due to the mouse's level of indirectness, switching between logical functions becomes costly in terms of cognitive effort, as the user's attention is distracted from the actual task.
On the multi-touch table no cursor is presented; users are free to use the hand closest to the object. Cursor positioning with a touchscreen differs from cursor positioning with other devices. With a mouse, there is always a cursor present, and movement of the device moves the cursor. On a touchscreen, there may be no cursor when the user's finger is not touching the screen; once the user touches the screen, the cursor is placed near the finger and can be dragged around the screen. The mouse is indirect and takes longer to learn: users have to learn the coordination between hand movement and the position of the cursor on the screen.
In the Wayang Kulit application, users operate the system without having to learn any command language or syntax. An experienced user can demonstrate to novices how to use the system.
Chapter 5: Discussion
In the application, the user uses direct hand gestures to touch the screen. Will using direct manipulation in the system give users the feedback they expect? Will different styles of direct manipulation emerge in the future for graphical interfaces?
5.2 Advantages of Direct Manipulation
Within the scope of direct manipulation there are advantages and disadvantages of using this concept.
Firstly, direct manipulation provides an immediate display of the results of an action. This has the power to attract users because it is a rapid process allowing instant feedback. For example, in Figure 4, when the user touches the Start button the screen displays the next scene showing an opening-curtain act. With a rapid process, any reversible actions are visible immediately, and users are able to change their actions, for instance by selecting different target objects.
The second advantage is that the actions are physical rather than requiring complex syntax. The user does not need to write complex program logic to execute an action. Learning the basic functionality of the graphical interface is made easy for users.
Third, being presented with a visual display, users can see whether their actions are producing output, and they can change the path of their actions towards the desired output. For example, if the display presents three choices of puppets, users can control their selection.
This leads to the fourth advantage: users gain a sense of confidence and mastery because they are the initiators of action, they feel in control, and they can predict the interface's responses.
The fifth advantage is the speed of using a finger to select something rather than using a mouse or keyboard arrows to move objects. For example...
5.3 Disadvantages of Direct Manipulation
Direct manipulation however has several disadvantages which are discussed in this section.
The first drawback is that the human finger, acting as a pointing device, has very low resolution: pointing at targets smaller than the finger width can be difficult. These limitations were recognized and addressed by Potter et al. (1988). In their study, a technique called Take-Off provides a cursor above the user's fingertip with a fixed offset when touching the screen. The user drags the cursor to a desired target and lifts the finger (takes off) to select the target object. They achieved considerable success with this technique for targets between finger size and 4 pixels. For very small targets (1- and 2-pixel targets), however, users tended to make a large number of errors with Take-Off.
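Potter et al.'s Take-Off idea can be sketched as follows. This is an illustrative Python rendering of the technique described above, not their implementation; the offset and selection-radius values are assumptions.

```python
CURSOR_OFFSET = 40  # pixels above the fingertip; assumed offset size

def takeoff_cursor(finger_x, finger_y):
    """Take-Off shows the selection cursor a fixed offset above the
    fingertip so the finger does not occlude small targets."""
    return finger_x, finger_y - CURSOR_OFFSET

def takeoff_select(targets, finger_x, finger_y, radius=4):
    """On lift-off, select the target nearest the offset cursor, if one
    lies within `radius` pixels. `targets` is a list of (x, y) points."""
    cx, cy = takeoff_cursor(finger_x, finger_y)
    best = min(targets,
               key=lambda t: (t[0] - cx) ** 2 + (t[1] - cy) ** 2,
               default=None)
    if best is not None and \
            (best[0] - cx) ** 2 + (best[1] - cy) ** 2 <= radius ** 2:
        return best
    return None
```

Selecting on lift-off (rather than on first contact) is what lets the user fine-tune the cursor position before committing to a target.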
The second disadvantage of using physical gestures is arm fatigue, which has two sources: continuous finger tapping, and long-distance movement of the hand across the touch screen. Users are required to tap and move their hands on the screen to generate events such as movement and click events in order to guarantee compatibility with traditional graphical user interfaces (Wang et al. 2009).
A third problem is that direct manipulation designs may consume valuable screen space and may force valuable information off screen. Using Wayang Kulit, the user has to be close to the screen in order to touch it, and depending on the user's height they may face ergonomic problems.
Another disadvantage of touch-screen monitors is that the display gets dirty frequently because of constant touching with oily or sweaty fingers.
5.4 Performance evaluation and results
This section reviews Wayang Kulit application in terms of its capabilities and performance.
The Wayang Kulit interface should carry the user's understanding from the real world over to the digital version. In the system built, the interface is itself a world in which the user can act; from this experience the user gains a feeling of directness between themselves and the digital version. The goal is to minimise cognitive effort.
The feeling of direct engagement is illustrated here: in Wayang Kulit the user is free from intermediary devices, and the world of interest is explicitly represented. The interface provides the user with a world to interact with, and the domain changes state in response to the user's actions. An interface that offers direct manipulation has to give the qualitative feeling that one is in direct control of the objects, not of the computer. Pointing makes the feeling natural because everyone knows how to point at and touch things.
The objects of that world must feel like the objects of interest: the user is doing things with them and watching how they react. For this to be the case, the output language must present representations of objects in forms that behave the way the user thinks the objects behave, and whatever changes the operations cause in the objects must be depicted in their representation.
In relation to the real Wayang Kulit and direct manipulation
Results taken from testing the Wayang Kulit interactive application:
As the screen is placed horizontally, results from testing the application show that the ergonomics of the screen should be improved.
There is high friction on the multi-touch surface: the acrylic makes it difficult to perform smooth motions over long distances. After completing the experiments, users reported painful fingertips.
The purpose of the study was not to eliminate one or the other technique, but rather to understand the relative advantages and disadvantages of touch and tangible interaction, and to inform further iterations of the design. What follows are examples of how the findings can be used to improve...
Chapter 6: Conclusion
My experiments with Frustrated Total Internal Reflection lead to the conclusion that it provides the most robust vision-based multi-touch recognition under varying lighting conditions, and multi-touch technology has proved affordable. The Wayang Kulit application has demonstrated the power of multi-touch devices, and users were able to manipulate objects in a natural way by touch. While multi-touch is capable of performing some tasks faster than the mouse, it should not be considered a replacement; multi-touch devices do, however, encourage collaboration.
The test results show significant improvement when the number of users is increased.
The results of my study highlight several advantages and disadvantages of touch user interfaces.
The key issues are...
Future intended work on this subject mainly includes the evaluation of newly proposed designs.
I will also investigate the tracking of finger input, as well as the effects of using direct multi-touch devices versus indirect multi-touch devices, and assess how they differ from the current results and observations.