SixthSense is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. The SixthSense prototype comprises a pocket projector, a mirror, and a camera, coupled in a pendant-like mobile wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls, and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of colored markers (visual tracking fiducials) on the tips of the user's fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
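The fiducial-tracking step described above can be illustrated with simple color thresholding: find all pixels close to the marker's color and take their centroid as the fingertip position. This is a minimal sketch, not the actual SixthSense implementation; the marker color and tolerance are assumed values.

```python
import numpy as np

def track_marker(frame, target_rgb, tol=30):
    """Locate a colored fiducial in an RGB frame (H x W x 3, uint8).

    Returns the (row, col) centroid of all pixels within `tol` of the
    target color on every channel, or None if the marker is not visible.
    """
    diff = np.abs(frame.astype(int) - np.array(target_rgb)).max(axis=2)
    mask = diff <= tol
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return (int(rows.mean()), int(cols.mean()))

# A synthetic 100x100 frame with a red marker blob centered at (40, 60).
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[38:43, 58:63] = (255, 0, 0)
print(track_marker(frame, (255, 0, 0)))  # -> (40, 60)
```

Tracking one such centroid per unique marker color is what lets the number of fiducials bound the number of tracked fingers.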
We have proposed a SixthSense ATM that requires no touching of any object: using only a gestural interface, it lets the user perform the normal operations of an ATM authentication system.
Considering the cost of projectors and their variation across testing environments, we eliminated the projector component and instead used a standard display system as the projector and manipulator, with a standard camera as the capturing unit.
KEYWORDS: hand gesture, finger movement, menu interface.
Along with the development of sensors, networks, and computer technologies, home computer usage will change in the near future. Various consumer electronics (e.g., televisions, PCs, audio players, photo viewers, DVD players, etc.) will be embedded in the home, and the user will interact with them on one display. It is also expected that home equipment (e.g., light switches, sensors, etc.) will be connected through a network, and the user will be able to check and control it on a TV screen. In this scenario, users will want the ability to control the system remotely, so they do not need to return to one position to control a device.
3.1 HAND GESTURE
The proposed system is a SixthSense ATM operated by hand gestures. The person performs everything in an imaginary space. These gestures are recorded by a video camera, and the video is processed to determine the password the person intends to enter. In this way, the operations normally performed by key touch are carried out purely with gestural movements.
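One way to turn the fingertip positions extracted from the video into ATM key presses is to map them onto an imaginary keypad laid out in the camera frame. The layout, origin, and key size below are illustrative assumptions, not the actual system's calibration.

```python
def keypad_digit(pt, origin=(100, 100), key_size=60):
    """Map a fingertip position (x, y) in the camera image to a key on
    an imaginary 3x4 ATM keypad. `origin` (top-left corner of the
    keypad) and `key_size` are assumed calibration values.
    Returns the key label, or None if the point is off the keypad.
    """
    layout = [["1", "2", "3"],
              ["4", "5", "6"],
              ["7", "8", "9"],
              ["*", "0", "#"]]
    col = (pt[0] - origin[0]) // key_size
    row = (pt[1] - origin[1]) // key_size
    if 0 <= row < 4 and 0 <= col < 3:
        return layout[row][col]
    return None

print(keypad_digit((130, 130)))  # -> 1
print(keypad_digit((250, 250)))  # -> 9
```

Collecting the digits produced by successive dwell positions of the fingertip would then reconstruct the PIN the person intends to enter.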
3.2 FINGER MOVEMENT
In this technique, the user can perform all operations through combinations of basic movements of his/her fingertip (e.g., of the index finger). Currently, we define three types of movements as the basic operations. The user can select buttons, operate menus, and even input text with combinations of these three operations.
In this paper, we describe One-finger Interaction and explain the implementation of two prototype systems that use it. Our prototypes conveniently use only a typical web camera for gesture recognition: by merely attaching an LED marker to the user's fingertip, the system can recognize One-finger Interaction. We show the results of a preliminary evaluation of the system. We made two prototypes of One-finger Interaction. One is the "crossing-type interface", where the user executes operations by selecting icons with a "double-crossing" gesture. The other is the "direction-rotate interface", where the user selects menus by combining the "direction of movement" with a "rotate operation". The user does not need to memorize various fingertip movements, because he/she only has to move according to the displayed menus.
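The double-crossing gesture can be detected by watching for two crossings of an icon's boundary within a short time window. The sketch below is a hypothetical detector, and the 0.6-second window is an assumed threshold, not a value taken from the prototype.

```python
from dataclasses import dataclass

DOUBLE_CROSS_WINDOW = 0.6  # seconds; assumed threshold, not from the paper

@dataclass
class CrossingDetector:
    """Fires when the fingertip crosses an icon's boundary twice quickly."""
    last_cross_time: float = -1.0
    was_inside: bool = False

    def update(self, inside_icon: bool, t: float) -> bool:
        """Feed one frame: whether the fingertip is inside the icon,
        and the frame timestamp. Returns True on a double-crossing."""
        fired = False
        if inside_icon != self.was_inside:            # boundary crossed
            if t - self.last_cross_time <= DOUBLE_CROSS_WINDOW:
                fired = True                          # second crossing in time
                self.last_cross_time = -1.0           # reset for next gesture
            else:
                self.last_cross_time = t              # first crossing
            self.was_inside = inside_icon
        return fired

d = CrossingDetector()
print(d.update(True, 0.0))   # enter icon: first crossing -> False
print(d.update(False, 0.3))  # leave within 0.6 s -> True (double-crossing)
```

A slow, single pass over an icon never fires, which is what lets the user move the cursor across menus without triggering them.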
3.3 MENU INTERFACE
In this method, the menus change according to the application. For example, when the user watches TV, he/she can perform operations such as "change channel" and "adjust volume" through the menus. The user can also perform functions such as "scroll" and "switch tabs" while web browsing. One-finger Interaction can thus control different applications through changes in the content of the menu. However, the frequently used menus may differ for each user, even within the same application. Thus, the menus should be easily customizable for each user. In One-finger Interaction, the menus, which can be customized per user, are generated at system startup by reading XML files prepared by the user. The XML files contain information about each menu, such as its function, position, display information, and so on.
The user can create XML files by writing them directly; however, only a person who knows how to write XML can do so. We therefore developed a separate interface for menu creation. In this interface, the user can create icons and set their size by dragging and dropping, and can set icon properties, including color and commands, by editing values in a property window. The interface automatically converts the icons the user has made into an XML file.
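Loading such a menu definition at startup amounts to parsing the XML and extracting each menu's function, position, and command. The element and attribute names below are illustrative assumptions, not the prototype's exact schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical menu-definition format with function, position, and command.
MENU_XML = """
<menus application="tv">
  <menu name="change_channel" x="20" y="40" command="channel+"/>
  <menu name="adjust_volume"  x="20" y="80" command="volume+"/>
</menus>
"""

def load_menus(xml_text):
    """Read menu entries (name, position, command) from an XML string."""
    root = ET.fromstring(xml_text)
    return [
        {
            "name": m.get("name"),
            "pos": (int(m.get("x")), int(m.get("y"))),
            "command": m.get("command"),
        }
        for m in root.iter("menu")
    ]

print(load_menus(MENU_XML)[0]["name"])  # -> change_channel
```

The drag-and-drop menu editor described above would simply emit a file in this format, so users never need to touch the XML themselves.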
The system arranges many menus in one window, and the user executes them by selecting among the arranged menus. However, if the user employs them across many applications, it may not be possible to display all menus in the window at once. In One-finger Interaction, the user can switch the displayed menus either manually or automatically.
4. SYSTEM ARCHITECTURE OVERVIEW
We explain the basic structure of the One-finger Interface. Our system consists of one PC and one camera. The system captures an image of the movement of the user's fingertip with the camera, detects the position of the finger, and interacts with the target object. In the case of a large display, the camera is set up in front of the user, as shown in Figure 2. The PC recognizes the finger and interacts with the large display.
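Because the camera faces the user, the fingertip position in camera coordinates must be mirrored and rescaled before it can drive a cursor on the display. The sketch below shows this mapping; it is a hypothetical helper under the assumption of a fixed, axis-aligned camera, not the prototype's actual calibration code.

```python
def camera_to_display(pt, cam_size, disp_size):
    """Map a fingertip position (x, y) in camera coordinates to display
    coordinates. The x axis is mirrored because the camera faces the
    user, so a rightward hand movement moves the cursor right.
    """
    (cx, cy), (cw, ch), (dw, dh) = pt, cam_size, disp_size
    return (int((cw - 1 - cx) * dw / cw), int(cy * dh / ch))

# A fingertip at the camera's left edge lands at the display's right edge.
print(camera_to_display((0, 240), (640, 480), (1920, 1080)))  # -> (1917, 540)
```

With this mapping in place, the rest of the pipeline is a per-frame loop: capture, detect the LED marker, map its position, and feed it to the active menu interface.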
In this experiment, the users selected the menu whose number corresponded to a random number displayed on the screen. We measured the time and error rate of this menu-selection operation. We prepared two sets of 16 and 32 menus. Each menu had a number from 0 to 9, and all menus were assigned a different number. The users selected the menu with the same number as the one displayed, and selected every menu once per trial. In each case, they were allowed two attempts, and they were given time to practice each operation before the trials.

[Figure: average of selection time]
We propose One-finger Interaction as a technique for interacting with an object remotely through movement of a fingertip. In this technique, various interactions can be performed by combining fingertip movement with the menu interface. We also developed two types of menu interfaces as prototypes for One-finger Interaction. One is the crossing-type interface, which allows the user to perform an operation by crossing a menu twice within a short time. The other is the direction-rotate interface, which follows the direction of the fingertip and selects with a circular movement of the fingertip.
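The "direction" half of the direction-rotate interface reduces to classifying a fingertip displacement into one of a few sectors around the circle. The sketch below assumes four directions and a screen coordinate system where y grows downward; the prototype's actual menu layout is not specified here.

```python
import math

def movement_direction(dx, dy, n_dirs=4):
    """Classify a fingertip displacement (dx, dy) into one of `n_dirs`
    directions: 0 = right, counting counter-clockwise (1 = up for
    n_dirs=4). Screen y grows downward, hence the sign flip on dy.
    """
    angle = math.atan2(-dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / n_dirs
    return int(((angle + sector / 2) % (2 * math.pi)) // sector)

print(movement_direction(10, 0))    # -> 0 (right)
print(movement_direction(0, -10))   # -> 1 (up)
print(movement_direction(-10, 0))   # -> 2 (left)
```

The rotate operation would then confirm the highlighted menu, for example by accumulating the angle swept by successive displacements until it exceeds a full circle.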
Future work includes improving the operations and the menu-selection techniques. The direction-rotate interface needs improvement because users took too much time to select a menu. Improvements to the interface-customization tools with which users design their own menus are also needed.