The design of the Invisible Multitask Mouse presents a new approach to controlling mouse movement with a real-time camera by means of human-computer interaction technology. We propose to change the hardware design by replacing the mouse completely. Our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to perform mouse tasks (left and right clicking, double-clicking, and scrolling), and we show how it can perform everything current mouse devices can.
The Invisible Multitask Mouse is a simple prototype built to replace the mouse hardware with the palm, using gesture recognition algorithms to perform the various mouse operations.
Finger gestures are used to perform mouse operations, and palm gestures are used as shortcuts to open drives.
Keywords- HCI, mouse replacement, palm gesture recognition, real-time.
With the enhancements in touch-screen technology, IT is taking a step forward by embedding sixth-sense functionality in computers using a web-cam and other electronic devices. The Invisible Multitask Mouse is our contribution to the human-computer interaction domain: it carries out the various tasks of a mouse without using one. The aim of our project is to perform the various functions of a mouse from some distance away from the computer, with convenience and efficiency, at the lowest possible cost.
As the name suggests, the Invisible Multitask Mouse performs the various functions of a mouse using hand gestures. The project is a simple system that simulates the mouse through computer vision of hand gestures. It is a kind of virtual-reality system built with a low-level programming language and pre-existing libraries, and it is a real-time application.
Our project is divided into two parts:
1. Mouse operations: moving the cursor, left and right clicking, double-clicking, and scrolling; more functionality can be added.
2. Shortcut operations: using hand gestures to perform actions such as opening VLC Player, Ovi, or the C: and D: drives.
The hardware consists of a single device: a web-cam that tracks the position of a red fingertip. The camera is positioned so that it recognizes the movement of the fingertips and performs the operations stated above.
In our application we make use of the fingers. When the user performs a hand gesture, the web-cam tracks it and obtains its X and Y coordinates. Once it has the coordinates, it moves the cursor by feeding them to the system. The left-click, right-click, and double-click operations are performed using a set of predefined gestures.
Shortcut operations use predefined gestures to perform actions such as opening a browser, a media player, or a drive. The purpose of providing such operations is that, instead of moving the cursor to a particular application or file, we can open it directly with a shortcut.
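The gesture-to-shortcut binding can be sketched as a simple lookup table. The gesture names and launch commands below are illustrative assumptions, not the paper's actual values:

```python
import subprocess

# Hypothetical mapping from recognized palm gestures to shortcut commands.
# The gesture names and commands are illustrative, not the paper's actual values.
SHORTCUTS = {
    "open_palm":   ["explorer", "C:\\"],   # open C: drive
    "two_fingers": ["explorer", "D:\\"],   # open D: drive
    "fist":        ["vlc"],                # open VLC Player
}

def dispatch(gesture, launch=subprocess.Popen):
    """Launch the application bound to a recognized gesture, if any.

    Returns the command that was launched, or None for unknown gestures.
    """
    cmd = SHORTCUTS.get(gesture)
    if cmd is not None:
        launch(cmd)
    return cmd
```

Passing a different `launch` callable makes the dispatcher easy to test without actually starting applications.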
The main virtue of this project is that it replaces the mouse through mere simulation. One can also operate several features while maintaining some distance, within a certain range.
This project can be further modified and augmented to include more features.
II. EXISTING SYSTEM
2.1 HCI technology
Human-computer interaction (HCI) is an area of research and practice that emerged in the early 1980s, initially as a specialty area in computer science. HCI has expanded rapidly and steadily for three decades.
Until the late 1970s, the only humans who interacted with computers were information technology professionals and dedicated hobbyists. This changed disruptively with the emergence of personal computing around 1980. Personal computing, including both personal software (productivity applications, such as text editors and spreadsheets, and interactive computer games) and personal computer platforms (operating systems, programming languages, and hardware), made everyone in the developed world a potential computer user, and vividly highlighted the deficiencies of computers with respect to usability for those who wanted to use computers as tools.
2.2 Gesture Recognition
One technique, based on accuracy, relies on human physiological phenomena. It is implemented in two ways: measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking, and measuring physical changes, such as sagging posture, leaning of the driver's head, and the open/closed state of the eyes. This technique, while the most accurate, is not realistic, since sensing electrodes would have to be attached directly to the driver's body and would hence be annoying and distracting to the driver. In addition, long periods of driving would result in perspiration on the sensors, diminishing their ability to monitor accurately.
2.3 Portable Mouse
Genius has announced the release of its Wireless Thumb Cursor Controller, or ring mouse for short. The clicking and scrolling actions of the lightweight mouse replacement are thumb-controlled, its proprietary optical touch technology offers users 1000 dpi sensitivity, and it is said to last a month between charges.
The 1.15 x 1.32 x 1.25-inch (29.3 x 33.7 x 32 mm), 0.42 ounce (12 g) Ring Mouse has 2.4 GHz wireless connectivity with a range of around 30 feet (10 meters), and links to a USB nano/pico receiver slotted into a spare port on a Windows-based computer or laptop. Worn on the index finger, it places the left and right click controls and optical touch tracking around the top for thumb control.
2.4 Gesture Detection using Palm
Universal Gesture Mouse is a new way to control your Windows 7 PC using the Kinect for Windows sensor. It is a lightweight application that complements the traditional mouse, keyboard, or touchscreen to control your cursor with intuitive and easy-to-use hand gestures. The Universal Gesture Mouse will work with nearly any Windows 7 program to create a fun and engaging experience in a wide variety of applications.
Gesture Mouse is built on top of the Kinect for Windows platform. The Universal Gesture Mouse takes advantage of the Kinect camera technology to track the user's body position, specifically the hands. A virtual trackpad is created in front of the user, mapping the position of the hand in space to the position of the cursor on the display. The Universal Gesture Mouse can issue left-click, click-and-hold, and click-and-drag commands via several different click modes.
All you need to get started is a Windows 7 PC, a Kinect for Windows sensor, and the free evaluation version of the Universal Gesture Mouse software.
Gesture Mouse provides a great starting point for designers looking to spice up their user experiences, for businesses or organizations that would like to differentiate themselves or their products, or for software developers that would like to jump start their own unique applications.
III. PROPOSED SYSTEM
The Invisible Multitask Mouse aims to remove the hardware altogether, using finger gesture recognition and a webcam. It is a kind of virtual-reality system built in Python using the OpenCV library.
To run the application you need to have:
OpenCV library with wrappers for .net
The features it supports are a set of mouse functions and character recognition, which make browsing easier and less time-consuming.
The flow diagram for tracking hand and its movements is as follows:
Fig.3.1 Flow chart of the proposed system
System has following control flow:
1. INPUT IMAGE:- The input is an image frame captured from the web-cam.
2. IMAGE RESIZE:- To recognize a hand gesture, we first resize the input image in order to map camera coordinates to screen coordinates.
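The camera-to-screen mapping itself is a proportional scaling. A minimal sketch, assuming a 640x480 camera frame, a 1920x1080 display, and a mirrored x axis (all illustrative values, not the paper's):

```python
def camera_to_screen(x, y, cam_size=(640, 480), screen_size=(1920, 1080)):
    """Scale a fingertip position from camera coordinates to screen coordinates.

    The x axis is mirrored so the cursor follows the hand like a mirror image.
    """
    cam_w, cam_h = cam_size
    scr_w, scr_h = screen_size
    sx = int((cam_w - 1 - x) * scr_w / cam_w)  # mirrored horizontally
    sy = int(y * scr_h / cam_h)
    return sx, sy
```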
3. SEGMENTATION:- We need to separate the hand area from a complex background. It is difficult to detect skin color in natural environments because of the variety of illumination conditions and skin colors, so we need to pick the color range carefully.
To get better results, we convert from the RGB color space to the YCbCr color space, since the chrominance channels of YCbCr are less sensitive to illumination variation.
Fig3.2 Input and Segmentation
4. DENOISE:- With this approach alone we cannot get a good estimate of the hand image, because of background noise. To get a better estimate of the hand, we need to delete noisy pixels from the image. We use an image morphology algorithm that performs image erosion followed by image dilation to eliminate noise. Erosion trims away image area where the hand is not present, and dilation expands the area of the image pixels that were not eroded.
5. COMPUTE CENTER:- After segmenting the hand region from the background, we calculate the center of the hand. We then compute the radius of the palm region to obtain the hand size: starting from the center coordinate, we draw circles of increasing radius until a circle meets the first black pixel, and then return the current radius value. This algorithm assumes that when the growing circle first meets a black pixel, the distance from the center is the radius of the palm. Image segmentation is therefore the most significant part: if some black pixels near the center are caused by shadows or illumination, the algorithm meets them earlier than the real background, and the estimated hand region becomes smaller than the real hand.
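The circle-growing search can be sketched as follows; using the centroid as the hand center and sampling 72 points per circle are illustrative choices:

```python
import numpy as np

def hand_center_and_radius(mask):
    """Centroid of the white (hand) pixels, plus the palm radius found by
    growing a circle from the center until it first meets a black pixel."""
    ys, xs = np.nonzero(mask)
    cx, cy = int(xs.mean()), int(ys.mean())
    h, w = mask.shape
    max_r = min(cx, cy, w - 1 - cx, h - 1 - cy)
    angles = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    for r in range(1, max_r + 1):
        px = (cx + r * np.cos(angles)).astype(int)
        py = (cy + r * np.sin(angles)).astype(int)
        if np.any(mask[py, px] == 0):   # circle has reached the background
            return (cx, cy), r
    return (cx, cy), max_r
```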
Fig3.4 Computing center
6. GET FINGER TIPS:- To recognize whether a finger is inside the palm area or not, we use a convex hull algorithm. The convex hull algorithm solves the problem of finding the smallest convex polygon that includes all the vertices. Using this feature of the algorithm, we can detect fingertips on the hand and recognize whether a finger is folded or not. To recognize these states, we multiply the hand radius value by 2 and check the distance between the center and each pixel in the convex hull set. If the distance is longer than this doubled radius, the finger is spread. In addition, if two or more such points exist, we regard the farthest vertex as the index finger, and the hand gesture is treated as a click when the number of resulting vertices is two or more.
Fig3.5 Getting finger tips
The architecture of the gesture recognition system is explained using the layered architecture shown in Fig 4:
1. Interface layer: This layer provides the features through which users interact with the system. It has two main purposes:
a) Input Area: allowing users to give data for manipulation
b) Display Result: allowing system to indicate the effects of user's manipulation
Generally, the goal of interaction design is to produce a user interface that makes it easy, efficient, and enjoyable to operate the machine in a way that produces the desired result, so the operator needs to provide only minimal input to achieve the desired output.
2. Process layer: In this layer we define the process steps, the sequence in which they are executed, the roles that execute them, and how the context data of the process is passed between steps. In the invisible mouse system, the gesture is captured by the camera and recognized by the system.
3. Data Manipulation layer: In this layer we manipulate the data, converting the gesture provided by the user into a meaningful command and then processing that command.
4. Data layer: In this layer we save the input given by the user and the result obtained from the system, so that it is easy to make corrections if any error occurs in the system.
Fig4 Architecture for gesture detection
V. SYSTEM REQUIREMENTS
5.1 Hardware Requirements:
Camera Unit- Web-cam or external camera connected to the computer, used for capturing the image of the user (palm gesture).
Computer System- Minimum requirement: 1.8 GHz processor, 512 MB RAM.
5.2 Software Requirements:
1. OpenCV (Back-end)
2. C#.net (Front-end)
3. Operating System- Windows XP or a higher version
We developed a system to control the mouse cursor using a real-time camera. We implemented all mouse tasks, such as left and right clicking, double-clicking, and scrolling. The system is based on computer vision algorithms and can perform all mouse tasks. However, it is difficult to get stable results because of the variety of lighting conditions and skin tones; most vision algorithms have illumination issues. From the results, we can expect that if the vision algorithms can work in all environments, our system will work more efficiently.
This system could be useful in presentations and for reducing work space. In the future, we plan to add more features, such as enlarging, shrinking, and closing windows, by using the palm and multiple fingers.