Prototype of an Electric Wheelchair Controlled by Eye Only



An estimated 2% of US citizens live with paralysis, most often caused by stroke, spinal cord injury, or multiple sclerosis. Losing mobility in this way profoundly affects quality of life. An electric wheelchair controlled by eye gaze alone is being developed to give paralyzed persons back their independent mobility.


A prototype of a gaze-controlled wheelchair has been developed using a camera mounted on the user's glasses, a micro-controller, and a Yamaha JW II electric wheelchair. The prototype allows the user to move forward, turn right, or turn left by looking in the corresponding direction for 2 seconds. Several image processing methods estimate the user's gaze, which is then used to control the wheelchair.


The prototype has been tested with five able-bodied users under realistic conditions. Robustness against vibration, estimation accuracy, and illumination changes were also evaluated. In all evaluations, the prototype handled user variation, vibration, and illumination changes.


The prototype demonstrates the feasibility and reliability of eye-gaze input as a means for paralyzed users to control a wheelchair.

I. Background


The development of wheelchairs for paralyzed users is relatively new, and offers a practical alternative for people who cannot use their limbs to operate a wheelchair as their mobility device. Wheelchair development began with the conventional wheelchair, which relies on the user's hands both to propel the wheels and to steer. Although conventional wheelchairs remain available, the electric wheelchair reduces the demand on the user's hands by adding a battery-powered electric motor in place of hand force; a joystick, a hand-operated controller, steers the wheelchair.

Currently, about 1 in 50 Americans lives with paralysis [11][12]. It is most commonly caused by stroke (29%), spinal cord injury (23%), and multiple sclerosis (17%). Diseases or accidents that damage the nervous system can leave a person unable to move their muscles voluntarily. Because voluntary muscle contraction is what moves the body, paralysis can prevent a person from moving the face, arms, legs, or other parts.

Paralysis can be local, global, or follow a specific pattern. Most paralysis is constant, but other forms exist, such as periodic paralysis (mostly caused by genetic disease) and sleep paralysis (in which the brain wakes from REM (Rapid Eye Movement) sleep but the body cannot move for several seconds or minutes).

A person with paralysis often cannot use a typical electric wheelchair, because they cannot operate the joystick with their hands (or other limbs). In such cases, the eyes are often the only organ that can express the user's intent. For these reasons, we develop an electric wheelchair that can be controlled by eye gaze alone.

Related Works

Hands-free assistive wheelchairs can be broadly categorized as follows.

Bio-signal based systems [1][2]. Electrooculography, electroencephalography, electromyography, and other bio-signal instruments acquire signals from the user, which are then used to control the wheelchair. Ref. [1] proposed an electric wheelchair controlled by Electrooculography (EOG). EOG analyzes eye movements through electrodes attached around the eyes, producing two kinds of signals, horizontal and vertical, that reflect eye-muscle activity. Each eye movement has its own signal pattern, so right, left, up, and down movements can be distinguished by analyzing the EOG signal, and the results are used to control the wheelchair. Ref. [2] proposed a wheelchair controlled by muscular and brain signals: EMG and EEG are used together to infer the user's intent. Electrodes are attached to the head, and the output signals are analyzed and converted into wheelchair commands.

Voice based systems. Ref. [3] proposed a wheelchair guided by voice commands. The prototype consists of speech recognition, motor control, user interface, and central processor modules. The speech recognition module recognizes voice commands: the user first records the spoken command associated with each function, after which the system runs in normal operating mode. For example, when the user says "Forward", the wheelchair moves forward; likewise, when the user says "Stop", the wheelchair stops.

Vision based systems [4][5][6] use a camera to capture images of the user and infer intent. Ref. [4] proposed a wheelchair controlled by head gestures: Viola-Jones face detection recognizes the face profile, and gestures such as up, down, left, right, and center command speed up, slow down, turn left, turn right, or maintain speed. Ref. [5] proposed a wheelchair controlled by gaze direction and eye blinking. The gaze direction is expressed as the horizontal gaze angle, derived from the triangle formed by the centers of the eyes and the nose; gaze direction and blinking provide the direction command (the movement direction of the electric wheelchair) and the timing command (when the wheelchair should move). Ref. [6] proposed a wheelchair with two cameras: an indoor camera that monitors wheelchair movement and an on-board camera for obstacle detection. Ref. [7] proposed a gaze-controlled wheelchair in which stereo CCD cameras estimate the user's gaze and head pose, and a range finder senses the surrounding environment.

Systems of type (1) require direct contact with the user, with electrodes attached to the body; they are expensive and inconvenient to use. Systems of type (2) are easy and simple to develop, but speech interference in real environments must be considered. For these reasons, we propose a vision-based wheelchair. The objective of our system is a wheelchair, designed specifically for paralyzed users, that overcomes the problems of previous systems: user variation, vibration, user movement, and illumination changes.

We designed our wheelchair specifically for paralyzed users, who cannot use their hands, feet, body, or head. In most cases, even when the other organs cannot move, the eyes can still express intent. The simplest signal is blinking: by controlling blink duration, users can communicate information to others. However, controlling blinks over a long period tires the eyes, so we use the user's gaze as the information source instead.

Our proposed system is essentially similar to Ref. [7]: both use gaze to capture the user's intent. Ref. [7] uses stereo CCD cameras to analyze gaze, whereas our system uses a single camera mounted on the user's glasses. This lets our wheelchair perform well in real circumstances. Moreover, our system runs on an inexpensive netbook PC, which makes it more marketable from an economic standpoint.

Our proposed system consists of a single infrared camera, a netbook, a micro-controller, and a modified wheelchair. The camera is mounted on the user's glasses so that it follows head movement, allowing the user to move freely; its infrared LEDs also adjust the illumination when the ambient lighting changes. This mounting additionally suppresses vibration, because the user's body absorbs shocks coming from below. After the camera acquires the user's image, image processing methods estimate the gaze: Viola-Jones eye detection, adaptive thresholding, and a Kalman filter are used. A single ultrasonic sensor mounted on the front of the wheelchair avoids collisions. The wheelchair is controlled through an invisible key layout: the user selects turn left, turn right, or go forward by looking at the corresponding key for 2 seconds. "Invisible" means the user knows the key positions without any visible marks. Our wheelchair has no stop key, for safety: whenever the user changes gaze direction, or the system fails to estimate the gaze, the wheelchair stops automatically. With this design the wheelchair moves safely under the control of a paralyzed user, and restores the user's mobility.

This paper is organized as follows: Section 2 describes the proposed system, covering the hardware configuration, gaze estimation method, eye model, key selection flow, and micro-controller circuit; Section 3 describes the experimental results, covering robustness against user variation, noise, and illumination changes, vibration testing, and measurement of wheelchair performance; Section 4 concludes.

II. Proposed System

The utmost requirement for our wheelchair is that the prototype work reliably for all users and under real conditions such as vibration, illumination changes, and user movement, with highly accurate control. The system must also be guaranteed to operate safely. To realize this for paralyzed users, an infrared camera is mounted on the user's glasses. This placement allows user movement, reduces vibration, and lets the infrared LEDs adjust the illumination so that the image output stays stable. It must be paired with accurate image processing to analyze the user's gaze. Our gaze estimation method exploits pupil knowledge: size, color, shape, sequential location, and motion. Once the pupil location is found, a simple eye model converts it into a gaze direction. A micro-controller circuit handles communication between the netbook and the wheelchair, converting RS-232 serial data into wheelchair commands. When the user looks at a key for 2 seconds, the netbook sends a command that moves the wheelchair in the selected direction; when the user changes gaze direction, the wheelchair stops automatically. No stop key is used, for safety. To avoid collisions, an ultrasonic range finder detects obstacles in front; when an obstacle is detected, the wheelchair stops automatically and the user can only turn left or right. A backward key is also omitted for safety, since moving backward is dangerous when the user cannot see behind. Fig. 1 shows a block diagram of our proposed system.

Fig. 1. Block diagram of the proposed system; the micro-controller circuit takes over the role of the original controller of the Yamaha JW II wheelchair.

II.1 Hardware Configuration

Our proposed system uses a NetCowBow DC NCR-131 infrared camera to acquire the user's image. The camera has 7 LEDs that automatically adjust the illumination, giving a stable image even when the ambient lighting changes; using an IR camera thus solves the illumination problem. The camera is mounted on the user's glasses, 15.5 cm from the eye. This distance was found by trial and error so that the camera captures the eye well without obstructing the user's view. The camera sits in front of the eye but slightly above it, again to avoid blocking the view. This placement also reduces vibration: roads naturally make the wheelchair vibrate, but the user's body absorbs the shocks before they reach the camera. The camera position is shown in Fig. 2. An Asus Eee PC 1002HA netbook, based on an Intel Atom N270 CPU (1.6 GHz) with 1 GB memory, a 160 GB SATA HDD, and a small 10-inch display, serves as the main processing device. A Keyspan USA-19Qi USB-to-serial converter provides the serial link, and an AT89S51 micro-controller replaces the original controller. Our software is developed in C++ under Visual Studio 2005 with OpenCV, an image processing library that can be downloaded free from its website. A PING ultrasonic range finder detects obstacles at distances from about 3 cm to 3 m. Fig. 2 shows the prototype hardware.

Fig. 2. Hardware of the proposed wheelchair.

Table 1. Features of the ultrasonic range finder [8]

Supply Voltage: 5 V (DC)
Supply Current: 30 mA (typ), 35 mA (max)
Range: 3 cm to 3 m
Input Trigger: positive TTL pulse, 2 μs min, 5 μs typ
Echo Pulse: positive TTL pulse, 115 μs to 18.5 ms
Echo Hold-off: 750 μs from fall of trigger pulse
Burst Frequency: 40 kHz for 200 μs
Delay before next measurement: 200 μs
Dimensions: 22 mm H x 46 mm W x 16 mm D
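The echo-pulse figures in the table map directly to distance: the pulse covers the round trip of the sound, so the distance is half the pulse width times the speed of sound. A minimal sketch (Python; the speed of sound is assumed to be 343 m/s in air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumption)

def echo_to_distance_m(echo_pulse_s):
    """Convert the sensor's echo pulse width (seconds) to distance (meters).

    The pulse width covers the out-and-back travel time, so divide by 2.
    """
    return echo_pulse_s * SPEED_OF_SOUND / 2.0
```

With the table's limits, a 115 μs pulse corresponds to about 2 cm and an 18.5 ms pulse to about 3.2 m, consistent with the stated 3 cm to 3 m range.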

II.2 Gaze Estimation

To estimate the user's gaze, several image processing methods are used: Viola-Jones eye detection, a deformable template, adaptive thresholding, and a Kalman filter. The flow of gaze estimation is shown in Fig. 3.

Fig.3 Flow of gaze estimation

Estimation starts by detecting the eye location, which is used to lock the estimated eye area. Because the camera is mounted on the user's glasses, once the eye location is known it stays in the same position, so the eye detection step runs only once, at the beginning. The next process detects the pupil location using pupil knowledge: color, size, shape, sequential location, and motion. From the pupil location, the eye model computes the gaze direction. Each process is described in detail below.

Normally, the eye is detected with a deformable template. We capture the eye image, apply a Gaussian smoothing filter, and match the image against the deformable template to find the eye location. This method is fast and works for more users than plain template matching. Because it does not always succeed, Viola-Jones eye detection [9] is used as a backup, taking over whenever the deformable template fails to detect the eye. Viola-Jones detection in the OpenCV image processing library requires an XML cascade file, which can be created by collecting object (positive) and non-object (negative) sample images. The function can be called with the following code:

// OpenCV 1.x C API: detect eye candidates in small_img with a trained Haar cascade
// (scale factor 1.1, at least 2 neighboring detections, no flags, 30x30 minimum window)
CvSeq* objects = cvHaarDetectObjects(small_img, cascade, storage, 1.1, 2, 0, cvSize(30, 30));

Using both methods together makes processing faster and robust against different circumstances. Once the eye location is found, it is used to lock the eye image, so the eye detection step is skipped in subsequent frames.

The next step is pupil detection. We estimate the pupil location using pupil knowledge. To extract it, we use an adaptive threshold to separate the pupil from the other eye components. We set the threshold value T to 0.27% below the mean μ of the eye image I:

T = μ (1 − 0.0027)
The adaptive threshold outputs black pixels that represent the pupil, and a median filter is applied to remove noise. Because the output varies considerably, we classify it into three cases: (1) case 1, when the black pixels clearly represent the pupil without any noise; (2) case 2, when noise appears whose size and shape are almost the same as the pupil's; and (3) case 3, when no pupil property is available to find the pupil location. The three cases are shown in Fig. 4, Fig. 5, and Fig. 6.
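As a minimal sketch of this step (plain Python rather than the C++/OpenCV implementation; the 0.27% figure follows the text above), the adaptive threshold might look like:

```python
def adaptive_threshold(gray):
    """Binarize a grayscale image given as a list of rows of 0-255 ints.

    Pixels darker than T = mean * (1 - 0.0027), i.e. 0.27% below the image
    mean, are marked 1 as pupil candidates; everything else becomes 0.
    """
    pixels = [p for row in gray for p in row]
    t = (sum(pixels) / len(pixels)) * (1 - 0.0027)  # threshold just below mean
    return [[1 if p < t else 0 for p in row] for row in gray]
```

On a toy 2x2 image with one dark pixel, only that pixel survives the threshold.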

Fig. 4. Case 1: the pupil can be distinguished by its shape and size.

Fig. 5. Case 2: the output contains noise of almost the same size and shape as the pupil.

Fig. 6. Case 3: no pupil properties can be used, because there are no black pixels in the output image.

After classifying the adaptive threshold output, we estimate the pupil location with a three-step process based on pupil knowledge. In case 1, the pupil location is easily estimated from shape and size; even if noise appears, the true pupil can still be distinguished by its shape and size. Case 2 is more difficult: noise appears with almost the same size and shape as the pupil, which can happen when the adaptive threshold fails to separate other eye components such as the eyelid and the eye corner. To handle this case, we estimate the pupil from its sequential locations. Every pupil location is recorded as history; when the method is uncertain which candidate is the true pupil, we assume the true pupil is the candidate closest to the previous location.


The reasonable pupil location P(t) always lies in a region of radius C around the previous location P(t−1), i.e. |P(t) − P(t−1)| ≤ C. The last case, case 3, occurs when the pupil moves far in any direction except upward: the visible pupil becomes small and disappears. To handle this case, we estimate the pupil location from its motion, adopting a Kalman filter [10].
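A sketch of the case 2 and case 3 fallbacks (illustrative Python with assumed helper names, not the system's actual code): the history rule keeps the candidate nearest P(t−1) within radius C, and a minimal one-dimensional Kalman filter can track a pupil coordinate when no candidate survives:

```python
import math

def nearest_candidate(candidates, prev, max_dist):
    """Case 2: pick the candidate (x, y) closest to the previous pupil
    location prev, rejecting anything farther than max_dist (radius C)."""
    best, best_d = None, max_dist
    for (x, y) in candidates:
        d = math.hypot(x - prev[0], y - prev[1])
        if d <= best_d:
            best, best_d = (x, y), d
    return best  # None means no plausible pupil in this frame

class Kalman1D:
    """Case 3: minimal random-walk Kalman filter for one pupil coordinate
    (q = process noise, r = measurement noise; both assumed values)."""
    def __init__(self, x0=0.0, q=1e-3, r=1e-1):
        self.x, self.p, self.q, self.r = x0, 1.0, q, r
    def step(self, z):
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= (1.0 - k)
        return self.x
```

In the real system a two-dimensional state (and OpenCV's Kalman filter) would be used; this scalar version only shows the predict-correct structure.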

II.3 Eye Model

To convert the pupil location into a gaze direction, a simple eye model is used. We assume the eye moves like a sphere of radius R. Even though the real movement is not perfectly spherical, the difference has little effect on our method. The pupil is assumed to lie on the front of the eyeball, so when it moves, it follows the sphere's orbit. The pupil movements are modeled as shown in Fig. 7.

(a) In x direction

(b) In y direction

Fig. 7. Eye model: the eye is modeled as a sphere of radius R.

If the distance between the normal (straight-ahead) position and the current pupil location is r, the relation between θx, θy, R, and r follows from the spherical model:

θx = arcsin(rx / R)

θy = arcsin(ry / R)

where rx and ry are the horizontal and vertical components of r.
The final result of the gaze estimation process is (θx, θy). Although this output could serve other purposes, our system requires only three outputs, left, right, and down, computed from the gaze angle. To convert the gaze into a wheelchair command, we threshold the gaze angle: when the user looks left or right beyond the threshold angle, that direction is selected. The same threshold is used for left and right; for the down direction a different threshold is used so that the user's view is not affected.
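A numeric sketch of the eye model and command thresholding (Python; the spherical relation θ = arcsin(r/R) is assumed, and the eyeball radius and threshold angles below are illustrative assumptions, not values from this work):

```python
import math

EYEBALL_RADIUS_MM = 12.0   # assumed sphere radius R
LR_THRESH_DEG = 20.0       # assumed left/right selection threshold
DOWN_THRESH_DEG = 30.0     # assumed (larger) down threshold

def gaze_angles_deg(rx_mm, ry_mm, R=EYEBALL_RADIUS_MM):
    """Pupil offset (rx, ry) from the straight-ahead position -> gaze angles."""
    return (math.degrees(math.asin(rx_mm / R)),
            math.degrees(math.asin(ry_mm / R)))

def gaze_to_command(theta_x, theta_y):
    """Map gaze angles to one of the three invisible keys, or None (stop)."""
    if theta_x <= -LR_THRESH_DEG:
        return "left"
    if theta_x >= LR_THRESH_DEG:
        return "right"
    if theta_y <= -DOWN_THRESH_DEG:
        return "forward"   # looking down selects forward, per the text
    return None
```

For example, a 6 mm horizontal pupil offset with R = 12 mm gives θx = arcsin(0.5) = 30°, which exceeds the left/right threshold.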

II.4 Micro-controller Circuit

After the gaze is estimated and translated into a command, the netbook sends the command to the wheelchair. To interface the netbook with the wheelchair, we replaced the wheelchair's original controller with a new one. As shown in Fig. 1, the new controller consists of a micro-controller, a buffer, and a digital-to-analog circuit. The micro-controller communicates with the netbook over a serial link. Each received command is converted to digital I/O and then to an analog level by the digital-to-analog converter; the analog output drives the wheelchair through its own analog-to-digital converter.
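The paper does not specify its serial protocol, so the framing below is purely hypothetical; it only illustrates the idea of mapping each command to a byte for the RS-232 link:

```python
# Hypothetical one-byte command codes for the serial link (not from the paper)
COMMAND_CODES = {"stop": 0x00, "forward": 0x01, "left": 0x02, "right": 0x03}

def encode_command(cmd):
    """Return the single byte to write to the serial port for a command."""
    return bytes([COMMAND_CODES[cmd]])

# In the real system such bytes would be written to the Keyspan USB-serial
# converter with a serial library, at whatever baud rate the AT89S51 firmware
# expects; the micro-controller then drives the DAC stage accordingly.
```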

II.5 Controlling of Wheelchair

To control the EWC, we designed an invisible layout with three keys: move forward, turn right, and turn left. No stop key is required, for safety, and no screen display is needed; users know the locations of the keys and select them with their eyes alone. For instance, when the user looks at the right key for 2 seconds, the EWC turns right and keeps moving as long as the gaze does not change; as soon as the user changes gaze direction, the EWC stops automatically. This is safer than a stop key: with a stop key, the user would need extra time to hit it, making the EWC less safe. For the same reason we did not include a backward key, since moving backward is too dangerous when the user cannot see the situation behind. In addition to these rules, the EWC also stops when the user looks at a free (unmapped) area. The flow of controlling the wheelchair is shown in Fig. 8.







Fig. 8. Controlling of the wheelchair: every command change passes through the stop state first, so the wheelchair always stops before a new command takes effect.
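The dwell-then-stop behaviour described above can be sketched as a small state machine (Python; the 2-second dwell follows the text, while the tick interval is an assumption):

```python
class DwellController:
    """Select a key after 2 s of continuous gaze; stop on any gaze change."""
    DWELL_S = 2.0

    def __init__(self):
        self.gaze = None      # key currently looked at (or None for free area)
        self.held_s = 0.0     # how long that key has been held
        self.command = "stop"

    def tick(self, gaze_key, dt_s):
        """Feed the current gaze target every dt_s seconds; returns command."""
        if gaze_key != self.gaze:          # gaze changed -> stop immediately
            self.gaze, self.held_s = gaze_key, 0.0
            self.command = "stop"
        elif gaze_key is not None:
            self.held_s += dt_s
            if self.held_s >= self.DWELL_S:
                self.command = gaze_key    # dwell complete: start moving
        return self.command
```

Note that every transition between movement commands necessarily passes through "stop", matching the behaviour shown in Fig. 8.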

III. Experimental Results

To measure the performance of our prototype, several experiments were carried out in our laboratory, both per method and on the integrated system: pupil detection performance, gaze estimation accuracy, influence of illumination changes, influence of vibration, and integrated system performance. The experiments are detailed below.

III.1 Pupil detection performance

To test the performance of our pupil detection, we involved five users of different ethnicity and nationality (Indonesian, Japanese, Sri Lankan, and Vietnamese). Using many samples demonstrates that our method works for many types of user. Eye movement data were collected from each user while they made several eye movements.

Eye images of three Indonesians were collected, as shown in Fig. 9. Even though the images were taken from users of the same country, each person has a different appearance and eye shape.

Fig. 9. Collected images of three Indonesians; the top person has narrow eyes, and the bottom two have wide eyes with clear pupils.

Fig. 10. Collected images of a Sri Lankan user with dark skin and thick eyelids.

Fig. 11. Collected images of a Japanese user with light skin and narrow eyes.

Fig. 12. Collected images of a Vietnamese user.

The Indonesian user with narrow eyes contributed 882 samples, and the other two Indonesians, with wide eyes and clear pupils, contributed 552 and 668 samples. The data collected from the Sri Lankan user (Fig. 10; dark skin, thick eyelids) comprise 828 samples, and from the Japanese user (Fig. 11; light skin, narrow eyes) 665 samples. The last data set was collected from the Vietnamese user, as shown in Fig. 12.

This experiment evaluates pupil detection accuracy and its variance across users by counting successful and failed samples. The accuracy of our method is then compared with an adaptive threshold method and a template matching method: the adaptive threshold baseline combines adaptive thresholding with connected-component labeling, while the template matching baseline matches a pupil template against the images. The robustness of our pupil detection method against different users is shown in Table 2.

Table 2. Robustness of our pupil detection method against different users; the method is robust across users with a high success rate.

User Types | Adaptive Threshold (%) | Template Matching (%) | Our Method (%)

The results show that our pupil detection method has a high success rate and is robust across different users, with a variance of 16.27.

Next, we measured pupil detection performance under illumination changes. An adjustable light source was aimed at the system, and the degradation of the success rate was recorded; illumination was measured with a Multifunctional Environment Detector LM-8000. First, zero illumination was given (dark condition): even with no ambient light, the IR camera, with its seven IR LEDs and light sensor, adjusts the illumination automatically and keeps the output image stable, so the system works without any ambient illumination. However, when strong light hits the camera directly, pupil detection no longer works well. The results of the illumination experiment are shown in Fig. 13.

Fig. 13. Influence of illumination changes. Our pupil detection method works without any illumination, but strong light defeats it; this can happen when sunlight hits the camera directly.

III.2 Influence of Vibration

The objective of this experiment is to show that mounting the camera on the user's glasses gives the major advantage of reducing, and almost eliminating, vibration. Vibration was recorded with a G-MEN DR 10 shock recorder. We compare our camera placement with that of other systems [5][7]: in Refs. [5] and [7] the camera is placed on the wheelchair, as shown at point 2 of Fig. 14, whereas our camera is mounted on the user's glasses, as shown at point 1 of Fig. 14.

Fig. 14. Placement of the camera: other systems put the camera at point 2, while we mount ours on the user's glasses (point 1).

To compare the two placements, two shock recorders were attached at point 1 and point 2. With both recorders running, the wheelchair was driven over a step, and the vibration at each point was recorded. The vibration at point 2 is shown in Fig. 15, the vibration at point 1 in Fig. 16, and the comparison between the two in Fig. 17.

Fig. 15. Vibration at point 2; the vibration there is large.

Fig. 16. Vibration at point 1; the vibration there is small.

Fig. 17. Vibration reduction: placing the camera at point 1 yields a large reduction in vibration.

The vibration is reduced because the user's body is elastic and acts like a spring. To explain the reduction, we model the camera placement as shown in Fig. 18.

Fig. 18. Vibration model: point 1 has more springs between it and the road than point 2.

If the mass of the wheelchair is m2 with spring stiffness k2, and the mass of the user is m1 with spring stiffness k1, the static force balance is:

Fs = k1x1 + m1g + k2x2 + m2g,

where g is gravitational acceleration. This equation shows that two spring stiffnesses, k1 and k2, absorb the vibration. When the camera is at point 2, only one of the springs separates it from the road; at point 1, both springs act in series. That is why the camera placed at point 1 is more robust against vibration.
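One way to make this concrete (a rough illustration, not the paper's analysis; the stiffness values are assumptions) is to note that two springs in series are softer than either alone, and a softer suspension isolates the camera better:

```python
def series_stiffness(k1, k2):
    """Effective stiffness of two springs in series: 1/k = 1/k1 + 1/k2."""
    return (k1 * k2) / (k1 + k2)

# Assumed stiffnesses (N/m): a soft "body" spring k1, a stiffer "tire" spring k2
k1, k2 = 2_000.0, 20_000.0
k_point1 = series_stiffness(k1, k2)  # camera on glasses: both springs act
k_point2 = k2                        # camera on wheelchair: one spring only
# k_point1 (about 1818 N/m) is far softer than k_point2, so road shocks are
# attenuated much more strongly at point 1, matching Figs. 15-17.
```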

III.3 Testing of Integrated System

The objective of this experiment is to examine the whole wheelchair while users ride it. All functions (go forward, turn left, turn right, and stop) are used. Starting from the start line, the user rides the wheelchair, controlled by eye only, to the finish line, and the elapsed time is recorded. The course used for this experiment is shown in Fig. 19.

Fig. 19. Course map: the user rides the wheelchair from the start line to the finish line, going forward, turning left, and turning right.

This experiment involved five users, including an expert user (who had ridden before) and beginners (who had not). Before the experiment, the users practiced: we explained and demonstrated how to ride the wheelchair, since like any vehicle, such as a motorcycle or car, it takes practice. About ten minutes of free practice, turning left and right and going forward at will, was enough; once a user felt in control, the experiment began. The user starts at the start line and the stopwatch is started. The user moves forward by looking down, and the wheelchair keeps moving as long as the gaze does not change. However, users inevitably blink when their eyes tire, and each blink stops the wheelchair automatically. Although this slows the ride, we chose this rule for safety; after a blink stops the wheelchair, the user simply looks down again to continue. After going forward about 3.5 meters, the user stops and turns left by looking left, goes forward about 1.9 meters, stops, turns right by looking right, and goes forward about 3 meters. When the user crosses the finish line, the time is recorded, as shown in Table 3.

Table 3. Time for each user to ride from the start line to the finish line; users can ride our wheelchair easily even if they have never ridden it before.


Time (second) | Type of user
We also compared the times for the eye-based and hand-based systems on the same course as Fig. 19. When the user controlled the wheelchair by hand, the course took 23 seconds, almost four times faster than our system. Nevertheless, the eye-based controller can serve as an alternative to the hand-based controller when the hands cannot be used.
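For context, a small calculation from the figures in the text (the course length is summed from the segment distances given above; the eye-control time is only inferred from the "almost four times" statement, since Table 3's values are not reproduced here):

```python
course_m = 3.5 + 1.9 + 3.0          # forward segments described in the text
hand_time_s = 23.0                  # hand (joystick) control time from the text
hand_speed = course_m / hand_time_s # roughly 0.37 m/s average by hand
# "Almost four times faster" implies the eye-controlled runs took on the
# order of 4 * 23 s, i.e. about 90 s, on this course.
eye_time_estimate_s = 4 * hand_time_s
```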

After the users rode our wheelchair, we briefly interviewed them about how it felt. Almost all said it was easy to control even on a first ride. They could also use their eyes freely in HOLD mode: when the user looks upward, the system freezes and ignores all eye movements. This mode is helpful when the user's eyes become tired and need rest, or when the user wants to look around freely.

IV. Conclusion

A prototype electric wheelchair controlled by eye only, for paralyzed users, has been successfully realized. Mounting the IR camera on the user's glasses has major benefits: it allows user movement, keeps the illumination stable, and eliminates vibration. Our pupil detection based on pupil knowledge detects the pupil almost perfectly across different users, and our control method makes the wheelchair easy to operate. Moreover, the combination of this control method with ultrasonic obstacle detection makes the wheelchair safe to ride.