Aims And Objectives Of Haptic Feedback Technology Computer Science Essay



Haptic feedback technology has been increasingly used in areas of industrial automation, teleoperations and serious gaming. Supplementing the conventional visual and audio information with a force and tactile sense of a remote or virtual environment has been shown to increase a user's sense of immersion in a virtual simulation and to improve control of a remote device in telerobotic operations. However, to date, there is neither a standardised general purpose solution nor an affordable, intuitive hand controller system that can provide haptic feedback and control of more than three degrees-of-freedom (DOF), although numerous prototypes have been developed in support of the space and nuclear industries. The present paper proposes a hand controller allowing intuitive manipulation of multiple degrees-of-freedom and providing force and tactile feedback to the operator. By combining two Commercial-Off-The-Shelf (COTS) game controllers it is intended to be an affordable, easy to use, and practical alternative to the expensive, task-specific, haptic devices currently available. The hand controller is intended to provide adequate control for a range of dextrous manipulation tasks, such as controlling the robotic arm of a bomb disposal robot, or controlling a Remotely Operated Vehicle (ROV). It is proposed that the haptic feedback features of the design will enable users to negotiate, and gather information about, a remote environment under degraded visual conditions. The following paper details the progress made in designing and building the hand controller and virtual environment to visualise the project. Evaluation of the hand controller was performed by allowing users to explore a virtual environment and analysing their feedback. Future work and development is discussed including potential routes to exploiting and commercialising the results of this project.

Aims and Objectives

The aims of the project were to exploit COTS devices to design and build a hand controller system capable of controlling a virtual object in 5 DOF, whilst displaying force and tactile information to the user. Haptic feedback was used to allow users to explore a virtual environment through interactions between the controlled object and its virtual surroundings.

The hand controller designed had to be affordable, intuitive to use, and have the potential to be applied to a range of applications. For the purposes of evaluation a virtual environment was created. However, it was intended that the system could be used in 'real world' situations to control a device exploring a remote environment. In these situations the haptic feedback generated by the hand controller must provide adequate sensory information to be beneficial to the user, improving performance in tasks where visual cues may be of poor quality.


Haptic feedback, or haptics, commonly refers to the process by which the force and tactile information of a remote environment, real or virtual, is displayed to a user via a human-machine interface (Burdea and Coiffet, 2003). Information about a remote environment is gathered via sensors on a remote device and used to actuate motors in the operator control unit or hand controller. The forces and vibrations imposed on the user are intended to reflect the force and tactile information of the remote device's surroundings (Siciliano and Khatib, 2008). Force information can include an object's weight, hardness and inertia, whilst tactile information may consist of surface information such as object smoothness (Burdea, 1999).

Generating haptic data alongside conventional visual-audio displays has been shown to help users control objects or tools in a virtual environment (Srinivasan and Basdogan, 1997) and to improve the manipulation of remotely operated telerobots in real world situations (Howe, 1994). Maclean (2000) suggests that because of these qualities, using haptic feedback is often required to give the best design solution, hence its inclusion in many applications. These include surgical applications, bomb disposal robots, exoskeletons for defence, and remotely operated vehicles for subsea, space and nuclear waste handling tasks (Stone, 2000). Other applications range from the entertainment industry, where haptics is used in computer games to make them more realistic and immersive, to education, computer-aided design and industry (Siciliano and Khatib, 2008).

Using robots for surgical operations allows smaller, nimbler equipment to be used. This results in smaller incisions being made, thus less surrounding tissue is damaged and the recovery time for patients is reduced. Haptic feedback is crucial for many telerobotic surgery applications, such as minimally invasive surgery. Surgeons often have to palpate organs and tissue during operations to gain sensory information which could not be provided through any other means. Surgical robots are able to gather force and tactile information throughout an operation and a haptic feedback hand controller displays this information to a surgeon (Rovers, 2002). Bello et al (2010) discuss how important haptic feedback is to medical simulations using virtual environments. They conclude that a "good use of haptics" in training simulations is necessary to improve performance of operations that require a reaction to haptic prompts. It is important, however, that the haptic feedback accurately models the expected force response, and that the haptic device is able to display these forces at an appropriate resolution. In order for a user to differentiate between the tactile properties of an environment, the haptic refresh rate must be at least 1 kHz, compared to a graphical update rate of 30 Hz (Vafai and Payandeh, 2009). This means that for high fidelity force feedback the motors of a haptic device must be capable of updating 1000 times a second.
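These two update rates imply a dual-rate architecture, with the haptic loop running on its own thread at a much higher frequency than the graphics loop. A minimal, hypothetical sketch of such a fixed-rate loop is given below; the `haptic_step` body is a placeholder, not an actual device driver call.

```python
import threading
import time

HAPTIC_HZ = 1000   # haptic loop: ~1 kHz needed for convincing tactile detail
GRAPHICS_HZ = 30   # graphics loop: ~30 Hz is sufficient for smooth visuals

class RateLoop:
    """Run step() at a fixed rate until stop() is called."""
    def __init__(self, step, hz):
        self.step = step
        self.period = 1.0 / hz
        self.iterations = 0
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            start = time.perf_counter()
            self.step()
            self.iterations += 1
            # Sleep away whatever time remains in this cycle.
            remaining = self.period - (time.perf_counter() - start)
            if remaining > 0:
                time.sleep(remaining)

    def stop(self):
        self._stop.set()

def haptic_step():
    pass  # placeholder: read device position, compute force, command motors

loop = RateLoop(haptic_step, HAPTIC_HZ)
t = threading.Thread(target=loop.run)
t.start()
time.sleep(0.05)   # let the haptic loop run briefly
loop.stop()
t.join()
print(loop.iterations)  # number of haptic updates completed
```

A real implementation would run the graphics loop in the same fashion at `GRAPHICS_HZ` on the main thread, reading the shared simulation state the haptic thread updates.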

In teleoperation applications, such as Explosive Ordnance Disposal (EOD) operations (Kron and Schmidt, 2004), space operations, and the handling of hazardous materials (Stone, 2000), the user's ability to sense objects and their surroundings is extended to a remote location. In these applications, an appropriate use of haptics can help reduce the need for a human operator to enter a dangerous environment by improving control of a remote robot or device (Howe, 1994).

Modern bomb disposal robots, such as the Cutlass, have robotic manipulators capable of up to 9 DOF. These necessitate a hand controller that is easy to use and capable of high degree-of-freedom input. The project presented intends to address both these issues by producing a device, capable of control in more than three degrees-of-freedom, which is intuitive to use and can be applied to a range of tasks.

In many of the applications described haptic feedback is important because the tasks are often carried out under degraded visual conditions (Nof, 1999). This is very apparent in the control of bomb disposal robots, where a lack of visual information could, for example, be caused by poor camera positioning, low picture quality, a poor connection between the user and the remote device, or tasks being carried out in low light conditions (Barnes and Counsell, 2003). In these situations the extra sensory data provided by the force feedback allows operators to use a remote device to explore an environment whilst maintaining a high degree of control. Hokayem and Spong (2006) also suggest that, due to the limited real-time visual data from cameras mounted on mobile robots in a remote location, force information not only helps the operator understand the robot's environment but also reduces the need for high quality visual feedback and displays.

Evidence from Tavakoli and Aziminejad et al (2007) and Munglam and D'Amelio et al (2006) suggests that when haptic information is displayed alongside "perfect" visual feedback during a teleoperation task, there is little difference in performance compared to when visual feedback is displayed alone. However, when the user was presented with a degraded or erroneous visual display, the number of errors incurred was greatly reduced by the introduction of haptic feedback.

This research suggests that a hand controller capable of displaying haptic feedback to an operator would be very beneficial in real world teleoperation applications in which devices are operated in a remote environment. It has been shown that force and tactile feedback is of even greater importance in controlling these devices when the visual displays available are of poor quality.

Haptic feedback is an integral part of virtual reality interactions. Srinivasan and Basdogan (1997) suggest that the sense of touch is essential to realising the full potential of virtual environments, providing "a sense of immersion in the environment that is otherwise not possible". They conclude that greater immersion is probably achieved by using simple haptic interfaces with visual displays, rather than by using improved visual displays alone. Insko et al (2001) also showed that using simple passive haptic objects to support a visual virtual environment can improve a user's sense of presence in the virtual world.

Furthermore, Mason and Walji (2001) investigated the effects on a user's performance of adding or removing visual and haptic feedback from a task. Specifically, the tasks involved "reach-to-grasp" movements towards virtual objects. It was found that visual cues of the moving limb, or virtual object being controlled, and accurate haptic information about virtual objects are particularly important to achieve good performance in virtual environment tasks.

Interactive 3D and serious-games based training has used haptic feedback to increase both the physical and psychological fidelity of simulations (Stone, 2008). These simulations range from medical training simulations to EOD operations and firearms training. Stone (2008) outlines a series of human factors to consider and guidelines to follow when developing a virtual simulation and how these may affect the choice of operator interface or hand controller used.

Current hand controller solutions for implementing haptic feedback include bespoke or application specific controllers, high end haptic devices, such as those from Force Dimension, and Commercial-Off-The-Shelf (COTS) devices. Brooks and Bejczy (1985) also investigated a number of prototypes that have been developed in support of the space and nuclear industries. They suggest that currently there is no general purpose hand controller solution, but rather present evidence in favour of particular hand controller designs over others. For instance, a joystick hand controller design facilitates a stronger grasp by the user compared to many other hand controller designs.

Bespoke solutions, such as the Harris advanced haptic arm and Operator Control Unit (OCU) (Figure 1 [1] ), are built for a specific task or application in mind. Because of this they cannot be applied to a range of situations. For example, the Harris OCU is configured for control of a particular robotic arm for remote dexterous manipulation tasks. Control of the robot supporting the arm is performed with a separate hand controller. Bespoke solutions are often expensive, and may require a degree of training before basic tasks can be performed.

High end haptic devices, such as those from Inition, on the other hand, offer control and force feedback in a high number of DOF, can be applied to a range of applications, and are considered to be very accurate. However, they are very expensive, ranging from £6,000 for the Omega device from Force Dimension (Figure 2 [2] ) to £70,000 for the Haption Virtuose. Because of their price these devices have not become widely used outside of laboratories and research applications.

A very popular user interface for virtual reality interactions is the Phantom Omni from SensAble Technologies [3] (Figure 3) (Burdea, 1999; Stone, 2000). This off-the-shelf desktop device provides six DOF input and three DOF haptic feedback. The stylus design makes it well suited for single-point interactions within a virtual environment. In such interactions all forces are calculated at a single point and communicated to the user through an interface, normally with a stylus or pen design (Okamura, 2004). The forces calculated and displayed to the user are only translational forces along the x, y and z axes. No rotational, or torque, forces can be displayed to the user through single-point interactions (Otaduy and Lin, 2006; 2008).

COTS haptic devices which are not as widely used include haptic gloves, such as the Cyberforce System available from Inition, an example of a wearable haptic interface. Although these gloves provide multiple points of interaction, allowing more dextrous manipulation of 3D objects, they are expensive, have complex user interfaces and tend to be quite heavy (Burdea, 2000; Burdea and Coiffet, 2003). Haptic gloves must be calibrated and reconfigured for each user and cannot display the weight of a remote object (real or virtual); making them inappropriate for many applications.

Force feedback devices are common in the field of entertainment, in particular video and computer games, helping to create a much more immersive feel for the user. These haptic devices tend to be quite simplistic, providing few degrees-of-freedom and crude feedback in the form of vibrations. However, their affordability and intuitive control makes them a keen area of research and development.

The Falcon game controller (Figure 4), developed by Novint [4] , has taken haptics for entertainment purposes a step further, producing a game controller with high resolution 3 DOF input and force and tactile feedback. Similar to the Phantom Omni, it is a single-point-interaction device; however it is relatively inexpensive and still provides simulation of objects, textures, recoil effects etc. in a 3D virtual environment. For these reasons, the Falcon is being used as the initial controller to be developed. Two Falcon hand controllers will be used to construct an advanced hand controller capable of input and output in more than 3 DOF, making it more suitable for tasks requiring dexterous manipulation.

Figure 4: Novint Falcon games controller [5] 

There are examples of the Novint Falcon being used outside of the gaming and virtual reality industries. Martin and Hillier (2009), for instance, assess the Falcon's potential for use as a robotic controller. In their research, they characterise the Falcon device in terms of its geometric and inertial constraints, providing useful information on the controller's range of motion, or workspace, and its resolution. The authors conclude that the Falcon is an appropriate device for robotic control; however, its fitness for purpose depends very much on the tasks undertaken. Chotiprayanakul and Liu (2009), and Linda and Manic (2009), also both propose methods of using the Novint Falcon to interface with remote telerobots. This research supports the idea that the hand controller system proposed in this paper could potentially be used to control remote devices in real world applications.

Lange and Flynn et al (2009) undertook usability assessments of COTS video games consoles and devices for rehabilitation purposes. The devices investigated included the Playstation EyeToy and Nintendo Wii-mote interaction devices and the Novint Falcon. When participants were asked about their experiences it was reported that the Falcon provided the best feedback and was better suited for rehabilitation of motor skills. However, participants found it difficult to grasp and described its workspace as restrictive, not allowing operators to use their whole arm when controlling the device.

The hand controller design proposed couples two Falcon controllers together to be controlled with a single joystick hand controller. This eliminates the ball grip, making grasping the device easier and enabling longer use of the hand controller before user fatigue becomes a problem. The two Falcon devices are also arranged in such a way to maximise the workspace of the hand controller system.

Coupling haptic devices

Shah and Teuscher et al (2010) provide an example of how two Novint Falcons can be combined to build a five-degree-of-freedom haptic device (Figure 5). In the configuration shown, secondary functions provided by the additional buttons on the Falcon ball grip would be difficult to implement, and grasping the stylus in such a way would cause user fatigue (Brooks and Bejczy, 1985). Nevertheless, combining two Falcon devices to produce a single five-degree-of-freedom haptic feedback controller is of particular interest to this project.

Figure 5: Combined Falcons controlling a stylus in 5 DOF (Shah and Teuscher et al, 2010)

Other examples of two haptic devices being combined include Krause and Neumann et al (2005) and Iwata (1993). In both these examples, two haptic devices capable of 3 DOF input and output are coupled together, with a joint end effector, to produce devices which can control a virtual object in 6 DOF, and display force and torque output to the user.

Krause and Neumann (2005) describe the rotational forces, or torque, caused by two forces acting on either side of a pen (Figure 6). The forces FA and FB are equivalent to the forces produced by each of the haptic devices acting on either end of a joint end effector. Without adding additional hardware, coupling the two devices together would produce a device capable of 5 DOF output; three translational forces and two rotational forces. The third rotational force shown in Figure 6, representing the torque along the long axis of the end effector joining the two devices, would not be felt by a user. In order to rectify this, the authors added an additional motor and encoder within the end effector which rotates about its long axis providing "the missing third rotational degree of freedom".

Figure 6: Rotational degrees of freedom (Krause and Neumann, 2005)

The force diagrams in Figure 6 show that a torque can be applied to an end effector, and thus felt by the operator, by applying equal and opposite forces at its two ends, each perpendicular both to the axis of rotation and to the line joining the point of application to the point where the torque is to be felt.

Using this method, a desired, known torque can be displayed to the user by calculating and applying the appropriate forces to the Falcon devices at either end of the joint hand controller.
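Under the assumption that the two Falcons grip the end effector at points +r and -r from its centre, this decomposition can be sketched with NumPy as below. The attachment geometry and torque values are purely illustrative, not measurements of the actual device.

```python
import numpy as np

def torque_to_force_pair(torque, r):
    """Split a desired torque into equal and opposite forces applied
    at +r and -r from the end effector's centre.

    The pair (F at +r, -F at -r) produces a net torque of 2 r x F and
    zero net translational force. Solving 2 r x F = torque, for torque
    perpendicular to r, gives F = (torque x r) / (2 |r|^2).
    """
    F = np.cross(torque, r) / (2.0 * np.dot(r, r))
    return F, -F

# Illustrative geometry: attachment points 0.1 m either side of centre,
# with the joystick axis along y.
r = np.array([0.0, 0.1, 0.0])        # metres
torque = np.array([0.5, 0.0, 0.0])   # desired torque in N.m, perpendicular to r

F_top, F_bottom = torque_to_force_pair(torque, r)

# Verify: sum the moments of the two forces about the centre.
net_torque = np.cross(r, F_top) + np.cross(-r, F_bottom)
print(F_top)       # force commanded to the upper Falcon
print(net_torque)  # recovers the requested torque
```

Note that only the component of the torque perpendicular to r can be reproduced this way; the component along r corresponds to the roll about the joystick's own axis, which, as discussed above, the coupled devices cannot display.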

Haptic Rendering

In order to include haptic data in virtual simulations the concept of haptic rendering must be understood. This is the method of "computing and generating forces in response to user interactions with virtual objects" (Salisbury et al, 1995). Unlike other modalities, such as vision and audio, force information is exchanged in two directions; both to and from the user. Conti and Barbagli (2004) describe a typical architecture for haptic rendering in a virtual environment (Figure 7). It shows that haptic rendering can be implemented in three stages:

Collision detection algorithms determine when there is contact between the virtual representation of a device, the avatar, and other objects in the virtual environment.

Force response algorithms calculate the expected forces resulting from these collisions and return them as force and torque vectors. These vectors are also passed to the graphics simulation engine so the effects of these interactions can also be rendered graphically and the objects are seen to move as expected.

Control algorithms display the calculated force to the user via the haptic device (Conti and Barbagli, 2004).
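These three stages can be illustrated with a single haptic update step. The sketch below uses a hypothetical scene consisting of one horizontal plane and a linear spring force model; the stiffness, force limit and function names are illustrative assumptions, not the H3D implementation.

```python
import numpy as np

STIFFNESS = 800.0   # N/m, illustrative surface stiffness
FLOOR_Y = 0.0       # a horizontal plane at y = 0 is the only obstacle

def collision_detect(avatar_pos):
    """Stage 1: how far the avatar has penetrated the floor plane."""
    depth = FLOOR_Y - avatar_pos[1]
    return max(depth, 0.0)

def force_response(depth):
    """Stage 2: spring force pushing the avatar back out along the normal."""
    return np.array([0.0, STIFFNESS * depth, 0.0])

def control_output(force, max_force=10.0):
    """Stage 3: clamp to the device's force limit before commanding motors."""
    magnitude = np.linalg.norm(force)
    if magnitude > max_force:
        force = force * (max_force / magnitude)
    return force

def haptic_update(avatar_pos):
    depth = collision_detect(avatar_pos)
    force = force_response(depth)
    return control_output(force)

# Avatar pressed 5 mm into the floor: an upward restoring force results.
print(haptic_update(np.array([0.0, -0.005, 0.0])))
```

In a real renderer the same vectors would also be passed to the graphics engine, as Figure 7 shows, so that the visual and haptic responses stay consistent.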

Figure 7: Haptic Rendering Architecture (Conti and Barbagli, 2004)

The force responses are partly determined by the shape of the avatar and the type of contact that it allows. For instance, a single-point interaction, representing the very end point of a syringe or tool for example, can only show forces and positions in 3 DOF. Torque forces will not be perceived by the user. This makes single-point interaction suitable for haptic devices which only allow translational movement in the x, y and z coordinate frame (Figure 3).


However, if a haptic device allows for greater than 3 DOF input and output it can be more accurately represented by a volumetric, or 3D, avatar of any shape in the virtual environment. The force response algorithms must determine both forces and torques resulting from interactions between the avatar and objects in the virtual environment in order to display 6 DOF force feedback to the user (Otaduy and Lin, 2006; 2008).


In the case of the Phantom Omni, which allows 6 DOF input and 3 DOF force feedback, the virtual device avatar must be a 3D object in order to display rotations of the device stylus, whilst the output only supports single-point interactions between the avatar and other virtual objects. This means that, apart from at a single point (normally the end tip) of the avatar, the avatar will simply pass through other virtual objects.

The haptic renderer being used is H3DAPI, an open source Application Programming Interface (API) capable of handling both graphics and haptics. This already contains the algorithms necessary to display 3 DOF force feedback to the user. However, it does not contain 6 DOF force effects, so it would not display torque forces to the user, via the haptic device, without modification. Specifically, a new force response algorithm calculating torque responses was implemented in H3D.

Singapogu and Sander et al (2008) describe a method of producing 5 or 6 DOF force feedback using 3 DOF rendering techniques. They present an algorithm which calculates the missing torque information "based on the forces on multiple points" in the virtual environment. The algorithm is tested using a 5 DOF haptic device and is found to produce the expected control of a virtual stylus and output to the haptic device.

The algorithm presented by Singapogu and Sander et al (2008) may be useful for this project, as when the two Falcons are coupled together the highest number of degrees of freedom available will be five.


Simulation Fidelity

The fidelity of a virtual simulation refers to how well it represents its equivalent real world environment (Stone, 2008). Both Stone (2008) and Bello et al (2010) suggest that high fidelity simulations, i.e. those that closely match the real world in both physical and psychological attributes, are not always required to achieve the same level of learning in training simulations. The level of fidelity required is more dependent on the type of skills being developed and the tasks being used to develop them.

The simulation developed to visualise the project will be of low physical fidelity whilst maintaining high psychological fidelity (Stone, 2008). That is to say, a relatively simple 3D virtual environment will be made with a low amount of visual detail, but the tasks themselves will closely match the real world equivalent; testing a user's perceptual motor skills rather than their decision making skills. This type of interaction often uses specialist equipment purposely designed for the task, which can be very expensive and unsuitable for other operations.

Although H3D is not specifically designed for developing highly detailed virtual environments, the inclusion of haptic feedback and realistic control means a high degree of psychological fidelity will be maintained. Because of this, more focus will be put into developing a complete and realistic force feedback output to the user, and an intuitive human-machine interface than developing an extremely realistic virtual environment. The tasks used to test the hand controller are thus likely to be abstract task elements testing the hand controller's dexterous control of an object.



The initial plan of coupling two Falcon hand controllers with a common joystick controller (Figure 8) has been realised in hardware (Figures 9 and 10). This was accomplished using ball socket assemblies connected with Delrin plastic rods.

Figure 8: Initial plan to couple the two Falcon controllers

Figure 9: Two Falcons coupled together

The original ball grip attachments of each Falcon were modified such that a ball socket could be fixed to each. This required disassembling the ball grips and drilling holes such that the ball sockets fitted tightly and could be secured in place. A ball socket was attached to either end of the joystick hand controller. The Delrin rods were threaded at either end so that they could be securely fastened to the ball sockets, with one end connected to a Falcon and the other connected to the joystick hand controller. The Falcons apply forces to either end of the joystick hand controller, and the user feels the sum of the forces from the two Falcons.

This configuration requires the Falcons to be at an angle of approximately 45°, allowing the joystick controller to hang over the edge of a surface. This ensures the joystick is free to move in all directions and that the Falcons can be placed closer together, reducing the total size of the system and making control easier.

The length of each of the Delrin rods is equal to the height of the joystick, so that it can be rotated without being restricted by the positioning of the Falcons. The system can be disassembled, making easy movement and adaptation of the device possible.

Figure 10: Full hand controller system

Figure 11 shows a close up view of the PCB inside the Falcon ball grip. Additional wires have been soldered over the button connections, replacing the buttons of the Falcon with those on the joystick hand controller. Any secondary functions required of the hand controller can now be performed easily using the thumb and forefinger of the controlling hand to depress the buttons or trigger of the joystick controller.

Figure 11: Ball grip PCB

A finger trigger joystick was used because, in a comprehensive review of hand controller technology, Brooks and Bejczy (1985) found that it performs better than most hand-grips in a range of categories, including controllability, human-handle interaction and human limitations. In particular, the wrap-around grasp one can apply to a finger trigger joystick allows the application of greater force for longer periods. In contrast, a pinch grasp, similar to that used on the standard ball grip of the Falcon, only allows a small force to be applied for a short time before user fatigue becomes a consideration.

Initial tests showed there was too much freedom of movement in the joystick controller before the Falcons produced any force feedback. That is, the joystick could be moved a noticeable distance in each direction before being restricted by its connections to the Falcons. This makes controlling a virtual object difficult, as movements of the hand controller in the real world are not replicated in the virtual environment by movements of the haptic device avatar. To rectify this, the movement of the joystick has to be restricted, such that force feedback is produced even when the joystick is subjected to small changes in position. To restrict the movement, the initial moveable ball sockets will be replaced with mechanically fixed connections which minimise any unnecessary rotation.

Virtual Simulation

Figure 12: Blender underwater scene

A single Falcon is capable of 3 DOF input and 3 DOF output. This means the virtual representation of a Falcon can be moved in the x, y, and z directions, and forces resulting from collisions in the virtual environment are felt by the user along these axes. It is believed, however, that by combining the two Falcon controllers the three missing rotational forces shown in Figure 6 can be controlled by the user, and torque forces can also be felt on the output of the combined controller.

Research from Krause and Neumann (2005) suggests that, because a single Falcon cannot display rotational forces, when the two devices are coupled in the configuration described a user will not be able to perceive the torque produced along the long axis connecting the two Falcons together, i.e. the rotation about the length of the joystick grip. The forces the Falcons apply act at the points where the joystick is mechanically fixed, at its top and bottom, and so produce no moment about that axis. This leaves 5 DOF control of a virtual object and 5 DOF force feedback output.
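Assuming the two Falcon tip positions are read as 3D vectors, the joystick's 5 DOF pose can be recovered from the midpoint of the tips and the direction between them. The sketch below is a hypothetical illustration; the angle conventions (axis vertical along +y at rest) and the reading values are assumptions, not the project's actual calibration.

```python
import numpy as np

def joystick_pose(tip_a, tip_b):
    """Estimate the joystick's pose from the two Falcon tip positions.

    Returns the grip centre plus the pitch and yaw of the axis joining
    the two tips, measured from a vertical rest orientation. Roll about
    that axis is the one DOF the coupled Falcons cannot sense or display.
    """
    centre = (tip_a + tip_b) / 2.0
    axis = tip_b - tip_a
    axis = axis / np.linalg.norm(axis)
    # With the joystick vertical (axis along +y), both angles are zero.
    pitch = np.arcsin(np.clip(axis[2], -1.0, 1.0))  # tilt towards the z axis
    yaw = np.arctan2(axis[0], axis[1])              # tilt towards the x axis
    return centre, pitch, yaw

# Illustrative readings: bottom and top of a vertical 0.2 m joystick.
bottom = np.array([0.0, -0.1, 0.0])
top = np.array([0.0, 0.1, 0.0])
centre, pitch, yaw = joystick_pose(bottom, top)
print(centre, pitch, yaw)  # centred at the origin, no tilt
```

The centre would drive the avatar's translation and the two angles its orientation, giving the 5 DOF mapping described above.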

To visualise 5 DOF control, a three-dimensional virtual representation of the haptic device is required. Interactions between the 3D avatar and 3D virtual objects will result in multiple points of contact (Singapogu et al, 2008). Desired torque values can then be found by calculating the forces acting on these multiple points and the rotational effects they cause.
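Summing the moments of the contact forces about the avatar's centre gives the net torque to display. A minimal NumPy sketch is shown below; the contact points and forces are invented purely for illustration.

```python
import numpy as np

def net_force_and_torque(centre, contacts):
    """Accumulate contact forces, and their moments about the avatar's
    centre, into one force vector and one torque vector for the device.

    contacts: list of (point, force) pairs in world coordinates.
    """
    total_force = np.zeros(3)
    total_torque = np.zeros(3)
    for point, force in contacts:
        total_force += force
        total_torque += np.cross(point - centre, force)  # moment = r x F
    return total_force, total_torque

# Two illustrative contacts on opposite sides of the avatar, forming a couple.
centre = np.array([0.0, 0.0, 0.0])
contacts = [
    (np.array([0.1, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
    (np.array([-0.1, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])),
]
force, torque = net_force_and_torque(centre, contacts)
print(force)   # the opposing forces cancel, leaving a pure couple
print(torque)  # net torque about the z axis
```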

Figure 13: Underwater scene rendered in H3D

The initial proposal was to develop an application with Falcon control in the game engine Blender. Haptic data would then be added to the models using the haptic renderer H3D. However, it was not possible to control objects in Blender with a Falcon without changing the programme itself. Thus Blender was used purely as a modelling tool, whilst the applications with Falcon control and haptic information were developed in H3D.

The virtual task to test the hand controller system will include control of a subsea Remotely Operated Vehicle. Thus a simple underwater scene was developed in Blender (Figure 12) and rendered in H3D (Figure 13).

There is a noticeable difference between the Blender and H3D render of the same scene. This is likely due to differences in how some effects are implemented. For instance, the spot light effects used in Blender do not appear to be as effective in H3D.

Although the H3D render appears more simplistic and rougher, for this task only low physical fidelity simulations are required. More importantly, the haptic data implemented in the application should closely resemble what a user would expect to feel carrying out a similar task in the real world.

Figure 14: Application showing two Falcon avatars

At this stage of the project simple scenes with haptic information have been developed. These have generally consisted of simple objects and shapes with different frictional surface properties. Interaction with these objects has been achieved with single and multiple Falcons controlling simple avatars capable of single-point interaction. This can be seen in Figure 14, which shows two stylus avatars controlled by the two Falcons interacting with a virtual object.

As well as producing different force feedback effects, events based on the position of the haptic device avatar have been implemented. These include grabbing, and changing the colour of, objects on contact with the avatar.

Models made in Blender have been exported to H3D and frictional surface properties added. These models have been controlled by a single Falcon to realise single-point interaction with objects. Figure 15 shows an ROV model controlled by a single Falcon in 3 DOF, which can respond to forces imposed on it by a moving cube with a solid surface.

Figure 15: 3 DOF control of ROV model

Currently, these interactions occur at the centre of the ROV model, as this is the default origin of the avatar. This can be changed by translating the point of haptic interaction until it is aligned with the desired point on the model; in this instance, preferably the end of the manipulator arm.

Figure 16 shows two Blender models rendered in H3D with a simple underwater background. This was implemented by assigning an image to each side of the application background, creating a 360-degree effect.
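A background of this kind can be described with the standard X3D Background node, which assigns an image to each face of a cube surrounding the scene. The sketch below is illustrative only; the file names are placeholders:

```xml
<!-- Six images wrap the scene in a 360-degree backdrop.
     File names are hypothetical placeholders. -->
<Background
  backUrl='"underwater_back.png"'
  frontUrl='"underwater_front.png"'
  leftUrl='"underwater_left.png"'
  rightUrl='"underwater_right.png"'
  topUrl='"underwater_top.png"'
  bottomUrl='"underwater_bottom.png"' />
```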

Figure 16: Underwater scene with background image

The majority of the above examples were implemented using X3D, an XML-based architecture for representing 3D scenes and objects, because they require very little computation. An X3D scene graph is made up of fields, the data storage components that specify the properties of objects, and nodes, which group fields together to define scene objects.
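As a minimal illustration, the X3D fragment below defines a coloured box: Shape, Appearance, Material and Box are nodes, while 'diffuseColor' and 'size' are fields holding their properties (the values here are arbitrary):

```xml
<Shape>
  <Appearance>
    <!-- diffuseColor is a field of the Material node -->
    <Material diffuseColor="0.2 0.4 0.8" />
  </Appearance>
  <!-- size is a field of the Box node -->
  <Box size="0.1 0.1 0.1" />
</Shape>
```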

Adding surface properties to objects created in H3D requires creating a surface node with haptic properties and adding it to a shape's appearance. H3D specifies five surface nodes for adding haptic data to an object. The simplest of these are SmoothSurface and FrictionalSurface. The properties of a node can be altered by changing the associated field value. For example, changing the value of the 'stiffness' field in the SmoothSurface node will alter the hardness of the object, and the user will experience an appropriate change in the force feedback.
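Concretely, this amounts to nesting the surface node inside the shape's Appearance. The sketch below gives a box a FrictionalSurface; the field values are chosen purely for illustration and should be tuned to the task:

```xml
<Shape>
  <Appearance>
    <Material diffuseColor="0.8 0.2 0.2" />
    <!-- Haptic surface: stiffness controls perceived hardness,
         the friction fields control resistance to sliding.
         Values here are illustrative only. -->
    <FrictionalSurface stiffness="0.5"
                       staticFriction="0.3"
                       dynamicFriction="0.2" />
  </Appearance>
  <Box size="0.1 0.1 0.1" />
</Shape>
```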

Effects caused by events occurring in the virtual environment are achieved by 'routing' between fields in H3D. This means that if two fields are routed together, a change in one field will send an event to the other, allowing it to change accordingly. For instance, if a position or orientation field were to change due to the user moving a virtual avatar, the colour of a shape may change. Routing can be used to implement events such as those described above, or more complicated force effects as a result of collisions between objects.
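In X3D, a route of this kind is declared with a ROUTE statement connecting two named fields. The fragment below is a hedged sketch with hypothetical DEF names, forwarding the time at which a TouchSensor is activated to a TimeSensor's start time:

```xml
<!-- DEF names (SENSOR, CLOCK) are hypothetical. -->
<TouchSensor DEF="SENSOR" />
<TimeSensor DEF="CLOCK" cycleInterval="2" />
<!-- When the sensor fires, its touchTime event is routed to
     the clock, which starts it running. -->
<ROUTE fromNode="SENSOR" fromField="touchTime"
       toNode="CLOCK" toField="startTime" />
```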

X3D is often used to create the scene geometry and structure of an application, whilst Python scripts define application and user-interface behaviour. Thus, for more complicated applications and for the calculation of force effects, it is likely that Python scripts and C++ code will be used. Examples of code are included in the technical appendix for reference.

Future Work

In order to display 5 DOF force feedback in as natural a way as possible, it will be necessary to generate an effective force and torque at a single point along the common joystick hand controller. To achieve this, the desired forces and torques resulting from user interactions with virtual objects must be computed in the haptic renderer, decomposed into right and left Falcon components, and a single force vector applied to each Falcon.
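The decomposition can be sketched as follows. This is a minimal illustration, assuming the two grip points sit a fixed distance apart on the joystick shaft; the torque component about the shaft axis itself cannot be produced by two point forces and is ignored, which is consistent with the 5 DOF (rather than 6 DOF) feedback targeted here. The function names are hypothetical, not part of H3D:

```python
def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def decompose(force, torque, d):
    """Split a desired force and torque at the grip midpoint into
    one force per Falcon. d is the vector from the left to the
    right grip point. The torque component along d is unachievable
    with two point forces and is dropped (minimum-norm solution)."""
    d2 = d[0]**2 + d[1]**2 + d[2]**2
    # Differential component delta satisfies d x delta = -torque
    # for the part of torque perpendicular to d.
    delta = tuple(c / d2 for c in cross(d, torque))
    f_half = tuple(c / 2.0 for c in force)
    f_left = tuple(h + x for h, x in zip(f_half, delta))
    f_right = tuple(h - x for h, x in zip(f_half, delta))
    return f_left, f_right
```

For example, with the grip points 10 cm apart along x (d = (0.1, 0, 0)), a desired torque about the z axis is rendered as unequal vertical forces on the two Falcons, while the force components still sum to the desired net force.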

Figure 17: Diagram showing the hierarchy of classes in H3D [6] 

The force response algorithms in H3D can be implemented globally through a 'HapticForceEffect' class (shown in black in Figure 17). The force effects already computed by the haptic renderer are HapticForceField, a HapticForceEffect that produces a constant force, and HapticSpring, a HapticForceEffect that generates a spring force based on Hooke's law:

F = k (position − device_position)

where k is the spring constant and (position − device_position) is the difference between the avatar position and that of the device [7].

The structure for implementing torque is already in place, as the HapticForceEffect base class for all force effects, shown in black, "generates force and torque based on the position and orientation of the haptics device" [9]. However, a new HapticForceEffect class, similar to HapticSpring, must be implemented to calculate the desired torque from the position and orientation of the haptic device. The C++ implementation of HapticSpring is included in the technical annex for reference.

Torque is calculated as the cross product of a displacement vector r and a force vector F: τ = r × F. It is believed that by calculating the forces acting on multiple points, a net torque can be resolved about a point between the multiple contact points.
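A sketch of this torque calculation is given below in pure Python (the function names are illustrative, not H3D API): the net torque about a chosen reference point is the sum of r × F over all contact points:

```python
def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def net_torque(reference, contacts):
    """Net torque about `reference` from a list of
    (contact_point, force) pairs."""
    tau = [0.0, 0.0, 0.0]
    for point, force in contacts:
        # Displacement from the reference point to the contact.
        r = tuple(p - q for p, q in zip(point, reference))
        t = cross(r, force)
        tau = [a + b for a, b in zip(tau, t)]
    return tuple(tau)
```

For instance, equal and opposite forces applied either side of the reference point produce a pure couple: a net torque with no net force.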

Achieving multiple contact points should be relatively straightforward, as each Falcon is capable of single-point interaction. Creating a 3D stylus or avatar between the two Falcon interaction points will require defining a new HAPIHapticsRenderer subclass [8].

Once 5 DOF control of simple objects, such as a simple stylus, is achieved more advanced applications will be developed. This will include 5 DOF control of more complex 3D objects and interactions with other objects in the virtual environment. Ideally, the end task will allow control of a virtual ROV, as well as control of a robotic arm attached to the vehicle.

More advanced haptic data will be added to objects in the virtual environment. This will be to increase the realism of the force feedback and test the range of the device's output. This haptic data may include using deformable surfaces, haptic layers, and a range of textures.

Rigid body physics may also be implemented using the H3D-specific node 'DynamicTransform'. This will help to create more realistic virtual environments and tasks.
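As a hedged sketch of how this might look (the nesting and field values are illustrative, and field names should be checked against the H3D node reference), a shape could be made a rigid body by wrapping it in a DynamicTransform:

```xml
<!-- Illustrative only: mass value is arbitrary. -->
<DynamicTransform DEF="BODY" mass="2.0">
  <Shape>
    <Appearance>
      <Material />
      <SmoothSurface stiffness="0.5" />
    </Appearance>
    <Box size="0.1 0.1 0.1" />
  </Shape>
</DynamicTransform>
```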

Detailed time plan

Figure 18: Time table for completion

Figure 18 shows a time plan for completion of the tasks discussed previously. It is intended that the tasks requiring the most time will be undertaken sooner whilst others can be developed simultaneously.

Testing and Evaluation

A virtual simulation will be performed in which participants use the hand controller to control a subsea vehicle and complete a particular task. The vehicle will carry an end effector or tool for interacting with objects in the virtual environment, and the user should feel the force and tactile response that they would expect to feel in a similar real-world application.

Lange, Flynn et al. (2009) introduce usability assessment as an effective method of establishing how easy a device or application is to use. From the results of a usability assessment, it is possible to identify the areas that need improvement. Lange and Flynn decompose usability into the following categories:

Learnability: the user's ability to undertake basic tasks the first time they use the device.

Efficiency: how quickly and competently users can perform a task once they are familiar with the design.

Memorability: how proficient a user remains after a period of not using the device.

Errors: a quantitative measure of the number of errors made, how severe they are, and whether the user can recover from them.

Satisfaction: how enjoyable the design is to use.

These are examples of the categories that may be addressed in a usability assessment. The assessment can take the form of both participant questionnaires and observations, yielding both quantitative and qualitative data.

An evaluation of the hand controller presented will be made based on the results of a usability assessment with reference to some of the above criteria.

An investigation into how well the proposed design compares to current solutions for advanced dexterous manipulation is important if a standardised general purpose solution is to be achieved. The parameters of the hand controller will therefore be compared with those of a bespoke haptic hand controller capable of similar dexterous manipulation. These parameters will include the cost of the device, its accuracy, the number of degrees-of-freedom (DOF) users can control, and the range of the force feedback output. Qualities such as the controller's ease of use and its potential to control a range of tasks will also be compared.


Costs to date

Two Novint Falcon game controllers: $249.95 (£158) each

Ball socket assembly, M5, PK2: £3.63

Ball socket assembly, M6, PK2: £3.92

Delrin Natural Rod, 12 mm dia x 1000 mm: £2.54

Logitech Attack 3 joystick: £17.99

Costs to date total £344.08. No further costs are anticipated.


The potential hazards involved in undertaking this project are:

Table 1: Risk assessment

Hazard: Prolonged use of the Novint Falcon controllers, leading to fatigue and potentially Repetitive Strain Injury.
Mitigation: Take regular breaks according to manufacturer specifications: a 10-15 minute break for every hour of use.

Hazard: Electrical shocks/damage.
Mitigation: When modifying the Falcon controllers, ensure the power supply is off. Also ensure the power supply is correct according to manufacturer specifications. Do not use during lightning storms.

Hazard: Foreign transformers rated for 100-120 V, not the 240 V mains supply.
Mitigation: A new power supply was sourced to power the Falcons, with adjustable voltage and current set to give 30 V and a maximum of 1 A to each Falcon, as specified by the manufacturer.


Research discussed in this paper gives a broad understanding of what haptic feedback is and the advantages of implementing force feedback in a range of applications. It has been shown that the addition of force feedback, alongside visual-audio displays, in teleoperation and virtual reality tasks can improve user performance, increase learning and enhance a user's immersion in a virtual environment.

Of more immediate importance, however, is the review of current hand controller solutions for implementing haptic feedback. Several disadvantages of the current solutions were found, including cost, the inability of a controller to be adapted to a range of tasks, and, in the case of exoskeleton devices, the use of technology unproven in real-world applications.

Design of the proposed hand controller system was undertaken with these disadvantages in mind. The aim was to produce a relatively cheap hand controller that is easy to use and can be applied to a wide range of tasks or applications. COTS devices were exploited because they are relatively cheap compared to high-end haptic devices, are deliberately easy to use in order to appeal to a wide audience, and in many cases are designed to be used for a wide range of tasks. A drawback of COTS devices, however, is the low number of degrees-of-freedom on both input and output.

The paper describes the progress made so far in combining two COTS devices, the Novint Falcon, to produce a single hand controller system capable of more than 3 DOF input and force feedback output.

An understanding of the haptic renderer used to visualise the project has been shown through the development of several simple applications. More advanced techniques, such as implementing force effects and force response algorithms, have been researched extensively allowing for immediate progress to be made on the next stage of development: implementing 5 DOF control and force feedback.

The key tasks involved in generating the force and torque values required for 5 DOF feedback have been identified and supporting research has shown several possible ways for completing these tasks.

The project will be evaluated through usability assessments including user questionnaires and observations. From the results of these assessments it will be possible to determine, among other things, how easy the system is to use and how well it can perform dexterous manipulation tasks.