Autonomous Vehicle Motion Feedback To Hand Controller Computer Science Essay


This report outlines the design and development of a system that allows the remote control of a semi-autonomous vehicle and provides tactile feedback to the user via a force-feedback controller. Controllable visual feedback is also available to the user via a pan-tilt-zoom webcam. The system is designed to maximise control of the remote vehicle and prevent collisions while gathering information about the remote environment.

1. Introduction

'One of the most important objectives of military intelligence is to warn of potential or immediate threats. After the concept of surprise attack became a vivid reality with the Japanese attack on Pearl Harbor during World War II' [1]

In military theatres all over the world, intelligence is used for operation planning. Commanders use information such as the location of friendly and enemy troops to command their forces effectively [1]. Intelligence comes in two forms: reconnaissance and surveillance. Reconnaissance involves exploring new areas to gather survey data and information on the size and strength of enemy forces. Surveillance is the monitoring of an area, allowing activities in that area to be analysed [1]. Unmanned reconnaissance and surveillance vehicles reduce the risk to human life by gathering information and providing images of indoor and outdoor environments without military personnel needing to be near the area [2]. This technology is not limited to the military, however; other services, such as the Fire Brigade, work in hazardous environments where teleoperated vehicles could prevent injuries or fatalities [3].

The aim of this project was to develop a means of using an unmanned ground vehicle to explore remote, unknown environments and gather reconnaissance data without any risk to the user. The remote vehicle must also provide tactile feedback to the user, via the user's hand controller, to give them a higher sense of presence in the remote environment. A webcam provides a visual feed of the remote environment which is fed directly to the user's control computer. Other sensors on board the vehicle allow it to sense objects in its local area and to make local decisions about whether the user's commands are 'safe' for its own wellbeing and will not cause a collision. In emergencies these decisions can be overridden by the user to gain full control of the vehicle. An autonomous mode allows the vehicle to continue moving forward without user input, automatically stopping and notifying the user if it encounters rough terrain or an object.

The controls for the vehicle had to be intuitive, with minimal set-up time and complexity, so that an untrained person could operate it with little tutoring. The system had to be robust and versatile enough to work on systems it was not necessarily designed for. The final solution needed to consist of a robust software package that could run both on the remote vehicle's on-board computer and on the user's control computer while interfacing cleanly with different hardware set-ups.

1.1 Project Requirements

At the beginning of the project a requirements capture was undertaken, which led to seven user requirements being developed. These top-level user requirements are shown below:

1. The system shall run as close to real-time as possible to minimise the feel of delay in the system.

2. The vehicle hardware shall be able to move stably over different terrain profiles while carrying sensor and computing hardware capable of mapping terrain and environment.

3. The vehicle software shall be capable of running in manual and autonomous modes that shall safely control the robot and provide feedback.

4. The controller hardware shall provide tactile feedback from the sensors to the user as well as being able to control the vehicle and shall be able to be calibrated.

5. The control software shall be able to directly control the vehicle or leave it in autonomous mode, as well as storing/saving and displaying the terrain profile and feedback data.

6. The software solution shall be designed and written in a robust, modular form to allow maximum re-use potential.

7. The demonstration of the final system shall enable the user to see and experience the feedback system either using the hardware or a back-up solution.

These requirements are used to validate the final system solution and will be revisited at the end of this report.

1.2 Solution Overview

The solution that was implemented involves three key Sub-Systems; the Control System, the Communication System and the Remote System. These three systems are shown in Figure 1 (Appendix 1 includes the full system design and interactions).

Figure 1 The three main Sub-Systems of the System.

The Control System is the interface allowing users to operate the Remote System; it contains the User and a Control Station. The Control Station consists of a controller and a computer, known as the Control Computer. Installed on this Control Computer is the Software Package that the user will interface with via the controller and other input devices such as the keyboard.

The Remote System is operated by the user from the Control System; it contains the Remote Vehicle within the remote environment. The Remote Vehicle carries sensors and a computer known as the 'Remote Computer'. This Remote Computer runs the same Software Package as the Control Computer, configured differently, and interfaces with the Remote Vehicle's sensors.

These two systems communicate via the Communication System, which transfers information between the two computers and their Software Packages. This communication link takes the form of a wireless local network. A full-scale solution could use the Internet or another communication type, for example satellite; however, these are beyond the scope of this project.

The user operates the overall System via the Control System. Communicating with the Remote System via the Communication System isolates the user from the remote environment, which may or may not be hazardous. The Remote System in turn provides sensory feedback to the user via the Control System, completing the closed-loop design. This closed-loop approach gives the user feedback on their actions, allowing them to adjust their inputs if required.

Figure 1 also shows the hardware elements of the system (grey) and the software aspect (red and green). In the next section of this report the hardware elements of the system will be discussed before moving onto the larger software aspect.

2. Hardware Components

The hardware used in this project solution was partially supplied by the Loughborough Systems Engineering Innovation Centre (SEIC). The main hardware component is a small, lightweight, six-wheeled robot, called a HERO robot. This robot has various hardware components mounted onto it including a stripped-out computer, a USB digital data acquisition unit, IR sensors, battery packs and a Wi-Fi adapter. Other hardware supplied by the SEIC was a Pan-Tilt-Zoom (PTZ) Webcam and a wired Xbox 360 controller. Additional hardware components were supplied to the project by the developer. These included a Nintendo Wii Remote, a Logitech Force-Feedback Joystick, a Bluetooth adaptor and a HP Compaq nc6220 laptop to be used as a simulated Control Station. A Lego Mindstorms NXT prototype robot was used for development away from the main hardware.

2.1 Configuration of Hardware

The use of this hardware was crucial in the solution. All the sensory equipment available to the project was mounted to the vehicle's chassis allowing a maximum amount of local sensory data. The Control Computer was isolated from the Remote Computer via the Wi-Fi connection through a remote wireless router. This setup allowed the maximum distance and flexibility between the Control Computer and the Remote Vehicle. Figure 2 shows the configuration of this hardware.

The controller connected to the Control Computer was chosen to be the Xbox 360 controller over the Force-Feedback Joystick. The Xbox controller has the advantage over the joystick that it is hand-held and does not require a flat surface. This is beneficial if the system is to be used 'in the field', away from a desk. The Microsoft website states the Xbox 360 controller provides 'award winning compact ergonomics [that] provide a more comfortable gaming experience' and 'precise thumb sticks, two pressure-point triggers, and 8-way directional pad for enhanced PC gaming' [4]. This gives the user ergonomically placed buttons, improving the system's usability.

The Remote Computer on board the Remote Vehicle has several USB sensory devices available to it, the main one being the Pan-Tilt-Zoom Webcam, which gives the user a 180° view in front of the Remote Vehicle via the Software Package and Xbox controller. Additionally, the Bluetooth-linked Wii Remote mounted on the Remote Vehicle's chassis registers any movement the vehicle experiences; these readings are transferred to the user's controller, providing the tactile feedback. The final sensor array is the USB-connected NI-USB 6008 DAQ, which has three Sharp infra-red sensors that allow objects in front of and behind the vehicle to be detected. This information is used to adjust the speed the user has commanded, to avoid a potential collision. In addition to the sensor inputs, the Remote Vehicle has a serial-port motor control unit, which allows serial commands to control the drive motors' speed and direction.

2.2 Hardware Risks

The main risk during the use and testing of the hardware components came from the HERO robot itself. Since this vehicle acts remotely and is normally out of view of its operator, it was important that the Remote Vehicle could not cause damage to its surroundings or to persons near it. This was achieved by reducing the maximum speed of the vehicle, minimising any damage caused by a collision. To further control the speed of the Remote Vehicle, as mentioned earlier, IR sensors are used to sense potential collisions and slow the vehicle to avoid them. A manual override of this feature is available should the user wish to move or collide with an object. A wooden bumper was fitted to the front of the chassis to further protect the vehicle and other objects from damage. Finally, the vehicle defaults to a stationary state if the connection is lost and it becomes uncontrollable by the user. If all else fails, a manual switch at the rear of the chassis disconnects the power to the drive motors while leaving the Remote Computer powered.

During motor command development the semi-developed software could send unexpected commands causing unpredicted behaviour. To overcome this, during development and testing, the HERO robot was raised off its driving wheels to prevent it from moving. This also had the advantage that a static mains powered power supply could be used to power the robot, decreasing the dependency on the batteries. The batteries themselves were a potential hazard due to the risk of fire by short circuiting them or leakage. To prevent this, the batteries were disconnected when not in use and supervised when charging.

2.3 Hardware Prototype

As the main HERO robot was not always available for development and testing, a prototype robot was used. This vehicle was a smaller, Bluetooth-connected robot with similar sensors to the full-scale HERO robot. It was used extensively during early development, when heavy testing was needed to make the basic elements of the system work together. When the prototype solution was working, 'wrap-around' code was added to allow the software to be integrated with the main hardware.

The main area of development of the system solution was the Software Package that was created to integrate all these hardware components.

3. The Software Package

The Software Package was developed for the sole purpose of this project. It integrates all the other elements into one software package and can be used on both the Remote and the Control Computers. The full source code for the final system is over 10,000 lines and is not included in this report. A copy of the source code is available on request; however, it is not required to understand the workings of the system.

3.1 The Chosen Language

The first decision made in the software development was the language in which the software would be written. A large number of languages were available to the project, including Python, Visual Basic and C#; however, two main programming languages seemed appropriate: C++ and LabVIEW. LabVIEW is developed by National Instruments and is used for rapid prototyping of software. LabVIEW had been used for a similar project before with some success; it also allows easy networking and data acquisition from National Instruments devices. C++ is used by many industries for large-scale software packages and has a large amount of support and drivers available to it, including those for National Instruments devices. Microsoft heavily supports C++, with its own software development kits (SDKs) being written in the language, allowing Windows applications to be written with relative ease. C++ also has a major advantage over LabVIEW: it is an open standard, and compilers are freely available, allowing any future development to be undertaken without software purchases. For this reason C++ was chosen as the primary language; however, other National Instruments software was used to interface with their devices.

3.2 Existing Software Packages Used

A number of existing software packages were needed for development, the main one being an Integrated Development Environment (IDE). For this project Microsoft Visual Studio 2008 was chosen, since it is designed to organise large-scale software packages and is freely available to students [5]. Other Microsoft packages were also used, including the Windows SDK and the DirectX SDK, both available to download free of charge [6].

As mentioned earlier, several National Instruments software packages were used; these are contained in a larger software package, NI-DAQmx [7]. This package contains the NI-DAQ drivers to operate the data acquisition device and the NI-VISA library, which allows serial port messages to be sent to the Remote Vehicle's motor drive unit.

The final two software packages used are Logitech QuickCam Version 11.1, which supplies the drivers for the Logitech Sphere MP Webcam and is available free of charge [8], and the Internet Communications Engine (ICE), a publicly available networking package written by ZeroC that allows server and client threads to be set up across a network. This is freely available with a large amount of support [9].

Each of these packages has small elements that are used in the solution's Software Package and are required for further development.

One final software package required is RealVNC [10]. This package allows a remote computer to be controlled locally and has a free edition. Microsoft supplies a similar product, Remote Desktop; however, this causes errors in the webcam feed, so RealVNC is used instead.

3.2.1 Other Existing Software Used

In addition to full software packages that are mentioned above a number of smaller open source software codes were integrated and used by the software. These included:

1. TWODENG.dll - written by S. Kitchin. Originally written to support Windows-based 2D games in DirectX, but used here to link to DirectInput, which provides the Xbox 360 controller drivers.

2. Common.dll - written by S. Kitchin. Provides high-level functions to display pictures and text in a window.

3. NXT Library - written by Ander. Provides functions to control a Lego Mindstorms robot [11].

4. WiiYourself! V1.15 - written by gl.tter. Contains high-level functions for use with the Wii Remote [12].

5. VMR Capture - written by K.R. Sivasagar. Contains a class that allows the webcam feed to be captured [13].

6. QuickLZ - written by Lasse Mikkel Reinhold. Contains functions for data compression [14].

7. CXBOXController - written by Minalien. Contains high-level functions for use with the Xbox controller [15].

All these software components are used, wholly or in part, in the main Software Package; each is open source and free to use.

3.3 Software Architecture

The Software Package was written following a software architecture that allows flexibility and the incorporation of existing packages to allow ease of development. The main executable is called Motion Feedback which is run to start the application and forms the main program; Figure 3 shows the software links around this main program.

Figure 3 Software Architecture of Motion Feedback.

Motion Feedback is run on both the Remote and the Control Computers. The application is identical in both cases and different functionality within the software is used depending on the set up of the system.

The user configures both computers manually during the initial run so that they are correctly linked across the network and the correct hardware is available. After this, the application loads these settings automatically during start-up. Similarly, the system can be calibrated for the amount of tactile feedback given by the controller relative to the movements of the accelerometer. The collision prevention algorithm that affects the speed of the Remote Vehicle can also be calibrated, as can the overall speed of the Remote Vehicle. These calibration settings are automatically sent from the Control Computer to the Remote Computer when changed and are loaded during start-up.

3.4 Networking

The two computers, the Control Computer and the Remote Computer, have very different roles. The Control Computer gathers all user commands and sends this information to the Remote Computer. The Remote Computer acts on the user's commands and gathers data about the remote environment, sending it back to the Control Computer. Figure 4 shows this feedback loop; the red arrows symbolise the Control Computer and the blue the Remote Computer. These two computers interact purely over the wireless network; no electrical connection between them exists. This increases the isolation of the Remote Vehicle.

The main disadvantage of wireless networks over wired networks, however, is that they are less reliable and have a more limited capacity for transferring data [16]. To overcome these two problems, Network Checks and Data Compression are used by the system.

3.4.1 Network Checks

As mentioned in Section 2.2, the network connection between the Control and Remote Computers could fail, leaving the Remote Vehicle uncontrolled. To overcome this, each time a computer sends a packet of data to the other it waits for a reply saying that the data was received correctly. If this reply is never received, or it times out, there may be a problem with the connection. If the computer is then unsuccessful at communicating with the other computer several more times, it assumes the connection is lost. It also assumes that any new data sent by the other computer is invalid and stops transmitting data packets itself; this reduces the chances of an invalid packet affecting the system or of valid data packets being intercepted by a malicious party. After a short while the computer automatically tries to reconnect to the other computer and continues to try until the connection is re-established. Figure 5 shows this process in a flow chart.

This automatic reconnection happens immediately when the program starts, so if one computer starts up faster than the other they will automatically establish their connection, reducing the need for the user to interact with the Remote Computer directly.
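The send-and-acknowledge check described above can be sketched as follows. The `trySend` interface, attempt limit and names are illustrative assumptions, not the project's actual code:

```cpp
#include <functional>

// One send attempt is modelled as a callable that returns true if an
// acknowledgement arrived before the timeout. After maxAttempts
// consecutive failures the link is declared lost, at which point the
// caller stops transmitting and begins periodic reconnection attempts.
bool sendWithRetry(const std::function<bool()>& trySend, int maxAttempts) {
    for (int attempt = 0; attempt < maxAttempts; ++attempt)
        if (trySend())
            return true;   // acknowledged: link is healthy
    return false;          // link assumed lost: trigger reconnect logic
}
```

In the real system the same loop also runs at start-up, which is what lets the two computers find each other automatically regardless of which boots first.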

3.4.2 Data Compression

Data compression has two advantages: 1) a smaller packet of information needs to be transferred, and 2) it encodes the data, reducing its readability to malicious parties [17]. The first area of the system in which data compression is used is in sending the user command and feedback data using bit packing [17]. Both are compressed into a 64-bit integer data packet before being sent across the network, and then uncompressed at the destination. Each packet is divided into 8 bytes, each of which contains a piece of data. Three data packet types are used: Command Packets, Feedback Packets and Calibration Packets. Command and Calibration Packets are sent by the Control Computer, and Feedback Packets are sent by the Remote Computer. Table 1 shows a breakdown of the contents of each byte in these packets.


Table 1 shows that the Calibration Packets contain two static values at the beginning of the packet; these values allow the Remote Computer to distinguish a Calibration Packet from a standard Command Packet without changing the data format. The exchange of these packets allows the receiving computer to add the data to its local data set, giving it all the data available to the system as a whole.
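The bit-packing scheme can be sketched as follows: eight byte-sized fields are shifted into one 64-bit integer for transmission and extracted again on arrival. The field layout here is illustrative; the actual byte assignments are those of Table 1:

```cpp
#include <cstdint>

// Pack eight 8-bit data fields into a single 64-bit packet.
uint64_t packBytes(const uint8_t fields[8]) {
    uint64_t packet = 0;
    for (int i = 0; i < 8; ++i)
        packet |= static_cast<uint64_t>(fields[i]) << (8 * i);
    return packet;
}

// Reverse operation performed by the receiving computer.
void unpackBytes(uint64_t packet, uint8_t fields[8]) {
    for (int i = 0; i < 8; ++i)
        fields[i] = static_cast<uint8_t>((packet >> (8 * i)) & 0xFF);
}
```

Because every packet type shares this fixed 8-byte format, the receiver only needs to inspect the leading bytes (the static Calibration values) to decide how to interpret the rest.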

The second area of data compression is used on the webcam feed. This feed is a continuous stream of bitmap images that would use a large amount of bandwidth if not compressed. To compress the image it is first transformed into a long string of characters which can be compressed by the QuickLZ [14] compression algorithm mentioned earlier. This new compressed version of the data can then be sent, reducing the bandwidth load. The receiving computer reverses the compression and displays the image.
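The compress-then-send round trip for the webcam feed can be illustrated with a simple stand-in coder. The real system uses QuickLZ [14], whose algorithm and API differ; this run-length sketch only shows the shape of the pipeline (serialise, compress, transmit, decompress, display):

```cpp
#include <string>

// Illustrative stand-in for the real compressor: encode each run of
// identical bytes as a (count, byte) pair, capped at 255 per run.
std::string rleCompress(const std::string& raw) {
    std::string out;
    for (std::size_t i = 0; i < raw.size();) {
        std::size_t run = 1;
        while (i + run < raw.size() && raw[i + run] == raw[i] && run < 255)
            ++run;
        out += static_cast<char>(run);  // run length
        out += raw[i];                  // repeated byte
        i += run;
    }
    return out;
}

// Performed by the receiving computer before displaying the image.
std::string rleDecompress(const std::string& packed) {
    std::string out;
    for (std::size_t i = 0; i + 1 < packed.size(); i += 2)
        out.append(static_cast<unsigned char>(packed[i]), packed[i + 1]);
    return out;
}
```

Bitmap frames contain long runs of similar pixels, which is why even a naive coder like this shrinks them; QuickLZ achieves far better ratios at comparable speed.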

3.5 User Assisting Algorithms

Incorporated within the software are several algorithms that use the information available to the Remote Vehicle to help the user navigate the remote environment. These are Auto-Resolution, Safe-Drive and Auto-Drive.

3.5.1 Auto-Resolution

This algorithm automatically controls the resolution of the image sent by the webcam. If the Remote Vehicle is stationary, high-resolution images are sent to give the user the best possible view of the surrounding area. If the system detects that the vehicle is moving, it switches to a low-resolution image and increases the transmission rate. The low-resolution image requires less bandwidth than the high-resolution image, reducing delays across the network and providing the user with a high frame rate, which eases the drivability of the vehicle. When the system detects the vehicle is stationary it automatically switches back to the higher resolution.
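The decision itself is a simple switch on vehicle motion. The specific resolutions below are assumed for illustration (Section 6.1 notes the feed was capped at 320x240):

```cpp
// Auto-Resolution: low resolution while moving (high frame rate),
// higher resolution while stationary (detailed inspection).
struct Resolution { int width; int height; };

Resolution selectResolution(bool vehicleMoving) {
    if (vehicleMoving)
        return {160, 120};  // assumed low-resolution mode
    return {320, 240};      // assumed stationary, higher-resolution mode
}
```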

3.5.2 Safe-Drive

This algorithm uses the data from the IR sensors to detect objects in front of or behind the vehicle. If the user commands the vehicle to collide with one of these objects, the algorithm slows the vehicle in proportion to the distance to the object, until it stops the vehicle altogether. This is particularly useful when the vehicle is reversing, as the webcam cannot be rotated to view behind the vehicle. As the data used by this algorithm is generated and stored locally, it is not affected by the speed or availability of the network connection, improving the algorithm's reaction time. The algorithm can be overridden by pressing the two triggers on the Xbox controller simultaneously, as shown in Appendix 3; in doing so the vehicle's speed is reduced to a crawl and collisions become possible.
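The proportional slow-down can be sketched as a pure function of the commanded speed and the sensed distance. The distances and the crawl speed here are illustrative, not the calibrated values from the project:

```cpp
#include <algorithm>

// Safe-Drive speed limiting: full speed beyond slowDistanceCm, linear
// scaling down to zero at stopDistanceCm, and a crawl cap when the
// user holds both triggers to override the algorithm.
double safeDriveSpeed(double commandedSpeed, double obstacleDistanceCm,
                      bool overridePressed) {
    const double stopDistanceCm = 20.0;  // assumed: stop inside this range
    const double slowDistanceCm = 80.0;  // assumed: begin slowing here
    const double crawlSpeed     = 0.1;   // assumed override speed cap

    if (overridePressed)                 // collisions now possible, at a crawl
        return std::min(commandedSpeed, crawlSpeed);
    if (obstacleDistanceCm <= stopDistanceCm)
        return 0.0;                      // stop the vehicle altogether
    if (obstacleDistanceCm >= slowDistanceCm)
        return commandedSpeed;           // no obstacle influence
    double scale = (obstacleDistanceCm - stopDistanceCm) /
                   (slowDistanceCm - stopDistanceCm);
    return commandedSpeed * scale;       // proportional slow-down
}
```

Because the inputs are local sensor readings, this function can run on the Remote Computer every control cycle regardless of network conditions.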

3.5.3 Auto-Drive

Auto-Drive allows the vehicle to operate autonomously releasing the mental load of driving the vehicle from the user. Figure 6 shows how the algorithm works in a flow chart.

The algorithm sets the speed of the vehicle to the standard movement speed, using the Safe-Drive algorithm while moving forward. Safe-Drive prevents the vehicle colliding with objects in front of it, stopping before it does so; if the object moves out of the way, the vehicle continues moving forward. The accelerometer feedback is also monitored for small collisions or rough terrain, the threshold for which is set by the 'Low Vibration Threshold' level in the Calibration window. If this threshold is exceeded, the vehicle stops and a large vibration is sent to the controller to notify the user.

To cancel Auto-Drive the user simply commands the vehicle to move manually, deactivating the algorithm and allowing them to control the vehicle as normal. While Auto-Drive is active, Auto-Resolution is deactivated to allow the user to look around in high resolution.
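One tick of the Auto-Drive loop in Figure 6 can be sketched as below. The names and the simple value interface are assumptions for illustration:

```cpp
// Auto-Drive tick: drive at the Safe-Drive-limited speed, but stop and
// alert the user when the accelerometer exceeds the calibrated
// 'Low Vibration Threshold' (rough terrain or a small collision).
struct AutoDriveResult { double speed; bool alertUser; };

AutoDriveResult autoDriveTick(double vibrationLevel,
                              double vibrationThreshold,
                              double safeDriveLimitedSpeed) {
    if (vibrationLevel > vibrationThreshold)
        return {0.0, true};                  // stop and vibrate the controller
    return {safeDriveLimitedSpeed, false};   // keep moving; Safe-Drive may
                                             // already have reduced this to 0
}
```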

4. Operation of the Final System

The operation of the final system is designed to be intuitive and require minimal training. To aid with training a User Manual has been supplied with the solution, and has been included in this report in Appendix 2 for reference. The Software Package is designed around the standard Windows format using dialog boxes, standard controls and menus.

The user can use the Xbox controller to control most of the software once the initial setup is complete, removing the need to swap between the controller and the computer's mouse during operation. The controls for the Remote Vehicle via the Xbox controller are included in Appendix 3. The various user-assisting algorithms are always active, running in real time in the background (unless the user overrides them), so they require no setup or maintenance on behalf of the user.

During normal operation there is very little direct interaction with the Remote Computer via VNC and after initial setup this connection can be closed to save bandwidth.

4.1 Graphical User Interface

The main application window is designed to visually portray a large amount of information in a familiar format. Figure 7 shows the layout of the main display. The menu bar across the top gives access to the other software settings. Below this is the Status Bar, which displays what hardware is active at the local computer (left-hand side) and the connected computer (right-hand side). This enables the user to quickly assess the status of the various hardware components the system uses in real time.

The main background is the live feed from the Remote Vehicle's webcam; this image is automatically adjusted to fill the screen if the window is resized or made full-screen. The white box in the bottom left is the Information Panel; this box always sits in front of the webcam feed unless the user hides it. It displays the sensory information from the Remote Vehicle, including its current speed and the distance to objects around it. The white box in the bottom right is the Map Panel. This is currently unused but could display a map that the Remote Vehicle generates of the remote environment; see Section 7.4, Additional Sensors and Environmental Mapping, for further details. For full details of the Motion Feedback main window see Appendix 4.

The menu allows access to two other windows, containing settings. One of these is the Settings and Configuration Window, shown in Figure 8. This window allows the user to input the TCP/IP Address of the Remote Computer to gain control of the full system. Various other options and settings are available to the user through this window to allow them to fully configure the system. For details on these settings see Appendix 5.

The final window is the Calibration Window, shown in Figure 9; this allows the user to set the calibration of the system to match the hardware and their desired settings. This is only used at the Control Computer end where the user can adjust the calibration in real-time. When the user confirms a change the new values are automatically sent and updated on the Remote Computer. For details of the Calibration Window see Appendix 6.

5. Solution Validation

To validate the system solution, it is important to look at the requirements outlined at the beginning of the project in turn to see if they are met. Table 2 below shows the user requirements outlined in Section 1.1 and how the system solution meets them.


5.1 Transmission Delays

The system needs to operate as close to real time as possible to reduce the delay perceived by the user. Card et al. (1991) state that a response quicker than 1 second is classed as an immediate response and requires no special feedback to the user [18]. Table 3 below lists the delay times within the system.

Table 3 Delay times within the system by source device, including 0.3 s for the low-resolution webcam feed, 0.15 s for collision avoidance (Safe-Drive) to stop the vehicle, and 0.35 s for feedback to reach the user.

From Table 3 it can be seen that the system's responses fall into this category of 'immediate'. The time taken for the vehicle to move in response to a user command is only 0.18 seconds. The webcam will have displayed a new image on the Control Computer within 0.3 seconds, and the tactile feedback takes a similar amount of time (0.25 seconds). This means the response time of the closed-loop system is 0.48 seconds, which is deemed a real-time response.

Even though the project meets the requirements set for it, there are many areas in which the system could be improved or functionality added, and lessons were learnt along the way.

6. Lessons Learnt and Problems Encountered

The project, even though it was a success, had problems which needed to be overcome for the system to work correctly. Lessons were learnt that may help future projects achieve similar results.

6.1 Webcam Feed

The webcam feed is currently set at a relatively low resolution of 320x240, below the maximum available for most webcams. This is due to the slow refresh rate of the feed at high resolutions, which made the system unusable. This was originally thought to be caused by the slow network connection or inefficiency in the Software Package. After extensive testing it was determined that the Remote Computer was simply not powerful enough to handle the large amount of data from the webcam at high resolutions, which slowed the whole system to a crawl. Reducing the resolution improved the performance at the cost of picture quality.

6.2 Lack of Control

During early testing the system responded very slowly to user commands, often not performing them at all. This was caused by the webcam feed using all the available bandwidth, causing other network messages to be lost. To overcome this, a 'quota' of command packets must be sent and received before a webcam image is allowed to be sent. This prevents the bandwidth-hungry images from slowing the system, greatly improving its response time.
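The quota mechanism can be sketched as a small gate object. The quota value and class interface are illustrative assumptions:

```cpp
// Bandwidth quota: a webcam frame may only be sent once a set number
// of command packets have been exchanged since the last frame, so
// commands are never starved by the image stream.
class FrameQuota {
public:
    explicit FrameQuota(int packetsPerFrame) : quota_(packetsPerFrame) {}

    // Called for each command packet successfully exchanged.
    void onCommandPacket() { ++count_; }

    // Returns true (and resets the count) once the quota is met.
    bool frameAllowed() {
        if (count_ < quota_) return false;
        count_ = 0;
        return true;
    }

private:
    int quota_;
    int count_ = 0;
};
```

The sending loop would consult `frameAllowed()` before queuing each compressed frame, letting command traffic take priority on the link.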

7. Future Improvements

The solution has met the requirements laid out for this project; however, more could be added to it in the future. This would increase the number of features available to the user as well as the capability of the Remote Vehicle to act independently.

7.1 Main Battery Level Measurement

This feature was investigated during development. However, the NI-USB 6008 only allows a maximum input voltage of 10V [19], while the batteries used to power the vehicle produce 24V. This meant that additional equipment would be needed, which was not available to the project. A future project could incorporate this feature, allowing the user to see the power levels of the Remote Vehicle's on-board batteries.

7.2 Super Resolution and Image Compression

Super Resolution is a means of generating a high-resolution image from a sequence of low-resolution images [20]. This would allow low-resolution images to be sent across the network rather than high-resolution ones, which take more bandwidth. Furthermore, an advanced form of image compression could take place at the Remote Computer, with the image uncompressed at the Control Computer, further decreasing the bandwidth required. However, this would increase the number of calculations required at each computer.

7.3 Audio Feed

When the vehicle is in a remote environment it can often be useful to hear the noises in that environment or to speak to others near the vehicle. To do this, an audio link between the Control Computer and the Remote Computer would need to be established. Sound data would have to be sent across the wireless network, increasing the bandwidth usage, which may decrease overall system performance. Data compression at both ends may overcome this problem, at the cost of CPU time.

7.4 Additional Sensors and Environmental Mapping

A broad range of sensors is available to robots, including laser scanners and motor encoders. These two would allow the vehicle to keep track of its movements and to detect the objects around it in much higher detail and precision than the IR sensors. By combining these two sources of data, a map of the remote environment could be generated and displayed to the user through the already implemented map display.

8. Conclusion

The solution provided by this project was a success, complying with all the user requirements. It allows the user to operate a remote vehicle in a potentially hazardous environment without any risk to themselves. The webcam provides sufficient visual feedback and the accelerometer provides tactile feedback, both aiding navigation. The system allows the webcam to take photos of the area around the vehicle, to be used for data collection or map making. The user-assisting algorithms make subtle changes to the user's commands, allowing the user to still feel in control of the vehicle while the vehicle uses its local data to assist them. The integrated user interface gives a short setup time and quick access to all the available data, while remaining configurable and powerful. The robust, modular design of the code allows future projects to integrate new functionality into the existing code with ease.