Mobile Robotics And Locomotion Computer Science Essay

Mobile robots have the capability to move around in their environment and are not fixed to one physical location (cited from Wikipedia). They have the advantage of consuming less energy and moving faster than other types of locomotion mechanisms. Mobile robots have been successfully designed for both indoor and outdoor environments. Mobile robots incorporating vision systems are the most desirable type for navigation, since a camera can provide rich information about the real world for better navigation. Furthermore, object tracking techniques are utilized in various robot vision applications such as autonomous vehicle navigation, surveillance and many more. A great deal of research and experimentation over the past few decades has been dedicated to solving the problem of tracking a desired target in a chaotic and noisy environment.

Robots with vision-based systems are quite complicated, since they must be equipped with the ability to detect obstacles and avoid them while traversing any environment. They need to extract the desired information from the images captured by the robot's camera, a stream of a location or environment that will certainly contain both stationary and moving obstacles. Obstacle avoidance must be performed in real time, which makes it much more complex. Another issue is that maneuvering a mobile robot in an unknown environment is problematic, since obstacles of all forms and conditions may exist.


For this project I would like to design an autonomous mobile robot equipped with a vision system (camera) to maneuver through desired locations and to locate objects. This requires programming a controller that adjusts the robot's movements according to the received images, using image processing in the C# framework. Implementing this autonomous robot offers several advantages, such as securing homes and maneuvering through hazardous environments.

Problem Statement:

Using remote-controlled cars to traverse an environment and follow an object of interest is tedious and requires human assistance. The presence of a variety of obstacles (stationary and moving) in an environment complicates the process of moving objects from place to place. Designing a robot that autonomously follows an object of interest in a desired environment therefore removes the need for human attention to obstacle avoidance.

Objectives:

The purpose of this project is to build a mobile robot with a vision system (an on-board camera) that is able to find its own path through its surrounding environment, avoid obstacles and, ultimately, autonomously detect an object of interest and follow it. These actions shall be controlled intelligently without the need for human intervention. The on-board camera captures video and feeds it back to the main controller, which performs the image processing and issues commands to control the orientation of the robot.

Product features:

The project will incorporate the following capabilities:

Navigate autonomously to find its path.

On-board camera to transmit a video signal to be processed.

Avoid any obstacle that interferes with its path.

Locate an object and drive towards it.

Locate a specific target as desired.

Benefits:

A robot exploring an environment hostile to humans.

Detects unusual objects or intruders.

Autonomously navigates corridors and obstacles.

Ability to search dangerous environments such as burning buildings or war zones.

Provide more mobility and flexibility than traditional security cameras.

Replace manpower in hazardous tasks such as mine disposal.

Locate objects using the camera.

Operates continuously for a long time before inspection.

Scope:

The robot uses wheeled locomotion. It is intended to move to a desired location and find its own path. While traversing the environment (a room or other small environment), it is able to avoid colliding with obstacles, recognize the desired object (for this project, an object of circular shape, usually a ball) and follow that object. The software used to program the microcontroller is C#, through the Visual Studio 2010 tool. The VSDP image processing library developed by CAIRO is also to be utilized as much as possible.

Literature review:

2.1 Introduction:


Mobile robots are able to perform desired tasks in a given environment intelligently, without continuous guidance, which means they need to be autonomous, ranging from a low to a high level of autonomy. Another issue that needs to be addressed is the robot's ability to cope with surrounding environments in which many challenges or unexpected events may arise, for example in industry and manufacturing. Robots need to be trained to cope with their surroundings, whether on land, in the air, in space or underwater. The key characteristics of a mobile robot are its ability to:

Gain information about the surrounding environment and adapt quickly.

Work independently for an extended period without human intervention.

Comply with safety requirements and avoid self-damage.

Gain new capabilities through training.

Mobile robots with a high degree of autonomy or self-direction are the most desirable in almost all fields.

2.2 Mobile robots navigation:

The demand for robots with vision-based sensors has increased compared with other types of sensors, since vision provides more robustness and maneuverability than other sensor-based systems. Providing vision feedback sensors, i.e. cameras, is the best way to achieve navigation and object isolation, and according to DeSouza (2003) the strides made in vision systems have been significant. Some knowledge of the environment has to be incorporated within the vision system; as stated by Moravec (1980) and Nilson (1984), "It is essential for a vision based navigation system to incorporate within it some knowledge of its environment." Two control methodologies can be used to couple the vision system with the appropriate data: open-loop and closed-loop control.

Open-loop control: open-loop systems perform the two tasks of extracting and processing image data and of positioning and controlling the robot separately. First the images or videos are captured and processed, and only then is the robot adjusted accordingly.

Closed-loop control: video processing and robot control are performed simultaneously. The camera captures images or videos as desired and they are processed using image processing techniques; the resulting data are fed back into the system, which in turn performs the positioning of the robot.
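As a rough illustration of the closed-loop approach, the following C# sketch shows a capture-process-actuate cycle repeated continuously; CaptureFrame, LocateTarget and SendMotorCommand are hypothetical stubs standing in for the real camera, image-processing and motor-driver code, not part of any particular library.

```csharp
// Minimal sketch of a closed-loop vision control cycle (hypothetical stubs).
using System.Threading;

class ClosedLoopController
{
    static void Main()
    {
        while (true)
        {
            int[,] frame = CaptureFrame();          // grab the latest image
            double? offsetX = LocateTarget(frame);  // image-processing step
            if (offsetX.HasValue)
            {
                // The measurement is fed straight back into actuation:
                // steering is corrected on every frame (closed loop) rather
                // than in a separate offline pass (open loop).
                SendMotorCommand(offsetX.Value);
            }
            Thread.Sleep(50);                       // roughly 20 updates per second
        }
    }

    static int[,] CaptureFrame() { return new int[480, 640]; }  // placeholder frame
    static double? LocateTarget(int[,] frame) { return null; }  // placeholder detector
    static void SendMotorCommand(double offsetX) { }            // placeholder actuation
}
```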

2.2.1 Types of navigation:

Vision-based navigation systems for robots follow three different approaches:

Map-Based Navigation: the robot is trained by providing it with a model of the surrounding environment. The model can be as complex as a complete CAD model of the environment or as simple as a graph of interconnections or interrelationships between the elements in the environment (DeSouza et al., 2003).

Map-Building Navigation: the on-board sensors (camera) are used to build a 2D or 3D model of the environment, after which the model is used for navigation. One method is to generate a 3D map composed of the static objects in the environment for mobile robot localization, using human action recognition and artificial landmarks; dynamic objects easily moved by humans are detected and eliminated from the 3D map using recognition of human actions (Sakaguchi et al., 2004).

Map-less Navigation: this approach covers all systems in which navigation is achieved without any prior description of the environment (M.S. Guzel, 2008). Navigation is performed by determining the required robot motions from observation and extraction of relevant information about the elements in the environment. There are three groups of map-less navigation: optical-flow navigation (Santos-Victor et al., 1993; Arena et al., 2007), appearance-based matching (Booiji, 2007) and navigation using object recognition (Kim, 1995; Nevatia, 1998). Other approaches are behaviour-based and vision-based navigation in map-less space (Nakamura, 1996).

2.3 Applications of Vision-based robots:

The project has significant features and can provide benefits to many areas of interest, including home security, rescue missions, the defense field and industry. According to Prof. Johari (2009), "In tomorrow's flexible manufacturing system (FMS) environment, mobile robots will play an important role. They will transport parts from one workstation to others, load and unload parts, remove undesired objects from floors and so on. In addition to indoor mobile robots, there are some other outdoor occasions where mobile robots may take on heavy responsibilities. Examples include construction automation, military missions, handling of harmful materials, hazardous environments, and interplanetary exploration and so on." The areas and environments relevant to the project are as follows:


Home Security: this robot, or concepts based on the same idea, could identify intruders and track their movements; in addition, new features can be added as required to stop intruders or to feed back information so that enforcement or other action is taken according to the situation. An example is the NUVO robot, which is able to maneuver through a house and has a built-in camera to capture images; it can send data over the internet to the owner. The drawback of this kind of robot is its complexity, and it is quite expensive.

Rescue Missions: due to the difficulties and risks when manpower is used in rescue missions, a modified version of this robot could replace people in rescue operations, especially during fires, hurricanes, earthquakes and wars. The robot would be able to maneuver even into small spaces or hazardous situations.

Defense Sector: on the modern battlefield, robots are partially replacing human soldiers in missions that would otherwise risk death or injury. For example, instead of a human soldier disposing of mines on the battlefield, a modified robot with an intelligent sensory system can dispose of the mines easily and with high accuracy. Furthermore, the robot can be used to track the movements and activities of the enemy and report back to a central special unit.

Industry: the first robots were introduced in the manufacturing industry in 1961 in the USA. Since that time there has been significant growth in the number of robots in use and the range of applications in which they are used. This model could be applied to real cars or vehicles so that they can cruise intelligently without crashing and predict unusual events.

Exhibition activities: this version of the robot can be used to compete with other robots in its ability to find its own path, avoid obstacles and ultimately recognize objects through the use of image and video processing techniques.

There have been many applications built using robots with vision systems. Ruben Zhao, Brian Chung and Richard Otap, final-year students, came up with the idea of building a car with a vision system able to track an object of interest and follow it, but their main difficulty was that they were not able to isolate the object in the video and direct the car to follow it.

2.4 Description of Proposed project:

The proposal of this project is to build a mobile robot with a vision system that is able to find its path and to track and follow an object using a webcam, while avoiding any obstacles along the way. Once an object is tracked, a control scheme allows the robot to keep track of the object and follow it as it moves. The board will be mounted on the robot, including the camera, which captures images that are processed through the built-in microcontroller, which in turn adjusts the movement and orientation of the robot according to the path of the object. The robot should be able to autonomously detect an object of interest and follow it.

Vision system:

A web camera is going to be used, mounted on the front of the robot. It is going to be as simple as possible due to cost constraints and the needs of a small robot.

Microcontrollers:

A microcontroller is used to interpret the commands derived from the processed camera images and adjust the robot accordingly. The microcontroller acts as an intermediary, translating the serial data into PWM control signals for coordinating the motors and actuators of the robot.

Motors and Motor drivers:

Many motors and actuators will be used to establish the movements of the robot. They will be controlled by the microcontroller.

Power source:

A suitable power supply will be used to power the electronic components as well as to drive the motors. An external (on-board) power supply is to be used instead of a regulated bench supply, to allow free movement of the robot.

For the software implementation, C# is going to be used, as instructed by the supervisor, to program the robot to perform the desired outcomes. It is used to process the images received from the camera and to communicate with the motors via the microcontroller to produce the locomotion and orientation of the robot.
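As a minimal sketch of this communication path, the code below sends motor commands from the C# program to the microcontroller over a serial link using the standard System.IO.Ports.SerialPort class; the port name ("COM3"), baud rate and the "L&lt;pwm&gt; R&lt;pwm&gt;" text command format are assumptions for illustration, not a fixed protocol.

```csharp
using System.IO.Ports;

// Minimal sketch of the PC-side serial link to the microcontroller.
// Port name, baud rate and command format are illustrative assumptions;
// the firmware is expected to parse each line and set its PWM outputs.
class MotorLink
{
    private readonly SerialPort _port;

    public MotorLink(string portName, int baudRate)
    {
        _port = new SerialPort(portName, baudRate);
        _port.Open();
    }

    // Send left/right PWM duty cycles (0-255) for the microcontroller to apply.
    public void Drive(int leftPwm, int rightPwm)
    {
        _port.WriteLine(string.Format("L{0} R{1}", leftPwm, rightPwm));
    }

    public void Stop()
    {
        Drive(0, 0);
    }
}
```

For example, new MotorLink("COM3", 9600).Drive(120, 120) would command both wheels forward at a moderate speed, with the actual PWM generation left to the microcontroller firmware.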

Methodology:

The implementation and further development of this project involve three phases: research, hardware and software implementation, and testing and verification. Below is a description of each phase.

3.1 Research

During this stage, extensive reading and information gathering have been performed. This is done regularly, but mainly during the weekend due to time constraints. The information gathering is mainly about image processing, since it is not part of our curriculum, and especially about learning image processing in the C# environment; I need to translate my understanding of C++ into the C# environment.

3.2 Hardware and Software Implementation:

3.2.1 Block diagram:

[Block diagram: Vision System (Camera) → Operating System (XP or Linux) → Microcontroller → Motor Driving Circuit → Motors and Actuators, with the Power Supply feeding the drive electronics]

Figure 1: Block diagram of the overall system

3.2.2 Block Descriptions

3.2.2.1 Power Supply

The power source supplying the components of the robot, including the motor driving circuit and motors, will be an external power supply, to allow the mobility of the robot. Although a regulated supply is more efficient in the long term, the robot's movement would be restricted by the length and design of that power source.

3.2.2.2 Vision System

The vision system will be implemented using a camera (webcam). The specification of the camera is intended to be as simple as possible, requiring only a USB port and operating at a minimum frame rate. One camera is mounted on the front of the mobile robot to give the maximum forward view. Based on the captured images, the robot is able to identify the path to follow.

Image processing using C# is going to be used. The type of image processing followed is background subtraction, which has a fast processing time and is insensitive to variations in the apparent size of the object, irrespective of how far it is from the camera. The problem with background subtraction is its sensitivity to small movements of objects that should not be detected, which slows the rate of image processing. Therefore a noise reduction method is to be applied to compensate for any erosion or blurring in the image. In addition, a colour filtering technique is to be used: the colours of the object of interest will be kept and the colours of other objects discarded. Since the object to be detected is a ball, a radius detection algorithm is applied to scan for the radius, even with some erosion of the surface of the circular shape. Once the ball has been located, an estimate of its location is sent to the microcontroller, which issues commands to the motor driver circuit to actuate the motors, move towards the desired object and keep tracking it. If the ball disappears from the view of the camera, its last position is recorded, so that the robot can drive to that point and then look for the object again.
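The following C# sketch illustrates the background subtraction and colour filtering steps described above using only System.Drawing; the difference threshold and the assumed reddish ball colour are placeholder values, and a real implementation (for instance with the VSDP library) would use faster pixel access and tuned thresholds.

```csharp
using System;
using System.Drawing;

// Minimal sketch of background subtraction plus colour filtering.
// Thresholds and the ball colour test are illustrative assumptions.
class BallSegmenter
{
    const int DiffThreshold = 40;   // assumed minimum per-pixel change to count as foreground

    // Returns a binary mask that is true where the pixel differs from the
    // stored background AND matches the assumed ball colour.
    public static bool[,] Segment(Bitmap background, Bitmap current)
    {
        bool[,] mask = new bool[current.Width, current.Height];
        for (int y = 0; y < current.Height; y++)
        {
            for (int x = 0; x < current.Width; x++)
            {
                Color b = background.GetPixel(x, y);
                Color c = current.GetPixel(x, y);

                int diff = Math.Abs(c.R - b.R) + Math.Abs(c.G - b.G) + Math.Abs(c.B - b.B);
                bool moved = diff > DiffThreshold;                      // background subtraction
                bool ballColour = c.R > 150 && c.G < 100 && c.B < 100;  // colour filter (assumed reddish ball)

                mask[x, y] = moved && ballColour;
            }
        }
        return mask;
    }
}
```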

The flowchart diagram of the image processing technique is shown in figure 2.

[Flowchart: Start → Read incoming image → Background subtraction → Filter out the colour of the desired object → Identify circular objects → Scan for the centroid → Desired object found? (No: read the next incoming image) → Determine the amount the motors should actuate → Inform the microcontroller → Save the image as a template]

Figure 2: Image processing technique flowchart

The details of the image processing steps in the figure above are elaborated below:

First, the incoming image from the camera is read and sent for processing.

The background of the incoming image is subtracted and a grey image of the background is produced for further processing.

After the background has been subtracted, the pixels whose colours match the colours of the desired object are highlighted and all other colours are discarded. This speeds up the processing rate.

Then the centroid of the object is computed to determine whether it is a circular shape; since the desired object to be identified is a ball, a circular property (the circle radius) needs to be checked. The radius is calculated and the image is processed even with some distortion in its outer shape (a sketch of this check is given after this list).

If the desired object is detected, the process continues; otherwise the loop repeats with the next incoming image.

After the position of the object in the real world is obtained, the microcontroller sends signals to the motors to move in proportion to the position of the object.

The template image of the detected object is saved and updated as an image containing the circular shape with its centroid highlighted. This template is then matched against subsequently captured objects.
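As referenced in the centroid step above, here is a minimal C# sketch of the centroid and radius check on the binary mask produced by the segmentation step; the pixel-count cut-off and the circularity tolerance are assumed values, and a Hough-style circle detector could be substituted for greater robustness.

```csharp
using System;

// Minimal sketch of the centroid and radius check on a binary mask.
// Cut-offs and tolerances are illustrative assumptions.
class CircleCheck
{
    public static bool LooksLikeBall(bool[,] mask, out double cx, out double cy, out double radius)
    {
        long sumX = 0, sumY = 0, count = 0;
        int w = mask.GetLength(0), h = mask.GetLength(1);

        // Centroid of all foreground pixels.
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                if (mask[x, y]) { sumX += x; sumY += y; count++; }

        cx = cy = radius = 0;
        if (count < 50) return false;            // too few pixels to be the ball

        cx = (double)sumX / count;
        cy = (double)sumY / count;

        // For a filled circle, area = pi * r^2, so estimate r from the pixel count.
        radius = Math.Sqrt(count / Math.PI);

        // Mean distance of foreground pixels from the centroid; for an ideal
        // filled disc this equals (2/3) * r, so allow some tolerance so a
        // partly eroded or distorted ball is still accepted.
        double meanDist = 0;
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                if (mask[x, y]) meanDist += Math.Sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
        meanDist /= count;

        return Math.Abs(meanDist - (2.0 / 3.0) * radius) < 0.2 * radius;
    }
}
```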

3.2.2.3 Operating System

Depending on the capability of the microcontroller board, an operating system (XP or Linux) is to be used.

It is intended to handle the functions of receiving image data from the camera, processing the data, and instructing the microcontroller to communicate with the motors to position the robot and produce locomotion.

3.2.2.4 Microcontroller

It will issue the commands to orient and position the robot according to the received images.

It will act as an intermediary device, translating the incoming signals into real-world locomotion of the robot.

3.2.2.5 Motor Driving Circuits

It is used to drive and rotate the wheel motors of the robot according to the commands received from the microcontroller.

3.2.2.6 Motors

Orientation and positioning of the mobile robot are achieved via the motors' movements, according to the microcontroller's commands.

3.3 Testing and Verification:

First the project will be divided into subprojects to ease the process of testing every element before testing the whole project.

Camera navigation: the camera will be tested to ensure that it can capture images in a real-world environment and feed these images for processing.

Object recognition: this is the ultimate goal of the project. The developed software should be able to locate the desired object from the incoming image. The testing will be in two stages: object isolation and tracking. The software will be tested to ensure its ability to isolate the object of interest through the use of radius-property identification; the incoming image will be matched against the image with the background subtracted. After isolating the object and identifying its location in the real-world coordinate system, the robot is tested to see whether it can follow that object.

After all the subsystems have been individually tested and verified, the whole project will be tested to ensure the coordination of all the parts of the robot to perform the desired operation with the expected outcome.

4.0 RESULTS AND DISCUSSION

Expected Outcome:

After completing the project, the mobile robot is expected to be able to isolate the desired object using background subtraction when it is at a relatively close distance. It is then expected to track the object of interest and remember its last location if the object disappears from the robot's view. While tracking the object of interest, the robot should be able to avoid obstacles and navigate corridors in the environment.