Robotics was born as a science to develop machines for factories, capable of accomplishing boring, repetitive and simple operations as human substitutes, in order to produce goods more cheaply.
In the last two decades, much research and development has been devoted to robots that work in hazardous and unreachable environments in place of humans. In these cases it is very useful to teleoperate the robot from a remote site, even from the other side of the world over an internet connection. For this reason a new branch of robotics is gaining importance: telerobotics.
Telerobotics is a branch of robotics that grew out of the need to perform operations in unknown or hazardous environments, where it replaces a human being to reduce operation cost and time and to avoid loss of life. Telerobotics is based on the idea of controlling a robot to support operation at a distance. The robot may be in another room or another country, or may be on a very different scale from the operator; for example, a surgeon may use micro-manipulator technology to conduct surgery on a microscopic level. The goal of teleoperation is to allow an operator to interact with and operate a telerobot in a remote environment via a communication channel. The human operator is responsible for high-level control such as planning, perception and decision making at the operator's site, while the robot performs low-level tasks such as navigation and localization at the remote site.
The visual system and the control system are two essential components of a telerobotic system. The visual system presents visual feedback about the environment to the operator through the human system interface (HSI), and the control system allows the operator to navigate the robot at the remote site over an internet or Wi-Fi connection. One of the critical problems of a telerobotic stereo-vision system is communication delay. Because of this problem, a teleoperator sometimes has to adopt a move-and-wait strategy, which leads to teleoperation instability. Various techniques such as predictive feedback, supervisory control and Augmented Reality (AR) have been proposed to solve these problems.
Telepresence is one of the key factors that enhance the performance of a telerobotic system during teleoperation. Telepresence implies a feeling of presence at the remote site, achieved by displaying information about the remote environment to the operator in a natural manner. A good degree of telepresence assures the feasibility of the required manipulation task.
The next section introduces successful applications of telerobotics in different fields, e.g. industrial, medical and service telerobots.
As discussed earlier, telerobots can assist human beings in hazardous environments where it is not possible for a human to work, including space, volcanoes and underwater applications. Such telerobotic systems allow scientists or users to work from a safe place and to monitor a robot through a user interface, through which they can see the environment and make further decisions.
Figure 1 ROBOVOLC system, user interface design for the navigation and for the manipulator arm components
ROBOVOLC is a robot for volcano exploration, able to approach an active volcano and perform several kinds of operations while keeping human operators at a safe distance. It was built mainly to minimize risks to volcanologists and to improve the chances of understanding and studying conditions during a volcanic eruption. Two separate PCs are used to assist the operation: the first runs a user interface through which volcanologists can drive and control the robot remotely on the volcano from the base station. The robot can be controlled with two joysticks and a touch screen/mouse. The second PC controls a manipulator that collects rock samples using a three-fingered gripper, which has a force sensor to measure and control grasping strength, and collects gases from the volcano during eruption for scientific measurements. The communication between the operator and the remote environment is accomplished by a high-power wireless LAN.
Research and development of medical robotic systems has grown predominantly within telerobotics during the last two decades, to assist surgeons in various surgical specialties. The ZEUS and Da Vinci surgical systems are the best-known examples of medical robots. As precision plays a key role in surgery, the design requirements for the teleoperation controllers of such robots differ significantly from those of other telerobotic applications. Medical robots are equipped with computer-integrated technology and comprise programming languages, advanced sensors and controllers for teleoperation. A surgeon can see and work inside the body through a tiny hole in the patient, giving instructions to the robot from the surgeon's console, where 3D images from the stereo cameras are displayed. Medical robots still need further development because of limitations such as poor judgement, limited dexterity and hand-eye coordination, high cost, technology in flux, difficulty of construction and debugging, and restriction to relatively simple procedures.
Figure 1 Da Vinci surgical system and the surgeon sit at a viewfinder and remotely operate robot arms during surgery
The Da Vinci robotic surgical system from Intuitive Surgical Inc. performs surgical tasks and provides visualization during endoscopic surgery. It consists of an ergonomically designed surgeon's console, a patient-side cart with four interactive robotic arms (one to control the camera and three to manipulate instruments) operated by the surgeon, and a high-performance 3D vision system that provides a true stereoscopic picture at the surgeon's console. It is used to perform minimally invasive heart bypass, surgery for prostate cancer, hysterectomy and mitral valve repair, and is installed in more than 800 hospitals in the Americas and Europe. Remote surgery is essentially advanced telecommuting for surgeons, where the physical distance between the surgeon and the patient does not matter. The use of Wi-Fi technology in medical robots allows a surgeon to communicate and examine a patient visually from anywhere in the world.
Undersea robots have gained much importance and form a growing body of telerobotic devices, since it is dangerous for human beings to work at great depths because of high pressures and the possibility of harmful chemicals. Remotely operated vehicles (ROVs) allow scientists, oceanologists and companies to monitor water quality in lakes and reservoirs using robotic fish, to measure the Greenland ice sheet using drones, to study the underwater world and its creatures, to explore undersea volcanoes, and to activate the Deepwater Horizon blowout preventer (BOP) in the Gulf of Mexico. In the coming century, undersea robotics will play an essential role in the exploration and production of the mineral resources that lie in the seas. Operators can communicate with AUVs (Autonomous Undersea Vehicles) in several different ways, including low-frequency acoustics for long distances and high-frequency coded acoustics for medium distances. Recently the petroleum company BP used undersea robots to close one of three rigs that was spilling oil into the Gulf of Mexico, threatening an economic and ecological disaster on tourist beaches, wildlife refuges and fishing grounds in Louisiana, Mississippi, Alabama and Florida. During this operation, undersea robots with powerful stereo cameras and different types of sensors manoeuvred a 40 ft tall giant metal box, weighing nearly 100 tonnes, into position on the sea floor almost a mile down in an attempt to contain oil gushing from leaks in a blown-out well. Working at such depths remains problematic because of darkness and varying pressures and temperatures.
Romeo [ ] was designed as an operational test-bed. Its aim is to support research on intelligent vehicles in a real subsea environment and to develop advanced methodologies for marine science.
Figure 1 Romeo undersea robot
Romeo is intended as a prototype demonstrator for robotics, biological, and geological research.
Application Scenario of 3MORDUC Mobile Robot Teleoperation
The 3MORDUC mobile robotic system is used for navigation and localization in unknown environments via teleoperation. This telerobotic system comprises two different sites. The operator environment is the location where the user teleoperates the mobile robot; this can be anywhere in the world, using client software on a laptop. The remote environment is the location where the 3MORDUC mobile robot operates, normally the Robotics Laboratory at the Department of Engineering Electrical Electronics and Systems (DIEES), University of Catania, Italy. The two environments are linked by a communication channel that transmits commands from the operator to the remote robot and sends information about the remote task back to the operator. The operator site consists of the human system interface (HSI), used to control the remote robot on the basis of visualization information received from the remote site on each request. The remote site consists of a robot with different onboard sensors, such as a stereo camera, a laser sensor, sonar sensors, encoders and bumpers, that take part in the teleoperation task.
Figure 1 3MORDUC Application overview
The human system interface (HSI) plays a key role in the 3MORDUC telerobotic system. It provides input devices (keyboard) to generate operator commands for a given task, and display devices (LCD screen) to monitor the interaction between the remote robot and its environment through visual feedback. The command processor sends the commands given by the operator through the communication channel to perform operations on the robot. The multi-sensor feedback consists of 3D visual stereo images, laser data and odometric data, which are generated by the feedback information processor and shown on the feedback device. The purpose of the feedback device is to stimulate the operator's senses so as to convey the remote robot's status. Furthermore, augmented reality (AR) provides a better way of presenting the remote environment, using a graphical display of obstacles and their distances with green (far) and red (near) lines, so the user can drive the robot while avoiding collisions during teleoperation.
When the operator's commands reach the remote site, the task processor converts them into actions: based on the request, the robotic system performs tasks such as moving forward and back or turning right and left. Before making any movement, the robotic system first scans and captures its environment using the onboard sensors. The information captured by the onboard sensors is used to obtain data about the remote site and send it back to the operator site through the communication channel. This gives the operator the possibility to see and form an idea of the remote site in order to make further decisions. The sensor information processor processes the required sensor data according to the command given by the operator and transmits them to the operator site.
The goal of this project work is to design and implement a framework called AR4MORDUC for sensor data acquisition for the 3MORDUC mobile robot teleguide. The framework should be able to collect the mobile robot's onboard sensor data during a real teleguide session, and to manipulate and replay the collected sensor data in offline mode. Sensor data acquisition is important and necessary both to assist robot teleoperation and to provide realistic training data. Furthermore, the acquired sensor data are very important for determining quantitative parameters to assess system performance.
Chapter 2 describes the state of the art and the background knowledge of the project. It explains the structure, onboard sensor specifications and kinematic configuration of the 3MORDUC mobile robot, together with mobile robot teleguide, remote robot teleoperation and odometry. Some previous works carried out on this project are also presented to give the reader a better understanding.
Chapter 3 describes the issues involved in designing a framework for sensor data management. The main issues are saving the sensor data coming from the server into log files during online teleoperation, and replaying all logged sensor data to the user in offline mode. Since the bumpers do not work to detect collisions, the timestamped minimum obstacle distance is used to record collisions instead.
Chapter 4 covers the implementation of the framework for sensor data acquisition in online mode, and the implementation of the offline mode that replays the logged sensor data collected during pilot testing.
Chapter 5 presents the results of recording appropriate data on 3MORDUC movements using Microsoft Telnet as a user agent. It also presents the results of the pilot test carried out at the University of Catania using the newly implemented AR4MORDUC framework to determine quantitative parameters through which the performance of the 3MORDUC robot system can be assessed. Finally, it describes some areas of this project that require further work and research.
STATE OF THE ART AND BACKGROUND KNOWLEDGE
3MORDUC Mobile Robot
The 3MO.R.D.U.C (3rd version of the MObile Robot DIEES University of Catania), shown in Figure 2, is a wheeled mobile robot with a differential drive kinematic configuration. The mobile robot is used for navigation and localization in indoor environments, through which an operator can drive the robot in all directions while avoiding collisions with obstacles. The robot consists of three shelves linked together. On the bottom shelf, two lead batteries (12 V/18 Ah) provide the power supply; the robot's autonomy is about 30-40 minutes of continuous operation. The onboard electronic rack controls each module of the robot, such as the motion, sensor and communication modules. Several onboard sensors monitor the workspace and the robot state. A belt of bumpers (16 switches) around the entire perimeter is mounted on the robot base, just above wheel level. These sensors are used to recognize collisions and reduce damage when they occur. The two motor axes are equipped with incremental encoders, each with a resolution of 500 pulses per turn. These sensors are used to calculate the heading and position of the robot through the kinematic model known as odometry.
Figure 2 3MORDUC Mobile Robot
The middle shelf carries a laser measurement system (LMS) and sonar sensors to detect obstacles in the workspace. The LMS operates by measuring the flight time of a pulsed laser light beam reflected by an obstacle. An internal rotating mirror deflects the transmitted pulsed laser light so that a scan is made. The time between transmission and reception of the light pulse is directly proportional to the distance between the scanner and the object. The sonar sensors measure the distance from an obstacle using the flight time of an ultrasonic signal produced by a vibrating piezoelectric transducer. The upper shelf carries an onboard computer and two stereo vision cameras mounted on a rigid support, each with a resolution of 1.3 megapixels, to present the real environment to the user. They are equipped with fixed-focus 6.0 mm lenses, and the distance between the cameras can be adjusted in the range 5-20 cm. The CMOS sensors of these cameras have good noise immunity and sensitivity. Moreover, it is possible to adjust all the image parameters, e.g. exposure, gain, frame rate and resolution.
3MORDUC Sensors: Specifications and Limitations
The 3MORDUC robotic platform consists of a stereo vision camera, a laser scanner, two incremental encoders on two wheels, eight sonar sensors and a belt of bumpers (16 switches).
STH-MDCS2-VAR-C Stereo Camera:
To provide visual information about the environment and obstacles, a stereo vision camera is employed. The Videre Design STH-MDCS2-VAR-C is a high-quality digital stereo camera with an IEEE 1394 digital interface, mounted on top of the 3MORDUC mobile robot looking forward. It uses two camera modules, each with a 1.3 megapixel progressive-scan CMOS imager, fitted in a rigid body. It allows a variable baseline in the range 5-20 cm (monochrome) or 9-20 cm (color). Figure 2 shows the physical layout of the Videre Design STH-MDCS2-VAR-C with its two stereo cameras.
Figure 2 Videre Design STH-MDCS2-VAR-C with two stereo cameras
The two stereo cameras are synchronized to capture two distinct images; when these images are correlated, they provide depth information about the real environment. Images are generally stored in 8-bit grayscale format and are transferred to the host PC through the IEEE 1394 digital interface. Each camera can send images at frame rates up to 60 Hz at 640x480 and 15 Hz at 1280x960. However, because the IEEE 1394 bus is restricted to 32 MB/s when transferring video data, there is not enough bandwidth to accommodate two video streams at the highest rate. Some technical specifications of the Videre Design STH-MDCS2-VAR-C stereo camera are presented below.
Micron MT9M001 Megapixel Sensors
Maximum resolution: 1280 x 960 pixels
High sensitivity, low noise
Low pixel cross-talk
Fully synchronized stereo - left and right pixels are interleaved in the video stream
Monochrome or Bayer Color
High frame rates - 30 Hz for 640x480, 7.5 Hz for 1280x960
On-chip decimation - full frame 640x480 and 320x240 modes
Electronic zoom mode - centre 640x480 sub window
Extensive control of video parameters
Automatic or manual control of exposure and gain
Automatic control of black level
Manual control of color balance
50 Hz mode - reduces indoor light interference in countries with 50 Hz electrical line frequency
Stereo calibration information can be stored on the device, and downloaded automatically to the PC
IEEE 1394 interface to standard PC hardware - carries power and commands to device, data to PC
Standard C/CS mount lenses, interchangeable - focal lengths from 3.5 mm to 50 mm
Variable baseline: 5-20 cm (monochrome), 9-20 cm (color)
Anodized aluminium alloy frame for rigid camera mounting
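The bandwidth limit mentioned above can be checked with a back-of-the-envelope calculation. The sketch below assumes raw, uncompressed 8-bit grayscale frames (1 byte per pixel) and decimal megabytes; it is an illustration of why two full-resolution streams cannot share the bus, not a statement about the camera's actual transfer format:

```python
# Rough data-rate check for two uncompressed 8-bit streams against the
# 32 MB/s IEEE 1394 budget stated in the text (1 MB = 10**6 bytes assumed).
BYTES_PER_PIXEL = 1   # 8-bit grayscale
BUS_LIMIT_MB_S = 32

def stream_rate_mb_s(width, height, fps, cameras=2):
    """Raw video data rate in MB/s for the given resolution and frame rate."""
    return width * height * BYTES_PER_PIXEL * fps * cameras / 1e6

full_res = stream_rate_mb_s(1280, 960, 15)   # two full-resolution streams
vga = stream_rate_mb_s(640, 480, 30)         # two VGA streams

print(f"2 x 1280x960 @ 15 Hz: {full_res:.1f} MB/s (bus limit {BUS_LIMIT_MB_S})")
print(f"2 x 640x480  @ 30 Hz: {vga:.1f} MB/s")
```

At roughly 36.9 MB/s, two 1280x960 streams at 15 Hz already exceed the 32 MB/s budget, which is consistent with the limitation described in the text.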
SICK LMS Scanner:
The SICK LMS291 is a 2D laser scanner located on the middle shelf of the 3MORDUC system to detect obstacles in the environment. The LMS291 laser sensor works on the time-of-flight principle: a single light pulse is transmitted and reflected by an object surface, and the time between transmission and reception of the pulse is directly proportional to the distance between the scanner and the object. To be more precise, a counter starts as soon as the light pulse is transmitted and stops when the signal is received; the counter value is used to calculate the distance between the laser system and the target. Figure 2 shows the LMS291 laser system used in the experiments and its internal rotating mirror.
Figure 2 SICK LMS291 scanner system and the internal rotating mirror
The shape of the object is determined from the sequence of received pulses. The LMS291 uses an RS 232/422 data interface to send the measured data to hosts for further evaluation.
If the distance between the laser scanner and the obstacle is denoted by L and the speed of light by c, then the time of flight is given by

t = 2L / c
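The relation between round-trip time and range can be sketched numerically. In the snippet below, the 533 ns round-trip time is an illustrative value chosen to correspond roughly to the scanner's stated 80 m maximum range:

```python
# Time-of-flight to range: from t = 2L/c it follows that L = c*t/2.
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(t_seconds):
    """Distance to the reflecting object from a round-trip pulse time."""
    return C * t_seconds / 2

# A pulse returning after ~533 ns corresponds to roughly 80 m,
# the LMS291's maximum scanning range.
print(f"{range_from_tof(533e-9):.1f} m")
```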
Some of the technical specifications of SICK LMS291 are presented.
Maximum scanning range - 80 m (262.5 ft)
Field of vision - 180°
Angular resolution - 0.5°/1° (selectable)
Response time - 26 ms/13 ms
Measurement resolution - 10 mm (0.39 in)
Data interface - RS 232/RS 422 (configurable)
Data transfer rate - 9.6/19.2/38.4/500 kBd
LMS 291 can see an object with 10% reflectivity out to 30m
The LMS291 is also suitable for outdoor applications, as it supports fog correction, and it has smaller measurement errors than the SICK LMS200. In the current application the LMS291 is used with an angular resolution of 1°, yielding 181 values over an angular range of 180°. The figure below illustrates the 180-degree and 100-degree fields of vision. Each scan runs anti-clockwise (from right to left).
Figure 2 100 degree field of vision and 180 degree field of vision
A digital optical encoder (see the figure below) is a device attached to a rotating object (such as a wheel or motor) that measures rotation by converting motion into a sequence of digital pulses.
Figure 2 A rotary optical encoder 
Encoders can be either linear or rotary, but the most common type is the rotary encoder. Rotary encoders exist in two forms: absolute encoders, where a unique digital word corresponds to each rotational position of the shaft, and incremental encoders, which produce digital pulses as the shaft rotates.
Encoders are used to determine a robot's change in position relative to some known position. For example, if a robot is travelling in a straight line and the diameter of its wheels is known, then by counting the number of wheel revolutions the robot can determine how far it has travelled. Mobile robots often have shaft encoders attached to their drive wheels which emit a fixed number of pulses per revolution; by counting these pulses, the processor can estimate the distance travelled. On the 3MORDUC there are two incremental encoders with a resolution of 500 pulses per turn.
Knowing the wheel diameter and the encoder resolution (number of counts per 360-degree turn, i.e. counts per revolution), a robot can estimate how far it has travelled from its starting position using the following formula.
Distance travelled per encoder count = Wheel circumference / Counts per revolution
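The formula above can be sketched as a small function. The 500 counts per revolution comes from the text; the 0.15 m wheel diameter is an assumed example value, not the 3MORDUC's actual wheel size:

```python
import math

# Example values: encoder resolution from the text, wheel diameter assumed.
WHEEL_DIAMETER_M = 0.15    # assumed, for illustration only
COUNTS_PER_REV = 500       # encoder resolution (pulses per turn)

def distance_per_count():
    """Distance travelled per encoder count = circumference / counts per rev."""
    return math.pi * WHEEL_DIAMETER_M / COUNTS_PER_REV

def distance_travelled(counts):
    """Straight-line distance estimated from accumulated encoder counts."""
    return counts * distance_per_count()

# 1000 counts = two full wheel turns, i.e. two circumferences of travel.
print(f"{distance_travelled(1000):.3f} m")
```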
A sonar sensor (see the figure below) works on the time-of-flight principle. It transmits ultrasonic sound waves and waits until it receives the reflected waves. To measure the distance from an obstacle, the time from transmission of a sound wave to its reception is measured and converted into a range using the speed of sound. The sonar has a cone-shaped field of view, so the sensitive area increases proportionally with the distance from the obstacle. The sonar used on the robot is the SRF08, connected to the I2C bus. Its maximum range is 6 m and its minimum range is 3 cm.
Figure 2 SRF08 sonar sensor and its field of view
To avoid false obstacles it is necessary to wait until the transmitted waves die out, so an inhibition time is required. The introduction of this delay makes it impossible to read very short distances.
A belt of bumpers is mounted around the mobile robot, just above wheel level. The bumpers allow the robot to detect obstacles and are used to reduce damage in case of collisions in the environment. The bumpers are simply switches that are pushed and send impulses whenever a collision happens. They are connected to the same bus as the sonar sensors (I2C).
3MORDUC Kinematic Configuration
The 3MORDUC robot is characterized by a mobile platform with a differential drive kinematic configuration. The mechanical structure of the differential drive platform is composed of two active wheels and one passive wheel (castor wheel) that gives stability to the system.
Figure 2 3MORDUC bottom part and its differential drive configuration graphically
The odometry device consists of two incremental encoders mounted on the mobile robot's drive wheels, which are spaced by a distance b called the wheelbase of the robot. The robot heading, i.e. the direction of translation θ, is perpendicular to the axis that links the centres of the two wheels. When the two wheels move with equal and opposite speeds, the robot turns in place and changes its heading. The kinematic system is a non-holonomic differential drive mobile robot, most commonly used for indoor applications. The robot can always reach D (a known final position) starting from S through a series of translations and rotations.
It is possible to write the linear velocity v and the angular velocity ω as follows:

v = (v_R + v_L) / 2
ω = (v_R − v_L) / b

where v_L and v_R are the velocities of the left and right wheels of the robot, b is the robot wheelbase, and v and ω are the linear and angular velocity.

The following equations describe the kinematic model of the mobile platform in differential drive configuration for a continuous-time system [ ]:

ẋ = v cos θ
ẏ = v sin θ
θ̇ = ω

where x and y are the absolute coordinates of the robot and θ is the robot heading.

In discrete time, the robot kinematics can be described with the following equations:

x_{k+1} = x_k + Δs_k cos(θ_k + Δθ_k/2)
y_{k+1} = y_k + Δs_k sin(θ_k + Δθ_k/2)
θ_{k+1} = θ_k + Δθ_k

The estimated displacements are affected by measurement errors, and their accuracy depends on N_L and N_R, the impulse counts of the left and right encoders. It is possible to write

Δs_{L,R} = 2π r N_{L,R} / (n_e n_g)

where n_g represents the reduction ratio of the gearbox, n_e represents the encoder resolution (pulses per turn) and r is the wheel radius. The translation of the robot centre is then Δs_k = (Δs_R + Δs_L)/2 and, for the angular increment,

Δθ_k = (Δs_R − Δs_L) / b
The wheel encoders can be placed either on the wheel axis or before gearbox (on the motor axis). The advantage of the first method is that it can measure exactly the wheel rotation but with a low resolution. The second method has a higher resolution but does not allow measuring the effect of the errors on gearboxes.
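The discrete-time odometry update for a differential drive platform can be sketched as a short function. This is a minimal illustration of the standard midpoint update, assuming the wheel displacements have already been computed from the encoder counts:

```python
import math

def odometry_step(x, y, theta, ds_left, ds_right, wheelbase):
    """One discrete odometry update for a differential-drive robot.

    ds_left / ds_right are the wheel displacements (m) obtained from the
    encoder counts; wheelbase is the distance b between the two wheels.
    """
    ds = (ds_right + ds_left) / 2.0            # translation of the robot centre
    dtheta = (ds_right - ds_left) / wheelbase  # heading change
    # Midpoint approximation: the heading change is spread over the step.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

# Equal and opposite wheel displacements: the robot turns in place,
# changing only its heading, as described in the text.
x, y, th = odometry_step(0.0, 0.0, 0.0, -0.01, 0.01, wheelbase=0.4)
print(x, y, th)
```

With equal and opposite displacements the translation term is zero, so the position stays fixed while the heading changes, matching the turn-in-place behaviour described above.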
Mobile Robot Teleguide
Mobile robot teleguide is a user interface (UI) through which an operator can see the environment and drive a telerobot at a distance. Telerobots may be driven on the surface of Mars, near lava on Mount Etna, at the bottom of the Marianas Trench or in the higher regions of the Earth's atmosphere. Robot teleguide enables an operator to see through the eyes of a robot. Sensor data such as stereo images, laser information, bumper and sonar data are presented to the operator so that he can understand the surrounding environment and decide on the next action to drive the robot. The robot is fully mobile: it can move forward and back and turn right and left. The operator can control the robot using input devices such as a keyboard, joystick or mouse.
Functionality of keyboard:
Key   Role (online mode)            Role (offline mode)
A     turn left                     go to previous trial
W     move forward                  move forward
D     turn right                    go to next trial
S     move back                     move back
Z     show line and transparency    show line and transparency
X     show transparency             show transparency
C     show line                     show line
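The key bindings above can be sketched as a simple dispatch table. The command names below are illustrative placeholders, not the actual strings of the 3MORDUC protocol:

```python
# Hypothetical dispatch sketch for the keyboard bindings above.
# Command names are illustrative, not the real 3MORDUC protocol strings.
ONLINE_KEYS = {
    "A": "turn_left", "W": "move_forward", "D": "turn_right", "S": "move_back",
    "Z": "show_line_and_transparency", "X": "show_transparency", "C": "show_line",
}
# Offline mode reuses the online bindings, but A/D step between logged trials.
OFFLINE_KEYS = dict(ONLINE_KEYS, A="previous_trial", D="next_trial")

def command_for(key, offline=False):
    """Return the command bound to a key, or None for an unbound key."""
    table = OFFLINE_KEYS if offline else ONLINE_KEYS
    return table.get(key.upper())

print(command_for("a"))                # online: turn left
print(command_for("a", offline=True))  # offline: previous trial
```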
Remote Robot Teleoperation
The communication between the two computers (master and slave) can be implemented using the Transmission Control Protocol/Internet Protocol (TCP/IP) or the User Datagram Protocol (UDP). In general, time delay problems occur when teleoperation is performed over long distances, as in undersea or space operations, because information must be transmitted between two distant sites. The time delay can lead to instability in teleoperation. In general, networked intelligent robots have four kinds of control architecture.
One to One: most of the robotic systems provide one user control for one robot. The robotic system permits the user to control a robot remotely from long distances and it provides real time sensors feedback to the user.
Example: 3MORDUC robotic system and Da Vinci surgical robotic system.
One to Many: networked intelligent robot systems that permit one operator to control multiple robots. The operator monitors the different sensors and submits control inputs based on the sensor information. The Server Collector receives the data sequence and sends the right command to each robot.
Example: service robots are good example as a user can control many robots at a same time.
Many to One: many people work together to control one robot, because of the number of tasks and the different technologies implemented in it. All control inputs must either be combined into a single control signal for the robot or be arbitrated through a user hierarchy.
Example: Robovolc, space and undersea robots are good examples for this type.
Many to Many: In this case several users can control several robots.
Introduction to Odometry
In wheel based mobile robotic applications, two position estimation methods namely absolute and relative positioning are used.
Absolute positioning: Determining the absolute position using landmarks or satellite based signals (for ex. GPS).
Relative positioning: Estimating the position and orientation of mobile robot using information provided by various on board sensors like wheel encoders, accelerometers, gyroscopes etc.
Relative positioning is usually based on odometry, i.e. estimating a robot's relative position and orientation from measurements of wheel revolutions and/or steering angles. In the 3M.O.R.D.U.C wheeled mobile robot, odometry is implemented by means of optical encoders that monitor the revolutions of the robot's wheels. The encoder data are then used to compute the robot's current position from a known starting position. Knowing the circumference of the wheels and their configuration therefore makes it possible to estimate the robot's position.
This method is sensitive to errors, which fall into two types: systematic errors and non-systematic errors.

Systematic errors:
Unequal diameter of wheels
Average of both wheel diameters differs from nominal diameter
Misalignment of wheels
Uncertainty about the effective wheelbase
Limited encoder resolution and sampling rate.
Non- systematic errors:
Travel over bumpy surfaces
Travel over unexpected objects in the environment
Wheel-slippage due to slippery floors, over-acceleration and fast turning (causes skidding).
1. Collision number:
The collision number is the number of collisions recorded during a trial. It provides information about obstacle detection and avoidance, and plays a vital role in measuring teleguide efficiency.
2. Completion time:
Completion time is the time needed to reach the destination in a navigation trial; it gives information about the user's comprehension of the environment and the user's confidence. The completion time parameter is used in evaluating the collision rate.
3. Collision rate:
The collision rate gives information about how well the user drove during pilot testing. It can be expressed as

collision rate = collision number / completion time
4. Mean obstacle distance:
The mean obstacle distance is the mean of the minimum distances to obstacles along the path followed during a trial. It is an important parameter for knowing how comfortable the user feels during the trial.
5. Mean speed:
The mean speed of each trial reflects the confidence of the user during pilot testing. In general, a slower mean speed results from a longer time spent driving through the environment.
6. Path length:
The path length gives information about the robot's journey, i.e. the distance travelled during a trial. This parameter can be calculated from the odometry data coming from the incremental encoders mounted on the two wheels of the robot.
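The parameters above can be computed from a logged trial. The sketch below is a minimal illustration; the record structure (a list of odometry positions plus a collision count and completion time) is an assumption, not the actual AR4MORDUC log format:

```python
import math

def trial_metrics(positions, collisions, completion_time_s):
    """Quantitative parameters for one trial.

    positions: list of (x, y) odometry samples along the path.
    collisions: collision number recorded during the trial.
    completion_time_s: time to reach the destination, in seconds.
    """
    # Path length: sum of distances between consecutive odometry samples.
    path_length = sum(
        math.dist(positions[i], positions[i + 1])
        for i in range(len(positions) - 1)
    )
    return {
        "collision_number": collisions,
        "completion_time": completion_time_s,
        "collision_rate": collisions / completion_time_s,
        "path_length": path_length,
        "mean_speed": path_length / completion_time_s,
    }

# Illustrative trial: a 3-4-5 style path of length 7 m driven in 10 s.
m = trial_metrics([(0, 0), (3, 0), (3, 4)], collisions=2, completion_time_s=10.0)
print(m)
```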
Previous works on 3MORDUC
This section describes previous work carried out on the 3MORDUC by students of the University of Catania, Italy, and Aalborg University, Denmark.
Augmented Reality based Teleguide using Video and Laser Data
In , Davide De Tommaso and Marco Macaluso developed an augmented reality user interface for robot teleguide by fusing laser data and video data coming from the onboard sensors of the 3MORDUC mobile robot. The user interface allows the operator to send requests to the server from anywhere in the world, driving the mobile robot over an internet connection. Augmented reality gives a better depth impression of the environment while driving the robot in an unknown environment. The user interface also offers laser alignment and edge detection options to perceive the environment better.
Mobile Robot Teleguide based on Laser Data
In , the differences between laser-based teleguide and video-based teleguide were analyzed through usability evaluation, with reference to previous work done on the 3MORDUC. This included estimating quantitative parameters from log files and qualitative parameters from questionnaires during pilot testing.
Mobile Robot Teleguide based on Video Data
In , the advantages of stereoscopic vision in mobile robot teleoperation were investigated in two different teleguide modes, laser-based and video-based. The authors fixed some bugs and made changes to the telecommunication protocol of the previous application. They conducted a pilot test with different environments (panorama, 3D laptop and flat wall) in mono and stereo visualization, analyzing quantitative parameters from the data logged during the tasks and qualitative parameters obtained from questionnaires.
Core Idea and Argumentation
The previous works show that mobile robot teleguide was based on video data and laser sensor information, used separately or together, for teleguide purposes only. The main advantage of logging the sensors' data about the environment is that it allows us to assess the robot system performance during teleoperation with different visualization systems, such as anaglyph and HMD, by determining quantitative and qualitative parameters.
It is necessary to log odometric data into log files during teleoperation: the incremental encoders on the two wheels of the robot provide relative odometry (position) data in a coordinate system, which is useful for estimating the position of the robot. This data represents the position and heading direction of the robot in the coordinate system.
The time stamp is a client time rather than a server time; it starts when the operator begins driving the telerobot and saving sensors data. It is a useful parameter for synchronizing the odometric data with the other sensors' data in offline mode, and it also tells us when each sensor's data arrived.
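Since all sensor streams share the client time stamp, an odometry sample can be paired with, for example, a laser scan by looking up the nearest time stamp. A minimal sketch, assuming the time stamps are stored as a sorted list of seconds (the actual log layout is not specified here):

```python
import bisect

def nearest_sample(timestamps, t):
    """Index of the logged sample whose time stamp is closest to t.

    `timestamps` must be sorted ascending; the list-of-seconds layout is an
    illustrative assumption, not the actual 3MORDUC log format.
    """
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # pick the closer of the two neighbouring samples
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1
```

In offline mode this lets the replay code fetch, for each stereo image, the odometry sample logged closest in time to it.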
As the bumpers of the robot do not work for detecting collisions during teleoperation, it is better to use Minlas (the minimum obstacle distance) to record the collision number into the log file. The purpose of logging the collision number is to assess the user's performance and confidence during teleoperation.
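One simple way to derive a collision number from the logged Minlas values is to count the times the minimum obstacle distance drops below a threshold, counting a sustained dip only once. The threshold value and the unit of the readings are assumptions for the sketch, not 3MORDUC specifics:

```python
def collision_count(minlas, threshold):
    """Count collisions from a series of minimum obstacle distances.

    A collision is counted each time the distance drops below `threshold`;
    staying below the threshold for several samples counts only once. Both
    the threshold and the unit of `minlas` are illustrative assumptions.
    """
    count = 0
    below = False
    for d in minlas:
        if d < threshold and not below:
            count += 1
        below = d < threshold
    return count
```

For example, the series 900, 700, 120, 110, 800, 90 with a threshold of 150 yields 2 collisions.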
The offline mode of 3MORDUC provides the logged sensors data to the user whenever required, or when the user is unable to connect to the server to drive the robot. The primary purposes of the offline mode are to test newly developed algorithms and to train users of the 3MORDUC mobile robot.
The Idea on Format of Odometric Data
Odometric data from the robot's wheel encoders is obtained over an internet connection using the HTTP protocol. The 3MORDUC robot hosts an HTTP server to handle clients' requests. Each request specifies the intended service and, if necessary, the movement that the robot has to perform.
The possible robot movements are:
During teleoperation, a client can request different types of services from the server. Each service includes odometric data. The possible services for client requests are:
Stereo image and odometric data for Stereo.jpg
Laser map and odometric data for Laser.jpg
Stereo image in grey scale, laser map and odometric data for Laserimg.jpg
Laser data and odometric data for Laser.txt
Stereo image, laser map, odometric data for DatilaserandImg.jpg
The operator sends a request to the server with the GET method, specifying a valid URL resource; the server then processes the request and, if the service is available, sends it back to the operator through the communication channel.
The following table describes the different services available on the 3MORDUC server for obtaining odometric data. Each client request is specified by the method name in the server, the name of the URL resource, and a set of flags stating whether the reply contains the laser map, the stereo image, the laser data and the odometric data.
The service used in our application is the one that provides all sensors' data, which includes the stereo image, the laser map and the odometric data, in order to develop an offline teleguide (mode) for the 3MORDUC robot.
HTTP client request for DatilaserandImg.MOV.jpg:

GET <URL resource> HTTP/1.1 [CRLF]
Host: <ip address> [CRLF]
User-agent: MorducTeleGuide/0.1 [CRLF]
[blank line here] [CRLF]

HTTP server response for DatilaserandImg.MOV.jpg:

HTTP/1.1 200 OK [CRLF]
Data: Laser/<laserdata_1>/<laserdata_2>/...<laserdata_181>/ [CRLF]
Content-Type: image/jpeg [CRLF]
Content-Length: <imglength> [CRLF]
[blank line here] [CRLF]
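The exchange above can be sketched as a small client: one function issues the GET request and another parses the custom "Data" header into the 181 laser readings, with the JPEG stereo image taken from the response body. The host name, time-out and the float format of the laser values are assumptions; the resource name, user agent and header layout follow the request and response shown above:

```python
import http.client

def parse_laser_header(value):
    """Parse the 'Data' header, 'Laser/<d1>/<d2>/.../<d181>/', into floats."""
    return [float(v) for v in value.split("/")[1:] if v]

def fetch_sensors(host, resource="/datilaserandImg.MOV.jpg"):
    """Request one service from the robot's HTTP server.

    Returns (laser_readings, jpeg_bytes), where jpeg_bytes is the
    side-by-side stereo image sent in the response body. This is a sketch:
    error handling and the exact server behaviour are not modelled.
    """
    conn = http.client.HTTPConnection(host, 80, timeout=5)
    conn.request("GET", resource,
                 headers={"User-agent": "MorducTeleGuide/0.1"})
    resp = conn.getresponse()
    laser = parse_laser_header(resp.getheader("Data"))
    jpeg = resp.read()
    conn.close()
    return laser, jpeg
```

The parsing step can be exercised offline, e.g. parse_laser_header("Laser/10.0/20.5/") returns [10.0, 20.5].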
Framework for Sensors Data Management
Figure 3 AR4MORDUC framework for sensors data acquisition
AR4MORDUC is the framework developed for the acquisition and management of the 3MORDUC sensors data; its main components are illustrated in Figure 3.
Online Mode: Sensors Data Acquisition
In online mode, the operator requests the server with the commands described above; on each request the robot moves and the framework saves the returned odometry, laser and image data into the log files.
Offline Mode: Manipulating and Replaying Sensors Data
Figure 4 Flow chart of the offline mode
EXPERIMENTS AND RESULTS
Experiment Setup and Procedure
The experimentation requires facilities at two different sites, namely the local (operator) site and the remote site.
1. Mobile robot system:
In pilot testing, the 3MORDUC robot system is used. The mobile robotic platform has a cylindrical shape, 85 cm in height and 75 cm in diameter. The system is equipped with a number of onboard sensors, such as laser, stereo cameras, encoders, bumpers and sonar sensors. The platform carries two car batteries, giving an autonomy of about 40 min, and an onboard laptop system (the server).
2. Visualization system:
A Dell laptop with 4 GB of RAM and a 15.6-inch wide screen is used to present the remote environment to the user during teleoperation.
3. Network connection:
The teleoperation system is implemented as a client-server architecture based on the standard ISO/OSI network protocol stack. For teleoperation, the client has to connect to the onboard server of the robot via an internet connection. HTTP (hypertext transfer protocol) is used to send the packages related to our teleoperation from the client system; HTTP is preferable because of the presence of firewalls and proxy systems at the local site. The system used in our experimentation has a nearly constant delay of 1 s, which allows interactive teleoperation; we consider this delay a realistic setting for many applications requiring visual feedback in teleoperation. There is no difference in transmission delay between mono and stereo viewing conditions, because the streamed video is always sent in stereo (it is up to the local operator to choose mono or stereoscopic viewing). This assures that the performed tests are not biased in principle by a delay difference. The images of the two cameras, each with a resolution of 640x480 pixels, are joined side by side; they are then JPEG-compressed and sent by the server in the HTTP package. The client decompresses the image, which is then resized according to the visualization resolution and the stereo approach.
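After decompression, the client has to separate the side-by-side frame into the left and right camera views before applying the chosen stereo approach. A minimal sketch, assuming the decompressed frame is held as a row-major list of pixel rows (e.g. 480 rows of 1280 pixels for two 640x480 views; the in-memory representation is an assumption, the side-by-side layout is the one described above):

```python
def split_stereo(frame):
    """Split a decompressed side-by-side stereo frame into its two views.

    `frame` is a row-major list of pixel rows; the left half of each row
    belongs to the left camera and the right half to the right camera.
    """
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right
```

Each returned view can then be resized independently for mono or stereoscopic display.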
The primary task of the operator is to navigate the robot in an unknown environment from the operator site, avoiding collisions with obstacles, using the visual feedback about the remote site provided by the stereo camera on the robot. The operator sends requests to the remote site to move the robot in different directions; on each request the robot moves and sends its sensors data back to the operator. The proposed framework saves all of this data into log files.
The operator teledrives the robot towards a final position 3.2 meters away from the starting point, avoiding collisions. Figure 5 (a) shows the robot workspace graphically, with the measurements of the obstacles and their distances from the robot; Figure 5 (b) shows the 3MORDUC mobile robot during an experimental trial.
Figure 5 (a) Robot workspace shown graphically; (b) robot in the workspace during pilot testing; (c) a possible path for the robot to reach the destination
Estimation of Appropriate Data about 3MORDUC Movements using Telnet
The following data has been calculated from the odometry data sent by the server in response to requests issued with Microsoft Telnet. The data can vary with different environments and operative variables.
Environment used: Indoor
User agent: Microsoft Telnet
Method used: Averaging method (10 values for each direction)
Note: 1 radian = 180/π = 57.2958 degrees.
You can connect to the server by specifying either the server name or the IP address, along with the port number, using Microsoft Telnet.
Telnet 126.96.36.199 80
Once connected, you can issue requests to the server using the GET method, and the server will respond to the client's request.
Example: when the client requests the server with the command 'Get /datilaserandImg.fow1.jpg', the robot scans its workspace and moves one step forward while servicing the request. In detail, the server sends odometry, laser and image data to the client.
The robot has been run through an environment for 10 successful moves for each command, in order to average the distance travelled or the angle rotated. Averaging is necessary because the distance travelled for a given command varies slightly between moves; for example, the first step may cover a slightly different distance than the following one.
The distance or rotation is given by the average d_avg = (1/N) * (d_1 + d_2 + ... + d_N), where N = 10 and d_i is the value measured at the i-th move.
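The averaging step, together with the radian-to-degree conversion noted above, can be sketched as follows (the function names are illustrative; only the N = 10 averaging and the 180/π factor come from the text):

```python
import math

def mean_move(samples):
    """Average of the N odometry measurements logged for one movement
    command; in the experiment N = 10 successful moves per command."""
    return sum(samples) / len(samples)

def rad_to_deg(angle):
    """Convert a mean rotation from radians to degrees (factor 180/pi)."""
    return angle * 180.0 / math.pi
```

For instance, rad_to_deg(1.0) gives approximately 57.2958 degrees, matching the note above.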
The main intention behind the pilot testing is to acquire the onboard sensors' data, which includes odometric data, laser data and stereo images, into log files, for two reasons:
To use the logged data to implement an offline teleguide (mode), for training purposes and for testing newly developed algorithms.
To evaluate quantitative parameters in order to assess the system performance.
Figure 5 Pilot testing setup and user during teledriving of 3MORDUC robot
Significant information about the guide efficiency can be extrapolated by determining quantitative parameters. This section explains how the quantitative parameters are analysed using the odometry data and the parameters saved along with it in the log file.
Results of pilot testing:
Distance travelled or path length: 3.2 meters;
Completion time: 180.2 seconds;
Collision number: 0;
Collision rate: 0/s;
Mean minimum obstacle distance: 650.4;
Mean speed: 0.01776 m/s;
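The derived parameters above follow directly from the logged values: the mean speed is the path length divided by the completion time, and the collision rate is the collision number divided by the completion time. A small sketch (function name and return layout are illustrative):

```python
def quantitative_params(path_length_m, completion_time_s, collisions):
    """Derive mean speed (m/s) and collision rate (collisions/s) from the
    values logged during a trial."""
    return {
        "mean_speed": path_length_m / completion_time_s,
        "collision_rate": collisions / completion_time_s,
    }
```

With the values of this trial, quantitative_params(3.2, 180.2, 0) gives a mean speed of about 0.01776 m/s and a collision rate of 0/s, matching the results listed above.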
CONCLUSIONS AND FUTURE WORK
This work, developed at the University of Catania, is a continuation of the previous work done by the students Davide De Tommaso and Marco Macaluso of the University of Catania, Italy. With the previous work, an operator was able to drive a robot in an unknown environment relying on visual feedback and laser data. In our project, a framework for sensors data acquisition in online mode has been developed. The 3MORDUC robot has several onboard sensors, including a stereo camera, a laser sensor, encoders, sonar sensors and bumpers; all of these sensors give the teleoperator specific information about the remote environment during teleoperation. In online mode the operator can drive the robot and log all the sensors' data into log files; these log files can then be used to determine quantitative parameters and appropriate data about 3MORDUC movements. An offline mode has been proposed to let the user access all the logged data when unable to connect to the server to drive the robot. Furthermore, newly implemented algorithms can be tested with the logged data in offline mode; for the offline mode an edge detection algorithm has been implemented, similar to the online mode.
For future work, the bumpers could be repaired and used to determine the collision number during a trial, which would be more reliable than using the minimum obstacle distance coming from the laser sensor. Stereo vision can also provide a better view to the operator and can be tested with different visualization systems, such as anaglyph and head mounted displays (HMD).