
Self-Driving Toy Robot Car: A Proposal


Contents

1. Introduction

1.1 Problem Statement

1.2 Measurable Organisational Value (MOV)

1.3 Aims, Objectives & Scope

1.4 Proposal Structure

2. Literature Review

2.1 Current Application

2.2 Existing Applications

2.2.1 BigTrak Rover

2.2.2 WL-Robot

2.2.3 ARCHOS Drone Remote

2.2.4 NXT Remote Control

2.2.5 BlueBot – Robot Controlled By Android Application

2.3 Pros & Cons Comparison

2.4 Comparison Between Applications

3. Software Methodology

4. Resources

4.1 Android Studio

4.2 Visual Studio IDE

4.3 Microsoft Office 365

4.4 Microsoft Azure

4.5  Anaconda Python

4.6 Google TensorFlow

5. Project Risks

5.1 Communication With Servers & Services

5.2 Time Constraints

5.3 Training The Neural Network

5.4 Lack of Knowledge

5.5 Hardware Related Risks

6. Severity Table

7. Conclusion

8. Gantt Chart

9. Work Breakdown Structure

10. References


1.   Introduction
As we advance as a society, we endeavour to ease the burden of work in whatever way we can, primarily through the use of technology and new ideas. A great deal of current research and development is aimed at reducing human workloads by using Artificial Intelligence and machine learning to teach machines of all sorts to perform tasks that have traditionally been done by humans.

Using robots to replace humans is not a new development; it has been done to some extent since the 1950s (Wysk & Chang, 1997). The new addition to the problem is the use of machine learning techniques such as neural networks to train software to perform tasks, leading to the development of more general-purpose machines that only need appropriate software to perform a given task. This addition has been made possible by the explosion in processing power from more advanced CPUs and GPUs, as well as more robust distributed computing systems built on internet infrastructure, leading to a state where the computing power to train networks is readily available.

The future is looking quite bright for software and data professionals as we endeavour to create new and better software that makes use of the advances being made in machine learning and AI. Hopefully this project in particular will stoke interest in these fields and highlight just how powerful the tools now available to us are.

1.1   Problem Statement

The problem that we are seeking to address is, at its core, one of hardware. As time goes on we see more and more computing hardware put out into the market, from the plethora of ‘smart’ devices such as televisions and phones, to basically any of the ‘things’ on the Internet of Things.

The problem as we see it is what we can achieve with this readily available hardware, and what fantastic tasks we can repurpose it for. In this particular case we are looking at using a smartphone as the brain of an autonomous vehicle (a small battery-powered car), showing that even cheap but widely available technology can do great things when the appropriate software is used.

We are also seeking to show that specialized hardware is not a requirement for some of these advanced machine learning tasks, at least not for applying a trained neural net. This can help people in the future as they seek to make technology more widely available.

The sorts of people interested in this problem are, apart from interested academics, those who may benefit from some smart robot use but have a limited budget and cannot afford the more capable custom setups. We can foresee groups such as farmers making use of this to automate crop tasks (watering, fertilizing): they already have the machine to do the task, and all that would be needed is a self-driving system, which could be handled by their phone or equivalent.

1.2   Measurable Organisational Value (MOV)

The measurable organizational value is used as an alternative to the more common return on investment (which is concerned only with profit and loss in financial terms). It is used in situations where the desired outcome of the project is non-financial in nature, such as knowledge or other less tangible benefits.

The main MOV (measurable organizational value) that this project will return is a cheap self-driving robot car. This car will be able to navigate simple obstacles on a flat plane to get to a desired location.


The main advantage that our prototype will have over current solutions in this space is cost. The budgets for most self-driving technology projects run to billions of dollars, with global investment in this area of at least 80 billion US dollars (Kerry & Karsten, 2017). Our project, on the other hand, will likely have a complete budget of under 200 NZ dollars (not including labour costs), though it will be quite a bit less advanced than these larger efforts (no LIDARs on our robot, unfortunately).

Figure 1.2 The weekly budgets of two well-known self-driving technology projects (Google and Toyota) alongside our own projected budget


As can be seen in figure 1.2, our own budget is in the realm of one million times smaller than the projects led by Toyota and Google, but the result will likely be quite a bit better than one millionth of theirs.

Figure 1.2.2 The projected cost of the system to end users

From figure 1.2.2 we can see a ballpark estimate of what these systems are likely to cost to install, with low, high, and mean prices. From this we can see that our own system is projected to be so cheap that it barely registers on the scale used.

Having evaluated the competition, we conclude that we are attempting a system that is absurdly cheap for the field of self-driving, as we aim to have it workable for less than $200.

1.3 Aims, Objectives & Scope

As we have previously mentioned, the main overarching goal of this project is the creation of a cheap self-driving robot, for whatever purpose one might want such a thing, from academic curiosity to building a more sophisticated system for a specific purpose.

The objective of this project is to create a self-driving toy robot car run on cheap off-the-shelf components, with a $50 smartphone (Vodafone, 2018) as the brains of the robot.

The aims to achieve this objective are:

  • Make an Android app to control the robot
  • Make a simple point-to-point pathing program
  • Train a neural network to recognize obstacles and implement it on the phone

For the scope of the project we have limited it quite sharply in comparison to other self-driving projects from Waymo and the like. After all, we hardly have their budget, and we are also operating on a fairly small timeframe (8 weeks), so the scope has to be reduced to compensate. As such, we aren’t looking to do more than a binary classification for the neural net (obstacle or not), and we will operate only on a flat plane.
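
To make the binary classification concrete, here is a minimal sketch in Python of the kind of obstacle/not-obstacle network we have in mind, using TensorFlow’s Keras API. The input size, layer sizes and training details are illustrative assumptions on our part, not final design decisions.

    import tensorflow as tf

    def build_obstacle_classifier(input_shape=(96, 96, 3)):
        # A small convolutional network that maps one camera frame to a
        # single probability: P(frame contains an obstacle).
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu",
                                   input_shape=input_shape),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # obstacle or not
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model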

1.4 Proposal Structure

The structure for the remainder of this document starts with a literature and current-app review. Because there are two main deliverables (the app and the neural net), we have done both a review of current apps (focused on remote-control apps that stream video to the control device, like those for drones) and a more academically focused literature review of the current state of neural nets in self-driving cars.

Once the review is done, we articulate the methodology that we are using for the project, as well as the reasoning behind that choice. This is done to make clear what development approach we are using, so that expectations can be set appropriately.

After that we move on to the resources being used for this project, from personnel to technology. This is where we outline the development tools for the software we are coding.

Penultimately, we have a section dedicated to the risks associated with our project, explaining what may cause us to fail to meet our objectives within the projected time span.

Finally, we have the work breakdown structure, which details who performed each task. We also present our Gantt chart for perusal.

2.   Literature Review

The idea of recognizing objects within images using a computer program is not a new one; people have been interested in this field since computers first started storing images. Recent years have seen a large shift in the technology used, beginning with AlexNet in 2012 (Krizhevsky, Sutskever, & Hinton, 2012). With this change we have seen more and more development in the use of convolutional neural networks for image recognition tasks.

Before delving into the world of convolutional neural networks and all that entails, we should first look at some of the work done before then, as a way to better understand the present by knowing the past. For the pre-neural-net days, it is likely best to look at the entrants to the predecessor of the ImageNet challenge that saw the rise of convolutional neural networks. This older challenge was the PASCAL VOC Challenge, which ran from 2005 to 2012. It was a way for various object recognition algorithms to be compared against each other, as it provided a dataset to use and a compilation of entrants at the end of the event, so that more people could benefit from the work being done in the field.

Examination of the results posted from the PASCAL VOC Challenge (University of Oxford, 2018) shows that the recognition was mostly done with classification techniques, with the best results coming from various uses of support vector machines at the core of the solution. It is no surprise that further advances in object recognition turned to more powerful forms of classification, as even with their black-box nature neural networks are one of the best tools for classification. Overall, the PASCAL VOC Challenge paved the way for continued interest in this field and led to the ongoing ImageNet challenge as its successor.

Other approaches to object recognition have been tried, such as using clustering techniques on the normal vectors (vectors at right angles to a plane, i.e. the normal vector for flat ground points straight up) formed from planes in an image (Holz, Holzer, Rusu, & Behnke, 2011). As the normal vectors on a plane should all point in roughly the same direction, you should be able to run a clustering algorithm, detect areas that have similar normal vectors, and identify planes in this manner. This is an interesting approach, as it does not attempt to classify objects; instead it simply tries to distinguish them from one another.
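
To illustrate the clustering idea (a simplified sketch of ours, not the exact method of Holz et al.), the following Python snippet groups per-pixel surface normals with k-means; the normals array is assumed to come from some depth-sensing pipeline, which is outside the scope of the sketch.

    import numpy as np
    from sklearn.cluster import KMeans

    def dominant_plane_orientations(normals, n_planes=3):
        # normals: (H, W, 3) array of unit surface normals per pixel.
        flat = normals.reshape(-1, 3)
        km = KMeans(n_clusters=n_planes, n_init=10).fit(flat)
        # Pixels sharing a cluster have similar normals, i.e. likely
        # belong to planes with the same orientation (e.g. the ground).
        labels = km.labels_.reshape(normals.shape[:2])
        return km.cluster_centers_, labels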

A 2009 paper on object tracking in stereo (Baguley, 2009) has also been of interest, as it examines a way to distinguish an object (in this case a cricket ball) from the camera feed and track it using a pair of cameras a fixed distance apart, similar to how binocular vision works in humans. While not quite our exact situation, as it involves tracking a specific moving object, their work on distance calculation is of interest for our own purposes, as this is something we are likely to do as well. The main idea behind the paper is to use random sample consensus (RANSAC) to identify the ground, and then use stereo depth calculations and contour mapping to track the cricket ball.
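
The distance calculation in such stereo setups rests on a standard relation: depth is the focal length times the camera baseline divided by the disparity between the two views. A tiny Python illustration, with made-up numbers:

    def stereo_depth(focal_px, baseline_m, disparity_px):
        # Depth (metres) of a point matched in both camera images.
        return focal_px * baseline_m / disparity_px

    # e.g. 700 px focal length, 0.10 m baseline, 35 px disparity -> 2.0 m
    print(stereo_depth(700, 0.10, 35))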

The usage of convolutional neural networks in object recognition came about in 2012 with Krizhevsky et al. (Krizhevsky, Sutskever, & Hinton, 2012). Their results came from having access to a large dataset to train with, something that was not publicly available until then. This was the ImageNet dataset, with over 14 million images in 22,000 categories, and it allowed them to make use of deep learning in a convolutional neural network and substantially improve upon the best performance in object recognition thus far, with a top-5 error rate of 15.3% compared to the next best of 26.2%. The top-5 error rate is the error rate over the 5 most likely categories predicted for each image, so a top-5 error rate of 25% means that 75% of the time the correct class appears among the model’s 5 best guesses.
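
As a concrete illustration of that metric, the following Python snippet computes a top-5 error rate from raw class scores (the names and shapes are ours, not from the paper):

    import numpy as np

    def top5_error(scores, true_labels):
        # scores: (N, C) class scores; true_labels: (N,) correct class ids.
        top5 = np.argsort(scores, axis=1)[:, -5:]  # 5 best guesses per image
        hits = [label in row for row, label in zip(top5, true_labels)]
        return 1.0 - np.mean(hits)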


The improvement can be seen in figure 2, with the standout result being the outlier in 2012, and with all notable results in 2013 showing a similar level of error from making use of deep learning techniques.

Figure 2 ImageNet object recognition results (Gkrusze, 2018)

An important aspect of convolutional neural networks is how well they run on modern graphics processing units (GPUs), like those used in many computers to run games or, as a recent non-core use, for digital currency mining. The ability of GPUs to perform massive numbers of simple calculations in parallel far outstrips that of modern central processing units (CPUs), as is expected of an application-specific integrated circuit. Making use of this expansive processing power is a key step in training these large networks, as they can have hundreds of thousands of neurons, possibly even millions (Krizhevsky’s network had 650,000 neurons), and so reducing training time is a consideration, as networks will likely have to be trained dozens of times during development to tune them properly.

Due to the nature of convolutional neural networks, they are sometimes trained in parts and then joined at the end, allowing you to pick out parts of them for use in other applications. On the other hand, making use of this tends to harm performance somewhat, as an end-to-end trained network performs better (Bojarski, et al., 2016), if not by a significant margin.

Lately, interest has been taken in compressing or miniaturizing neural networks for use in smaller devices, and a number of ways to do this have been proposed, from Huffman coding to vector quantization (Howard, et al., 2017). This has some fairly obvious applications for our own purposes that should be explored.
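
As one plausible route for our own phone deployment (a different technique from the Huffman coding and vector quantization cited above), TensorFlow provides a converter that shrinks a trained Keras model into a quantized TensorFlow Lite file. A brief sketch, assuming a TensorFlow 2 style API:

    import tensorflow as tf

    def to_tflite(model, path="obstacle_net.tflite"):
        # Convert a trained Keras model to a TensorFlow Lite flatbuffer,
        # letting the converter apply its default weight quantization.
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        with open(path, "wb") as f:
            f.write(converter.convert())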

The overall state of object recognition is complex, with new developments happening all the time as a lot of money and resources are being poured into the field by a number of organizations. As we move forward in our project an eye should be kept on possible developments to improve our own project, as such a thing is not out of the bounds of possibility.

2.1   Current Application

Figure 2.1 XMRemoteRobot online webpage controller (XMRemoteRobot, 2018)

The current system that has been implemented is only a website; the features it currently has are:

  • Page to control the movement of the robot.
  • Page to display the data and the light patch that tells the robot which direction to move.
  • Provides the location of the robot using Google Maps.
  • Gamepad/controller support.

Our proposed added features include:

  • One or more alternate methods to control the robot, such as arrow keys/WASD
  • More information from the phone displayed for the user, including:
    • Battery information
    • Connectivity details
  • Video streaming from the app to the webpage
  • Basic and autonomous pathing (see the sketch after this list)
  • Object detection
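
For the basic pathing item, here is a minimal Python sketch of the point-to-point logic we have in mind: turn until roughly facing the target, then drive forward. The command names and the coordinate source are placeholders of ours, not part of the existing system.

    import math

    def step_towards(x, y, heading, tx, ty, tolerance=0.1):
        # Next movement command for a robot at (x, y), facing `heading`
        # radians, trying to reach the target point (tx, ty).
        if math.hypot(tx - x, ty - y) < tolerance:
            return "STOP"
        bearing = math.atan2(ty - y, tx - x)
        # Signed turn angle, normalised into [-pi, pi).
        turn = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(turn) > 0.2:  # more than ~11 degrees off course
            return "LEFT" if turn > 0 else "RIGHT"
        return "FORWARD"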

2.2   Existing Applications

There are numerous other applications and websites with similar features. Some of these systems have extra features compared to the existing system, while some do not. We have selected five different apps/websites that relate best to our proposed software.

2.2.1     BigTrak Rover

Figure 2.2.1 Webpage controller (Webdriver, 2018)

Our first application is the BigTrak Rover web app and mobile application. The mobile application works both as the brain of a robot and as a controller for other robots. The app can be driven from the web or controlled manually on the device itself.

Both the website and the application are built in Adobe Flash. Some of the features the application provides:

  • Streaming from the phone app to the web app.
  • Cameras can be switched from the web app.
  • Streaming Quality adjustments.

2.2.2     WL-Robot

WL-Robot is an Android application designed to remotely control a robot. The application provides real-time video streaming, and the user can also take a photo or video on the fly.

This app differs from our proposal in that it does not use the mobile phone as the brain of the rover/robot but acts only as the controller; it is similar to our proposed project in its streaming and in taking pictures and video remotely.

Figure 2.2.2 WL-Robot control screen + streamed footage from robot (WL-Robot, 2017)

2.2.3     ARCHOS Drone Remote

ARCHOS Drone Remote allows the user to live stream, remotely take pictures, and control the drone from the Android app. It works much like WL-Robot, with less functionality, but still overlaps with what we have proposed.

This app also provides VR support, an emergency landing feature for the drone, and connectivity details, which are very useful for the user.

Figure 2.2.3 ARCHOS Drone Remote control features + live stream. (ARCHOS Drone Remote, 2017)

2.2.4     NXT Remote Control

NXT Remote Control is an Android app that controls a Lego Mindstorms NXT robot using Bluetooth. This app only provides control of the robot, with no video streaming or other extra details.

This app is similar to our plans in the control methods it offers and in the way it manages how much power is used when manoeuvring.

The app has multiple ways to control the robot: normal UP, DOWN, LEFT, RIGHT controls and several other control schemes.

Figure 2.2.4 Multiple controlling schemes on the NXT Remote Control (NXT Remote Control, 2013)

2.2.5     BlueBot – Robot Controlled By Android Application

BlueBot is an Android app used to control a robot over Bluetooth. The app provides an easy-to-use GUI where the user can control the movement of the robot rover, similar to the previous applications presented and to the existing system.

The app is very basic: apart from the movement controls, there is no added functionality.

Figure 2.2.5 BlueBot app interface and Robot (Robot Controlled By Android Application, 2015)

2.3   Pros & Cons Comparison

Application: BigTrak
Pros:
  • Live streaming with quality control
  • Both website and app can control the rover
  • Ability to switch streaming camera (back/front)
Cons:
  • Developed in Flash; doesn’t function correctly in some browsers
  • UI can be confusing to set up

Application: WL-Robot
Pros:
  • Ability to switch streaming camera (back/front)
  • Live stream footage to the app
  • Record footage and take pictures
Cons:
  • UI can be confusing to use due to the old design

Application: ARCHOS Drone Remote
Pros:
  • Record footage and take pictures
  • VR support
  • Emergency landing functionality
  • Stream drone footage to the app
  • Connectivity details
  • Drone details (speed etc.)
Cons:
  • UI can be confusing to use due to the old design
  • There have been cases where the app doesn’t function as intended, or at all

Application: NXT Remote Control
Pros:
  • Multiple methods to control the movement of the robot
  • Easy to use and easy to learn
  • Functions as intended
  • Manages the power used in movement
Cons: none noted

Application: BlueBot
Pros:
  • Simple UI
  • Easy to control the movement
  • Setup is easy
Cons: none noted

2.4   Comparison Between Applications

All the applications we have discussed are very similar, sharing much of the same functionality. Some provide more features while some are very basic. If we compare these applications to the existing system, most of these apps have more functionality, whether it is streaming options or more methods of controlling the rover.

BigTrak Rover provides a lot of functionality: you can use your mobile phone either to control a rover or as the brain of the rover; streaming is provided, with options to change the quality of the live broadcast on the controller website and the ability to switch between the back and front facing cameras; and of course you can control the movement using either the app or the website.

On the other hand, we have applications such as WL-Robot, ARCHOS Drone Remote, NXT Remote Control and BlueBot, which focus more on the controlling aspect and do not use the mobile phone as the brain of the robot, but still share aspects with the existing system and with what we plan to implement. These apps also provide good functionality: they all support controlling the rover/robot/drone. NXT Remote Control provides more than one way of controlling the robot, which we will also be doing; numerous control methods are always a must. Some of the apps provide live streaming, though without an option to change the quality of the stream, and some also provide video recording and remote picture taking.

All the apps discussed have an easy-to-use GUI, a little outdated design-wise, but they do what they are supposed to. We believe UI and UX are very important parts of our project, as without an easy-to-use app the app will be useless. It is therefore essential that our app and site are easy to use, simple to learn in no time, have a clean user interface, and provide the best user experience.

These apps have very useful features that will be good to include in our project, with our own adjustments and even more features.

3.   Software Methodology

Agile is the SDLC methodology selected for this project. Agile is a software development methodology based on iterative development, where requirements and solutions evolve throughout the development process and work is completed in sprints, which are set periods of time for each phase of the project. Each sprint ends when its time expires, meaning the project continues to develop as each phase progresses, with continuous delivery of software and continuous improvement throughout.

Agile promotes project management processes, frequent inspection and adaptation to allow for rapid delivery of high-quality software.

The Agile principles, based on the Agile Manifesto, that we will mostly be following are:

  • The highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Welcome changing requirements, even late in development, as Agile processes harness change.
  • Deliver working software frequently.
  • Developers must work together daily throughout the project.
  • Working software is the primary measure of progress.
  • Reflecting on how to become more effective and adjust the behaviour accordingly.

The reason we have decided to go with Agile for our development methodology is that we believe our requirements may change and we face tight time constraints. By implementing Agile and its principles and values, and working in phases, we can rapidly develop and review our code as we work through tasks and deliver the best final product.

4.   Resources

Below is a list of the resources and services we will be using while developing our Android app and webpage.

4.1 Android Studio


Figure 4.1 Android Studio Logo (Android Studio, n.d.)

Android Studio is the official integrated development environment for Google’s Android operating system, built on JetBrains’ IntelliJ IDEA software and designed specifically for Android development. We will be using Android Studio to develop the application that serves as the brain of our robot.

Android Studio is personally one of the best IDEs I have used for application development, owing to the wide range of features it provides. A few of these features include:

  • A rich layout editor – Simply drag and drop UI elements, preview your application on multiple screen configurations.
  • Debugging & VM – Easily debug and develop apps on different emulated Android devices and different versions of Android
  • Deep Code Analysis
  • A flexible Gradle-based build system
  • Instant Run to push changes to your running app without building a new APK

(Meet Android Studio, n.d.)

And many more features; these reasons are why we decided to use Android Studio to develop our app. The programming language used to develop Android applications with this IDE is Java.

4.2 Visual Studio IDE


Figure 4.2 Visual Studio Logo (Visual Studio IDE, n.d.)

Visual Studio is the IDE we will be using to develop our webpage, including both the front end and the back end. The back end will be written in C# with ASP.NET for most of the business logic, while HTML, CSS (Bootstrap), JavaScript, and JS libraries such as jQuery (including AJAX calls) will be used for the front end of the website. (Visual Studio IDE, n.d.)

Some features Visual Studio provides which will be very useful for us:

  • Azure Apps – Build, manage and deploy cloud-based applications to Azure with ease. One of the reasons we will be using Visual Studio is that publishing our website to our Azure App Service will be very easy.
  • Collaboration – Easily collaborate in a team efficiently.
  • Technologies – Many different technologies that we can use with Azure, which may be very useful for the machine learning and AI we develop using Visual Studio. (Visual Studio Development Features, n.d.)

 

4.3 Microsoft Office 365


Figure 4.3 Microsoft Office 365 Logo  (Office 365, n.d.)

Office 365 will be used throughout our project to complete numerous tasks: Word for documentation, PowerPoint for any presentations, and Microsoft Project to create our Gantt chart.

Office 365 also provides a very useful collaboration system where multiple people can work on documents simultaneously, which is very handy when working on presentations and other tasks.

4.4 Microsoft Azure


Figure 4.4 Microsoft Azure Logo (Microsoft Azure, n.d.)

Microsoft Azure is a cloud computing service created by Microsoft for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centres. (Microsoft Azure, n.d.)

The Azure cloud services that we believe we will need for the development of the app and webpage are:

  • Windows Virtual Machine – Streaming from our app requires an RTMP/RTSP server, so we will create a VM and set up the resources needed to make streaming possible.
  • App Service – Allows us to publish our website so it can be accessed anywhere and used to control our robot.
  • Azure SQL Database – We may require a SQL database for certain operations that need data to be stored. (Azure Cloud Services, n.d.)

 

4.5  Anaconda Python


Figure 4.5 Anaconda Logo (Anaconda, n.d.)

Anaconda is a free and open source distribution of the Python and R programming languages for data science and machine learning related applications.

4.6 Google TensorFlow


Figure 4.6 TensorFlow Logo (TensorFlow, n.d.)

TensorFlow is a software library for high-performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), from desktops to clusters of servers to mobile and edge devices. It comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains. (About TensorFlow, n.d.)
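
As a flavour of that numerical core, independent of any machine learning use, here is a tiny Python example (assuming TensorFlow 2’s eager execution):

    import tensorflow as tf

    # Two small constant tensors and a matrix product, evaluated eagerly.
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0], [0.5]])
    print(tf.matmul(a, b))  # a 2x1 tensor: [[2.0], [5.0]]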

5.   Project Risks

This project has numerous risks that may arise, which we need to manage appropriately to produce quality software.

5.1 Communication With Servers & Services

Communication between our app, website and server/services is vital to the project, as there must be a working connection between these various platforms; without one, our project will not function correctly. There is always a risk of our server going down and experiencing unexpected downtime, which would cause our applications to malfunction. This is a severe risk: most of the project requires a connection to a server, whether a streaming service or our web server, so most tasks would not work at all. Our systems are very dependent on certain services and servers. We will run tests and stress tests on the servers and services we are using, to make sure that even under heavy load they can handle all requests and function as intended.
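
As a starting point for those tests, here is a rough Python sketch of an automated availability check we could run against our own endpoints; the URL is a placeholder, and serious stress testing would use a dedicated load-testing tool rather than a loop like this.

    import time
    import requests

    def health_check(url, attempts=10):
        # Count failed requests out of `attempts`, spaced one second apart.
        failures = 0
        for _ in range(attempts):
            try:
                if requests.get(url, timeout=5).status_code != 200:
                    failures += 1
            except requests.RequestException:
                failures += 1
            time.sleep(1)
        return failures

    # Placeholder endpoint; we would point this at our real services.
    print(health_check("https://example.com/robot/status"))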

5.2 Time Constraints

Time is a big risk during the development of our project. We have eight weeks to complete a fully functioning autonomous robot controlled by a smartphone acting as its brain, and it will be difficult to get everything packed in and working as planned within that period. We do, however, believe our team and time management for this project will be handled correctly and effectively. We conduct meetings twice a week in person to talk about our progress, and discuss what is happening with the project online very regularly. We have created a Gantt chart listing the planned tasks with their start and end times, so we know what has to be done within what time frame. Even so, the time constraints mean we may face unexpected issues that affect the entire project, and we will do our very best to stay on track.

5.3 Training The Neural Network

Training our neural network is a major part of our project. Autonomous driving depends on working object detection, and without a properly trained neural network the object detection will not be any good; a trained network is required that can detect whether there is an obstacle and move accordingly.

5.4 Lack of Knowledge

What we have planned for our project is an area neither of us has really worked in before: implementing a neural network with object detection to control a moving rover. Implementing this may be a struggle, and there are risks we will face because this is a relatively new area for us. We plan on doing a lot of research on these topics and their implementation, figuring out each next step and how the object detection will interact with the way the rover is controlled and manoeuvres.

5.5 Hardware Related Risks

Our project is very dependent on having a mobile phone to act as the brain of the rover and a computer/laptop to act as the machine controlling it. Both are equally important; if any issues occur with our hardware, it would cause major problems for our project.

If our mobile devices aren’t functioning correctly, or if our rover isn’t working with the device, nothing will work, rendering our software useless. The same goes for the controller machine: we cannot control the rover without a machine that sends the required information to the brain. Hardware is not perfect, and issues can occur without notice, so we need to make sure our hardware is working as intended and be prepared if any hardware risks materialise.

6.   Severity Table

Risk – Severity Rating
Communication between servers/services – HIGH
Training the neural network – HIGH
Lack of knowledge – HIGH
Hardware risks – HIGH
Time constraints – MODERATE

7.   Conclusion

With all this research done, we conclude that our proposal is of relative novelty (nothing is truly unique in today’s world, after all) as well as being quite difficult to complete under the time constraints we are operating under.

As we move forward in this project we will have a clearer idea of the achievability of our goals, and by using the benefits of our Agile methodology we will hopefully be able to plan around any hiccups that occur.

The technical challenge of this project, as well as its close relation to an area of considerable interest in today’s world (self-driving cars), will hopefully keep the reader interested right up until the end, and through to our hoped-for success with our goals.

8.   Gantt Chart

9.   Work Breakdown Structure

Task (Persons Involved, Time Spent):

  • Research Current State: Andrew (4 hours), Gursimar (4 hours)
  • Proposal – Introduction: Andrew (11 hours)
  • Proposal – Review: Andrew (6 hours), Gursimar (4 hours)
  • Proposal – Methodology: Gursimar (3 hours)
  • Proposal – Resources: Gursimar (3 hours)
  • Proposal – Risks: Gursimar (3 hours)
  • Proposal – WBS: Andrew (3 hours)
  • Basic Idea Presentation: Andrew (2 hours), Gursimar (3 hours)
  • Server Side Services: Gursimar (3 hours)
  • Develop App: Gursimar (8 hours)
  • Integrate Existing Code: Gursimar (5 hours)
  • Design Neural Net: Andrew (6 hours)
  • Train Neural Net: Andrew (2 hours)
  • Implement Neural Net: Andrew (6 hours)
  • Basic Path: Andrew (12 hours), Gursimar (12 hours)
  • Auto Path: Andrew (60 hours), Gursimar (24 hours)

10.   References

Agile software development. (2018, October 23). Retrieved from https://en.wikipedia.org/wiki/Agile_software_development

Trapani, K. (2018, September 26). What is Agile/Scrum. Retrieved from https://www.cprime.com/resources/what-is-agile-what-is-scrum/

12 Principles Behind the Agile Manifesto. (2017, November 15). Retrieved from https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto/

A Beginners Guide to Understanding the Agile Method. (2018, October 15). Retrieved from https://linchpinseo.com/the-agile-method/

Android Studio features | Android Developers. (n.d.). Retrieved from https://developer.android.com/studio/features/

Project risk management. (2018, May 05). Retrieved from https://en.wikipedia.org/wiki/Project_risk_management

(n.d.). Retrieved from http://projectmanager.com.au/best-practice-risk-management-for-it-projects/

Artificial neural network. (2018, October 13). Retrieved from https://en.wikipedia.org/wiki/Artificial_neural_network

Nielsen, M. A. (n.d.). Neural Networks and Deep Learning. Retrieved from http://neuralnetworksanddeeplearning.com/

About TensorFlow. (n.d.). Retrieved from TensorFlow: https://www.tensorflow.org/

Anaconda. (n.d.). Retrieved from Anaconda: https://www.anaconda.com/

Android Studio. (n.d.). Retrieved from Android: https://developer.android.com/studio/

ARCHOS Drone Remote. (2017, February 23). Retrieved from Google: https://play.google.com/store/apps/details?id=com.udirc.archosdroneremote

Azure Cloud Services. (n.d.). Retrieved from Microsoft: https://azure.microsoft.com/en-us/services/

Baguley, G. (2009). Stereo Tracking of Objects with respect to a Ground Plane. Christchurch: University of Canterbury.

Bojarski, M., Testa, D. D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., . . . Zieba, K. (2016). End to End Learning for Self-Driving Cars. Ithaca: Cornell University Library.

Holz, D., Holzer, S., Rusu, R. B., & Behnke, S. (2011). Real-Time Plane Segmentation Using RGB-D Cameras. RoboCup 2011: Robot Soccer World Cup XV (pp. 306-317). Heidelberg: Springer.

Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., . . . Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. Ithaca: Cornell University Library.

Kerry, C. F., & Karsten, J. (2017, October 16). Gauging investment in self-driving cars. Retrieved from brookings: https://www.brookings.edu/research/gauging-investment-in-self-driving-cars/

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25 (NIPS 2012) (p. 9). Lake Tahoe: Online.

Meet Android Studio. (n.d.). Retrieved from Android: https://developer.android.com/studio/intro/

Microsoft Azure. (n.d.). Retrieved from Microsoft: https://azure.microsoft.com/en-us/

NXT Remote Control. (2013, August 26). Retrieved from Google: https://play.google.com/store/apps/details?id=org.jfedor.nxtremotecontrol

Office 365. (n.d.). Retrieved from Office: https://www.office.com/

Robot Controlled By Android Application. (2015). Retrieved from nevonprojects: http://nevonprojects.com/robot-controlled-by-android-application-project/

TensorFlow. (n.d.). Retrieved from TensorFlow: https://www.tensorflow.org/

University of Oxford. (2018, October 25). The PASCAL Visual Object Classes Homepage. Retrieved from The PASCAL VOC project: http://host.robots.ox.ac.uk/pascal/VOC/

Visual Studio Development Features. (n.d.). Retrieved from Microsoft: https://visualstudio.microsoft.com/vs/features/

Visual Studio IDE. (n.d.). Retrieved from Microsoft: https://visualstudio.microsoft.com/vs/

Vodafone. (2018, October 21). Vodafone Smart C9. Retrieved from Vodafone: https://www.vodafone.co.nz/shop/mobileDetails.jsp?skuId=sku1811664&hardwareSkuId=sku1811662&gclid=CjwKCAjw9sreBRBAEiwARroYm3foRfY9R96QfEp2DMC40gAqQe-b9FLBqGzGlFB-QM_Zu-SMVVLKyxoCeeYQAvD_BwE

Webdriver. (2018). Retrieved from Bigtrak: http://rover.bigtrakxtr.co.uk/webdriver.html

WL-Robot. (2017, July 29). Retrieved from Google: https://play.google.com/store/apps/details?id=com.wl.robot

Wysk, R. A., & Chang, T.-C. (1997). Computer-Aided Manufacturing 2nd. Upper Saddle River: Prentice Hall.

XMRemoteRobot. (2018). Retrieved from XMRemoteRobot: https://xmrrae.azurewebsites.net/Home/Test02
