Arguments on Artificial Intelligence

Disclaimer: This work has been submitted by a student. This is not an example of the work written by our professional academic writers.

Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.

Published: Thu, 31 Aug 2017

We live in an extraordinary time. Improvements in technology seem to be accelerating at an unbelievable rate. Every time Moore's Law appears to have reached its limits, tech companies find a new level of capability. No less remarkable is the advancement of artificial intelligence (AI). Our everyday lives are already deeply immersed in AI, and most of us do not even know it. It controls much of the financial markets, performs law enforcement tasks, and makes our internet searches more useful. Most AI today is weak AI, designed to perform a very specific task (Tegmark, n.d.). But the goal of all research and corporate investment is always more: what else can we know or do? Often, these researchers and companies are creating things in a vacuum, with limited moral, ethical, or legal boundaries. When is it too much? The driving force that makes us want to always explore further is what makes the development and use of AI a risky course of action.

Why is this a risky course of action? Because giving control of systems to artificial intelligence could have seriously negative results. Take, for example, researchers working with the University of Pittsburgh Medical Center, who developed a neural network that returned suggestions for the treatment of pneumonia patients. Trained on a historical database of treatment methods and their outcomes, the AI was supposed to suggest how to treat incoming patients. In one case, it recommended that certain high-risk patients be sent home (Bornstein, 2016), a recommendation that carried a high probability of resulting in death.
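The failure mode here is worth spelling out. A model trained only on recorded outcomes cannot see the care that produced those outcomes: if the riskiest patients historically received the most aggressive treatment, their recorded outcomes look good, and a naive model concludes they are safe. The sketch below is hypothetical (the numbers and the asthma example are illustrative assumptions, not the actual UPMC data or model), but it shows the mechanism with a simple frequency-based risk estimate:

```python
# Hypothetical sketch of how outcome-only training misleads a risk model.
# Assumed scenario: asthmatic pneumonia patients were historically sent
# straight to intensive care, so their *recorded* mortality is low --
# the aggressive treatment that saved them is invisible in the data.
records = [
    # (has_asthma, died) -- 100 patients in each group, made-up counts
    *[(True, False)] * 95, *[(True, True)] * 5,     # asthmatics: 5% died
    *[(False, False)] * 89, *[(False, True)] * 11,  # others: 11% died
]

def mortality_rate(has_asthma):
    """Observed death rate for one patient group in the historical data."""
    outcomes = [died for asthma, died in records if asthma == has_asthma]
    return sum(outcomes) / len(outcomes)

print(mortality_rate(True))   # 0.05 -- asthmatics *appear* lower risk
print(mortality_rate(False))  # 0.11
```

Ranking patients by these observed rates, the model would triage the genuinely high-risk asthmatic patients home, which is exactly the kind of dangerous recommendation described above. The flaw is not a bug in the arithmetic but a confounder the data never records.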

When working with any complex task, whether accomplished by human or machine, the law of unintended consequences must always be considered. No matter how well someone thinks they have thought a system through, it is nearly impossible to consider every possible outcome. Not all unintended consequences are bad: many drugs have beneficial side effects entirely different from what the drug was designed to do. On the other hand, many drugs have very negative side effects. They are certainly not intended to cause any adverse symptoms, yet many have severe unintended consequences, including death.

Some would argue that AI is already in use, benefits everyone with no negative effects, and that a singularity cannot happen. While we certainly use some types of AI today and have seen minimal negative effects, it is also true that we have not reached a singularity. It is the height of hubris to believe that we have total control over anything or that we have considered all possibilities. Consider Fukushima or Chernobyl: not every possibility was covered, and the result was enormous disasters.

Even NASA, the standard for careful scrutiny of complex systems and procedures, has had catastrophic failures in the form of space shuttle disasters caused by the hubris of the organization and of individuals.

How many people died on the Titanic? A ship billed as unsinkable was sunk by a simple iceberg, or was it hubris? The shoddy steel used in the construction of the hull, the poorly designed bulkheads that did not reach the top deck, and the pressure to go as fast as possible are what sank the ship; too few lifeboats on the "unsinkable" ship killed the passengers. Hubris led them down the path to destruction.

We are at the point where we have the capability to combine AI systems into autonomous military machines, some of which are already in the testing phase of development: machines that make decisions of life and death on their own (Russell, 2015). Absent human intervention, what is to keep one of these machines from deciding that the wrong person is a target? A machine knows no morality and no ethical code, only its programming, its goal, its reason to exist. Given a powerful enough computational system, it could decide to use everything at its disposal to achieve its goals (Anderson, 2017), including taking control of infrastructure, or even of humans.

So, what do we do? Is there risk? Even captains of industry and experts like Gates, Musk, and Hawking suggest there is (Holley, 2015). It is clear we are already on the path to creating ever more complex and capable AI. We must recognize that we all make mistakes and be constantly on guard against error and, more importantly, hubris. Most expansion of knowledge carries risk. When confronted with a discipline that has catastrophic possibilities, we must resist the desire to run forward as fast as we can with no concern for the consequences. Methodical deliberation is the only course. We must consider the ramifications of each step and ensure safeguards are in place should we need to terminate or isolate any AI that develops goals counter to those of humans. If we manage to be conscientious enough and adhere to ethical principles, we might, just might, keep from developing the instrument of our own demise.

References

Anderson, J. (2017, February 16). Google’s artificial intelligence getting “greedy,” and “aggressive.” Activist Post. Retrieved from http://www.activistpost.com/2017/02/googles-artificial-intelligence-getting-greedy-and-aggressive/

Artificial Intelligence. (2015). In Opposing Viewpoints Online Collection. Detroit: Gale. Retrieved from http://link.galegroup.com.ezproxy.libproxy.db.erau.edu/apps/doc/PC3010999273/OVIC?u=embry&xid=415989d5

Bornstein, A. (2016, September 1). Is artificial intelligence permanently inscrutable? Nautilus.

Holley, P. (2015, January 29). Bill Gates on the dangers of artificial intelligence: ‘I don’t understand why some people are not concerned.’ The Washington Post. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/

Russell, S. (2015, May 28). Take a stand on AI weapons. Nature, 521(7553), 415-416.

