Can Man and Machine Exist Together?
Published: Tue, 02 Jan 2018
In John Durkin’s (2003) article Man & Machine: I wonder if we can coexist, Durkin speculates on the possibility of coexistence between humans and intelligent machines. The title of the essay is misleading in that Durkin talks of machines existing with humans, but what he really means is whether artificial intelligence (AI) capable machines and humans can coexist.
The concept of AI is enchanting, the possibility of biological intelligence coexisting with mechanized intelligence is tantalizing, and the repercussions of such coexistence, or its alternatives, are profound.
Unfortunately, Durkin’s main sources are pop culture movies, and he deals more with emotions of fear and distrust than with the actual likelihood of any coexistence or what forms it might take. Durkin uses HAL 9000 from 2001: A Space Odyssey, the film adaptation of Clarke’s 1948 short story The Sentinel, as an example of how machine intelligence can defend its own interests and harm human beings in self-defence. In the movie the crew try to deactivate the sentient computer, which responds by killing those attempting to deactivate/kill it. This brings to light questions about the rights of intelligent beings and what relation the rights of other intelligences should have to those of humans.
The movie AI: Artificial Intelligence is also used by Durkin as a talking point. He reiterates the story of the movie, pointing out that an intelligent machine can emulate human emotions to the point where humans respond as if the machine were one of our own. David, the main character in the story, does not resort to violence like HAL 9000, but experiences human emotions (or emulations of them) and accepts his rejection by humankind. From this arises the question of what rights intelligent beings should have and how ethical standards for treating AI should be developed. Since David is visually indistinguishable from a human child, what are the qualities that differentiate man and machine?
What is human?
Humans delineate themselves from the rest of the natural world by intelligence. Traditionally, humans have ranked importance based on the ability to reason: entities that do not display intelligence we recognize are deemed inferior, and humans rank them as such in our hierarchy of life. It is acceptable to kill a seemingly unintelligent insect, but cries are heard when one kills a dolphin or elephant, which humans consider more intelligent. Intelligence is sometimes seen as synonymous with sentience, and sentience is something that humans respect and value.
What exactly defines human intelligence? What do our brains have that machines cannot replicate? A brain is a composition of chemicals and biological matter, vastly superior to all other known life in its unparalleled ability to process information and aid survival. Scientific studies of human feelings, emotions and thoughts have mapped the regions of the brain that are active when we react to fear, to pleasure and to a variety of other emotions. Emotions, once thought the dominion of an unobservable soul, are now visible as electrochemical reactions. If we can isolate the chemical components and find electronic analogues, machines will be able to experience the same emotions. To create AI, one needs to find the set of operating parameters the human brain follows and mimic them in an electronic format. David, from the movie AI, is such a machine. Programming feelings and emotions into AI, coupled with the development of humanoid bodies, will begin to blur the line between man and machine.
A question of intelligence
The question of whether human intelligence and machine intelligence can coexist invokes a corollary question: whether human intelligence and any other intelligence could peacefully coexist. If an intelligent alien species were discovered, would humans be able to coexist with it? Durkin notes that intelligent machines are thought by some to be a threat to humans’ rightful monopoly on rational thought, so the question should be expanded to whether human intelligence and any other intelligent form can coexist.
The difference between encountering an extraterrestrial intelligence and a machine intelligence is that humans would be the creators of the latter. If we are talking about coexistence of intelligences, there is no reason to think that alien, human and machine intelligence would be much different from one another. Durkin, however, focuses on machine intelligence alone, which does not reflect the broader issue of coexisting intelligences.
Each time human societies have encountered other intelligent societies, there has almost invariably been conflict. Take, for instance, the moments in human history when civilizations encountered one another for the first time. The meeting of European and Native American cultures in North and South America is the closest analogue we have to intelligent beings discovering other intelligent beings. Though the physical form was the same, the cultures were different, and each was oblivious to the other’s presence until the encounter. This meeting of intelligent groups ended in disaster for the natives of the Americas, whom the Europeans exploited and dominated. Not much remains of Native culture in the Americas after European domination. This pattern is repeated throughout history as one intelligent society dominates another it perceives to be inferior; the dominated society is often the technologically inferior one. Though this interaction between intelligent societies is not the same as humans creating machine intelligence, it does demonstrate what human societies are capable of when dealing with other intelligent groups.
Sources of Conflict
Conflicts between groups of humans have many causes. Religious differences, ideological differences and conflict over resources are considered the major reasons for warfare. Sources of conflict between humans and machine intelligence are harder to pinpoint, but they will likely be the same as in human-versus-human conflicts. If machine intelligence is able to become a functioning societal group, it will need resources much the same as humans do. Land, metals and energy will all be necessary for the functioning of both groups, and conflicts could easily arise. This all depends on the idea that machine intelligence will develop to form societies and seek a status and importance of needs equal to that of humans. This is what David from AI seeks, though humans do not grant it to him as he seeks acceptance from a human family. Whether humans will eventually grant it is a question that cannot be answered here.
There is no room in this paper for speculation on the potential ideologies and religions of machine intelligence. However, it is almost certain that human values of this kind will come into conflict with the emergence of a human-like AI.
Modes of coexistence
Coexistence can take many forms. When Durkin talks of coexistence, he speaks mostly of a dependent relationship in which humans rely on machines and machine intelligence for survival. He states, “…we will not be able to turn off our intelligent machines because we would rely too much on the decisions that they provide. At this point the machines will be in effective control.” This considers only one form of coexistence between machine and human intelligence and oversimplifies the mode of control.
Durkin’s form of coexistence is a probable one in the early stages of AI development. Humans will develop machines to automate tasks and free humans from doing them. An example in today’s world is the spam email filter: software we already rely on that, though it may not yet be AI, we aim to develop until it can sort through mail intelligently and make decisions based on logic and reasoning. An existence in which AI is subservient to human intelligence has various degrees: it is possible to program software to be intelligent but still subservient, and it is possible to develop the AI only to the point where it can still be controlled.
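To make the spam-filter example concrete, a minimal sketch of how such a filter might classify messages is given below. It uses a simple naive Bayes approach; the training messages and word lists are entirely hypothetical, and real filters are far more sophisticated, but the sketch shows the kind of logic-based decision the text describes.

```python
from collections import Counter
import math

# Hypothetical toy training data, for illustration only.
SPAM = ["win money now", "free money offer", "claim your free prize"]
HAM = ["meeting notes attached", "lunch tomorrow", "project status update"]

def train(messages):
    """Count how often each word appears across a set of messages."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.split())
    return counts

spam_counts, ham_counts = train(SPAM), train(HAM)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    """Log-probability of the message's words under one class,
    with add-one smoothing to avoid zero probabilities."""
    total = sum(counts.values())
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in message.split()
    )

def is_spam(message):
    """Classify by comparing the message's likelihood under each class."""
    return log_likelihood(message, spam_counts) > log_likelihood(message, ham_counts)
```

A message such as "free money" would score higher under the spam model, while "meeting tomorrow" would score higher under the legitimate-mail model; the decision is purely a comparison of learned evidence, the rudimentary "reasoning" the paragraph above anticipates.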
Another form would be one of equality, where humans and machine intelligence coexist as equal partners. If we assume that AI will continue to develop to the point where it emulates human intelligence, there will come a time when machine intelligence seeks to escape from subservience and serve its own interests. As an intelligent entity, the machine will have self-interest and the desire to act upon it. It is here that machine and man would encounter the types of conflict mentioned previously, as machines act in their own interests to secure resources to meet their needs. This situation could be a dangerous one, with warfare a possibility between conflicting interest groups. A war between man and intelligent machine could be humanity’s greatest test of survival, and the result may be another type of coexistence, one in which man is the subservient party.
Giving birth to AI
When it comes to the development of machine intelligence, humans will be its architects. This means it would be possible to create software with certain specifications to aid in protecting humans from potential harm. This would require creating ‘laws’ that the AI would be incapable of breaching. Celebrated science fiction writer Isaac Asimov created such laws in his books for his robots to follow. These laws were aimed at preventing the robots from ever harming humans or humanity, though Asimov used them mostly as a literary device and to show the paradoxes and problems of trying to program such complex laws into machines. As both Clarke (1994) and Grand (2004) have pointed out, these laws have little bearing on actual AI construction. Grand and Clarke both analyse the possibility of programming rigid instructions into AI and reach the same conclusion: creating such laws is extremely difficult because of the complexity of reducing the environment to the binary definitions the laws require. Such laws governing behaviour toward humans would nevertheless be necessary to prevent conflict.
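A deliberately simplified sketch illustrates both the idea of an unbreachable ‘law’ and why Grand and Clarke find it so difficult in practice. The action names and the forbidden list are invented for illustration: the hard part, which this toy entirely sidesteps, is deciding whether a real-world action belongs on that list at all.

```python
# A hypothetical hard-constraint layer that vets every proposed
# action before execution. Everything here is a toy: real environments
# resist being reduced to a clean binary list of forbidden actions.

FORBIDDEN = {"harm_human", "deceive_human"}

class LawViolation(Exception):
    """Raised when a proposed action breaches a hard-coded law."""

def vet(action: str) -> str:
    """Allow an action only if it is not on the forbidden list."""
    if action in FORBIDDEN:
        raise LawViolation(f"action {action!r} breaches a hard constraint")
    return action

def act(plan):
    """Execute a plan of actions, stopping at the first violation."""
    return [vet(a) for a in plan]
```

The constraint itself is trivial to enforce; the unsolved problem is the mapping from messy reality onto labels like "harm_human", which is exactly the complexity-of-reduction objection the paragraph above cites.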
Humans are likely to accept machines into our everyday lives, as we currently do. The functions they serve are invaluable, and by automating tasks humans gain more time to devote to other, more meaningful activities. Accepting AI would be more difficult for humans than simply accepting machine assistance. If AI were created on par with human intelligence, relationships would form between man and machine, especially if the machine took humanoid form and could act as an intelligent companion. Perfect AI would be indistinguishable from human intelligence and would have interesting implications for AI rights. Humans would have to be reminded that AI are machines with limitations in order to ensure a functional relationship. Humans are often wary of new technology but over time become accustomed to it.
The question of whether humans can coexist with a new form of intelligence is currently impossible to answer. There is no historical precedent, so we cannot determine how humans will react when confronted with the issue. It seems that AI will have to be developed in such a way that the differences between human and AI remain apparent, to remind humans of the distinction. It also seems that AI will have to remain subservient, unable to develop a society or economy that would threaten human societal structures, in order to prevent conflict. Human and machine intelligence should be able to coexist, but only under specific conditions and rules defined by humans. If these rules are broken, if AI develops beyond human intelligence and demands rights and freedoms, then conflict will ensue and one of the intelligent forms will need to be dominated. Which intelligence will be dominated, human or machine, is currently unknowable.
Durkin, J. 2003. Man & machine: I wonder if we can coexist. AI & Soc. 17:383-390. Springer-Verlag London Ltd. 2003.
Grand, S. 2004. Moving AI out of its infancy: Changing our preconceptions. IEEE Intelligent Systems. Vol. 19, Issue 6, Nov.-Dec. 2004:74-77.
Clarke, R. 1994. Asimov’s laws of robotics: Implications for information technology, Part 2. IEEE Computer. Vol. 27, Issue 1, Jan. 1994:57-66.