Coursework Assessment Artificial Intelligence English Language Essay



The Internet has introduced large changes in contemporary life. At the heart of these changes are new technologies for communication. One of the most established of these technologies is the conversational agent. Recent advances in Natural Language Processing (NLP) and Artificial Intelligence (AI) more generally have moved this field closer to the vision of a more human-like interactive system.

Conversational agents are communication technologies that exploit natural language and computational linguistics to converse with users in human-like, text-based, information-seeking and Web-based dialogues. They support a broad range of applications that typically focus on helping users to complete a specific task, such as information search, planning, event management, or diagnosis.

Deployed on retail websites, they respond to customers' enquiries about products and services, or help service agents process customers' enquiries and provide them with answers or guidance. Conversational agents associated with financial services websites answer questions about account balances and provide portfolio information. In the educational field, conversational agents assist students by providing problem-solving advice while they learn. Conversational agents for entertainment are deployed in games to engage players in situated dialogues about game-world events. In some cases, conversational agents interact with users through artificial characters; these agents are then referred to as embodied agents.

Some programs function as a type of conversational agent and are known as chat bots, or chatterbots. They are sometimes referred to as Artificial Conversational Entities or talk bots. Basically, a chat bot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Traditionally, such programs attempt to simulate interpersonal conversation, with the aim of temporarily fooling a human into thinking that they are talking to another person. More recently, chat bots have been used for practical purposes such as online help, personalised service, or information acquisition, in which case the program functions as a type of conversational agent. Working from a database, a chat bot interprets the user's input by simply scanning it for keywords and responding with the closest matching wording pattern. The strength of a chat bot is therefore measurable through the quality of the output it generates in response to the user.


The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot, Julia) in 1994 to describe these conversational programs.

Basically, a chat bot is a computer program that, when you provide it with some input in natural language (English), responds with something meaningful in that same language. This means that the strength of a chat bot can be measured directly from the quality of its responses to the user.


In 1950, the English mathematician Alan Turing published his famous article "Computing Machinery and Intelligence", which proposed a method for determining whether or not a machine is intelligent, now called the Turing test. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably, on the basis of the conversational content alone, between the program and a real human.

In 1966, the oldest chat bot, ELIZA, was developed. ELIZA was programmed by Joseph Weizenbaum at the Massachusetts Institute of Technology, stimulated by the great interest in Turing's proposal. At the time, ELIZA was one of the most famous Artificial Intelligence programs, in that it seemed able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent.

After ELIZA came PARRY, written in 1972 by the psychiatrist Kenneth Colby at Stanford University. PARRY implemented a crude model of the behaviour of a paranoid schizophrenic based on concepts, conceptualizations, and beliefs (judgements about conceptualizations: accept, reject, neutral). It was intended to reflect the mind of a real human driven by the most primal human emotions, such as fear and anger. This was accomplished using a custom-designed dictionary database which could apply positive and negative values to certain words or phrases to act as triggers for various emotions. It also embodied a conversational strategy, and as such was a much more serious and advanced program than ELIZA.

After the classical chat bots ELIZA and PARRY, more recent notable programs such as A.L.I.C.E., Jabberwacky and D.U.D.E. were established. While ELIZA and PARRY were used exclusively to simulate typed conversation, many chat bots now include functional features such as games and web-searching abilities. Some more recent chat bots also combine real-time learning with evolutionary algorithms which optimise their ability to communicate based on each conversation held, one notable example being Kyle, winner of the 2009 Leodis AI Award. Still, there is currently no general-purpose conversational artificial intelligence, and some software developers focus on the practical aspect: information retrieval.

Types of Chat bots

A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), also known as Alicebot, was developed by Dr. Richard Wallace in 1995 using his own A.I.M.L. markup language. This bot has won the Loebner Bronze medal three times. A.L.I.C.E. is a program that converses with a human by applying pattern-matching rules to the human's input. The units of A.I.M.L. objects are categories. Each category has a pattern, which matches an input, and a template, which is used to construct the A.L.I.C.E. response. A.I.M.L. tries to match word by word against the static database to obtain the largest pattern match, which is the best one. When a match is found, the matching process stops, and the template belonging to that category is processed to generate an output. If no match is found, a stock answer such as "Sorry, I have no idea." is returned. The design of A.L.I.C.E. is very similar to that of ELIZA and PARRY because it is 100% human-edited. Unlike ELIZA and PARRY, however, A.L.I.C.E. is not designed for a specific role; instead she was made to reflect a woman in general. A.L.I.C.E. is also updated frequently, and it has inspired many other bot builders to use A.I.M.L. as the source language for their projects.
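As a rough sketch of this category-matching idea (a simplification, not the real A.I.M.L. engine; the categories, wildcard handling and fallback text below are all illustrative):

```python
# Simplified A.I.M.L.-style matching: each category pairs a pattern with
# a response template, and "*" acts as a wildcard capturing any words.
import random
import re

categories = [
    ("HELLO *", "Hi there! How can I help you?"),
    ("WHAT IS YOUR NAME", "My name is A.L.I.C.E."),
    ("I LIKE *", "Why do you like {0}?"),
]

def respond(user_input):
    # Normalise like the static database: upper case, trailing punctuation off.
    text = user_input.upper().strip(" .!?")
    for pattern, template in categories:
        # Translate the "*" wildcard into a regex capture group.
        # (Real AIML prefers the most specific match; this sketch
        # simply takes the first category that matches.)
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*(g.lower() for g in match.groups()))
    # No category matched: fall back to a stock answer.
    return random.choice(["Sorry, I have no idea.", "Tell me more."])

print(respond("I like chess"))  # -> Why do you like chess?
```

The template belonging to the matched category is filled with whatever the wildcard captured, which is how a small set of human-edited rules can cover many inputs.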

MegaHAL was first introduced in the 1998 Loebner Prize Contest. It simulates conversation quite differently from other chat bots. MegaHAL learns during the progress of a conversation: it can remember new words and sentence structures from the user's input, and it can even learn new ways of substituting words or phrases for other words or phrases. The input received is split into a sequence of words and non-words, where a word is a series of alphanumeric characters and a non-word is a series of other characters. This feature enables the chat bot to learn not only new words but also the separators between the words. For example, if the user tends to put a double space after a full stop, MegaHAL will do the same. When MegaHAL is started, it has no knowledge of language, so it is unable to generate responses at all. It therefore has to be trained on a source text to make sure that it will not expose its premature identity. After MegaHAL has been trained on a large variety of data, it is able to generate responses to questions on various topics. However, MegaHAL does not actually understand the conversation, or even the sentence structure; it constructs its output based on sequential and statistical relationships.
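MegaHAL's statistical approach can be hinted at with a first-order word-transition model (the real program uses higher-order Markov models; everything below, including the training sentences, is an illustrative simplification):

```python
# Learn which token tends to follow which, then generate replies from the
# learned transitions -- sequential relationships, with no understanding.
import random
from collections import defaultdict

transitions = defaultdict(list)

def train(sentence):
    tokens = sentence.split()
    for current, following in zip(tokens, tokens[1:]):
        transitions[current].append(following)

def generate(start, max_len=10):
    word = start
    output = [word]
    for _ in range(max_len):
        followers = transitions.get(word)
        if not followers:
            break  # no learned continuation: an untrained model falls silent
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

train("the cat sat on the mat")
train("the dog sat on the rug")
print(generate("the"))
```

Before any call to `train`, `generate` can produce nothing beyond its start word, which mirrors the "no knowledge of language" state described above.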

Albert One is an artificially intelligent chat bot created by Robby Garner. It is a multi-faceted bot designed to simulate the way humans make conversation, using natural language programming techniques. Rather than using a single algorithm to generate replies, Albert One has several. In 1998 and 1999, Albert One won the Loebner Prize contest, a competition among chat bots. The project was started in 1995, with the purpose of surveying and gathering information about the kinds of things people would say to a chat bot. From this, a large database of human statements and associated responses, in words and sentence structures, was built into Albert One. This portion of the project was tested at the 1994 - 1997 Loebner Prize contests.

Fred, or FRED, is an early chat bot written by Robby Garner. The name Fred was first suggested by Karen Lindsey, and Robby later expanded it to "Functional Response Emulation Device". This chat bot was designed to explore natural language communication between humans and computer programs; in other words, it is a study of human conversation and of the way a computer program can learn from the user's input to construct its own conversations. It works by storing a large database of human statements and appropriate responses: referring to the input made by users, it builds its own reply from the database. This approach required expert coding and editing.

SECTION : 1 Description of starting code

Most chat bots work on a straightforward stimulus-response principle: a given input from the user gives rise to a connected output. A good example of this technique is ELIZA. In this type of chat bot, a pattern-matching algorithm detects certain keywords or key phrases typed by the user and replies with a pre-programmed response. If the bot receives a certain stimulus, it replies with the corresponding response. If all goes well, the conversation might proceed as below:

User: Hi! How are you today?

Bot: Hello. I'm fine, and how about you?

User: Good, thanks. What is new on the internet now?

Bot: Did you see the new trailer for Transformers 3?

User: I have not seen it.

Bot: It is one of the most popular upcoming movies this year.

User: I do not really like action movies.

Bot: What's your favourite type of movies?

User: I prefer comedy movies.

In this example the conversation flows well, and the bot does a reasonably good simulation of human conversation. But using the same lookup table, things can go badly wrong:

User: I hope you don't want to talk about Transformers again.

Bot: Let's talk about the new upcoming Transformers movie.

User: Hello? Did you not understand what I just typed?

Bot: Hi! How are you today?

User: I'm fine, but please don't mention Transformers again.

Bot: Transformers is a very popular upcoming movie.

User: Look, let's keep off the subject of movies!

Bot: What's your favourite kind of movies?

User: Are you trying to ignore me? Not movies again!

Bot: What's your favourite kind of movies?

User: Stop talking about movies!!

Bot: What's your favourite kind of movies?

User: You irritating bot! Why do you keep saying the same thing over and over again?!?!?!?
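The lookup-table behaviour behind exchanges like these can be sketched as a minimal keyword spotter (the keywords and canned replies below are illustrative):

```python
# Stimulus-response lookup: each keyword maps to exactly one canned
# reply, with no memory of what has already been said.
responses = {
    "hello": "Hi! How are you today?",
    "movies": "What's your favourite kind of movies?",
    "transformers": "Let's talk about the new upcoming Transformers movie.",
}

def reply(user_input):
    for keyword, response in responses.items():
        if keyword in user_input.lower():
            return response
    return "Tell me more."

# The bot gives the same reply however often the stimulus recurs,
# which is exactly the failure mode shown in the dialogue above.
print(reply("Stop talking about movies!!"))
```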

Clearly, this simple keyword-spotting technique has several limitations. The context in which a keyword appears is vitally important, but the program totally ignores it. A single word can have multiple meanings in English. Take the word bank, for example:

A bank is a financial institution.

A river has two banks.

Bank can mean mound, as in grassy bank, snow bank or sand bank.

Electrical equipment can contain a bank of switches.

The cushion of a pool table is called a bank.

When an aircraft turns, it's said to bank.

And a dictionary will point out several more shades of meaning. It's clear that the stimulus bank can't be associated with a single response in a lookup table.

When we converse as humans, we have some understanding of the context in which a word is used. For example, the question "Did you go to the bank at lunch time?" probably refers to a financial bank and not a river bank. We know this because, as humans living in the modern world, we know that going to the bank is an activity often performed by people during their lunch break. People have sometimes come up with schemes that allow chat bots to attempt to differentiate among the various meanings of a word, but these schemes are not very robust. A computer program lacks this everyday knowledge, and currently there is no satisfactory way to give it such knowledge, despite the best efforts of artificial intelligence researchers.

As for grammatical approaches, many people have tried the idea of analysing conversational utterances grammatically. The aim is to label each word as a part of speech (noun, verb, adjective, etc.); the next stage would be to extract meaning from the parsed sentences. Unfortunately, most people are very ungrammatical in their use of language; one only has to visit an internet chat room or forum to appreciate this. In other words, any program based on parsing sentences is bound to fail when presented with ungrammatical input. It won't work with real people, because we don't "talk proper". What people say may not be grammatically correct, but it usually manages to convey meaning.

If a conversational system is to appeal to users, it must have some kind of personality and character. If you ask it "What is your favourite colour?", it needs to know what it prefers. If you ask "Where do you live?", it needs to give a sensible response. And if you tell it "You're stupid!", it needs to react with a suitable emotion, as a human would. Without these abilities, a program will never be interesting to converse with.

Many people have tried to give their chat bots a character by authoring suitable responses for users. Unless the character is well constructed and self-consistent, it will fail to convince. Because chat bots are currently so arbitrary in their responses, zany or robotic characters are possibly the most convincing, though their conversation leaves much to be desired.

Clearly there is a lot more to a conversational system than just responding to individual words or phrases. The system needs to know how the same word can be used in many different ways, and it needs to have general knowledge about the world in which we humans live. Both of these are very difficult problems to solve. A chat bot should also have a consistent and convincing character if users are to hold satisfying conversations with it.

SECTION : 2 Proposed NLP Enhancements

There are quite a few things that can be enhanced. The first is that, since the chat bot tends to be very repetitive, we might create a mechanism to control these repetitions. We could simply store the chat bot's previous response and, when selecting the next response, check that it is not equal to the previous one; if it is, we select a new response from those available. The other thing we could improve is the way the chat bot handles the user's input. Currently, if you enter an input in lower case, the chat bot will not understand it at all, even if there is a match for that input inside the bot's database. Likewise, if the input contains extra spaces or punctuation characters (!;,.), this also prevents the chat bot from understanding the input. That is why we will introduce a mechanism to pre-process the user's input before it is searched for in the chat bot's database: a function to convert the user's input to upper case, since the keywords inside the database are in upper case, and another procedure to remove all of the punctuation and extra spaces that could be found within the input.
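A minimal sketch of these two enhancements, assuming a database whose keywords are stored in upper case (the function names are illustrative):

```python
# Two enhancements: normalise the user's input before database lookup,
# and avoid repeating the bot's previous response when alternatives exist.
import random
import string

def preprocess(user_input):
    # Upper-case to match the database keywords, strip punctuation,
    # and collapse extra spaces.
    cleaned = user_input.upper().translate(
        str.maketrans("", "", string.punctuation))
    return " ".join(cleaned.split())

def select_response(candidates, previous):
    # Prefer any candidate different from the previous response;
    # fall back to the full list if there is only one option.
    fresh = [c for c in candidates if c != previous]
    return random.choice(fresh or candidates)

print(preprocess("  what is  your name?!  "))  # -> WHAT IS YOUR NAME
```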

Clearly there are still many limitations with this version of the program. The most obvious is that the program uses exact sentence matching to find a response to the user's input. This means that if you ask it "what is your name again", the program will simply not understand what you are trying to say, because it is unable to find a match for this input. That is a little surprising, considering that the program can understand the sentence "what is your name".

There are at least two ways to solve this problem. The most obvious is to use a slightly more flexible way of matching keywords in the database against the user's input: all we have to do is allow keywords to be found anywhere within the input, so that the previous limitation no longer applies. The other possibility is much more complex: it uses the concept of fuzzy string search. To apply this method, it is useful first to break the input and the current keyword into separate words, then create two vectors, one storing the words of the input and the other the words of the current keyword, and compare them for similarity. So there you have it: two different methods for improving the chat bot. In fact we could combine both methods, selecting which one to use in each situation.
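Both matching strategies can be sketched briefly. The fuzzy variant below uses Python's difflib as one possible word-sequence similarity measure; that choice, along with the keyword list and the 0.6 cutoff, is an assumption for illustration rather than the method the original program used:

```python
# Two ways past exact sentence matching: substring keyword search, and a
# fuzzy comparison between the input's word vector and each keyword's.
import difflib

keywords = ["WHAT IS YOUR NAME", "HOW ARE YOU"]

def substring_match(user_input):
    # A keyword counts as matched if it appears anywhere in the input.
    return [k for k in keywords if k in user_input.upper()]

def fuzzy_match(user_input, cutoff=0.6):
    # Compare the two word vectors and keep the most similar keyword.
    best, best_score = None, 0.0
    for keyword in keywords:
        score = difflib.SequenceMatcher(
            None, user_input.upper().split(), keyword.split()).ratio()
        if score > best_score:
            best, best_score = keyword, score
    return best if best_score >= cutoff else None

print(substring_match("WHAT IS YOUR NAME AGAIN"))
print(fuzzy_match("what was your name"))
```

Combining the two, as suggested above, could mean trying the cheap substring match first and falling back to the fuzzy search only when it finds nothing.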

ELIZA is an AI program that simulates the behaviour of a therapist. The first program of this sort was developed in 1966 at MIT. Such programs, which interact with the user in simple English and can simulate a conversation, are known as chat bots. A program like ELIZA requires knowledge of three domains: Artificial Intelligence, Expert Systems and Natural Language Processing. Even though the last two are sub-parts of the first, they are emerging as sciences in themselves. ELIZA has a database of facts and rules, which are searched to give the best possible response. The rules are indexed by keywords; some rules require no keyword.

Please note that the current functionality and features of this program are very limited; they merely accompany the article. Like other AI programs, this program has immense scope for improvement. The following are improvements which could be made. All the previous talk could be stored in an array of strings, so that if the user contradicts himself or herself, ELIZA can point out the contradiction. A database, or at least a flat file, could be used for data and talk storage. Sessions and user-password pairs could be established so that, even after one session is complete, the next time the user enters his user name and password, ELIZA will retrieve all the relevant data and previous talks related to that user from the database.
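The session idea could be sketched as follows (an in-memory stand-in for the proposed database or flat file; all names and the stored dialogue are illustrative):

```python
# Keep each user's previous talk keyed by user name, so a returning
# user's history can be reloaded at the start of a new session.
sessions = {}  # user name -> list of (user utterance, bot reply) pairs

def log_turn(user, utterance, reply):
    sessions.setdefault(user, []).append((utterance, reply))

def resume(user):
    # On login, fetch everything this user said before; a real version
    # would also check the password and read from persistent storage.
    return sessions.get(user, [])

log_turn("alice", "Hello", "Hi! How are you today?")
log_turn("alice", "I feel sad", "Why do you feel sad?")
print(resume("alice"))
```

With the history in hand, a contradiction check becomes a scan over the stored utterances before replying.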

Chat bots are a combination of Artificial Intelligence, Natural Language Processing, and creativity. However you refer to them, new chat bots allow you to create an engaging virtual personality that can perform many tasks: from assisting with common computer tasks, to serving as a virtual assistant that can help you get organized, to acting as a teacher that can play games or administer tests. Whatever you do with your chat bot, the possibilities are endless.

Ninety percent of what ELIZA says is found in the associated data file. These techniques work when ELIZA does not understand what the user is talking about, when the user repeats himself, or when the user does not type anything and keeps pressing enter.

The strategy used to respond to a request is as follows. ELIZA first checks whether the user has given any null input; if so, it takes a fact from the static database to respond. There are some built-in responses that ELIZA can recognize readily. If no built-in sentence framework is found, ELIZA searches for a specific keyword to define the context. If no context is found, it deliberately motivates the user to speak about a specific topic.

Otherwise, a response is chosen at random from the database.
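That decision flow can be sketched as follows (the keyword table and stock responses below are illustrative, not ELIZA's actual script):

```python
# ELIZA-style response strategy: null input gets a stock prompt,
# a recognised keyword defines the context, and otherwise a random
# fallback nudges the user toward a specific topic.
import random

keyword_responses = {
    "MOTHER": "Tell me more about your family.",
    "DREAM": "What does that dream suggest to you?",
}
fallbacks = ["Please go on.", "Let's talk about your childhood."]

def eliza_respond(user_input):
    text = user_input.strip().upper()
    if not text:                        # null input
        return "Please say something."
    for keyword, response in keyword_responses.items():
        if keyword in text:             # keyword defines the context
            return response
    return random.choice(fallbacks)     # no context found

print(eliza_respond("I had a strange dream"))
```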

SECTION : 3 Evaluation and discussion

In order to decide how close we have come to the aims of the NLP enhancement, we need to evaluate the program. Three main methods were used for evaluation:

Evaluate the naturalness of ELIZA by comparing human-to-chat-bot conversation with human-to-human conversation.

Evaluate the success of the chat bot's learning technique, based on the quality of the responses constructed, efficiency, and the user's satisfaction with the responses.

Evaluate the ability of the chat bot to retrieve information, in comparison with a search engine.

By comparing dialogue transcripts generated via ELIZA with real human conversation, the strengths and weaknesses of ELIZA as a human simulator can be illustrated in terms of linguistic features and parts of speech.

ELIZA played the role of a psychotherapist and would rearrange a statement or question into a new question to ask the person talking to it. For example, if you write "My son is a thief.", ELIZA will probably match "son" to family, "is" to the present tense and "thief" to a negative word. The output generated may be something like "Do you have similar problems with anybody else in your family?" Based on the generated output, ELIZA is able to filter keywords ranging from positive to negative structures.
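A hypothetical version of this keyword-to-question rearrangement might look like the following (the word lists and the single reassembly rule are illustrative, not ELIZA's actual script):

```python
# Match words in the statement against small semantic word lists, then
# pick a reassembly question based on which categories were triggered.
FAMILY = {"son", "daughter", "mother", "father", "brother", "sister"}
NEGATIVE = {"thief", "liar", "problem"}

def transform(statement):
    words = {w.strip(".!?").lower() for w in statement.split()}
    if words & FAMILY and words & NEGATIVE:
        return "Do you have similar problems with anybody else in your family?"
    if words & FAMILY:
        return "Tell me more about your family."
    return "Please go on."

print(transform("My son is a thief."))
```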

SECTION : 4 Future development and Conclusions

The Loebner Prize is an annual competition in Artificial Intelligence that awards prizes to the most human-like chat bot, as decided by the judges. The format of the competition is that of a standard Turing Test, as introduced by Alan Turing in 1950. The Loebner Prize was begun in 1991 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies. Winners of previous contests include Joseph Weintraub (1991, 1992, 1993, 1995), Robby Garner (1998, 1999), Richard Wallace (2000, 2001, 2004) and Rollo Carpenter (2005, 2006), among others. The Loebner Prize contest measures how successfully a chat bot simulates human conversation, or fools the user into thinking they are chatting with a real human. Besides that, it also measures the success of the machine learning techniques used in the automation process.

From ELIZA at the start to the various recent chat bots and machine conversation systems, the chat bots developed so far exhibit four important characteristics that can shape future development:

Useful applications of chat bots in different sectors.

Improvement of human-machine conversation through continuous study and research on chat bot language.

Simulating a conversation, or fooling users into thinking they are chatting with a real human, by improving the Natural Language Processing techniques used.

Improvement of the implementation of Artificial Intelligence in chat bots.

To stand a chance of winning the Loebner Prize Gold Medal, accurate and efficient natural language processing is essential in developing an intelligent chat bot. First, the chat bot must provide high language-processing capability so that it can conduct productive conversations with users. It must be able to fully understand users' questions or statements and respond accurately at each conversational turn. Second, it must be reliable, as every piece of integrated information must be accurate and valid.

Nowadays, some projects propose to investigate the possibility of applying chat bot systems in the educational field, such as assisting in teaching courses, learning a new language, or improving student understanding of certain topics by answering questions or discussing the topics. Chat bots can be agents helping in e-learning and distance learning, as well as full-time learning.

In the future, conversational agents will continue to be developed to support a broader range of applications in business enterprises, education, government, healthcare, and entertainment. Chat bot versions that can speak different languages could also be created, becoming tools to search for answers to questions and assess information systems. Several well-known futurists believe that computers will achieve capabilities comparable to human reasoning and understanding of language by 2020.