The original ELIZA was the first chatbot, developed by Joseph Weizenbaum in 1966, as described in the Lecture Notes and elsewhere. The aim of this coursework is for you to get some experience, knowledge and understanding of NLP systems, and to design some NLP functionality for an NLP program, in this case an Eliza-type chatbot program. You are asked to design aspects of a chatbot which are to include some useful NLP.
This is primarily a design project. You are to design functionality, and then describe it in your report. Your design does not require any formal design language; you are to give algorithms and explanations, using pseudo-code, structured English, and English as you see fit.
In order for you to get some idea of the working of a basic chatbot, you are provided with a system written in C++ by a former lecturer at NTU (Paul MacDonald) - the Azile chatbot. You can use the laboratory time to compile and run this chatbot, so that you can experiment with the way it works, or even use it as the basis for an enhanced chatbot. However, using the Azile chatbot is not compulsory. You may, and indeed are encouraged to, experiment with other chatbots too. Many can be accessed via the internet.
You are required to enhance the design of a basic pattern-and-response chatbot so that the program would carry on a more convincing conversation, should your designs be implemented. You should do this by using NLP techniques to achieve specific goals which you can choose. You must choose at least two goals.
For example, you might choose as one of your goals the ability for the chatbot to query the user for their name. You would therefore need to think about how the chatbot would know if it didn't already know the user's name, how it would query the user, and how it would process the user's response. Clearly the last of these three things would require the extraction of the user's name from the text string entered. There are many things to consider in your design, such as: Did they just type their name? How many words are in their name? Which is the surname, and which is the first name? How does this differ in different human cultures, and how would you handle it? What if the user said "My name is X Y" or "I'm called X Y" rather than just entering their name? Do you need to parse the input, or is simple pattern matching effective? Do you need to store lists of possible first names and surnames? etc. Throughout all of this you need to reference NLP techniques as described in the lectures.
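As a purely illustrative sketch of the name-extraction goal discussed above (not part of the brief, and not a complete design), simple pattern matching might be attempted like this; the patterns and the bare-name heuristic are invented assumptions:

```python
import re

# Illustrative patterns only: a real design would need to consider
# cultural naming conventions, name lists, multi-word surnames, etc.
NAME_PATTERNS = [
    re.compile(r"^my name is (?P<name>[A-Za-z][A-Za-z' -]+)$", re.IGNORECASE),
    re.compile(r"^i'?m called (?P<name>[A-Za-z][A-Za-z' -]+)$", re.IGNORECASE),
    # bare name: one or two capitalised words, e.g. "Jane" or "Jane Doe"
    re.compile(r"^(?P<name>[A-Z][a-z]+( [A-Z][a-z]+)?)$"),
]

def extract_name(user_input):
    """Return the captured name string, or None if no pattern matched."""
    text = user_input.strip()
    for pattern in NAME_PATTERNS:
        match = pattern.match(text)
        if match:
            return match.group("name").strip()
    return None
```

A sketch like this already exposes the design questions the brief raises: the bare-name pattern cannot tell a name from any other capitalised word, which is where lists of known first names and surnames would come in.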
A separate goal might be to explain how your enhanced chatbot would use any user's name it had captured. When would it use the name, and how would the name be inserted into any output text?
Another example might be that you decide to de-reference "it" anaphors. Whenever a user typed the word "it", the chatbot tries to work out what "it" means. For example, if the chatbot has just told the user "The London train leaves from platform 5" and the user then says "What time does it leave?" then the chatbot has to work out that "it" means "The London train" and not "platform 5". How would you design such an ability? (See the lecture notes to get you started!)
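One toy way to approach the it-anaphor example above is to keep the noun phrases from the chatbot's last output and check which of them is a plausible subject of the user's verb. The antecedent list and the verb-compatibility table below are invented purely for this illustration:

```python
# Toy knowledge: which recent noun phrases can plausibly perform which
# verbs. Trains can leave; platforms cannot. (Invented example data.)
CAN_DO = {
    "the London train": {"leave", "arrive", "depart"},
    "platform 5": set(),
}

def resolve_it(recent_nps, verb):
    """Return the most recent noun phrase compatible with the verb,
    falling back to the most recently mentioned NP if none fits."""
    for np in reversed(recent_nps):          # most recent first
        if verb in CAN_DO.get(np, set()):
            return np
    return recent_nps[-1] if recent_nps else None

# recent_nps holds NPs from the chatbot's last output, in order of mention
antecedent = resolve_it(["the London train", "platform 5"], "leave")
```

Here `antecedent` resolves to "the London train", because "platform 5" is rejected as a subject of "leave". A real design would also need to say where the NP list and the verb knowledge come from.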
There are very many ways in which you could use NLP techniques to enhance a basic chatbot. It's up to you to choose at least two such goals to design for. For more ideas, see the document "Paul Bowden's chatbot" on the module NOW pages. However, I want to see you apply the NLP techniques that you will be taught about in the lectures. For example, in order for the chatbot to reply "Which park has the statue?" to the user's statement "I went to the park with the statue", the chatbot would need to do PP-attachment resolution. How would you design this into a chatbot? What knowledge/data will the chatbot need? What is the algorithm for applying that K/data to solve the problem?
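To make the PP-attachment question concrete, a minimal sketch might consult a small hand-built knowledge base: the tables below are assumptions invented for this example, not real lexical resources. In "I went to the park with the statue", the PP "with the statue" can attach either to the noun "park" (the park has the statue) or to the verb "went" (a companion):

```python
# Invented example knowledge for deciding PP attachment.
ANIMATE = {"friend", "dog", "sister"}            # plausible companions of "go with"
CAN_CONTAIN = {"park": {"statue", "lake", "bench"}}  # what a park can "have"

def attach_pp(verb, noun, pp_object):
    """Decide whether a 'with'-PP attaches to the noun or the verb."""
    if pp_object in CAN_CONTAIN.get(noun, set()):
        return "noun"    # e.g. the park with the statue
    if verb == "went" and pp_object in ANIMATE:
        return "verb"    # e.g. I went (together) with my friend
    return "undecided"
```

Only once the PP is attached to "park" can the chatbot sensibly generate "Which park has the statue?" - which is exactly the kind of knowledge/data question the brief asks you to answer in your design.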
It's important that your design is detailed enough to be turned into code by a competent programmer. It's not sufficient to say "I would add a top-down parser so that the Subject of the sentence can be found." How would adding a parser do this? What is your algorithm (series of steps) for taking a parse tree and finding the Subject of the sentence from it? (This is, of course, a difficult problem!) You must add enough detail for a competent person, such as the marker of your report, to be able to see how (or indeed if) your design would actually work, if coded up.
You are asked to pick at least two goals. I would prefer that you picked two interesting and/or difficult goals and gave a very detailed design for each of them, than you picked five goals and only gave shallow descriptions of them.
IMPORTANT! Look at the required page ranges (given in brackets) for each of the sections below; if your section is too short, you are unlikely to achieve a good grade for the section, as it will simply not contain enough material or detail to be adequate.
Section 0 - Introduction
Since the birth of technology, humans have tried to mould aspects of themselves into their technology, perhaps in the hope of achieving a somewhat godlike power. Such behaviour goes back as far as ancient China, Greece and Egypt; it would seem to be a universally occurring pattern. For example, realistic humanoid automatons were built by craftsmen from many civilisations, including Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen. The oldest known automatons were the sacred statues of ancient Egypt and Greece; the faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it".
As new technology is developed, so are new ways of exploiting it to build some kind of artificial intelligence or construct of the mind.
In the seventeenth century, Thomas Hobbes (1588-1679), sometimes called the "Grandfather of AI", proposed that reasoning is computation. In "De Corpore", chapter one, he states:
"By reasoning I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract" (Hobbes 1655, 1.2).
This idea was further developed by various philosophers over the years, but only became concrete with the development of computers.
A great deal of research into computation was carried out in the early 20th century. This research led to the development of the famous Turing machine, conceived by Alan Turing (1912-1954): an abstract machine that reads and writes symbols on an infinitely long tape according to a table of rules.
Finally, when technology had advanced enough to allow the development of modern computers, AI software was at the forefront of the applications designed for them. In 1952 Samuel (1959) wrote a program that allowed you to play checkers against the computer. A few years later, Newell and Simon designed a program that discovered proofs in propositional logic. More AI applications followed, built primarily on learning and search as their foundations. It became apparent early on that one of the main problems was how to represent the knowledge needed to solve a problem. Before learning, an agent must have an appropriate target language for the learned knowledge. There have been many proposals for representations, from simple feature-based representations to the complex logical representations of McCarthy and Hayes (1969), and many in between, such as the frames of Minsky (1975).
In 1950, Alan Turing published his famous article "Computing Machinery and Intelligence", which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably, on the basis of the conversational content alone, between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:
[In] artificial intelligence ... machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained ... its magic crumbles away; it stands revealed as a mere collection of procedures ... The observer says to himself "I could have written that". With that thought he moves the program in question from the shelf marked "intelligent", to that reserved for curios ... The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more.
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent". Thus the key technique here, which characterises a program as a chatbot rather than as a serious natural language processing system, is the production of responses that are sufficiently vague and non-specific that they can be understood as "intelligent" in a wide range of conversational contexts. The emphasis is typically on vagueness and unclarity, rather than any conveying of genuine information.
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational, even when it is actually based on rather simple pattern matching, can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
So, to define the term "chatbot":
A chatter robot, chatterbot, chatbot, or chat bot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods, primarily for engaging in small talk. The primary aim of such simulation has been to fool the user into thinking that the program's output has been produced by a human (the Turing test). Programs playing this role are sometimes referred to as Artificial Conversational Entities, talk bots or chatterboxes. In addition, however, chatterbots are often integrated into dialog systems for various practical purposes such as offline help, personalised service, or information acquisition. Some chatterbots use sophisticated natural language processing systems, but many simply scan for keywords within the input and pull a reply with the most matching keywords, or the most similar wording pattern, from a textual database.
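The keyword-scanning approach described above can be sketched as follows: score each stored rule by how many of its keywords appear in the input, and return the reply of the best-scoring rule. The rule table is an invented example:

```python
# Invented example rules: each rule is a set of cue keywords plus a
# canned response, in the ELIZA tradition.
RULES = [
    ({"mother", "father", "family"}, "Tell me more about your family."),
    ({"sad", "unhappy"}, "I am sorry to hear that. Why do you feel that way?"),
]

def reply(user_input):
    """Return the response of the rule with the most matching keywords,
    or a default continuation prompt when no keyword matches."""
    words = set(user_input.lower().split())
    best_score, best_reply = 0, "Please go on."   # default when nothing matches
    for keywords, response in RULES:
        score = len(keywords & words)
        if score > best_score:
            best_score, best_reply = score, response
    return best_reply
```

Note how superficial this is: the program counts word overlaps and never builds any representation of meaning, which is exactly the point made about ELIZA above.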
The term "ChatterBot" was originally coined by Michael Mauldin (Creator of the first Verbot, Julia) in 1994 to describe these conversational programs.
What is a conversational agent?
A dialog system or conversational agent (CA) is a computer system intended to converse with a human, with a coherent structure. Dialog systems have employed text, speech, graphics, haptics, gestures and other modes for communication on both the input and output channel.
What does and does not constitute a dialog system may be debatable. The typical GUI wizard does engage in some sort of dialog, but it includes very few of the common dialog system components, and dialog state is trivial.
Examples of three chatbots and an analysis of their capabilities
Kaptin kirk http://www.youtube.com/watch?v=YOL90I8j3jE&list=UURWuI3Rg14IiueMas7R8wBA&index=4&feature=plcp
Ask lucy http://www.youtube.com/watch?v=MOkkHo1JRp8&list=UURWuI3Rg14IiueMas7R8wBA&index=3&feature=plcp
End Examples of chat bots
(i) [explain what chatbots/conversational agents are], including a brief history of their development over the years since the evolution of computer science.
(ii) [provide several (at least three, not including Eliza) examples of some real chatbots or conversational agents, with brief analyses of their capabilities].
The analysis of their capabilities should concentrate on their NLP abilities, as far as this can be determined. You do not need to explain how they achieve their NLP abilities (just what those abilities appear to be). (3 - 6 pages)
Section 1 - Description of a Basic Chatbot's Algorithm (e.g. a breakdown of Azile)
Here you should describe and explain the way a basic pattern-and-response chatbot works. You may choose to do this in a generic way, from an imaginary chatbot, or specifically, from a real chatbot, such as the Azile chatbot made available to you. This means explaining the overall narrative of the processing involved - i.e. what happens when you start up the program, when it is running and accepting inputs from the user, and when it closes down. Don't forget to describe the purpose and structure of any files that the program reads from or writes to. A good answer will, of course, explain the pattern matching mechanism in detail, including any variable-matching (the @-token mechanism that matches variables in the user input to @1 etc in the patterns).
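To illustrate the variable-matching mechanism mentioned above, here is a regex-based sketch of @-token matching, assuming Azile-style patterns in which @1, @2, ... each match a run of words (Azile's actual C++ implementation may well differ):

```python
import re

def match_pattern(pattern, user_input):
    """Match an @-token pattern such as 'my name is @1' against the
    user's input. Return a dict mapping '@1', '@2', ... to the captured
    text, or None if the pattern does not match."""
    parts = re.split(r"@(\d+)", pattern)   # alternating literal text / token numbers
    regex = ""
    for i, part in enumerate(parts):
        if i % 2 == 0:
            regex += re.escape(part)       # literal text from the pattern
        else:
            regex += "(?P<v%s>.+)" % part  # @n becomes a named capture group
    m = re.fullmatch(regex, user_input, re.IGNORECASE)
    if m is None:
        return None
    return {"@" + name[1:]: value.strip() for name, value in m.groupdict().items()}

def fill_response(response, bindings):
    """Substitute the captured @n values into a response template."""
    for token, value in bindings.items():
        response = response.replace(token, value)
    return response

bindings = match_pattern("my name is @1", "My name is Alan Turing")
response_text = fill_response("Nice to meet you, @1!", bindings)
```

With this input, `bindings` holds {"@1": "Alan Turing"} and the filled response is "Nice to meet you, Alan Turing!". A full answer for this section would go on to explain how the real program chooses between competing patterns.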
(3 - 6 pages)
Section 2 - Proposed NLP Design Enhancements - The Goals
Here you will describe your two (or more) goals and their designs. This is the major part of your report and must contain all the necessary detail of your designs, as described above. It must describe the NLP techniques you propose to use in enough detail for them to be turned into code (without too many questions needing to be asked) by a competent Systems Analyst/Programmer. Do not be vague; give the algorithm (series of steps in the processing) required, together with descriptions of any data required (in files) and data structures required (in the program). There is no page limit on this section, as I want you to take as many pages as you need in order to give your designs.
(As many pages as you need).
Section 3 gives you a choice: DO ONE OF THE TWO ALTERNATIVES:
OR, instead of doing the above for Section 3, you may choose to do the following:
Section 3 - Conversations Between Two Chatbots
This alternative Section 3 does not require you to do any coding. You should use the four subheadings 3a) to 3d), given below, in your report.
You are to run a conversation between two chatbots, and evaluate that conversation in terms of its quality and NLP content where evident. You need to run two chatbots simultaneously. They don't have to talk out loud, and they don't have to have the ability to do voice input. You simply need to type the output from Chatbot 1 into Chatbot 2, and vice versa. Thus you act as the communication channel between them. You are allowed to start the conversations off yourself, for a few inputs, before putting the two chatbots together, if this results in better or more varied chatbot-to-chatbot dialogue.
I would prefer it if you used two different chatbots. However, if any kind of conversation proves impossible between the two you have chosen, you are allowed to run the same chatbot twice. (Note however, that this may backfire - some chatbots can detect when you are doing this! If this happens, you may need to revert to using two different chatbots, unless the conversation is worthwhile, from an AI/NLP viewpoint.)
3a) Example Conversation Fragment(s)
You should provide a record in your report of at least one conversation. This should contain at least thirty inputs and outputs (i.e. 15 outputs from each chatbot). The conversation should show clearly which chatbot said what. For example:
Cleverbot: I'm really tired
Alice: I'm sorry to hear you're really tired
Cleverbot: Are you?
Alice: Yes I am
…and so on, for at least another 24 lines.
You may supply several such conversations, if you wish. In fact, in order to determine the chatbots' NLP capabilities, you may well need to run more than one conversation, or run one very long conversation.
In addition to giving the conversation(s), you should evaluate these conversation(s). This means that you should discuss:
3b) The Chatbots' NLP Capabilities
Determine whether the chatbots have any NLP capabilities, such as de-referencing of it-anaphors, holding a "current topic of conversation", knowledge of plural vs. singular nouns (mice vs. mouse, for example), ability to do PP-attachment resolution, ability to handle ellipsis, ability to be taught facts, and so on. Discuss these capabilities and how they are evidenced in the conversation(s). You should use conversation fragments to support your assertions.
3c) Evaluation of the Chatbots' Conversation Quality
Evaluate the overall quality of each chatbot's conversation. Are they as good as a human, or are they very poor? Do they respond appropriately, or are they merely random in their responses? Can they maintain a conversation about a topic? Support your answer with illustrative conversation fragments.
3d) Discussion of Worth of Exercise
Finally, discuss whether this chatbot-to-chatbot exercise was worthwhile. Did it reveal any particularly good or particularly poor behaviour of either chatbot? Was this exercise worth doing, and if so, why?
(up to 15 pages in total, including conversations) WEIGHTING: 2
Section 4 - Summary and Insights
If you chose to turn your goal designs into reality in Section 3, then compare the actual effectiveness of your own designs against the abilities of the real chatbots or conversational agents you discussed in the Introduction of this report.
If you chose to analyse a chatbot-chatbot conversation in Section 3, then discuss the probable effectiveness of your own designs if these were to be coded and incorporated into the chatbot(s) you used in the chatbot- chatbot conversations.
In either case, what insights has this work given you concerning chatbot design?
(3 to 6 pages)
Overall Report Quality: correct referencing (Harvard or Numeric used consistently), formatting, presentation, English, spelling, lengths of sections etc.
Your work should be submitted on A4 paper, firmly stapled at the top left hand corner or along the left hand edge. Your work should follow standard reporting requirements unless stated otherwise and should be appropriately word processed. Text must be 12 point Arial or Verdana, flush left, single spaced. Include your name and student registration number in a header. You must include references within your text and a References section at the end of your report (use the Harvard or Numeric methods for referencing); you may also include a Bibliography; you may include Appendices to hold extra useful information that you want to be seen.
You do not have to include a title page, abstract or table of contents; please do not use folders or binders of any kind.
Formative (Whilst you're working on the coursework)
You will be given informal verbal feedback regarding your design (and program, if attempted) development, and report contents, during the timetabled laboratory sessions.
Summative (After you've submitted the coursework)
You will receive specific feedback regarding your coursework submission together with your awarded grade when it is returned to you. Group feedback will also be provided. Clearly, feedback provided with your coursework is only for developmental purposes so that you can improve for the next assessment or subject-related module.
A timetable of key feedback and other dates is provided on NOW.
The separate sections of your report will each have a grade awarded to them, as given in NTU's GBA (grade-based assessment) scheme (e.g. high 2:1, low 3rd, exceptional 1st, mid 2:2, etc.). The grade you achieve in a section will reflect how well that section delivered the required material.
In addition, each section has a weighting (given above) to show you how important it is in the calculation of the overall grade of your work, i.e. the grade for your report as a whole. [Note that the weightings given above add up to 10]
Please see the assignment marking scheme document for further details.
About the ELIZA Source Code Provided
You have been provided with an ELIZA-type program, called Azile. This is a small ELIZA shell created by Paul McDonald, written in C++. There are three versions, of varying capability. They can be compiled and run using the standard Microsoft development environment. You are free to edit your copies of them to experiment with your designs.
There are also many Eliza-like programs available over the Internet, often written in Java, and you could look at these to help develop your ideas. Some examples and useful sources are:
Also, see "Web links" under Module Information on the module's NOW Content page.
You can also search using Google, Yahoo etc for current information.
See also Dr. Bowden's notes: "Paul Bowden's Chatbot", "NLP in real conversational agents" and "Chatbot-relevant Notes", on the module's NOW pages.