Analysing Natural Language And Artificial Intelligence English Language Essay

As mentioned, the aim of this project is to design and create an expert artificial intelligence system capable of analysing natural English text. To do this, we need to obtain a good appreciation and understanding of the subject at hand, together with the various challenges in this field, and this can only come from a literature review of the research and work that others have done so far.

Dale et al. (1998) is a very good starting point for this literature survey and indeed reveals new and surprising findings in the field of natural language processing. Very interestingly, there are two distinct areas of natural language processing which require research and review. The first is understanding what is involved in mapping from what he terms the "surface representation of linguistic material". This is expressed as human speech or written text, and it has to be transformed into a computer representation of the meaning carried by that surface representation, such as car make, model, year and mileage. But there is also the question of "how one maps from some underlying representation of meaning into text or speech". This delves deeper into the domain of natural language generation (NLG). In other words, NLG is closely interwoven with natural language processing, and several of the research papers below cover the field of natural language generation.
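
The first mapping Dale describes, from surface text to an underlying representation, can be illustrated with a toy extractor. This is only a sketch: the attribute names and regular expressions are my own assumptions built on the car example above, not anything from Dale's paper.

```python
import re

def parse_car_listing(text):
    """Map a surface string to a crude structured representation.
    The field names (year, mileage) follow the essay's example
    attributes; the patterns are illustrative only."""
    result = {}
    m = re.search(r"\b(19|20)\d{2}\b", text)          # a plausible model year
    if m:
        result["year"] = int(m.group(0))
    m = re.search(r"\b(\d{1,3}(?:,\d{3})*)\s*miles\b", text)
    if m:
        result["mileage"] = int(m.group(1).replace(",", ""))
    return result

print(parse_car_listing("A 2009 Ford Focus with 42,000 miles"))
```

A real system would of course need far richer linguistic analysis than two patterns, but the input/output shape is the point here.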

Sethi (1986) is an outstanding study in the area of databases and natural language. This research is interesting because he believes that practical, usable NL interfaces are already available to be implemented. Indeed, business owners and MIS managers believe they are the most important application of NL processing. Understanding the impact of these systems, as well as identifying the conditions under which they are most effective, is therefore an important issue. For example, NLP could give a website a cutting edge when it comes to meeting the competition and providing customers with leading-edge search and speech recognition technology.

Hendrix (1981) may be somewhat dated, but his concepts still stand, and his research is a foundation of most NLP research. According to him, there are three primary goals of NLP. The first of the three is "the investigation of human language understanding and generation". This involves using the computer to express, test, and further develop theories of human cognition. This is true artificial intelligence, and this approach is part of the "cognitive modelling" aspect of artificial intelligence.

Secondly, there is the refinement and development of linguistic theories, stressing generality, coverage and the "linguistic soundness" of syntactic analysers. This approach includes mathematically oriented analysis of formal languages.

Thirdly, the focus is on the construction of practical, useful natural language interfaces and processing systems, often with concrete applications in mind. This approach includes the less-than-successful early machine translation efforts as well as the much more promising, and recently proven, natural language database query systems. It is this work on database query systems which applies to this implementation, as a website like this will really benefit and draw customers who have special needs and cannot type. They can simply speak what they want to search for, and the AI present in the NLP processing can obtain the desired results by querying the database using syntactic analysers.
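
A natural language database query system of the kind described here can be sketched very simply: recognise one question pattern and translate it into a parameterised SQL query. The table schema, question pattern and sample data below are all invented for illustration; a real interface would need a far broader grammar.

```python
import re
import sqlite3

# Toy in-memory database; table and column names are assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cars (make TEXT, model TEXT, year INTEGER, mileage INTEGER)")
db.executemany("INSERT INTO cars VALUES (?,?,?,?)", [
    ("Ford", "Focus", 2009, 42000),
    ("Toyota", "Corolla", 2011, 30500),
])

def query_from_english(question):
    """Recognise one question pattern and map it to SQL."""
    m = re.search(r"show me (\w+) cars newer than (\d{4})", question.lower())
    if not m:
        raise ValueError("question not understood")
    make, year = m.group(1).capitalize(), int(m.group(2))
    return db.execute(
        "SELECT make, model, year FROM cars WHERE make = ? AND year > ?",
        (make, year),
    ).fetchall()

print(query_from_english("Show me Toyota cars newer than 2010"))
```

With speech recognition in front, the spoken question would simply become the string passed to this function.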

Biermann (1981) describes a computer programming system called the "Natural Language Computer" (NLC), which allows a user to type English commands while watching them executed on sample data appearing on a display screen. Direct visual feedback enables the user to detect most misinterpretation errors as they are made, so that incorrect or ambiguous commands can be retyped or clarified immediately; for example, an error when accepting a car make could be met with the prompt "did you mean…". A sequence of correctly executed commands may be given a name and used as a subroutine, thus extending the set of available operations and allowing larger English-language programs to be constructed hierarchically.
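
The "did you mean…" style of immediate feedback can be approximated with ordinary string similarity. A minimal sketch, assuming a fixed list of known car makes (the list itself is invented):

```python
import difflib

KNOWN_MAKES = ["Ford", "Toyota", "Honda", "Nissan", "Volkswagen"]

def accept_make(user_input):
    """Echo Biermann's direct-feedback idea: if the typed make is not
    recognised, suggest the closest known one so the user can retype."""
    if user_input in KNOWN_MAKES:
        return user_input
    close = difflib.get_close_matches(user_input, KNOWN_MAKES, n=1)
    if close:
        return f'Did you mean "{close[0]}"?'
    return "Make not recognised, please retype."

print(accept_make("Toyotta"))
```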

In addition to discussing the system's transition network syntax and procedural semantics, Biermann devotes special attention to the following topics in his research: the nature of imperative sentences in the matrix domain; the processing of non-trivial noun phrases; conjunction; pronominals; and programming constructs such as "if", "repeat", and procedure definition. Back in 1981, natural language programming seemed a rather remote possibility because of slow progress in representation theory, inference theory, and computational linguistics. The NLC system proposed by Biermann is designed to compensate partially for the weakness of the technology of the time in these areas by presenting the user with a good environment and some well-designed linguistic facilities.

Schäfer (2006) presents an XML-powered NLP solution capable of handling three XML-based integration scenarios involving what he calls multi-dimensional markup. This offers an additional dimension of accuracy in NLP and AI. It is particularly useful if NLP is used across different nationalities, given the advances made in the field of multilingual natural language processing (NLP). His approach was used to extract information and to provide a semantic web platform capable of so-called treebank storage of multi-dimensionally annotated texts.
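
Multi-dimensional (standoff) markup can be illustrated with plain XML: several annotation layers refer to the same raw text by character offsets, so the layers may overlap freely. The element names below are my own assumptions for illustration, not Schäfer's actual schema.

```python
import xml.etree.ElementTree as ET

# Two annotation layers over one raw text, addressed by character offsets.
doc = ET.Element("document")
ET.SubElement(doc, "text").text = "Der Wagen ist rot"
layers = ET.SubElement(doc, "layers")
tok = ET.SubElement(layers, "layer", name="tokens")
ET.SubElement(tok, "span", start="0", end="3", value="Der")
ET.SubElement(tok, "span", start="4", end="9", value="Wagen")
morph = ET.SubElement(layers, "layer", name="morphology")
ET.SubElement(morph, "span", start="4", end="9", value="noun")

print(ET.tostring(doc, encoding="unicode"))
```

Because no layer nests inside another, new dimensions of annotation can be added without disturbing the existing ones.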

Frost (2006) presents an interesting look at natural language interfaces built using what he calls lazy functional programming. The motivation for his research is that the construction of natural language interfaces to computers is currently the subject of active research at various universities. I agree with him, as the interface is the main face of the program to the user. His paper details the necessity for such interfaces and the importance of speech recognition technology. This is important because the program or code will need to output a computer-oriented formal language for processing, and it is this programming that allows NLP to interact with computer applications and NLP-based website systems.

His research into the design and implementation of natural language interfaces has involved the use of high-level declarative programming languages such as LISP and Prolog.
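
The flavour of this declarative style can be hinted at with parser combinators, the technique Frost builds in a lazy functional language; the sketch below imitates the idea in Python, and the tiny grammar is invented. A parser is a function from a token list to a list of (result, remaining-tokens) pairs, and combinators build larger parsers from smaller ones.

```python
def term(word):
    """Parser that accepts exactly one given token."""
    def parse(tokens):
        return [(word, tokens[1:])] if tokens[:1] == [word] else []
    return parse

def seq(p, q):
    """Run p, then q on whatever p left over."""
    def parse(tokens):
        return [((r1, r2), rest2)
                for r1, rest1 in p(tokens)
                for r2, rest2 in q(rest1)]
    return parse

def alt(p, q):
    """Try both alternatives, keeping every successful parse."""
    def parse(tokens):
        return p(tokens) + q(tokens)
    return parse

# Tiny grammar: query -> "show" ("cars" | "prices")
query = seq(term("show"), alt(term("cars"), term("prices")))
print(query(["show", "prices"]))
```

Each grammar rule is an ordinary expression, which is what makes the approach attractive for building NL interfaces declaratively.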

Ballard (1984) sums up his research by presenting methods for dealing with syntactic problems in the use of natural language processors. He focuses on issues and limitations such as grammatical formalism and augmented phrase-structure rules, and even proposes domain-specific disambiguation. He does so by implementing pre-defined grammar files. The grammar could be extracted using regular expressions, with certain rules applied to different domains; in my case, this would be the car business domain. This approach could well work for a website such as this, as the language used will be very specific to the domain of cars; in fact, it is very specific to buying, selling and trading cars. His approach of predefining grammar may be considered as a possible solution or idea in the next chapter.
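
A pre-defined, domain-specific grammar along these lines could be approximated with a small set of regular-expression rules. The rule set and slot names below are illustrative assumptions for a car-trading domain, not Ballard's actual formalism.

```python
import re

# Hand-written rules for the car-trading domain (invented for illustration).
RULES = [
    ("intent_sell", re.compile(r"\b(sell|selling|trade in)\b", re.I)),
    ("intent_buy",  re.compile(r"\b(buy|buying|looking for)\b", re.I)),
    ("year",        re.compile(r"\b(19|20)\d{2}\b")),
]

def apply_grammar(sentence):
    """Return the slots that the domain grammar recognises in a sentence."""
    slots = {}
    for name, pattern in RULES:
        m = pattern.search(sentence)
        if m:
            slots[name] = m.group(0)
    return slots

print(apply_grammar("I am looking for a 2012 hatchback"))
```

Restricting the grammar to one domain is what makes this tractable: the same rules would fail badly on open-domain text.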

Reiter & Belz (2009) give me an insight into the validity of some metrics for automatically evaluating natural language generation systems. These metrics are important for evaluating natural language generation systems. In their research they review previous work on natural language generation and automatic metrics in NLP, and then present the results of two studies of how well some metrics that are popular in other areas of NLP perform for generation. Their work has been very successful with computer-generated weather forecasts, and their results suggest that, at least in this domain, such metrics may provide a useful measure of language quality.
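
The kind of automatic metric they study (such as BLEU) can be illustrated with a crude modified n-gram precision between a generated sentence and a single reference. This toy function is a stand-in to show the idea, not their actual evaluation code.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Fraction of the candidate's n-grams that also occur in the
    reference, clipped by the reference counts (BLEU-style)."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

print(ngram_precision("rain expected in the afternoon",
                      "rain is expected in the afternoon"))
```

Real BLEU combines several n-gram orders and a brevity penalty, but the core comparison is this overlap count.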

This research by Collins (2003) is highly relevant to my work, as the article describes "models for natural language parsing". Although it may sound highly mathematical, his models include probabilistic, context-free and lexicalised grammars. Finally, his work compares various NLP models, including the Treebank (a very popular concept in NLP), aiming to give a performance analysis of the various models. In my scenario, and given the limitations, a good understanding of his work can contribute to my implementation of the NLP-powered system. The concept of grammars is a good one and should be included in my implementation, should time and limitations permit.
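
The probabilistic grammars Collins discusses can be hinted at with a toy PCFG, where the probability of a derivation is the product of the probabilities of the rules it uses. The grammar, rule probabilities and example tree below are invented for illustration only.

```python
# Toy probabilistic context-free grammar: (lhs, rhs) -> probability.
# For each lhs the rule probabilities sum to 1.
PCFG = {
    ("S",  ("NP", "VP")): 1.0,
    ("NP", ("the", "car")): 0.6,
    ("NP", ("the", "dealer")): 0.4,
    ("VP", ("sold",)): 0.5,
    ("VP", ("sold", "NP")): 0.5,
}

def tree_probability(tree):
    """Probability of a derivation = product of its rule probabilities.
    A tree is (symbol, child, child, ...) with plain strings as terminals."""
    if isinstance(tree, str):
        return 1.0
    head, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = PCFG[(head, rhs)]
    for c in children:
        p *= tree_probability(c)
    return p

tree = ("S", ("NP", "the", "dealer"), ("VP", "sold", ("NP", "the", "car")))
print(tree_probability(tree))
```

A statistical parser searches for the tree with the highest such probability; Collins's lexicalised models refine the rule probabilities by conditioning on head words.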

Explanation/exploration of new ideas