Artificial Intelligence and Its Applications


Abstract- This term paper explains the main programming techniques and languages used in AI and their applications.

I. ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) is the area of computer science focused on creating machines that can engage in behaviours that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess players, and perform countless other feats never before possible. More formally, artificial intelligence is the intelligence of machines and the branch of computer science that aims to create it. John McCarthy, who coined the term in 1956, defined it as "the science and engineering of making intelligent machines."

Artificial intelligence has been the subject of optimism but has also suffered setbacks; today it has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.

AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.

II. PROGRAMMING TECHNIQUES

There are various programming techniques. Some of them are discussed below:

Imperative

Object oriented

Functional

Generic

A. Imperative

In the procedural (imperative) programming approach, the basic mechanisms of programming, such as statements, functions, loops, and decisions, are used. Classic examples of this paradigm are the various search and sort algorithms as well as simple tree implementations. The values of the variables during the runtime of the program define a so-called state. If the variables are not correctly set, the functionality of the program may change. For this reason it is necessary that all variables have a well-defined value throughout the execution time, especially at the beginning. Flaws related to uninitialized variables can result in severe problems (known as bugs); a short sketch in this style follows the list below. [1]

Successor of the unstructured programming approach

It is based upon the concept of the procedure call

Procedure calls: routines, subroutines, functions

A procedure might be called at any point

Functions can be used wherever a question such as "how do we get a value x from y?" arises.
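
As a minimal sketch (not taken from the cited source), the following C++ fragment illustrates the procedural style described above: a classic linear search built only from statements, a loop and a decision, with every variable explicitly initialised so that the program state is well defined at all times.

#include <iostream>
#include <vector>

// Linear search in a purely procedural style: the computation is a loop
// plus a decision, and every variable starts with a well-defined value.
int find_index(const std::vector<int>& values, int target) {
    int index = -1;                              // initialised "not found" state
    for (std::size_t i = 0; i < values.size(); ++i) {
        if (values[i] == target) {
            index = static_cast<int>(i);         // the state changes during the run
            break;
        }
    }
    return index;
}

int main() {
    std::vector<int> data = {7, 3, 9, 3};
    std::cout << find_index(data, 9) << '\n';    // prints 2
    return 0;
}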

B. Object-Oriented

The restrictions of imperative programming are mostly related to data-structure issues. Functions and data structures cannot be combined intrinsically; the programmer has to bear the burden of keeping them consistent.

Object-oriented programming focuses on the principles of data encapsulation and code reuse. With these principles, very complex data structures (mostly containers) can be implemented in a way that the user does not necessarily need to know the internal mechanisms. Simple code reuse is enabled by different mechanisms, such as the use of already implemented data structures as well as simple methods of enhancement. Object-oriented programming has several distinct features:

Identity is the quantization of data into discrete, distinguishable entities called objects.

Classification is the grouping of objects with the same structure and behaviour into classes.

Polymorphism is the differentiation of the behaviour of the same operation on different classes.

Inheritance is the sharing of structure and behaviour among classes in a hierarchical relationship.

The features identity and classification together constitute the object-based mechanism, which various languages offer. All four features together form the object-oriented paradigm, which is available, e.g., in Java, C++, and other object-oriented languages.[2]
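
These features can be illustrated with a small, self-contained C++ sketch (the class names are chosen purely for illustration): Shape classifies objects with common behaviour, Circle and Square inherit from it, area() behaves polymorphically, and each object's data is encapsulated behind its public interface.

#include <iostream>
#include <memory>
#include <vector>

// Classification: Shape groups objects with the same structure and behaviour.
class Shape {
public:
    virtual ~Shape() = default;
    // Polymorphism: the same operation behaves differently per class.
    virtual double area() const = 0;
};

// Inheritance: Circle and Square share the structure and behaviour of Shape.
class Circle : public Shape {
    double radius_;  // encapsulation: internal state is hidden from the user
public:
    explicit Circle(double r) : radius_(r) {}
    double area() const override { return 3.14159265 * radius_ * radius_; }
};

class Square : public Shape {
    double side_;
public:
    explicit Square(double s) : side_(s) {}
    double area() const override { return side_ * side_; }
};

int main() {
    // Identity: each object is a distinct, distinguishable entity.
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));
    for (const auto& s : shapes)
        std::cout << s->area() << '\n';  // the call is dispatched to the right class
    return 0;
}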

C. Functional

The main concept addressed by the functional programming paradigm is the evaluation of mathematical functions instead of changes to the state of executed commands (as in the imperative and object-oriented paradigms). In functional programming, a function can be used as an argument to another function, and a function can return another function (the higher-order function concept). Together with the avoidance of state, this paradigm is known for its absence or reduction of side effects.

A function is said to produce a side effect if it modifies some state other than its return value. This behaviour complicates the prediction and evaluation of a program; for example, the expression (++i) - (++i) does not only calculate a value, it also changes the state/value of the variable i. Pure functional programming therefore disallows side effects completely. Since pure functions do not modify state, no data can be changed by parallel function calls, so pure functions are thread-safe.
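
As a brief hedged sketch of these ideas (the function names are illustrative, not from the cited source), the C++ fragment below contrasts a function with a side effect against a pure function, and shows a higher-order function that takes a function and returns a new one.

#include <functional>
#include <iostream>

int counter = 0;

// Impure: modifies state outside its return value, i.e. it has a side effect.
int impure_increment(int x) {
    ++counter;                 // hidden change of global state
    return x + counter;
}

// Pure: the result depends only on the argument and no state is modified,
// so parallel calls are thread-safe.
int square(int x) { return x * x; }

// Higher-order function: takes a function as an argument and returns a function.
std::function<int(int)> twice(std::function<int(int)> f) {
    return [f](int x) { return f(f(x)); };
}

int main() {
    auto fourth_power = twice(square);
    std::cout << fourth_power(3) << '\n';      // 81, computed without side effects
    std::cout << impure_increment(3) << '\n';  // 4, and counter is now 1
    return 0;
}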

In C++ there are several ways to use this paradigm:

Template Functions

Function Objects

(Template) meta-programming

A sketch contrasting the first two techniques is given below; the third technique is touched on in the next section. The main reason why two different techniques are used in C++ is that declaring a function inline does not help here, because compilers do not inline calls to functions passed to templates through a pointer. [3]
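
Since the original snippets are not reproduced here, the following is a minimal hedged sketch of the kind of comparison meant: a template function whose comparison operation is supplied either as a function object (easy for the compiler to inline) or as a plain function passed through a pointer (typically not inlined).

#include <iostream>

// A template function: the comparison operation is a compile-time parameter.
template <typename T, typename Compare>
const T& smaller(const T& a, const T& b, Compare comp) {
    return comp(a, b) ? a : b;
}

// Technique 1: a function object (functor); its call operator can be inlined.
struct Less {
    bool operator()(int a, int b) const { return a < b; }
};

// Technique 2: a plain function; passed through a pointer, calls to it are
// generally not inlined, which is the drawback mentioned above.
bool less_fn(int a, int b) { return a < b; }

int main() {
    std::cout << smaller(3, 7, Less()) << '\n';    // 3, via the functor
    std::cout << smaller(3, 7, &less_fn) << '\n';  // 3, via a function pointer
    return 0;
}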

D. Generic

The functional programming paradigm was introduced in the last section, and, as can be seen, generic programming was already used there through the template mechanism of C++. But generic programming is more than just using templates for various types. Considering the imperative (structured), object-oriented, and functional paradigms, it becomes clear that one further mechanism is needed to use all of them efficiently together: the generic programming paradigm.

Generic programming is a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by Ada in 1983, permits writing common functions or types that differ only in the set of types on which they operate when used, thus reducing duplication. Software entities created using generic programming are known as generics in Ada, Eiffel, Java, C#, and Visual Basic .NET; parametric polymorphism in ML, Scala (possibly the only modern language that supports both parameterized types originating from functional languages and virtual types from the OO paradigm) and Haskell (the Haskell community also uses the term "generic" for a related but somewhat different concept); templates in C++; and parameterized types in the influential 1994 book Design Patterns. The authors of Design Patterns note that this technique, especially when combined with delegation, is very powerful but that "Dynamic, highly parameterized software is harder to understand than more static software."[4]
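
To make the idea concrete, here is a small hedged C++ sketch (the function name contains is purely illustrative): a single generic algorithm written in terms of a to-be-specified-later container and element type, which the compiler instantiates for each concrete type it is used with.

#include <iostream>
#include <string>
#include <vector>

// A generic algorithm: written once for a to-be-specified-later container
// and value type, and instantiated when used with concrete types.
template <typename Container, typename T>
bool contains(const Container& c, const T& value) {
    for (const auto& element : c)
        if (element == value)
            return true;
    return false;
}

int main() {
    std::vector<int> numbers = {1, 2, 3};
    std::vector<std::string> words = {"red", "green", "blue"};
    std::cout << std::boolalpha
              << contains(numbers, 2) << '\n'                    // true (instantiated for int)
              << contains(words, std::string("cyan")) << '\n';   // false (for std::string)
    return 0;
}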

III. AI PROGRAMMING TECHNIQUES

Programming techniques in artificial intelligence are the major tool for exploring and building computer programs that can be used to simulate intelligent processes such as learning, reasoning and understanding symbolic information in context. Although in the early days of computer-language design the primary use of computers was for performing calculations with numbers, it soon became clear that strings of bits could represent not only numbers but also features of arbitrary objects.

Operations on such features or symbols could be used to represent rules for creating, relating or manipulating symbols. This led to the notion of symbolic computation as an appropriate means for defining algorithms that processed information of any type, and thus could be used for simulating human intelligence. Soon it turned out that programming with symbols required a higher level of abstraction than was possible with those programming languages which were designed especially for number processing, e.g., Fortran.

IV. AI PROGRAMMING LANGUAGES

In AI, the automation or programming of all aspects of human cognition is considered, from its foundations in cognitive science through approaches to symbolic and sub-symbolic AI, natural language processing, computer vision, and evolutionary or adaptive systems. It is inherent to this very complex problem domain that, in the initial phase of programming, a specific AI problem can only be specified poorly. Only through interactive and incremental refinement does a more precise specification become possible. This is also due to the fact that typical AI problems tend to be very domain-specific; therefore, heuristic strategies have to be developed empirically through generate-and-test approaches (also known as rapid prototyping).

In this way, AI programming notably differs from standard software engineering approaches where programming usually starts from a detailed formal specification.

In AI programming, the implementation effort is actually part of the problem specification process. Due to the "fuzzy" nature of many AI problems, AI programming benefits considerably if the programming language frees the AI programmer from the constraints of too many technical constructions (e.g., low-level construction of new data types, manual allocation of memory). Rather, a declarative programming style is more convenient using built-in high-level data structures (e.g., lists or trees) and operations (e.g., pattern matching) so that symbolic computation is supported on a much more abstract level than would be possible with standard imperative languages, such as Fortran, Pascal or C. Of course, this sort of abstraction does not come for free, since compilation of AI programs on standard von Neumann computers cannot be done as efficiently as for imperative languages.

However, once a certain AI problem is understood (at least partially), it is possible to reformulate it in the form of detailed specifications as the basis for re-implementation in an imperative language. From the requirements of symbolic computation and AI programming, two new basic programming paradigms emerged as alternatives to the imperative style: the functional and the logical programming style. Both are based on mathematical formalisms, namely recursive function theory and formal logic.

The first practical and still most widely used AI programming language is the functional language Lisp, developed by John McCarthy in the late 1950s. Lisp is based on mathematical function theory and the lambda abstraction. A number of important and influential AI applications have been written in Lisp, so we will describe this programming language in some detail in this paper.

During the early 1970s, a new programming paradigm appeared, namely logic programming on the basis of predicate calculus.

The first and still most important logic programming language is Prolog, developed by Alain Colmerauer, Robert Kowalski and Philippe Roussel. Problems in Prolog are stated as facts, axioms and logical rules for deducing new facts.

Prolog is mathematically founded on predicate calculus and the theoretical results obtained in the area of automatic theorem proving in the late 1960s.

Artificial intelligence researchers have developed several specialized programming languages for artificial intelligence. Some of them are discussed below:

A. IPL

IPL was the first language developed for artificial intelligence. It includes features intended to support programs that could perform general problem solving, including lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking.[5]

B. Prolog

PROLOG is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Prolog is widely used in AI today.

Prolog (PROgramming in LOGic) was created by Colmerauer and his colleagues at the University of Marseilles in the 70s. Clocksin and Mellish at the University of Edinburgh continued the work, and today their version (called "C & M syntax" or "Edinburgh syntax") is accepted as the standard. The difference between Prolog and other languages is that a Prolog program tells the computer what to do (a technique called declarative programming) while programs in other languages tell the computer how to do it (procedural programming). Prolog does this by making deductions and derivations from facts and rules stored in a database. The essence of Prolog programming is writing crisp, compact rules. The deductions and derivations, instigated by user-entered queries, are products of Prolog's built-in inference mechanism called backtracking. Prolog was originally designed for non-numeric information processing, but contemporary Prologs typically feature mathematical extensions.[6]

Prolog rose within the realm of Artificial Intelligence (AI). It originally became popular with AI researchers, who were concerned more with the "what" and the "how" of intelligent behaviour. The philosophy behind it deals with the logical and declarative aspects of computation. Prolog represents a fundamentally new approach to computing and became a serious competitor to LISP.

Prolog is a general purpose logic programming language associated with artificial intelligence and computational linguistics.

Prolog has its roots in formal logic, and unlike many other programming languages, Prolog is declarative: The program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.

The language was first conceived by a group around Alain Colmerauer in Marseille, France, in the early 1970s and the first Prolog system was developed in 1972 by Colmerauer with Philippe Roussel.

Prolog was one of the first logic programming languages, and remains among the most popular such languages today, with many free and commercial implementations available. While initially aimed at natural language processing, the language has since then stretched far into other areas like theorem proving, expert systems, games, automated answering systems, ontologies and sophisticated control systems. Modern Prolog environments support the creation of graphical user interfaces, as well as administrative and networked applications.

1) Syntax and Semantics of Prolog

In Prolog, program logic is expressed in terms of relations, and a computation is initiated by running a query over these relations. Relations and queries are constructed using Prolog's single data type, the term, and relations are defined by clauses. Given a query, the Prolog engine attempts to find a resolution refutation of the negated query. If the negated query can be refuted, i.e., an instantiation of all free variables is found that makes the union of the clauses and the singleton set consisting of the negated query unsatisfiable, it follows that the original query, with the found instantiation applied, is a logical consequence of the program. This makes Prolog (and other logic programming languages) particularly useful for database, symbolic mathematics, and language parsing applications.

Prolog's single data type is the term. Terms are either atoms, numbers, variables or compound terms; each kind is described below, followed by a short illustrative sketch.

An atom is a general-purpose name with no inherent meaning. Examples of atoms include x, blue, 'Taco', and 'some atom'.

Numbers can be floats or integers.

Variables are denoted by a string consisting of letters, numbers and underscore characters, and beginning with an upper-case letter or underscore. Variables closely resemble variables in logic in that they are placeholders for arbitrary terms.

A compound term is composed of an atom called a "functor" and a number of "arguments", which are again terms. Compound terms are ordinarily written as a functor followed by a comma-separated list of argument terms, which is contained in parentheses. The number of arguments is called the term's arity. An atom can be regarded as a compound term with arity zero. Examples of compound terms are truck_year('Mazda', 1986) and 'Person_Friends'(zelda,[tom,jim]).

Special cases of compound terms:

A list is an ordered collection of terms. It is denoted by square brackets with the terms separated by commas, or by [] in the case of the empty list. For example, [1,2,3] or [red, green, blue].

Strings: A sequence of characters surrounded by quotes is equivalent to a list of (numeric) character codes, generally in the local character encoding or Unicode if the system supports Unicode. For example, "to be, or not to be".
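
Purely as an illustration of the term structure just described (not as part of any Prolog system), the following C++ sketch models atoms, numbers, variables and compound terms as one recursive data type; all type and field names here are hypothetical.

#include <string>
#include <variant>
#include <vector>

struct Term;   // a term is an atom, a number, a variable or a compound term

struct Atom     { std::string name; };   // e.g. blue, 'Taco'
struct Number   { double value; };       // integers or floats
struct Variable { std::string name; };   // e.g. X, _Result
struct Compound {                        // functor plus argument terms
    std::string functor;
    std::vector<Term> args;              // the arity is args.size()
};

struct Term {
    std::variant<Atom, Number, Variable, Compound> value;
};

int main() {
    // truck_year('Mazda', 1986) represented as a compound term of arity 2
    Term mazda{Atom{"Mazda"}};
    Term year{Number{1986}};
    Term truck_year{Compound{"truck_year", {mazda, year}}};
    return 0;
}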

2) Significant Language Features

Prolog offers a rich collection of data structures, a close correspondence to human reasoning, and a powerful notation for encoding end-user applications. Its significant features include its logical and declarative aspects, interpretive nature, compactness, and inherent modularity. Typical applications include:

Intelligent Systems - programs which perform useful tasks by utilizing artificial intelligence techniques.

Expert Systems - intelligent systems which reproduce decision-making at the level of a human expert.

Natural Language Systems - systems which can analyse and respond to statements made in ordinary language, as opposed to approved keywords or menu selections.

Relational Database Systems [7]

3) Areas of Application

Prolog is the highest-level general-purpose language widely used today. It is taught with a strong declarative emphasis on thinking about the logical relations between the objects or entities relevant to a given problem, rather than about the procedural steps necessary to solve it. The system decides how to solve the problem, including the sequence of instructions the computer must go through; it is easier to say what we want done and leave it to the computer to do it for us. Since a major criterion in the commercial world today is speed, Prolog is an ideal prototyping language. Its design also lends itself to parallel architectures: it solves problems by searching a knowledge base (or, more correctly, a database), which can be greatly sped up if several processors search different parts of the database.[5][7]

C. Lisp

LISP (LISt Processor) is generally regarded as the language for AI. LISP was formulated by AI pioneer John McCarthy in the late 1950s. Although LISP does not have a built-in inference mechanism, inference processes can be implemented in LISP very easily. LISP's essential data structure is an ordered sequence of elements called a "list." The elements may be irreducible entities called "atoms" (functions, names or numbers) or they can be other lists. IBM was one of the first companies interested in AI in the 1950s. At the same time, the FORTRAN project was still going on. Because of the high cost associated with producing the first FORTRAN compiler, IBM decided to include list-processing functionality in FORTRAN. The FORTRAN List Processing Language (FLPL) was designed and implemented as an extension to FORTRAN.

In 1958 John McCarthy took a summer position at the IBM Information Research Department. He was hired to create a set of requirements for doing symbolic computation. The first attempt at this was the differentiation of algebraic expressions. This initial experiment produced a list of language requirements, most notably recursion and conditional expressions. At the time, not even FORTRAN (the only high-level language in existence) had these features.

It was at the 1956 Dartmouth Summer Research Project on Artificial Intelligence that John McCarthy first developed the basics behind Lisp. His motivation was to develop a list-processing language for Artificial Intelligence. By 1965 the primary dialect of Lisp (version 1.5) had been created. By 1970 special-purpose computers known as Lisp Machines were designed to run Lisp programs, and in 1980 object-oriented concepts were integrated into the language. In 1986 the X3J13 group was formed to produce a draft ANSI Common Lisp standard, and in 1992 the group published the American National Standard for Common Lisp.

Lists are essential for AI work because of their flexibility: a programmer need not specify in advance the number or type of elements in a list. Also, lists can be used to represent an almost limitless array of things, from expert rules to computer programs to thought processes to system components. Originally, LISP was built around a small set of simple list-manipulating functions which were building blocks for defining other, more complex functions. Today's LISPs have many functions and features which facilitate development efforts. Among contemporary implementations and dialects, Common LISP has gained acceptance as a standard, and a substantial amount of work has also been done in Scheme, a LISP dialect which has influenced the developers of Common LISP. Lisp was originally created as a practical mathematical notation for computer programs, based on lambda calculus. Linked lists are one of Lisp's major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific programming languages embedded in Lisp. There are many dialects of Lisp in use today, among them Common Lisp, Scheme, and Clojure.

1) Syntax and semantics of Lisp

Symbolic expressions

The syntactic elements of Lisp are called symbolic expressions (also known as s-expressions). Both data and functions (i.e., Lisp programs) are represented as s-expressions which can be either atoms or lists.

Atoms are word-like objects consisting of sequences of characters. Atoms can further be divided into different types depending on the kind of characters which are allowed to form an atom.

The main subtypes are:

Numbers: 1 2 3 4 -4 3.14159265358979 -7.5 6.02E+23

Symbols: Symbol Sym23 another-one t false NIL BLUE

Strings: "This is a string" "977?" "setq" "He said: \" I'm here.\" "

Lists are clause-like objects. A list consists of an open left round bracket ( followed by an arbitrary number of list elements separated by blanks and a closing right round bracket ). Each list element can be either an atom or a list.

2) Semantics

The core of every Lisp programming system is the interpreter whose task is to compute a value for a given s-expression. This process is also called evaluation. The result or value of an s-expression is also an s-expression which is returned after the evaluation is completed. Note that this means that Lisp actually has operational semantics, but with a precise mathematical definition derived from recursive function theory.[8]

3) Evaluation

The Lisp interpreter operates according to the following three rules:

(i) Identity: A number, a string or the symbols T and nil evaluate to themselves. This means that the value of the number 3 is 3 and the value of "house" is "house". The symbol T returns T which is interpreted to denote the true value, and nil returns nil meaning false.

(ii) Symbols: The evaluation of a symbol returns the s-expression associated to it (how this is done will be shown below). Thus, if we assume that the symbol *names* is associated to the list (john mary tom) then evaluation of *names* yields that list. If the symbol color is associated with the symbol green then green is returned as the value of color. In other words, symbols are interpreted as variables bound to some values.

(iii) Lists: Every list is interpreted as a function call. The first element of the list denotes the function which has to be applied to the remaining (potentially empty) elements representing the arguments of that function. The fact that a function is specified before its arguments is also known as prefix notation. It has the advantage that functions can simply be specified and used with an arbitrary number of arguments. The empty list () has the s-expression nil as its value.

Note that this means that the symbol nil actually has two meanings: one representing the logical false value and one representing the empty list. Although this might seem a bit odd, in Lisp there is actually no problem in identifying which sense of nil is used.

4) Significant Language Features

Atoms & Lists - Lisp uses two different types of data structures, atoms and lists.

Atoms are similar to identifiers, but can also be numeric constants

Lists can be lists of atoms, lists, or any combination of the two.

Functional Programming Style - all computation is performed by applying functions to arguments. Variable declarations are rarely used.

Uniform Representation of Data and Code - for example, the list (A B C D) can be read either as

a list of four elements (interpreted as data), or as

the application of the function named A to the three arguments B, C, and D (interpreted as code)

Reliance on Recursion - a strong reliance on recursion has allowed Lisp to be successful in many areas, including Artificial Intelligence.

Garbage Collection - Lisp has built-in garbage collection, so programmers do not need to explicitly free dynamically allocated memory.[9]

5) Areas of Application

Lisp totally dominated Artificial Intelligence applications for a quarter of a century, and it is still the most widely used language for AI. In addition to its success in AI, Lisp pioneered functional programming. Many programming-language researchers believe that functional programming is a much better approach to software development than the use of imperative languages (Pascal, C++, etc.).

Below is a short list of the areas where Lisp has been used:

Artificial Intelligence

AI Robots

Computer Games (Craps, Connect-4, BlackJack)

Pattern Recognition

Air Defense Systems

Implementation of Real-Time, embedded Knowledge-Based Systems

List Handling and Processing

Tree Traversal (Breadth/Depth-First Search)

Educational Purposes (Functional Style Programming)[5][9]

D. STRIPS

STRIPS is a language for expressing automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action, preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified.[5]
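
A brief hedged sketch (the type and field names are hypothetical, not part of STRIPS itself) of how such an action description could be represented as a data structure:

#include <iostream>
#include <set>
#include <string>

struct State {
    std::set<std::string> facts;          // e.g. {"door_closed", "at(robot, roomA)"}
};

struct Action {
    std::string name;
    std::set<std::string> preconditions;  // must hold before the action is performed
    std::set<std::string> add_list;       // facts established by the action
    std::set<std::string> delete_list;    // facts that no longer hold afterwards

    bool applicable(const State& s) const {
        for (const auto& p : preconditions)
            if (s.facts.count(p) == 0) return false;
        return true;
    }

    State apply(const State& s) const {
        State next = s;
        for (const auto& d : delete_list) next.facts.erase(d);
        for (const auto& a : add_list)    next.facts.insert(a);
        return next;
    }
};

int main() {
    State s{{"door_closed"}};
    Action open{"open_door", {"door_closed"}, {"door_open"}, {"door_closed"}};
    if (open.applicable(s)) s = open.apply(s);
    std::cout << s.facts.count("door_open") << '\n';   // prints 1
    return 0;
}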

E. Planner

Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences where implications are interpreted with pattern-directed inference.[5]

ACKNOWLEDGEMENT

I, Abhishek Saxena, student of B.Tech-MBA CSE at Lovely Professional University, would like to thank my faculty of Artificial Intelligence, Mr Vijay Kumar Garg, for his help and co-operation in the completion of this term paper.
