Our Own Rational Existence Computer Science Essay



We humans are extremely fascinated with the idea of our own rational existence, and the thought of replicating our structure and mental abilities has been contemplated since long before the dawn of the digital computer. The notion of a machine that thinks and learns like a human has driven many computer scientists to try to bring this captivating task to life. These wondrous thoughts and theories of digitally creating a human mind produced the idea of Artificial Intelligence, also known as AI. Artificial Intelligence is the branch of computer science concerned with the automation of intelligent behavior. Artificial Intelligence and Expert Systems are both engaged in performing cogent and cognitive tasks in a human-like fashion. Expert Systems are essentially a subtopic of Artificial Intelligence: by definition, an Expert System is software that attempts to reproduce the performance of one or more human experts, most commonly in a specific problem domain, and is a traditional application and/or subfield of Artificial Intelligence. An Expert System basically duplicates an expert's knowledge and delivers it through digital media such as software applications, allowing users to retrieve highly specialized knowledge without, so to speak, having to visit an expert. In this paper, I will discuss an assortment of information relating to Artificial Intelligence and Expert Systems. Since this is a rather broad subject, I intend to take a general approach.

What exactly is AI?

Various scientists have different views of what AI really is. There are two basic ways to think of AI: AI that thinks like a human, and AI that acts like a human. These are the basic ideas of a two-dimensional approach. These approaches have been argued over for some time now, and each of them has actually helped advance the others. "A human-centered approach must be an empirical science, involving hypothesis and experimental confirmation and rationalist approach involves a combination of mathematics and engineering."

Acting Human

In order for a system to act like a human, it must indeed possess several capabilities. One capability it must possess is natural language processing, so that the AI system can communicate in a language successfully; to act convincingly human, it must know that language fluently. It must also have a knowledge representation mechanism in order to store what it knows and hears. Some such capability is needed because an AI system that is to act like a human must be able to process any input given to it. A human-like AI system must also possess automated reasoning in order to process information and actually reach its own conclusions. Not all humans think alike, so not every humanly intelligent system should think alike either; each could be customized with its own personality. It should also contain a machine learning mechanism so it may adapt to new circumstances and patterns.

A Brief History of AI:

Artificial Intelligence is by no means a modern concept; the idea of thinking machines dates back as far as approximately 428 BC. The notion of machines thinking was not necessarily the prominent thought at the time, but the study of what thinking is and how it works was deeply explored. Aristotle was "the first to formulate a precise set of laws governing the rational part of the mind." That is, among the many other things he accomplished in his time, he was the first person to formally study and attempt to understand the complexity of the human mind: how and why we make decisions, and what makes the brain work. The philosophy of Artificial Intelligence thus begins around 428 BC, starting at the foundation of the concept with the very simple question of how the mind is to be measured. In addition to Aristotle, René Descartes (1596-1650) played an especially important role in psychology, the study of the mind, with regard to the principal concepts of Artificial Intelligence. One important theory of Descartes was the belief that part of the human mind is free from physical law, a position known as dualism. Other significant figures in the philosophical history of Artificial Intelligence include Francis Bacon, who began the empiricism movement; John Locke; David Hume, who formulated the principle of induction; Ludwig Wittgenstein; Bertrand Russell; Rudolf Carnap; and Carl Hempel. In addition to how the mind works, another vital philosophical question behind Artificial Intelligence, also discussed by Aristotle, was the differentiation between knowledge and action: only by understanding how actions are justified can we understand how to build an agent whose actions are justifiable.

Alan Turing

In the year 1950, a great mathematician by the name of Alan Turing wrote a profoundly influential paper titled "Computing Machinery and Intelligence". In this paper, he discusses an experiment commonly known as the Turing Test. The Turing Test is used to test a machine's ability to behave in an intelligent manner; it involves three individuals in order to be correctly performed: a man (A), a woman (B), and an interrogator (C). The interrogator is separated from the man and the woman in a closed room; however, he is allowed to ask them both questions over a terminal connection.

The main objective of this test is for the interrogator to determine which subject is the woman. Voice and pitch are eliminated from the test, which allows the man to be deceitful and lie to the interrogator to create confusion about the genders. The interrogator can ask simple questions such as "How long is your hair?" or "What kind of shoes are you wearing?" These kinds of questions have the potential to help the interrogator determine whether he is talking to the man or the woman. Once that is determined, the man is replaced by a computer and the test is repeated. The test of whether a machine is capable of thinking takes place during this interrogation: if the computer tricks the interrogator into believing it is the woman, then the computer has passed the test. Underlying it all, Turing keeps asking the question "Can machines think?" In other words, what would the result be if a computer took part in this test? As Turing put it, "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" After all, the point of the test was not really to distinguish between a man and a woman, but to see whether the computer could trick the interrogator as often as a man can. If the interrogator showed no greater ability to decide between the woman and the man than between the woman and a machine, the machine would be acknowledged to have passed the test. Turing's test became the holy grail of Artificial Intelligence at that time. It grasped the attention of many computer scientists because it was the first formal test to determine whether or not computers were capable of intelligence. When Turing first proposed the test, others attempted it, but no computer came close to passing.
At the time, it seemed impossible for any machine to pass, simply because computers were extremely slow and inefficient; there were no reasonable algorithms to test. With the failures of the Turing Test, the slowness of computers became the standard excuse, but in truth that was not the problem. The real problem was figuring out a way to get a machine to think. One attempt was to understand how and why humans make their decisions and apply that understanding to a computer program. However, all this accomplished was a significant loss of both money and time.

As a result of the Turing Test, scientists began exploring the idea of computers playing intellectual games such as chess or checkers against a human. Despite the accomplishment of writing programs that "demonstrate a level of competence that is on par with people," such as playing and winning a game of chess, the answer to the question "Can machines think?" remains open. Machines cannot actually think for themselves, but they can in fact perform far greater calculations with superior efficiency than a human can. For instance, a computer can be programmed to be an advanced chess player, and it can also calculate the square root of a very large number in a matter of seconds, unlike a human. Machines can simply outperform humans in these cases, but not without being programmed by a human. These computations are given to the computer, and nothing is being thought up on its own. The fact is that computers and machines can do things faster and more efficiently than humans, and in some cases do things that are impossible for a human, such as extremely complicated calculations. This does not prove that a computer is making decisions or thinking for itself; all it proves is that computers and machines can do what they are told via programs.

Dartmouth 1956

The idea of intelligent activity being performed by a mechanical device came to be known as Artificial Intelligence in 1956, at a conference held at Dartmouth College. At this conference there were many different ideas and discussions about the concept and possibilities of Artificial Intelligence. In fact, many of the discussions and topics presented concerned the same issues that computer scientists are dealing with today: complex theories, methodologies for abstraction, language design, and machine learning. Many ideas of modern computer science were actually derived from AI itself, and the field's birth owes much to the topics discussed at this very conference, which were:

  1. Automatic Computers
  2. How Can a Computer be Programmed to Use a Language
  3. Neuron Nets
  4. Theory of the Size of a Calculation
  5. Self-Improvement (Machine Learning)
  6. Abstractions
  7. Randomness and Creativity

The conference at Dartmouth did not lead to any serious findings, but it did define and give birth to the subject of "Artificial Intelligence" and establish it as its very own field of study.

History Summary

Many aspects of Artificial Intelligence can be credited to thinkers dating far back, but the actual concept, and a scientific introduction to the notion itself, did indeed begin with Alan Turing and at Dartmouth College. The tangible idea of computers and machines thinking in a human-like fashion was raised by these people and events. As a result, we now have the field of Artificial Intelligence, and a starting point for thinking about how to create a computational version of the human mind along with all its mysterious attributes: learning, emotion, and, most of all, thinking.

Rational Agents

The agent is the basic object of Artificial Intelligence. Simply put, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

This concept is easy to understand in terms of a human being. Think of the human being as an agent. A human being has five senses, which may be thought of as sensors, that it can use to perceive its environment: smell, sight, hearing, taste, and touch. All of these senses give a human input about how to act upon its environment. A human agent also has actuators that allow it to act upon its environment, which in this example would be the human being's hands, feet, and so on. Another example of an agent might be a robot that uses built-in digital sensors to tell how far it is from an object and four robotic arms as actuators. When it comes to Artificial Intelligence, there are many types of agents that may be implemented.

An agent must be able to perceive its environment, and by perceive we mean receive perceptual inputs at any given instant. While perceiving its environment, an agent maintains a percept sequence in which it stores the complete history of its perceptions. In order for an artificially intelligent system to learn, it must make a record of everything; this is where "learning occurs in the system". Just as humans learn from their experiences, an artificial system must take into account what it has learned and apply it to new experiences. An agent's choice of action at any given instant can depend on the entire percept sequence observed to date. An agent's behavior is determined by the agent function, which maps any percept sequence to an action. For example, if a robotic agent with a heat sensor had walked toward a particular furnace in a building, it would store that event in its percept sequence and then perform an appropriate action the next time it walked past that furnace: it would probably move away because of the rise in temperature. This is a good example of a rational agent. A rational agent uses its percept sequence to try to do the right thing. Doing the right thing may be vague in some instances; in terms of Artificial Intelligence, the right thing is basically whichever action causes the agent to be successful. There must also be a way for an agent to determine whether the actions it takes are successful. For this it uses performance measures, which provide a decisive criterion for the success of an agent's behavior. Now we must take a look at the task environment, which can be thought of as the problem, with the rational agent as its solution. A task environment is specified by PEAS, which stands for Performance, Environment, Actuators, and Sensors. A PEAS description is needed in order for an agent to perform accordingly and effectively.
In essence, the PEAS description captures the reason for creating the agent in the first place.
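As a small illustrative sketch, a PEAS description can be written down directly as data. The `PEAS` class and the automated-taxi values below are invented for illustration (in the spirit of the classic taxi example), not taken from any standard library:

```python
from dataclasses import dataclass


@dataclass
class PEAS:
    """A task-environment description: Performance, Environment,
    Actuators, Sensors. A sketch, not a framework class."""
    performance: list  # measures of success
    environment: list  # where the agent operates
    actuators: list    # how it acts on the world
    sensors: list      # how it perceives the world


# Hypothetical PEAS description for an automated-taxi agent.
taxi = PEAS(
    performance=["safety", "speed", "legality", "comfort"],
    environment=["roads", "traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer"],
)
```

Writing the four components out this way forces the designer to state, before any programming begins, exactly what the agent is measured on and what it can sense and do.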

  • Fully Observable vs. Partially Observable: An environment is fully observable when the agent's sensors provide the complete state of the environment at any time. The advantage of a fully observable environment is that the agent does not need to keep track of any internal state in order to operate effectively in the world. An environment is partially observable mainly due to noisy or inaccurate sensors, or because parts of the state are simply missing from the sensor data.
  • Deterministic vs. Stochastic: An environment is deterministic when the next state is completely determined by the current state and the action the agent executes. In a stochastic environment, on the other hand, the outcome of an action is not fully determined by the previous state.
  • Episodic vs. Sequential: In an episodic task environment, the agent's experience is broken down into what are called atomic episodes. Single actions are performed that do not depend on the actions performed in previous episodes. Conversely, in a sequential environment, all future decisions are affected by the current action taken by the agent, as with a chess agent or an automated vehicle agent. Episodic environments are much simpler than sequential environments because the agent does not have to think ahead.
  • Static vs. Dynamic: An environment is considered dynamic when it can change during the agent's deliberation. A static environment, by contrast, does not change at all while the agent deliberates; the agent does not need to keep looking at the world while deciding on an action. A dynamic environment is continuously asking the agent what it wants to do.
  • Discrete vs. Continuous: The discrete/continuous environment can be applied to the state of the environment, to the way the time is handled, and to the percepts and actions of the agent. A discrete-state environment contains a finite number of distinct states and a discrete set of percepts and actions.
  • Single Agent vs. Multiagent: A single-agent environment means that only one agent executes actions in the given environment, for instance a crossword-puzzle agent; someone working on a crossword puzzle usually works on it alone. A chess agent, on the other hand, operates in a multiagent environment, because more than one agent is needed to play chess.

The job of AI is to design the agent program that implements the agent function mapping percepts to actions. This program must run on some sort of computing device with sensors and actuators, known as the agent's architecture. Thus an agent is described by a basic formula:

Agent = Architecture + Program
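This formula can be sketched in a few lines. Here the "architecture" is a trivial loop that feeds percepts to the agent program, and the "program" is the agent function itself; the names `run_agent` and `thermostat` are invented for illustration:

```python
def run_agent(program, percepts):
    """Minimal 'architecture': feed each percept to the agent
    program and collect the chosen actions. A sketch, not a
    real agent framework."""
    return [program(p) for p in percepts]


# A trivial agent program: the agent function maps a percept
# (a temperature reading) to an action.
def thermostat(temperature):
    return "heat_on" if temperature < 20 else "heat_off"


actions = run_agent(thermostat, [18, 22, 19])
```

Swapping in a different `program` while keeping `run_agent` unchanged is exactly the separation the formula expresses: the architecture stays fixed while the program defines the behavior.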

There are basically four different types of agent programs that underlie almost all intelligent systems:

  • Simple Reflex Agents
  • Model Based Reflex Agents
  • Goal Based Agents
  • Utility-Based Agents

The simplest kind of agent in Artificial Intelligence is the simple reflex agent. This agent selects actions based on the current percept alone and pays no attention to the rest of the percept history.

In a simple reflex agent, the relation between percept and action is direct; this is the most basic way an Artificial Intelligence agent can operate. The agent gets basic information from the current state and conditionally performs an action based on the current environment, paying no attention at all to the percept history. It asks its sensors "What is the world like now?" and simply acts upon the answer: it performs an action with its actuators, executed on the basis of a predetermined condition that takes no account of past percepts. Simple reflex agents have the admirable property of being simple, but they turn out to be very limited in intelligence. An extreme downside of simple reflex agents is that they tend to get stuck in infinite loops when the environment is only partially observable. For instance, imagine a robotic arm that lifts coins at two locations, A and B. In a correct state of action, location A has a coin, so the robotic arm picks it up and simply sets it down at location B. Now imagine there are no coins at either location, and the robotic arm is programmed to check the other location if no coin is present at the current one. This simple reflex agent would keep going back and forth looking for a coin that isn't there, and would surely be stuck in an infinite loop.
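The condition-action behavior just described can be sketched in a few lines of Python. The rules and percepts below are invented for illustration (a vacuum-style example), not part of any standard pseudocode:

```python
def simple_reflex_agent(percept, rules):
    """Select an action based solely on the current percept,
    ignoring all percept history. `rules` is a table of
    condition-action pairs."""
    return rules.get(percept, "no-op")  # fall back to doing nothing


# Illustrative condition-action rules.
rules = {"dirty": "suck", "clean": "move-right"}

action = simple_reflex_agent("dirty", rules)
```

Note that the agent carries no state between calls: given the same percept it always returns the same action, which is precisely why it can loop forever in a partially observable world.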

Next, another type of reflex agent available in Artificial Intelligence is the model-based reflex agent. A model-based reflex agent addresses the downside of having a partially observable environment: the agent maintains at least some form of internal state that depends on the percept history and thereby reflects some of the unobserved aspects of the current state.

The model-based design corrects the flaws of the inconsistent simple reflex model. Even in a partially observable environment, the agent can still perform more accurately because it takes past data into account, which eliminates a degree of failure when dealing with a reflex agent. It is rather like a human learning from life's experiences. For example, do you remember the first time you were burned by a hot stove? You probably learned a good lesson, and carefully checked whether the stove was on before touching it again. The same goes for a model-based reflex agent: it checks the current state of the environment, consults its rules about what to do in that particular situation, and then acts upon the matching rule.
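A minimal sketch of this idea, using the furnace example from earlier: the agent records what its percepts reveal in an internal model, and later decisions consult the model rather than the current percept alone. The class name, threshold, and percept format are all invented for illustration:

```python
class ModelBasedReflexAgent:
    """Keeps an internal model built from the percept history, so
    it can act sensibly even when the current percept alone is
    misleading. A sketch, not a production design."""

    def __init__(self):
        self.model = {}  # internal state: what we believe about the world

    def act(self, location, temperature):
        # Update the model with what the current percept reveals.
        if temperature > 60:
            self.model[location] = "hot"
        # Decide using the model, not just the current reading.
        if self.model.get(location) == "hot":
            return "move-away"
        return "approach"


agent = ModelBasedReflexAgent()
first = agent.act("furnace", 80)   # learns the furnace is hot
later = agent.act("furnace", 25)   # reading is cool, but the model remembers
```

On the second call the raw percept alone would say "approach", but the remembered model overrides it; that memory is exactly what the simple reflex agent lacks.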

Sometimes knowing the current state of an environment is simply not enough to choose an action correctly. For instance, imagine an automated vehicle agent that is parked, while you want to go somewhere specific. There is no way the automated vehicle can choose an action based on the current state alone, considering the car is parked; this type of agent needs a precise goal in order to work properly. The type of agent needed to perform such a task is the goal-based agent.

A goal-based agent is given a goal rather than a condition, as in a model-based reflex agent. This type of agent first looks at the current state of the environment, considers what will happen if it performs a certain action, and then acts toward the specific goal it has been given. Sometimes goal-based action selection is straightforward: goal satisfaction results immediately from a single action.

Sometimes goal-based agents are too black and white, and not complex enough to handle more complicated tasks. Goals provide only a crude binary distinction between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world states according to exactly how happy each would make the agent if it could be achieved. This is where we must use a utility function, which maps a state or a sequence of states onto a real number describing the associated degree of happiness. For example, you might not always be in a simply happy or unhappy state: there are days that are extremely average, where you feel no particular emotion, or merely some general degree of happiness or unhappiness. You could probably rate your happiness on a scale from 1 to 10, and the utility-based agent is built on the same idea. A utility-based agent takes its current environment into account and chooses the action that simply makes it happier. It does this in order to make better rational decisions and achieve better performance.
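A minimal sketch of utility-based choice: instead of a binary goal test, each outcome state gets a real-valued score, and the agent picks the action whose outcome scores highest. The states, actions, and utility values below are made up for illustration:

```python
def choose_action(actions, result, utility):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(a)))


# Hypothetical outcome model: which state each action leads to.
result = {"walk": "tired", "drive": "on-time", "wait": "late"}.get

# Hypothetical utility function: how "happy" each state makes the agent.
utility = {"tired": 4.0, "on-time": 9.5, "late": 1.0}.get

best = choose_action(["walk", "drive", "wait"], result, utility)
```

With a pure goal test, "on-time" and "tired" might both count as reaching the destination; the real-valued utilities are what let the agent prefer one over the other.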

Finally, we must now discuss learning agents. A learning agent consists of four conceptual components: the learning element, the performance element, the critic, and the problem generator. The learning element is in charge of making improvements, and depends heavily on the design of the performance element in order to work effectively. It also takes feedback from the critic in order to modify the performance element, allowing it to make more rational decisions in the future. The performance element is responsible for selecting the agent's external actions: it takes in the percepts and performs an action. The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. The critic is very important to a learning agent because it is its rational conscience: it judges actions against the set performance standard and tells the learning element whether the result of an action is a good thing. For instance, take a chess agent that has checkmated its opponent; the critic uses the performance standard to let the agent know whether what it did was rational. The problem generator is in charge of suggesting experiences for the agent that it deems new and informative. If the choice were left to the performance element, it would simply keep doing whatever action seemed best. However, that is not a good way for a learning agent to actually learn and grow artificially: it needs to experience new problems and learn to adapt to new situations, just as any human would. The problem generator is very important because it can allow the learning element to find better actions in the end.

In a learning agent, the critic takes in data about the environment from the sensors together with the performance standard and decides whether the agent's behavior is rational or irrational. It then sends this feedback to the learning element so it can make the necessary improvements. Next, the learning element sends changes to the performance element and retrieves knowledge back from it. Simultaneously, the learning element passes learning goals to the problem generator, which suggests exploratory actions so the agent gains access to new experiences, and finally the performance element sends data to the actuators to perform the chosen action.

We have now discussed the four general types of agents in Artificial Intelligence. Next I would like to discuss a very complex goal-based agent called a problem-solving agent. Problem-solving agents decide what to do by finding sequences of actions that lead to their desired states, and they tend to organize their behavior around goals. A problem-solving agent begins with goal formulation, based on the present situation and the agent's performance measure; this is the first actual step in solving a problem. Next, the agent decides what actions and states to consider; this is called problem formulation. An agent will have several immediate options of unknown value, and can decide what to do by first examining different possible sequences of actions that lead to states of known value, then choosing the best sequence. In order to look for such a sequence, the problem-solving agent needs to perform what we call a search. A solution, in the form of an action sequence, is then returned by a search algorithm, and the agent carries out an execution once a proper solution is found. The problem-solving agent thus takes three basic steps: formulate, search, and finally execute.
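The formulate-search-execute loop can be sketched as follows. The problem representation and the toy number-line search are invented for illustration; a real agent would plug in a proper search algorithm:

```python
def problem_solving_agent(initial_state, goal, search):
    """Formulate a problem, search it, and return the solution
    (an action sequence) for execution. A sketch of the three
    steps, not a full agent."""
    problem = {"initial": initial_state, "goal": goal}  # formulate
    solution = search(problem)                          # search
    return solution                                     # execute the returned plan


def toy_search(problem):
    """Toy search: walk a number line from initial to goal."""
    s, g = problem["initial"], problem["goal"]
    step = "increment" if g > s else "decrement"
    return [step] * abs(g - s)


plan = problem_solving_agent(2, 5, toy_search)
```

The point of the sketch is the division of labor: the agent owns the formulate and execute steps, while any search algorithm that accepts a problem and returns an action sequence can be substituted for `toy_search`.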

When using a problem-solving agent, we use several basic components to define a problem. The first component we must consider is the initial state, which is simply the state from which the agent begins. For example, if an automated driving agent is currently located in a parking garage on the UTPA campus, then the initial state of the automated driving agent is that very location. Next we look at the successor function, which is used to return a set of ordered pairs (action, successor): the first field shows an action available to the agent in state x, and the second field shows the successor of state x when that action is applied. Here are some example ordered pairs for an automated driving agent, as returned by a successor function:

{(Go(Mission), In(Mission)), (Go(McAllen), In(McAllen)), (Go(Edinburg), In(Edinburg))}

These are results a successor function might return while showing the automated driving agent how to get to Edinburg. In the first pair, the first field is the action, which means "go to Mission", and the second field is the result of that action, the successor, which means "in Mission". The agent repeats a similar process with the successor function through McAllen until it ultimately gets to Edinburg. Together, the initial state and the successor function implicitly define the state space of the problem: the set of all states reachable from the initial state. In the case of an automated driving agent, the state space could basically be anywhere it can drive to, so anywhere in North America could be the state space of the automated driving agent.
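A successor function for this driving example can be sketched directly: given a state (a city), return the (action, successor) pairs. The road map below is illustrative only and not meant to be geographically complete:

```python
# Hypothetical road map: which cities are directly reachable from which.
ROADS = {
    "Mission": ["McAllen"],
    "McAllen": ["Mission", "Edinburg"],
    "Edinburg": ["McAllen"],
}


def successors(state):
    """Return the (action, successor) ordered pairs for a state."""
    return [(f"Go({city})", f"In({city})") for city in ROADS[state]]


pairs = successors("McAllen")
```

Starting from any city and repeatedly applying `successors` enumerates exactly the reachable states, which is the state space the initial state and successor function implicitly define.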

In a problem-solving, goal-based agent, a set of goals is given to the agent, which must pursue them accordingly; the goal test is implemented to check whether the current state is actually one of the goals specified initially. Now we must take into account another component, the path cost. For instance, an automated driving agent might have different ways to get to a specific location or goal, and the path cost is used to see which path is more cost-efficient: the agent might consume less fuel by taking path B as opposed to path A. Another factor to take into account is time. Sometimes time, rather than monetary cost, is the deciding factor in reaching the goal; it just depends on the agent itself. This is where the agent must find the optimal solution, the one that performs the most cost-efficient action and has the lowest path cost of all the available solutions.

When dealing with problem-solving agents, two basic types of problems are used: toy problems and real-world problems. Toy problems are used to illustrate a variety of problem-solving methods; they essentially allow researchers to examine various algorithms. A real-world problem is one whose solution coincides with real-life decisions that are important to people. An example of a toy problem is the 8-puzzle, which consists of a 3x3 tile board containing a blank space along with eight numbered tiles. This is a common child's toy, often depicting a picture of some sort; the object is to arrange the tiles so that the picture is complete by moving the tiles one by one into the blank space that the movement of the previous tile has created. The formulation for this toy problem is as follows:

  • States: 3x3 board using eight tiles and one blank space.
  • Initial State: In this problem any state can be designated as the initial state, so the blank space can start anywhere on the board.
  • Successor Function: Displays the states that result from trying the four moves that may be used which include up, down, left, or right.
  • Goal Test: Checks to see if the current state matches the actual goal.
  • Path Cost: The cost of each step in this toy problem is equal to one.
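The formulation above can be sketched as a successor function. The representation is an assumption made for illustration: states are 3x3 boards flattened into 9-tuples, with 0 standing for the blank, and moves name the direction the blank slides:

```python
# Offsets for sliding the blank within the flattened 3x3 board.
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}


def successors8(state):
    """Return (action, successor) pairs for an 8-puzzle state,
    trying the four moves up, down, left, right."""
    i = state.index(0)  # position of the blank
    result = []
    for action, delta in MOVES.items():
        j = i + delta
        # Stay on the board, and don't wrap around row edges.
        if 0 <= j < 9 and not (action == "left" and i % 3 == 0) \
                      and not (action == "right" and i % 3 == 2):
            s = list(state)
            s[i], s[j] = s[j], s[i]  # slide a tile into the blank
            result.append((action, tuple(s)))
    return result


start = (1, 2, 3, 4, 0, 5, 6, 7, 8)  # blank in the centre: all four moves apply
moves = successors8(start)
```

The goal test is then a tuple comparison against the solved board, and with every step costing one, the path cost of a solution is simply its length.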

An example of a real world problem would consist of a route solving problem. Route finding algorithms are used in a variety of applications, such as routing in computer networks, military operations planning, and airline travel planning systems.

  • States: Each state is represented by a location (such as an airport) and the current time.
  • Initial State: The initial state is specified by the problem.
  • Successor Function: Returns the states resulting from taking any scheduled flight departing later than the current time plus the within-airport transit time, from the current airport to another.
  • Goal Test: Determines whether the flight arrived at a specified time.
  • Path Cost: Path cost depends on anything that may affect monetary cost, airport customs procedures, and general flight times.

We have now discussed how to formulate problems for problem-solving agents; next we must learn to search for the solution to any given problem an agent might have. To do this, we use a search tree to search for the proper solution to a problem. The root of the search tree is the search node corresponding to the initial state of the environment. The first thing that must be done at each step of the search is to check whether the current node is the goal of the search; if it is not, the tree expands it in order to look for other solutions. Expanding a node means applying the successor function to its state, resulting in the spawning of new states; this is called generating. This process continues until an actual solution to the problem is found or no other candidate remains. The search tree is by no means to be confused with the state space: the state space contains only a certain number of states, while the search tree can contain a seemingly infinite number of paths through it. There is an array of ways to represent a node. Below we represent a node as a data structure with five components, in order to provide a better understanding of what exactly a node really is.

  • State: the state in the state space to which the node corresponds.
  • Parent Node: the node in the search tree that generated this node.
  • Action: the action that was applied to the parent node to generate this node.
  • Path Cost: the cost of the path from the initial state to this node, as indicated by the parent pointers. This is usually denoted g(n).
  • Depth: the number of steps along the path from the initial state.
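The five-component node described above maps naturally onto a small data structure. This is a sketch; the helper name `child_node` is my own, not a standard API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object                     # the world state this node corresponds to
    parent: Optional["Node"] = None   # the node that generated this one
    action: object = None             # action applied to the parent
    path_cost: float = 0.0            # g(n): cost of the path from the root
    depth: int = 0                    # number of steps from the initial state

def child_node(parent, action, state, step_cost):
    """Generate a successor node, keeping the bookkeeping fields consistent."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)

root = Node(state="A")
child = child_node(root, action="A->B", state="B", step_cost=4.0)
```

Note that two distinct `Node` objects may hold the same `state`, which is exactly the state/node distinction discussed next.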

Knowing the difference between a state and a node is very important because confusing them can cause serious problems. A node is a bookkeeping data structure used to represent the search tree, while a state corresponds to a configuration of the world; two different nodes may contain the exact same world state. The collection of nodes that have been generated but not yet expanded is called the fringe. Each element of the fringe is a leaf node, which is a node with no successors in the tree. There are only two forms of output in a problem-solving algorithm: either a failure or a solution. An evaluation of the algorithm's performance is done in four different ways:

  • Completeness: This checks to see if the algorithm is guaranteed to find a solution.
  • Optimality: This checks to see whether or not the optimal solution was found.
  • Time Complexity: Measures how long it takes the algorithm to find a solution.
  • Space Complexity: Measures how much memory is required to perform the search.

Time and space complexity are always considered with respect to some measure of the problem's difficulty. Time is typically measured by the number of nodes generated during the search, and space by the maximum number of nodes stored in memory.

Now that we have discussed problem-solving search agents and the different ways they are assessed, we must focus on the different search strategies used in a problem-solving agent's search tree. The first type of search is called Breadth-first search. Breadth-first search simply expands the root node first; then all of the root node's successors are expanded, followed by their successors. This means that all nodes at one level must be expanded before any node at the next level is reached. "Breadth-first search can be implemented by calling the search tree with an empty fringe that is first-in-first-out (FIFO) queue, assuring that the nodes visited first will be expanded first." At the first level the root of the search tree generates b nodes, each of which in turn generates b more nodes, giving a total of b² at the second level.
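As a sketch, breadth-first search with a FIFO fringe can be written as follows. The toy `GRAPH` and the start/goal labels are invented for illustration.

```python
from collections import deque

# Toy state space: each state maps to its successor states.
GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["G"], "F": [], "G": []}

def breadth_first_search(start, goal):
    fringe = deque([[start]])            # FIFO queue of paths
    while fringe:
        path = fringe.popleft()          # shallowest node is expanded first
        if path[-1] == goal:
            return path
        for succ in GRAPH[path[-1]]:
            fringe.append(path + [succ]) # generated nodes join the back
    return None                          # failure: no solution exists
```

Because the queue is first-in-first-out, every node at depth d is expanded before any node at depth d+1.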

Breadth-first search is appropriate only when all step costs are equal, because it always expands the shallowest unexpanded node. Instead of expanding the shallowest node, uniform-cost search expands the node with the lowest path cost. Uniform-cost search cares only about the total cost of a path, not the number of steps the path has.
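The only change from breadth-first search is the fringe: a priority queue ordered by path cost g(n) rather than a FIFO queue. The weighted toy graph below is invented for illustration.

```python
import heapq

# Toy state space with step costs: state -> [(successor, step cost)].
GRAPH = {"A": [("B", 1), ("C", 5)], "B": [("C", 1), ("D", 9)],
         "C": [("D", 2)], "D": []}

def uniform_cost_search(start, goal):
    fringe = [(0, [start])]                  # priority queue of (g(n), path)
    while fringe:
        cost, path = heapq.heappop(fringe)   # cheapest node, regardless of depth
        if path[-1] == goal:
            return cost, path
        for succ, step in GRAPH[path[-1]]:
            heapq.heappush(fringe, (cost + step, path + [succ]))
    return None
```

Here the cheapest route to D takes three steps (A, B, C, D, total cost 4) and beats the two-step route through C (cost 7), showing that step count is irrelevant.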

Another search that can be used is Depth-first search. Depth-first search expands the deepest node in the current fringe of the search tree, so the search proceeds immediately to the lowest level of the search tree, where the nodes have no successors. Depth-first search implements a last-in-first-out (LIFO) queue, also known as a stack, so the most recently generated node is expanded first.
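Swapping the FIFO queue for a LIFO stack turns breadth-first search into depth-first search. Again the toy `GRAPH` is invented for illustration.

```python
# Toy state space: each state maps to its successor states.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

def depth_first_search(start, goal):
    fringe = [[start]]                   # LIFO stack of paths
    while fringe:
        path = fringe.pop()              # deepest, most recent node comes off the top
        if path[-1] == goal:
            return path
        for succ in reversed(GRAPH[path[-1]]):
            fringe.append(path + [succ]) # children are pushed on top
    return None
```

Pushing children in reverse order is a cosmetic choice so that successors are explored left to right; the LIFO discipline is what makes the search depth-first.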

In addition to Depth-first search, Depth-limited search must also be discussed. The problem of unbounded trees can be alleviated by supplying Depth-first search with a predetermined depth limit l, which means that the nodes at depth l are treated as if they have no successors. Depth-limited search thus solves the infinite-path problem. Sometimes the depth limit can be based on knowledge of the problem. "Depth-limited search can be implemented as a single modification to the general tree search algorithm or to the recursive depth-first search algorithm." The figure below shows pseudocode for recursive depth-limited search.
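A recursive depth-limited search can be sketched in Python as follows. The toy `GRAPH` and the "cutoff"/"failure" labels are illustrative; the key point is that a cutoff (depth limit reached) is distinguished from an outright failure (no solution at any depth).

```python
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF                    # nodes at depth l have no successors
    cutoff_occurred = False
    for succ in GRAPH[node]:
        result = depth_limited_search(succ, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True       # the limit, not the graph, stopped us
        elif result != FAILURE:
            return [node] + result       # solution found below this node
    return CUTOFF if cutoff_occurred else FAILURE
```

With limit 1 the search to D is cut off, while with limit 2 it succeeds, which motivates the iterative deepening strategy discussed next.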

The next search to be discussed is Iterative deepening depth-first search, which is used to find the best depth limit. This is done by steadily increasing the limit until a goal is found, which happens when the depth limit reaches the depth of the shallowest goal node.
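The strategy above amounts to wrapping a depth-limited search in a loop over increasing limits. This sketch uses an invented toy graph and a simplified depth-limited helper that returns None on both cutoff and failure.

```python
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def depth_limited(node, goal, limit):
    """Simplified depth-limited search: a path to goal, or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for succ in GRAPH[node]:
        result = depth_limited(succ, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(start, goal, max_depth=50):
    for limit in range(max_depth + 1):   # steadily increase the depth limit
        result = depth_limited(start, goal, limit)
        if result is not None:
            return result                # found at the shallowest possible depth
    return None
```

The first limit at which the search succeeds equals the depth of the shallowest goal node, so the returned path is as short as breadth-first search would find.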

Expert Systems

"An expert system is an interactive computer-based decision tool that uses both facts and heuristics to solve difficult decision problems based on knowledge acquired from an expert."

(The Fundamentals of Expert Systems)

An expert system is essentially a system that reproduces the same knowledge as an expert would. For instance, imagine you were in a third-world country with no means of getting to a good doctor in time to help your crisis; you could simply use an expert system to diagnose your symptoms. For example, if you were having extreme nausea, a rash, and pale skin, you would enter that information into the expert system, and it would return a likely diagnosis, in this case something along the lines of a bacterial poisoning. One successful expert system that has been implemented is MYCIN, which is used in the medical field to correctly and efficiently diagnose infectious blood diseases. Another proven and well-known expert system is XCON, which is used to configure computer systems. There are many different types of expert systems available; a list of the different types follows:

  • Interpreting and identifying
  • Predicting
  • Diagnosing
  • Designing
  • Planning
  • Monitoring
  • Debugging and testing
  • Instructing and training
  • Controlling

There is a certain hierarchical process to creating an expert system. The chart below describes the process of building a good expert system.

Expert systems are normally highly specific to a domain. For instance, a diagnostic expert system for testing various types of cancer must essentially perform all the necessary diagnostic processes just as well as a human expert, in this case a doctor. The developer of such a system must limit its scope to just what is needed to solve the target problem. Special tools or programming languages are often needed to accomplish the specific objectives of the system.

In order to create an expert system, special programming languages must often be used. The languages generally used to program expert systems are LISP and PROLOG; they make the programming process easier for the programmers. In order for an expert system to work correctly, its environment must provide certain characteristics, which include:

  • Efficient mix of integer and real variables
  • Good memory-management procedures
  • Extensive data-manipulation routines
  • Incremental compilation
  • Tagged memory architecture
  • Optimization of the systems environment
  • Efficient search procedures

In order to properly create an expert system, its decisions must involve a complex combination of factual and heuristic knowledge. The expert system must be able to retrieve and successfully use heuristic knowledge, and that knowledge must be kept in an accessible format that distinguishes among data, knowledge, and control structures. This is the reason that expert systems are organized in three distinct levels:

  1. Knowledge base
  2. Working memory
  3. Inference engine

These three components may well come from different sources. For instance, say we want an expert system that identifies the type of pathogen in a person's blood. The inference engine can be bought from a company that specializes in such systems, the knowledge base can come from an array of hematologists, and the working memory comes from the end users of the expert system. A knowledge base is not to be confused with a traditional database. A traditional database environment works with data that has a static relationship between the elements in the problem domain. A knowledge base is created by knowledge engineers, who must translate the knowledge of real human experts into rules and strategies.

The knowledge base contains the problem-solving rules that a human expert might use in solving problems in the problem domain; it is usually stored in terms of if-then conditional rules. The working memory represents the relevant data for the current problem being solved. The inference engine is the control mechanism that organizes the problem data and searches through the knowledge base for applicable rules. Below is a data-flow diagram that shows how data from the user and from the expert itself enters the knowledge base and the other components of the expert system.
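The three levels just described can be illustrated with a toy forward-chaining sketch: a list of if-then rules stands in for the knowledge base, a set of facts for the working memory, and a loop that fires applicable rules for the inference engine. The medical rules here are invented for illustration and are in no way real diagnostic knowledge.

```python
# Knowledge base: each rule is (set of conditions, conclusion).
KNOWLEDGE_BASE = [
    ({"nausea", "rash", "pale skin"}, "possible bacterial poisoning"),
    ({"possible bacterial poisoning"}, "recommend blood test"),
]

def inference_engine(working_memory):
    """Fire every rule whose conditions are met, until nothing changes."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in KNOWLEDGE_BASE:
            if conditions <= working_memory and conclusion not in working_memory:
                working_memory.add(conclusion)   # rule fires; memory is updated
                changed = True
    return working_memory

facts = inference_engine({"nausea", "rash", "pale skin"})
```

Note how the second rule fires only because the first one added its conclusion to working memory, which is the chaining behavior the inference engine provides.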

Well-made expert systems are expected to grow as they learn from user feedback. Knowledge from an expert is integrated into the knowledge base in order to make the expert system smarter. The dynamism of the application environment for expert systems depends on the dynamism of the individual components, which can be classified as follows:

  • Working memory: Very dynamic. The contents of the working memory are the most dynamic because they change with each problem or situation.
  • Knowledge base: Moderately dynamic. The knowledge base does not change unless a new piece of information comes up that shows signs of a change in the problem-solution procedure.
  • Inference engine: Least dynamic. Changes are made to the inference engine only when necessary to fix bugs or augment the inferential process.

Some might wonder why there is a need for an expert system; the pros simply outweigh the cons. Stop to think for a second about how undependable a human can be. Humans can get sick, get tired, have difficulty processing large amounts of data rapidly, have trouble storing large amounts of data reliably, forget details that can be very crucial, and can simply die and cease to function. There are also major benefits to having an expert system implemented. Expert systems can increase the probability, frequency, and consistency of making good decisions; help distribute human expertise; facilitate real-time, low-cost expert-level decisions by the nonexpert; enhance the utilization of most of the available data; permit objectivity by weighing evidence without bias and without regard for the user's personal and emotional reactions; permit dynamism through modularity of structure; free up the mind and time of the human expert to enable him or her to concentrate on more creative activities; and finally encourage investigations into the subtle areas of a problem.


In summation, computer scientists have not even begun to scratch the surface of Artificial Intelligence and Expert Systems. I feel there are good things to come from these extremely interesting fields, and the results will rid any doubt of creating an extraordinary human-like agent that can feel, think, and most of all learn like we can. The thought of artificially creating a human life, or a machine that can think rationally, has been in the back of many people's minds for centuries, and we have finally come to the verge of a serious breakthrough. However, there is no telling when technology will catch up to actually create an artificial intelligence like the being in Steven Spielberg's movie AI, or highly functional robots like Johnny 5 from the movie Short Circuit. On a different note, I feel that expert systems will be implemented more rapidly in the very near future, and we will surely see great advances in expert systems before artificial intelligence. Expert systems will be saving businesses large amounts of time and money, and others will be saving lives. Great things are still to come in computer science as a whole, and life-changing advances in artificial intelligence and expert systems will be sure to take your breath away in the not-too-distant future.

Works Cited

  1. Fogel, D. B. (2002). Blondie24: Playing at the Edge of AI. San Francisco, CA: Morgan Kaufmann Publishers.
  2. Luger, G. F. (2008). Artificial Intelligence. Boston: Pearson Addison Wesley.
  3. Russell, S., & Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Pearson Education Inc.
  4. The Fundamentals of Expert Systems. (n.d.). Retrieved November 13, 2008, from http://media.wiley.com/product_data/excerpt/18/04712933/0471293318.pdf