Evaluation of complex systems is generally complicated and time-consuming. Evaluation is needed for nearly all engineering tasks, and the obstacles related to evaluation increase in proportion to complexity. New techniques can be used to automate manual evaluation and to overcome evaluation obstacles that cannot be solved (or can only be solved with great difficulty) with conventional computing. In this study, a methodology was developed to handle the heuristic knowledge of experts for evaluation purposes. In this method, the knowledge was represented as a reference model of evaluation objectives, production rules, measures, methods and parameters. A "Common Evaluation Process" and a "Common Evaluation Model", which simplify and speed up the evaluation process and decrease evaluation cost, were proposed and developed.
A hybrid expert-fuzzy system, called the "INtelligent Evaluation System" (INES), was developed for the evaluation of complex systems. Defining a process and developing a system that simplify and speed up evaluation can save time, decrease cost and provide reusability. As the evaluation of complex systems involves uncertainty in some aspects, fuzzy logic was combined with an expert system for reasoning.
INES was implemented successfully for the evaluation of an Air Defense System, a complex system used to protect a region from air threats.
Keywords: Intelligent Evaluation System, Complex Systems, Evaluation, Common Evaluation Process, assessment, air defense system, expert system, fuzzy logic
Evaluation of complex systems is generally complicated and time-consuming (Öztemel & Öztürk, 2003). Existing evaluation systems are domain dependent and generally do not explain how the system reaches its evaluation results. Newly elicited evaluation information cannot be added to such systems easily, and expertise is needed for the evaluation process. The complexity of the tasks accomplished by systems is increasing day by day. Evaluation is needed for nearly all engineering tasks, and the obstacles related to evaluation increase in proportion to complexity. New techniques can be used to automate manual evaluation and to overcome evaluation obstacles that cannot be solved (or can only be solved with great difficulty) with conventional computing. Domain knowledge is necessary for evaluation purposes. Beyond eliciting and formulating this knowledge, no structured approach has been developed so far to help developers make the required assessment. "A number of different evaluation methodologies exist while there is no general evaluation methodology" (Hornung, 1995). Defining a process and methodology that simplifies and speeds up the evaluation of complex systems can obviously save time, decrease cost, and provide reusability.
One purpose of the study is to develop a "Common Evaluation Process" and a "Common Evaluation Model" for simplifying and speeding up the evaluation of complex systems. Another purpose of the study is to develop a fuzzy rule-based tool, the INtelligent Evaluation System (INES), which can be used for the evaluation of complex systems and synthetic environments such as simulators.
Nearly all evaluation systems have been developed according to the expertise of experts, who generally hold heuristic evaluation knowledge of the domain. Existing evaluation systems do not provide enough reusability, knowledge sharing or automated evaluation for handling this heuristic knowledge. For this reason, another aim of this study is to develop a methodology for handling the heuristic knowledge of experts from different domains, and information from different sources, for evaluation purposes. The evaluation knowledge was represented as a reference model of evaluation objectives, production rules, measures, methods and parameters.
Evaluation is a general term referring to the collection and processing of data, information and knowledge in order to compare events that have taken place against a set of normative criteria or goals.
"Evaluation is the systematic assessment of the worth or merit of some object" (Trochim, 2004).
Generally, evaluation comprises:
Evaluation objectives, indicating what is going to be evaluated (for example, air defense system performance evaluation),
The data, indicating the type of data and, where applicable, their precision and units,
The rules, measures or methods used to address the evaluation objectives,
Evaluation results, the output of executing the rules, measures, methods and parameters with the data,
A user interface to present the evaluation results to the user.
Some examples of evaluation that could be found in the literature are as follows:
Collaborative Virtual Environments Performance Evaluation (Oliveira et al., 1999),
Distributed Fuzzy Qualitative Evaluation System (DFQES) for complex distributed evaluation scenarios (Yuen & Lau, 2006)
Software Evaluation (Vlahavas et al., 1999),
Evaluation of personnel (Humpert et al., 1996)
Simulation Based Training Scenarios Evaluation (Gregory, 1998).
3 Common Evaluation Process (CEP)
In this study, Common Evaluation Process (CEP) was developed for simplifying and speeding up the evaluation of complex systems, Synthetic Environments (SEs), etc. During the development of CEP, SEEP (Synthetic Environment Evaluation Process) (Öztürk et al., 2004), SEDEP (Synthetic Environment Development and Exploitation Process) (Ford & Peyronnet, 2001), FEDEP (Federation Development and Execution Process) (IEEE Std. 1516.3, 2003), ISD (Instructional System Development) model (TRADOC Regulation 350-70, 1999) and engineering procedures were considered.
CEP has four steps, as shown in Fig. 1, and is explained below. CEP can be used iteratively: it may be initiated several times for a particular system, program, project or person, with successive iterations building on the information already available. There are also feedback loops where it may be necessary to revisit an earlier step as a result of actions performed in later ones.
The description of CEP steps is as follows:
Step 1 Define Evaluation:
The purpose of this step is to determine the evaluation definition, which includes user evaluation objectives, evaluation criteria (rules), evaluation measures, evaluation methods, evaluation parameters, questionnaires and checklists, wherever applicable. Evaluation objectives are the goals of the user for performing evaluation. User evaluation objectives can be elicited from the user needs/requirements or system requirements.
User evaluation objectives can be defined hierarchically as main goals and their sub-goals. Sub-goals are a set of goals that accomplish the main evaluation objective. In this step, the evaluation criteria (rules) related to the evaluation objective(s) should also be defined.
Fig. 1 Common Evaluation Process (CEP)
Evaluation parameters indicate the type of data and, where applicable, their precision and units used in rules and methods. Evaluation rules are criteria used to assess the collected parameters or calculated evaluation measures. Evaluation parameters are variables needed for applying rules or calculating the result of methods. The results of methods are defined as measures in order to simplify the evaluation rules and provide reusability. Evaluation methods are the algorithms for analyzing the collected parameters and/or calculating the measures used in the rules. Questionnaires and checklists are used to collect the values of related evaluation parameters or measures in some situations.
A simplified typical evaluation definition is given below:
Main evaluation objective is Air Defense (AD) System Evaluation
Evaluation sub-objective is to evaluate Hit Ratio of the AD system
Evaluation measure is Hit Ratio
Evaluation method is the ratio between the number of missiles that hit and the total number of missiles launched
Evaluation parameters are the number of missiles hit and the total number of missiles launched
Evaluation Rules for this example are
If Hit Ratio is greater than 85 then Hit Ratio is sufficient
If Hit Ratio is smaller than (or equal to) 85 then Hit Ratio is insufficient
A 100% hit ratio is not essential because the AD System usually should be able to launch a second AD missile, if necessary.
Step 2 Design Evaluation:
The purpose of this step is to design the evaluation rules, measures, methods and parameters that have to be applied, by software means, in the evaluation execution step (Step 3). Commercial or government off-the-shelf (COTS/GOTS) analysis tools and other post-processing tools are often applicable here. Specialized tools developed for a specific environment can also be used in this step.
A general representation of the evaluation definition developed above is as follows:
Evaluation Measure is Hit_Ratio
Evaluation Method is (SUM of missiles Hit / SUM of missiles launched) * 100
Respective evaluation rules are
If Hit_Ratio > 85 then Hit_Success = sufficient
If Hit_Ratio <= 85 then Hit_Success = insufficient
Note that the representation of the rules, measures, methods and parameters can change according to the development environment, such as C++, Pascal or Prolog.
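As an illustration, the measure, method and rules above could be encoded as follows. Python is used here purely for illustration; the function names are hypothetical and this is a sketch, not the actual INES representation.

```python
def hit_ratio(missiles_hit, missiles_launched):
    """Evaluation method: Hit_Ratio = (SUM of missiles hit / SUM of missiles launched) * 100."""
    if missiles_launched == 0:
        raise ValueError("at least one missile must have been launched")
    return (missiles_hit / missiles_launched) * 100

def assess_hit_ratio(missiles_hit, missiles_launched):
    """Evaluation rules: a Hit_Ratio above 85 is sufficient, otherwise insufficient."""
    return "sufficient" if hit_ratio(missiles_hit, missiles_launched) > 85 else "insufficient"
```

With the worked example of Step 3 (9 hits out of 10 launches), this yields a Hit_Ratio of 90 and the assessment "sufficient".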
Step 3 Execute Evaluation:
In this step, the rules, algorithms and other data reduction or collection methods are executed to transform output data into parameters for a given problem. Suitable questionnaires and checklists can also be used to collect evaluation parameters or measures.
Evaluation execution of the example given above can be as follows:
SUM of missiles Hit = 9,
SUM of missiles launched =10
Hit_Ratio= (SUM of missiles Hit / SUM of missiles launched) * 100 = 90
The fired evaluation rule is
If Hit Ratio is greater than 85 then Hit Ratio is sufficient
Step 4 Generate Evaluation Results:
The purpose of this step is to assess the results of execution, to generate feedback to the user and to keep the history of evaluation results.
The users of systems and synthetic environments require timely feedback on their performance for effective training and mission rehearsal (Haines, 1998).
The results are fed back to the user so that he/she can decide if the evaluation objectives have been met, or if further work is required.
Commercial or government off-the-shelf (COTS/GOTS) statistical and graphical tools are often applicable in this step. Specialized tools developed for a specific domain and environment can also be used in this step.
4 Common Evaluation Model (CEM)
Hierarchical representation is one methodology for representing complex data. It is so fundamental that virtually any present-day system model can be represented with it, and metadata in the hierarchy can tell a lot about the data itself (Rambhia, 2002). Hierarchical representation graphically shows the relationships between problem and solution and can handle more complex situations in a compact form. The Common Evaluation Model developed in this study is a knowledge representation of the CEP and is shown in Fig. 2. In this model, the CEP and the relationships between evaluation objectives, rules, measures, methods and parameters are taken into account. The hierarchical structure represents the relations between objectives, forming a hierarchy from high-level to low-level objectives. High-level objectives are the main branches of the tree, whereas the low-level objectives (sub-objectives) are stored as lower-level branches.
Fig. 2 Common Evaluation Model
Evaluation objectives describe the goals of the evaluation to be performed. These can be derived from the user's needs and user/system requirements.
Each evaluation objective has related evaluation rules, and different evaluation objectives can use the same evaluation rules in order to prevent duplication, as shown in Fig. 2 and Fig. 3. In the same way, each evaluation rule is related to evaluation measures (or parameters), and different evaluation rules can use the same evaluation measures (or parameters) in order to prevent duplication of measures (or parameters). Similarly, each evaluation measure is related to evaluation methods, and different evaluation measures can use the same evaluation method in order to prevent duplication of methods. Each evaluation method or rule has one or more related evaluation parameters, and different evaluation methods (or rules) can use the same evaluation parameter in order to prevent duplication of parameters. In the evaluation of simple systems, the evaluation knowledge can be defined using evaluation objectives, rules and parameters (shown as "OR" in Fig. 2). In the evaluation of complex systems, the evaluation knowledge can be defined using evaluation objectives, rules, measures, methods and parameters.
Fig. 3 An instance of Common Evaluation Model
As an instance of the CEM, the relationships between Evaluation Objective 1 and its related rules, measures, methods and parameters are shown in bold in Fig. 3.
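The sharing of rules, measures, methods and parameters described above can be sketched as linked records. The class and field names below are illustrative assumptions, not the actual INES data model; the point is that shared objects model the "prevent duplication" property of the CEM.

```python
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    parameters: list  # names of the parameters the method consumes

@dataclass
class Measure:
    name: str
    method: Method    # different measures may reference the same Method object

@dataclass
class Rule:
    name: str
    measures: list    # different rules may reference the same Measure objects

@dataclass
class Objective:
    name: str
    rules: list = field(default_factory=list)           # shared Rule objects prevent duplication
    sub_objectives: list = field(default_factory=list)  # hierarchy: main goal -> sub-goals

# Build the hit-ratio example from Section 3 as one branch of the model:
hit_method = Method("hit_ratio", ["missiles_hit", "missiles_launched"])
hit_measure = Measure("Hit_Ratio", hit_method)
hit_rule = Rule("Hit_Ratio > 85 -> sufficient", [hit_measure])

main = Objective("AD System Evaluation")
main.sub_objectives.append(Objective("Evaluate Hit Ratio", rules=[hit_rule]))
```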
5 Why AI Based Evaluation?
Artificial Intelligence (AI) is the branch of computer science that is concerned with the automation of intelligent behavior (Russell & Norvig, 1995). Several AI techniques have been developed and used successfully in industrial problems, such as expert systems, fuzzy logic, neural networks, genetic algorithms, intelligent agents, robotics, computer vision and natural language processing. AI technology can be beneficial for overcoming the obstacles related to evaluation that cannot be solved (or can only be solved with great difficulty) with conventional computing; these obstacles are given below:
Systems and Synthetic Environments are becoming more complex day by day, and it is difficult to evaluate them manually or using conventional computing. For example, there are no well-accepted methods for storing and manipulating simulation execution logs that can be used for evaluation (Volant, 2001).
Expertise is needed for the evaluation process, but there are very few Subject Matter Experts (SMEs) able to evaluate systems and SEs efficiently, especially for complex tasks. An SME is an individual who, by virtue of position, education, training, or experience, is expected to have greater than normal expertise or insight relative to a particular technical or operational discipline, system or process (Pace, 1999).
Fallesen reported an experiment, designed to determine differences in information usage by tactical planners, which indicated that 78% of the critical facts identified by experts were missed by non-experts (Mulgung et al., 2000). The knowledge of SMEs should be transferred to and saved in computers.
The evaluation process is ill-defined by nature and changes from SME to SME. Generally, evaluation is made via the subjective observations of an SME (Rigg, 2000).
As new information can be elicited over time, it should be possible to update the knowledge required for evaluation without changing source code.
It is important to provide evaluation results with an understanding of the source of the problem instead of only judgements on the outcome (Bass, 1998). This may help evaluators, instructors and trainees understand where to focus future training and evaluation.
There is a need to objectively evaluate systems, simulation-based training scenarios (Gregory, 1998), etc., but manual methods are generally subjective by nature.
Existing evaluation systems are domain dependent. A common evaluation methodology or system can provide reusability.
6 INtelligent Evaluation System (INES)
INES was developed according to the "Common Evaluation Model" and the requirements of evaluation tools. INES is a rule-based tool, including a specially designed Expert System and Fuzzy Logic Toolbox, to
assist the user in the evaluation definition phase by providing information on which criteria, measures, methods, parameters and questionnaires need to be used in the evaluation,
allow direct access to the captured knowledge of Subject Matter Experts; this increases confidence in the knowledge utilized in the evaluation process and improves idea and knowledge transfer among evaluators,
execute the evaluation definition and generate evaluation results,
present and save results in textual and graphical form, together with the reasoning behind the inference,
reduce the complexity associated with the evaluation,
reduce the time and cost required to accomplish evaluation tasks,
model the uncertainty of the overall evaluation and provide reasoning on linguistic variables.
Evaluators can use INES for evaluating complex systems, Synthetic Environments, training, mission rehearsal and simulation-based acquisition.
An iterative process was used to develop INES, using Borland Delphi 5 and Matlab 6.5: first a small prototype was developed, and it was then enhanced over time.
INES (INtelligent Evaluation System) is mainly constructed of three components. These are:
INES Knowledge Base (KB) for storing the domain knowledge.
INES Expert System (ES) Inference Engine for performing reasoning in accordance with the evaluation objectives defined by the user.
Fuzzy Logic (FL) for performing the overall assessment of the results generated by the ES. FL was used to model the uncertainty of the overall evaluation and provide reasoning on linguistic variables.
The INES Knowledge Base is separated from the INES Inference Engine. This is an important feature of expert systems, which makes the design and development of intelligent systems much easier. Moreover, this separation allows the user to populate and improve the level of knowledge in the knowledge base easily when more knowledge becomes available over time. The main components of INES and their relationships are shown in Fig. 4.
6.1 Knowledge Base
Detailed knowledge about the respective area (the "domain") is necessary for developing AI tools. Therefore, it is necessary to collect the required knowledge, transform the elicited knowledge into a machine-readable format, and store it in a structured way inside the so-called "Knowledge Base".
The INES KB provides knowledge for the INES Inference Engine to make selections and perform reasoning.
The Knowledge Base contains the knowledge and expertise of SMEs for performing evaluation. The Knowledge Base of INES contains knowledge about:
Evaluation objective definitions: information about the title, state and definition of evaluation objectives and their relationships to each other.
Evaluation rules: knowledge about the criteria used for assessments such as successful/unsuccessful.
Evaluation measures: knowledge about the variables used in evaluation rules.
Evaluation methods: the algorithms for calculating the measures used in the rules.
Fig. 4 The Main Components of INES
Evaluation parameters: data about the variables used in measures or methods.
The relations between evaluation objectives, rules, measures, methods and parameters are shown in Fig. 2.
In this study, "Knowledge Base Editor" was developed in order to collect required knowledge and transform elicited knowledge into a machine-readable format.
6.2 INES Inference Engine (IE)
The core of INES is its Inference Engine, also known as the control structure or rule interpreter. The INES IE handles the knowledge stored in the Knowledge Base and generates evaluation results for the user's evaluation objectives.
The INES IE uses the rules stored in the knowledge base. Finding a rule and executing it to generate knowledge or decisions is called rule firing. Various inferencing mechanisms have been developed; the three main strategies are backward chaining, forward chaining and hybrid strategies. Various strategies have also been developed for searching and rule firing. The details of these strategies can be found in (Luger & Stubblefield, 1989; Russell & Norvig, 1995).
The backward chaining strategy is used for INES's inferencing. In this strategy, the IE starts with a goal and seeks the knowledge and domain facts satisfying the goal in question. The direction of inferencing is from the goal to the related facts of the domain.
The working mechanism of the INES Inference Engine is as follows. The user enters keyword(s), and the Inference Engine searches the KB for possible evaluation objectives using a depth-first search algorithm. The matching evaluation objectives are presented in a tree-like structure together with the successor and predecessor of each match. After getting the user's selections, the INES Inference Engine analyzes the user evaluation objectives; finds the information necessary for the evaluation execution, such as criteria (rules), measures, methods, questionnaires, parameters and their relationships, according to the selected evaluation objectives; and puts the data collected from the exercise into the related methods, measures and rules in order to calculate the results of the evaluation.
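The keyword search over the objectives tree can be sketched as follows. The node layout (a dict with "title" and "children") is an assumption for illustration, not the actual INES knowledge base format; the traversal itself is a plain depth-first search, as described above.

```python
def find_objectives(node, keyword, path=()):
    """Return (path, node) pairs for every objective whose title contains the
    keyword, so that the predecessors (path) and successors (children) of each
    match can be shown in the tree-like result view."""
    matches = []
    here = path + (node["title"],)
    if keyword.lower() in node["title"].lower():
        matches.append((here, node))
    for child in node.get("children", []):  # depth-first descent into sub-objectives
        matches.extend(find_objectives(child, keyword, here))
    return matches

# Hypothetical fragment of an evaluation objectives tree:
tree = {"title": "AD System Evaluation",
        "children": [{"title": "Evaluate Hit Ratio", "children": []},
                     {"title": "Evaluate Reaction Time", "children": []}]}
results = find_objectives(tree, "hit")
# results[0][0] holds the predecessor chain of the matching objective
```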
The ES Inference Engine was developed according to the activity diagram shown in Fig. 5, where each activity represents a group of "actions" in a workflow. Brief explanations of some INES IE activities are as follows:
Read evaluation keywords: This activity receives evaluation keywords from the user in order to present the user with the possible evaluation objectives from the Evaluation Knowledge Base (KB).
Search evaluation objectives tree: This activity searches evaluation keywords in the evaluation objectives tree.
Search evaluation knowledge base for keywords: This activity searches evaluation keywords in the evaluation knowledge base.
Fig. 5 Activity diagram of INES Inference Engine
Generate evaluation objectives results: This activity generates and presents the results of the search in hierarchical form.
Edit Knowledge Base: This activity allows the user to update, modify and add knowledge to the Knowledge Base when the user cannot find his evaluation objectives in the search results.
Select evaluation objectives among results: This activity receives the user's selection of evaluation objectives.
Find evaluation rules related with the selected evaluation objectives: This activity finds the evaluation rules related to the selected evaluation objectives in the evaluation KB.
6.3 INES Fuzzy Logic Module
INES Fuzzy Logic Module consists of three main components as shown in Fig. 4:
Fuzzifier maps the INES Expert System evaluation results to fuzzy sets.
Fuzzy Inference Engine generates fuzzy outputs corresponding to the fuzzified inputs, with respect to the fuzzy rules. Fuzzy rules are in "IF...THEN..." form and map inputs to outputs. The fuzzy rules for the evaluation of the Air Defense System are shown in Table 1 and Table 2.
Defuzzifier maps the output fuzzy sets into crisp evaluation results.
In the following sub-sections, the evaluation of an Air Defense (AD) System using INES is given.
6.4 Air Defense (AD) System Evaluation
Air Defense Systems are used to protect a region from air threats, especially guided munitions such as missiles, as shown in Fig. 6.
System performance measurement and evaluation process must be viewed as an essential supportive component of effective information system functioning and improvements of the system (Dominick, 1987).
The performance of the AD system is determined by its interception capability. The AD system's capability to intercept the threat, which is generally a cruise missile, was investigated in this application.
The following evaluation objectives are identified:
Evaluate hit ratio of the AD system
Evaluate AD system reaction time
Evaluate damage level in the sheltered area
The performance of the AD System is evaluated by applying several rules that use various measures. The measures for AD System evaluation are Reaction_Time, Hit_Ratio, Destroyed_Sheltered_Area_Ratio and Overall_Performance.
Information about "Hit_Ratio" is given in Section 3. An important characteristic of the AD System is the reaction time: the total time from the detection of an object by the AD radar to the launching of the AD missile.
The method for calculating the Reaction time is as follows:
Reaction time [ms] = d + i + c + a
d: time between detection and identification of the object
i: time between identification and classification of the object
c: time between classification and allocation to a suitable AD weapon system
a: time between allocation and launching the AD missile
Typical reaction times of AD Systems range between 5 and 8 seconds. Realistic rules could be:
If the reaction time is less than or equal to 5000 ms, then the reaction time is very good.
If the reaction time is greater than 5000 ms and less than 8000 ms, then the reaction time is good.
If the reaction time is greater than or equal to 8000 ms, then the reaction time is insufficient.
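The reaction-time method and the rules above can be sketched directly. This is an illustrative encoding of the thresholds from the text, not INES code.

```python
def reaction_time_ms(d, i, c, a):
    """Reaction time [ms] = d + i + c + a, where d, i, c and a are the
    detection->identification, identification->classification,
    classification->allocation and allocation->launch times in ms."""
    return d + i + c + a

def assess_reaction_time(rt_ms):
    """Apply the three reaction-time rules (thresholds in milliseconds)."""
    if rt_ms <= 5000:
        return "very good"
    elif rt_ms < 8000:
        return "good"
    else:
        return "insufficient"
```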
Fig. 6 Air Defense System (adopted from (Ozturk, 2006))
Fig. 7 Two dimensional damage model of sheltered area and cruise missile (adopted from (Ozturk, 2006))
6.5 Evaluation of the damage level in the sheltered area
The damage caused by the cruise missile is evaluated as a function of the distance from the sheltered area at which it explodes.
The two-dimensional damage model of the sheltered area and cruise missile is shown in Fig. 7. There, R is the distance between the center of the sheltered area and the destruction center of the cruise missile, RSA is the radius of the sheltered area, RCM is the destruction radius of the cruise missile, and α and β are the angles used to calculate the circle segments of the sheltered area and the cruise missile.
The following rules are used to evaluate the Destroyed_Sheltered_Area (DSA):
If R is greater than or equal to (RSA + RCM), then the Defended Area is undamaged
If R is smaller than RSA, then the Defended Area is damaged
Else if Destroyed_Area_Ratio is less than 10%, then the damage on the Defended Area is acceptable
Else if Destroyed_Area_Ratio is greater than or equal to 10%, then the damage on the Defended Area is unacceptable
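The overlap of the two circles in Fig. 7 can be computed numerically with the standard circle-circle intersection formula. This is offered as one plausible reading of the α/β segment geometry; the paper's own DAR method is not reproduced here, and the simplified assessment below only applies the 10% criterion.

```python
import math

def destroyed_area_ratio(r, r_sa, r_cm):
    """Fraction of the sheltered area destroyed, from the two-circle model.

    r    : distance between sheltered-area center and destruction center (R)
    r_sa : radius of the sheltered area (RSA)
    r_cm : destruction radius of the cruise missile (RCM)
    """
    if r >= r_sa + r_cm:               # no overlap: sheltered area undamaged
        return 0.0
    if r <= abs(r_sa - r_cm):          # one circle entirely inside the other
        overlap = math.pi * min(r_sa, r_cm) ** 2
    else:                              # lens-shaped overlap: sum of two circle segments
        a1 = r_sa**2 * math.acos((r**2 + r_sa**2 - r_cm**2) / (2 * r * r_sa))
        a2 = r_cm**2 * math.acos((r**2 + r_cm**2 - r_sa**2) / (2 * r * r_cm))
        a3 = 0.5 * math.sqrt((-r + r_sa + r_cm) * (r + r_sa - r_cm)
                             * (r - r_sa + r_cm) * (r + r_sa + r_cm))
        overlap = a1 + a2 - a3
    return overlap / (math.pi * r_sa**2)

def assess_damage(r, r_sa, r_cm):
    """Simplified assessment: apply only the 10% acceptability criterion."""
    dar = destroyed_area_ratio(r, r_sa, r_cm)
    if dar == 0.0:
        return "undamaged"
    return "acceptable" if dar < 0.10 else "unacceptable"
```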
Fig. 8 Screenshot of INES ES for AD System evaluation
The Destroyed_Area_Ratio (DAR) is calculated from the circle-segment geometry shown in Fig. 7.
An example screenshot of INES showing the AD System evaluation result and the reasoning for the "Destroyed Sheltered Area" evaluation objective is given in Fig. 8.
Overall Performance is calculated as a function of Reaction_Time, Hit_Ratio and Destroyed_Area_Ratio.
Note that this model assumed that RCM and RDR (the cruise missile destroy range) are equal to each other (see Fig. 9). When the height of the cruise missile explosion is also taken into account, the effective radius of destruction follows from the geometry of Fig. 9 as RCM^2 = RDR^2 - h^2, where
RDR: cruise missile destroy range
h: altitude of detonation
RCM: effective radius of destruction
Fig. 9 The effective destruction in the sheltered area (adopted from (Ozturk, 2006))
INES is capable of comparing the evaluation results graphically.
6.6 Overall Evaluation of AD System
The overall evaluation of AD System was done in two ways:
Using INES Expert System
Using INES Fuzzy Logic
Using INES Expert System: The overall evaluation was calculated as a function of "reaction time", "hit ratio" and "damaged ratio of the sheltered area".
Using INES Fuzzy Logic: The results of the ES are assessed by the Fuzzy Logic (FL) Module by passing parameters from the INES ES to the INES FL.
Table 1 The rules of INES Fuzzy Logic for AD System Evaluation (If Hit Ratio is bad)
Table 2 The rules of INES Fuzzy Logic for AD System Evaluation (If Hit Ratio is good)
Fig. 10 Overall results of INES ES and Fuzzy Logic Module for different membership functions
The rules of INES Fuzzy Logic for AD System evaluation are shown in Table 1 and Table 2, with "Hit Ratio", "Reaction Time" and "Destroyed Area" as inputs and "Overall Performance" as output. For example, the meaning of the table for the underlined and bold cell is
If (Hit Ratio is bad) AND (Destroyed Area is little) AND (Reaction Time is Very Good), then (Overall Performance is very good).
The meaning of the table for other cells is defined in a similar way. For defining membership functions, several methods, such as common sense, neural networks and genetic algorithms, can be used. In this study, membership functions were defined according to the nature of the problem, expert knowledge about the domain and common sense. Several types of membership functions (Gaussian, triangle and trapezoid) were tried and compared with the ES results. There is not much difference between the overall performance results generated by INES ES and Fuzzy Logic for Gaussian, triangle and trapezoid membership functions for different values of "Reaction Time", "Hit Ratio" and "Destroyed Sheltered Area", as shown in Fig. 10.
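The three membership-function shapes compared in the study have simple closed forms, sketched below. The shapes are standard; the actual parameter values used in INES are not reproduced here.

```python
import math

def gaussian(x, c, sigma):
    """Gaussian membership: peak 1.0 at center c, spread sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def triangle(x, a, b, c):
    """Triangular membership: feet at a and c, peak 1.0 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over a..b, flat 1.0 over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)
```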
The "Gaussian" membership function for Overall Performance is shown in Fig. 11.
Fig. 11 AD System Fuzzy Evaluation output variable "Performance"
Table 3 The Comparison of INES with Some Similar Tools
Table 3 compares INES with the resembling previous studies along four dimensions. Purpose: evaluation of systems and Synthetic Environments (INES); evaluation of Synthetic Environments (EDST+EDT+EET); automatic analysis and assessment of trainee and team performance; evaluation of pilot performance; software problem solving and software attribute assessment. Knowledge representation: evaluation objectives, measures, criteria, methods, production and fuzzy rules (INES); evaluation objectives, criteria, methods and rules; scenario-specific actions and action-related judgement rules; pilot evaluation objectives, indexes and rules; multiple criteria and software attributes. Reasoning: rule and fuzzy logic based (INES); action-related judgement rules. Other compared features: updating without source code change, explanation of results, maintenance and update, and whether the inference engine and knowledge bases are separated or control is integrated with information.
Fig. 12 Overall result of AD System evaluation
The overall evaluation result of the AD System using fuzzy logic is shown in Fig. 12. All parts of the fuzzy inference process are displayed simultaneously. There are 18 rows, and each row shows the fuzzy inference process of one rule. For example, Row 11 shows the following rule:
If (Reaction Time is average) AND (Hit Ratio is good) AND (Destroyed Area is little), then (Overall Performance is very good).
The bottom-right rectangle shows the output for this instance, which is the combination of the outputs of the rules. Centroid calculation, which is one of the most popular defuzzification methods, was used for generating the crisp output. This method returns the center of the area under the output curve.
6.7 Comparison of INES with Similar Tools
Similar previous studies found in the literature are as follows:
SIMULTAAN PASS is a sub-system of the SIMULTAAN project that advises the Scenario Manager in choosing the best scenario leading to the achievement of the training objectives (Arend & Jansen, 2000).
Performance Evaluation System was developed in the WaSiF project and was used for pilot performance evaluation (Öztemel et al., 2003).
EDST+EDT+EET was developed in the RTP11.13 project and can be used for Synthetic Environment evaluation (Lemmers et al., 2003).
ESSE (Expert System for Software Evaluation) (Vlahavas, 1999).
The comparison of INES with similar previous tools is shown in Table 3.
INES has the following advantages with respect to similar tools and manual evaluation:
INES can easily perform automatic evaluation of complex systems and Synthetic Environments according to the INES ES Knowledge Base.
Fuzzy Logic (FL) was integrated into INES for performing the overall assessment of the results generated by the INES ES at the highest level. FL was used to model the uncertainty of the overall evaluation and provide reasoning on linguistic variables.
INES can be used to handle the heuristic knowledge of experts from different domains and information from different sources about evaluation.
INES explains how the system reaches evaluation results.
INES allows the user to add new knowledge and to modify existing knowledge without changing source code.
INES can reduce the time required to accomplish evaluation tasks.
INES can be used to access the captured knowledge of SMEs. In this way, each expert can benefit from the knowledge of others.
In this study, it was shown that AI technology can be beneficial in decreasing the evaluation cost and evaluation time of complex systems. AI can also provide other functionalities for evaluation purposes, such as explaining the reasoning behind the inference, updating the required knowledge without changing source code, and simplifying the evaluation process.
It was shown that expert systems and/or hybrid expert-fuzzy systems can be used to overcome the obstacles related to evaluation that cannot be solved (or can only be partially solved) by conventional computing and manual evaluation.
The Common Evaluation Process (CEP), which can be used for the evaluation of complex systems, was developed. CEP is a domain-independent process for evaluation purposes. The SEDEP, FEDEP, STEP and SAT processes, which include evaluation steps, were investigated and taken into account during the development of CEP.
A methodology was developed to handle the heuristic knowledge of experts from different domains, and information from different sources, for evaluation purposes. The knowledge was represented as a reference model of evaluation objectives, production rules, measures, methods and parameters.
The Common Evaluation Model (CEM), a knowledge representation of the CEP, was developed. CEM shows the relations between evaluation objectives, rules, measures, methods and parameters. Using the "Reference Model of Evaluation Objectives" and the "Common Evaluation Model" decreases the number of evaluation rules necessary to perform evaluation for the related application. CEM also simplifies the representation of evaluation knowledge.
A hybrid expert-fuzzy system, called INES (INtelligent Evaluation System), was developed based on the "Common Evaluation Process", the "Common Evaluation Model" and evaluation needs. Before the development of INES, AI techniques including expert systems, fuzzy logic, neural networks, genetic algorithms, intelligent agents and conventional programming were investigated and compared with respect to achieving the high-level requirements of evaluation systems.
Fuzzy Logic was utilized for evaluation purposes and integrated into the INES expert system. As the evaluation includes uncertainty in some aspects, Fuzzy Logic was used for reasoning. However, it was realized that fuzzy logic is better suited to high-level (abstract) evaluation than to low-level evaluation. In other words, fuzzy logic can be more beneficial, and more easily used, for the overall evaluation of the main objective than for all aspects of the evaluation. Many parameters are required for evaluation, and writing a large number of fuzzy rules for these parameters is not efficient. As more rules are needed for complex systems, it becomes increasingly difficult to relate these rules to the system; the capability to relate the rules typically diminishes when the number of rules exceeds approximately 15 (Lakhmi & Martin, 1998). Therefore, the fuzzy system was used at an abstract level.
INES can be used in real-time applications, because its evaluation time is below 1 s (in fact between 0.4 and 0.7 s, and it can be decreased with optimization), which is sufficient for many systems' real-time evaluation requirements.
INES was implemented successfully for the evaluation of an Air Defense System, a complex system used to protect a region from air threats.
This paper is based on the PhD study entitled "Hybrid Expert System Approach For Evaluation Systems" and the project "Realising the Potential of European Networked Simulation" (Dumetz, 2002), which was carried out in Common European Priority Area (CEPA) 11 of the Western European Armament Group (WEAG).
The author wishes to thank Prof. Dr. Ercan Öztemel, Burak S. Soyer, Savaş Öztürk, Ali Görçin and Dr. Ali Gürbüz from Marmara Research Center, Prof. Dr. Coşkun Sönmez from Istanbul Technical University, Col. Ziya Paligu from Turkish MoD and Kemal Kiran for their valuable support.