# Simulated Ant Colony Optimization in Parameter Estimation



Until the late 1960s, the attention of scientists and engineers was directed towards the hardware reliability of mechanical and electronic systems. From the 1970s, with the continuous growth of software, applications became the center of many studies. The possibility of creating complex functionality at a better cost/price ratio than hardware led to a wide range of software applications. Today, computers are used in everyday life, in industry, in banks, and in large systems such as power distribution, traffic, and water supply. Computers are used even in life-critical applications in hospitals; they control air traffic and airplane flight, where failures could lead to catastrophes and the loss of many lives. On the one hand, there is our increasing dependence on software; on the other hand, software systems are becoming more and more complex and thus harder to develop and maintain. Software quality is becoming crucial from the aspects of reliability, the safety of human lives, and security. Specification, evaluation and verification of this quality characteristic are important issues for both developers and users of a system.

This paper is divided into three sections. Section I discusses software reliability and software reliability models. Section II discusses existing parameter estimation methods, and Section III discusses the Ant Colony Optimization method and the modified Simulated Ant Colony Optimization method.

## Software Reliability

Software Reliability is the probability of failure-free operation for a specified time in a specified environment for a given purpose. This means quite different things depending on the system and on the users of that system. In other words, reliability is a measure of how well system users think it provides the services they require. It cannot be defined objectively, since reliability measurements quoted out of context are not meaningful. Its definition requires an operational profile [1], i.e., the expected pattern of software usage. It must also consider fault consequences: not all faults are equally serious, and a system is often perceived as more unreliable when it contains more serious faults.


## 1.1 Software Reliability Models

Software reliability models are statistical models which can be used to make predictions about a software system's failure rate, given the failure history of the system. The models make assumptions about the fault discovery and removal process. These assumptions determine the form of the model and the meaning of the model's parameters.

Software reliability models are an important tool in quality management and release planning. There is a large number of different models, which often exhibit strengths in different areas. All of the models rest on assumptions about the software failure process. Before applying a model, we should take care in determining whether the model, including its underlying assumptions, is appropriate for the situation at hand. Some common assumptions of software reliability models are [2]:

1. Program failures occur independently.

2. During testing, the software is operated in a manner similar to the anticipated operational usage.

3. The set of inputs per test run is randomly selected.

4. All failures are observed.

5. A detected fault is immediately corrected.

6. The total number of faults in the program is finite.

7. No fault occurs at the start of the test.

8. No new faults are introduced during the fault removal process.

9. The failure rate decreases with test time.

10. The failure rate is proportional to the number of remaining faults.

11. Reliability is a function of the number of remaining faults.

12. Test effort is used as the basis for the failure rate.

## 1.2 Basic Reliability Models

The basic and logarithmic models are two fundamental, widely used reliability models. In both, the failure intensity λ, taken as the measure of reliability, is expressed as a function of execution time τ, i.e., λ(τ). The intuition behind the models is that as the cumulative failure count increases, the failure intensity decreases. In the basic model, the decrease in failure intensity produced by observing and fixing a failure is constant. In the logarithmic model, each such decrease is smaller than the previous one.

## Modeling Parameters

- μ: the mean number of failures observed

- λ: the failure intensity

- λ0: the initial failure intensity observed at the beginning of system-level testing (ST)

- ν0: the total number of system failures expected to be observed over infinite time

- θ: the decrease in failure intensity in the logarithmic model

Figure 1 Two Basic Software Reliability Models
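Assuming the standard Musa basic and Musa-Okumoto logarithmic forms for the parameters listed above, namely λ(μ) = λ0(1 - μ/ν0) and λ(μ) = λ0·e^(-θμ), the two intensity curves can be sketched as follows (the parameter values are illustrative only):

```python
import math

def basic_intensity(mu, lambda0, nu0):
    # Basic model: each observed-and-fixed failure lowers the
    # failure intensity by the same constant amount lambda0/nu0
    return lambda0 * (1.0 - mu / nu0)

def log_intensity(mu, lambda0, theta):
    # Logarithmic model: each successive fix lowers the intensity
    # by a smaller amount than the previous one
    return lambda0 * math.exp(-theta * mu)

# Example: lambda0 = 10 failures per unit time, nu0 = 100 expected failures
for mu in (0.0, 25.0, 50.0):
    print(mu, basic_intensity(mu, 10.0, 100.0), log_intensity(mu, 10.0, 0.02))
```

Both curves start at λ0 and decay as failures accumulate; the basic model reaches zero at μ = ν0, while the logarithmic model decays exponentially and never reaches zero.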

## 1.3 Software Reliability Model Framework

Software reliability models can be classified in two ways: by failure history and by data requirements. Based on failure history, there are TBF, FC, FS and IDB models; based on data requirements, there are empirical and analytical models [3].

- Level 0: Software Reliability Models
- Level 1: Based on Failure History; Based on Data Requirements
- Level 2: TBF, FC, FS, IDB (failure history); Empirical Models, Analytical Models (data requirements)
- Level 3: Static Models, Dynamic Models
- Level 4: Error Domain Models, Data Domain Models
- Level 5: Discrete Time Models, Continuous Time Models
- Level 6: Random Time Models, Fixed Time Models, IDT Models, IIE Models

Figure 2: Framework of Software Reliability Models

- TBF → Time Between Failures model: the time between the (i-1)th and ith failures is assumed to be a random variable.

- FC → Fault Count model: the random variable of interest is the number of faults occurring during specified time intervals.

- FS → Fault Seeding model: a known number of faults is seeded into the program; the program is then tested, and the numbers of seeded and indigenous faults observed are counted.

- IDB → Input Domain Based model: a set of test cases is generated from an input distribution covering the operational profile.

- IDT → Independently Distributed inter-failure Times model: the inter-failure times are assumed to have identical and independent probabilistic behavior.

- IIE → Identical Error Behavior model.

## Parameter Estimation Methods

The best known methods of parameter estimation are the Maximum Likelihood and Least Squares methods. Many other methods are also available to address the problem of estimation accuracy.

## 2.1 Maximum Likelihood Method

The most popular method for estimating the parameters is the Maximum Likelihood method, in which the parameters are estimated by solving the maximum likelihood equations. Unfortunately, these equations do not always have a solution. For each model we should derive conditions which, if satisfied, assure a unique solution of the maximum likelihood equations. When the software failure data do not satisfy these conditions, or when the Maximum Likelihood method appears unstable, a Recursive Estimation Algorithm, i.e., a recursive application of the Maximum Likelihood method, can be used to estimate the unknown model parameters.
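As a minimal illustration of the likelihood equations, consider the simplest case of a constant failure rate with exponentially distributed inter-failure times, where the maximum likelihood equation has a closed-form solution; the data below are made up, and for most SRGMs the equations must instead be solved numerically:

```python
# Maximum likelihood for a constant failure rate (exponential
# inter-failure times). The likelihood L(lam) = lam^n * exp(-lam * sum(t))
# is maximised at lam_hat = n / sum(t).
times = [12.0, 7.5, 31.0, 4.2, 19.3]   # hypothetical inter-failure times

n = len(times)
lam_hat = n / sum(times)               # estimated failures per unit time
print(lam_hat)
```

For SRGMs such as the G-O model, the corresponding equations are nonlinear in the parameters and generally admit no closed form, which is what motivates the search-based estimators discussed later.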

## 2.2 Least Squares Method

Least squares estimation methods evaluate the set of parameters with the highest probability of being correct for a given set of experimental data. In this method, parameters are estimated by minimizing the squared discrepancies between the observed data and their expected values. Compared to ordinary sampling plans, a first-failure-censored sampling plan has the advantage of saving both test time and resources.
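As a small sketch of the least squares principle applied to an SRGM: if b is held fixed, the G-O mean value function μ(t) = a(1 - e^(-bt)) is linear in a, so minimizing the squared discrepancies has the closed form a = Σ m_i·f_i / Σ f_i², with f_i = 1 - e^(-b·t_i). The data and parameter values below are illustrative assumptions:

```python
import math

# Closed-form least squares for the scale parameter a of the G-O model
# with the shape parameter b fixed. Data are made up for illustration.
t = [1.0, 2.0, 3.0, 4.0]
m = [9.0, 16.0, 21.0, 24.0]   # cumulative failures observed by time t
b = 0.4

f = [1.0 - math.exp(-b * ti) for ti in t]          # model basis values
a_hat = sum(mi * fi for mi, fi in zip(m, f)) / sum(fi * fi for fi in f)
print(a_hat)
```

When both a and b are unknown the problem becomes nonlinear, which is the setting the ant colony approach below targets.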

## Ant Colony Optimization

Ant Colony Optimization (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. In this work it operates under the least squares principle. It is a member of the ant colony algorithms family of swarm intelligence methods and belongs to the class of metaheuristic optimizations. The algorithm was first used to search for an optimal path in a graph, inspired by the behavior of ants seeking a path between their colony and a source of food. The original idea has since been diversified to solve a wider class of numerical problems, and many variants have emerged based on various aspects of ant behavior.

## 3.1 Origin of ACO

Initially, ants wander randomly, and upon finding food they return to their colony while laying down pheromone trails. If other ants find such a trail, they are likely to stop travelling at random and instead follow it, reinforcing it if they eventually find food. Over time, however, the pheromone trail starts to evaporate, reducing its attractive strength. The more time it takes for an ant to travel down a path and back again, the more time the pheromones have to evaporate. A short path gets marched over more frequently, so the pheromone density becomes higher on shorter paths than on longer ones. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution: if there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and the exploration of the solution space would be constrained. Thus, when one ant finds a good (short) path from the colony to a food source, other ants are more likely to follow that single path. The idea of the ACO algorithm is to mimic this behavior with simulated ants walking around a graph that represents the problem to be solved.


Figure 3 Ant Colony
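The two-branch ("double bridge") behavior described above can be sketched with a minimal, deterministic simulation. The setup, parameter values and variable names below are illustrative assumptions, not part of any particular ACO variant: the colony splits between a short and a long path in proportion to pheromone, deposits pheromone inversely proportional to path length, and pheromone evaporates each step.

```python
# Minimal double-bridge sketch: pheromone concentrates on the short branch.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}   # start unbiased
rho = 0.1                                  # evaporation rate
n_ants = 10

for _ in range(100):
    total = pheromone["short"] + pheromone["long"]
    for path, length in lengths.items():
        share = n_ants * pheromone[path] / total   # ants choosing this path
        deposit = share / length                   # shorter path => more deposit
        pheromone[path] = (1 - rho) * pheromone[path] + deposit

print(pheromone)
```

Because the short branch receives a larger deposit per ant, its pheromone level pulls ever further ahead, while the evaporation term keeps both values bounded.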

## 3.2 Parameter Estimation method based on Simulated Ant Colony Optimization

Changes must be made to the ant colony algorithm, which was originally used to search discrete networks for the best path, to suit the parameter estimation problem. In this paper, we improve the algorithm based on the characteristics of SRGMs [6] and derive a new Simulated Ant Colony Optimization from the existing Ant Colony Optimization. The solution space is built from failure data taken from the Musa data set on the DACS web site. The steps of the algorithm are explained below.

Step 1: The parameter estimation problem is converted into an optimization problem with the help of the least squares principle. The resulting objective function is

Min J = sqrt( Σ_{t=1..T} ( m(t) - μ(t) )² )

where J (the fitness) is the Euclidean distance between the actual failure number and the predicted failure number, t indexes the times at which failures are observed, T is the total time the software runs, m(t) is the total number of failures discovered by time t, and μ(t) is the failure number predicted by the SRGM, i.e., the expected number of failures by time t.
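The objective J of Step 1 can be written as a small function. The G-O mean value form μ(t) = a(1 - e^(-bt)) and the numbers below are illustrative assumptions, not data from the paper:

```python
import math

def fitness(observed, a, b):
    # Euclidean distance between observed cumulative failures m(t)
    # and the G-O prediction mu(t) = a * (1 - exp(-b t)), t = 1..T
    j_sq = 0.0
    for t, m_t in enumerate(observed, start=1):
        mu_t = a * (1.0 - math.exp(-b * t))
        j_sq += (m_t - mu_t) ** 2
    return math.sqrt(j_sq)

observed = [9.0, 16.0, 21.0, 24.0]   # made-up cumulative failure counts
print(fitness(observed, a=30.0, b=0.4))
```

A lower J means the candidate parameter pair (a, b) predicts the observed failure history more closely, which is exactly what the ants compete on.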

Step 2: The initial position of each ant is set and the solution space is divided. The parameters a and b take values in the ranges 0 < a < a_max and 0 < b < b_max respectively. This value domain is divided into n1 × n2 subregions, and m ants are placed randomly in these subregions.

Step 3: As long as the number of searches is less than N_max, each ant automatically searches for the best path. The probability that ant k chooses local rather than global search is

P_k = r_k / ι_k, where r_k = ι_k - ι_best

and ι_k is the pheromone on the current path of ant k. If P_k < P_0, the ant chooses local search; otherwise, global search is chosen. The new fitness K is calculated and compared with the current fitness J; if it is better, the ant's position is updated, otherwise it is not. The pheromone is updated as

ι_k = (1 - ρ) ι_k + J_k

where ρ is the volatilization (evaporation) rate of the pheromone. The following pseudo code summarizes the algorithm.

## Simulated Ant Colony Optimization

```
Begin
    Set the initial value of α    // number of equal intervals to split the solution space into
    Set the initial value of β    // number of solution-space intervals to discard at each stage
    Divide the solution space into α equal intervals   [ Nsum = SolutionSpace / α ]
    Set count = 1
    While Nsum > 1
    {
        Choose an ant to search in each equal interval
        Calculate fitness Jk as in (1)
        Sort the solution-space intervals based on Jk
        Discard the β worst solution-space intervals
        Nsum  = Nsum - 1
        count = count + 1
        If Nsum = 1
        {
            Nsum  = (count / 2) + 1
            count = 0
            Exit(0)
        }
    }
    Output the position of the ant in the final solution space which has the best-fit result
End
```
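As a concrete illustration, the interval-elimination idea above can be sketched in Python for a single parameter: estimating b of the G-O model μ(t) = a(1 - e^(-bt)) with a held fixed, placing one "ant" at the midpoint of each surviving interval and discarding the β worst intervals each round. The reduction to one variable, the synthetic noise-free data, and all names and default values (`interval_search`, `alpha`, `beta`) are assumptions for illustration, not the paper's implementation:

```python
import math

def fitness(b, a, observed):
    # Least squares fitness J for the G-O model with a fixed
    return math.sqrt(sum((m - a * (1.0 - math.exp(-b * t))) ** 2
                         for t, m in enumerate(observed, start=1)))

def interval_search(lo, hi, a, observed, alpha=200, beta=20):
    # Split [lo, hi] into alpha equal intervals, then repeatedly
    # evaluate one ant per interval (at the midpoint) and discard
    # the beta intervals with the worst fitness.
    width = (hi - lo) / alpha
    intervals = [(lo + i * width, lo + (i + 1) * width) for i in range(alpha)]
    while len(intervals) > 1:
        intervals.sort(key=lambda iv: fitness((iv[0] + iv[1]) / 2, a, observed))
        intervals = intervals[:max(1, len(intervals) - beta)]  # drop beta worst
    s, e = intervals[0]
    return (s + e) / 2   # best surviving position

# Synthetic, noise-free data generated from a = 30, b = 0.4
a_true, b_true = 30.0, 0.4
data = [a_true * (1.0 - math.exp(-b_true * t)) for t in range(1, 6)]
b_hat = interval_search(0.01, 2.0, a_true, data)
print(b_hat)
```

On noise-free data the surviving interval closes in on the generating value of b; the two-parameter case in the paper works the same way over a grid of (a, b) subregions.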

## 3.3 Comparative Performance of ACO and Simulated Modified ACO

For this parameter estimation method, six classic software reliability models were selected to evaluate estimation accuracy: the G-O model, the Delayed S-Shaped model, the Weibull model, the M-O model, the Jelinski-Moranda model and the Duane model.

| Type | SRGM | μ(t) |
| --- | --- | --- |
| Finite Failure Poisson | G-O Model | a(1 - e^(-bt)) |
| Finite Failure Poisson | Delayed S-Shaped Model | a(1 - (1 + bt)e^(-bt)) |
| Finite Failure Binomial | Weibull Model | a(1 - (e^(-βt))^α) |
| Infinite Failure Poisson | M-O Model | a ln(1 + bt) |
| Binomial | J-M Model | N(1 - e^(-φt)) |
| Infinite Failure | Duane Model | αt^β |

Table 1. Model Form of Software Reliability Growth Models
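For reference, the μ(t) forms in Table 1 can be written as plain functions. The parameter names follow the table (a, b, α, β, N, φ), and the values used in the example call are illustrative only:

```python
import math

# Mean value functions mu(t) of the six SRGMs from Table 1
models = {
    "G-O":       lambda t, a, b: a * (1.0 - math.exp(-b * t)),
    "Delayed S": lambda t, a, b: a * (1.0 - (1.0 + b * t) * math.exp(-b * t)),
    "Weibull":   lambda t, a, al, be: a * (1.0 - math.exp(-be * t) ** al),
    "M-O":       lambda t, a, b: a * math.log(1.0 + b * t),
    "J-M":       lambda t, N, phi: N * (1.0 - math.exp(-phi * t)),
    "Duane":     lambda t, al, be: al * t ** be,
}

# Example: expected failures by t = 10 under the G-O model, a = 30, b = 0.4
print(models["G-O"](10.0, 30.0, 0.4))
```

Each function maps execution time t to the expected cumulative failure count, which is the μ(t) that the fitness J compares against the observed m(t).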

Failure data were taken from the DACS web site to evaluate the estimation accuracy of the software reliability models. Compared to the existing Ant Colony Optimization, the proposed method gives a result that is better by more than 10%, estimation accuracy being better at lower values.

Figure 4 Estimation Accuracy by ACO and Modified ACO for Project 1

Figure 5 Estimation Accuracy by ACO and Modified ACO for Project 2

Figure 6 Estimation Accuracy by ACO and Modified ACO for Project 40

Figure 7 Estimation Accuracy by ACO and Modified ACO for Project 27

Figure 8 Estimation Accuracy by ACO and Modified ACO for Project 14C

Figure 9 Estimation Accuracy by ACO and Modified ACO for Project SS1

## 3.5 Observations

The performance of the algorithm depends on the size of the solution space and on the parameter α. If the solution space involves two variables whose values satisfy 0 < a < Na and 0 < b < Nb, then its size can be written as N = Na × Nb. The time complexity can be written as

T = N/α + N/α + N/α + ……

so each round costs T = N(1/α) = N/α.

The space complexity can be written as

O(m + (m - α))

where m is the number of ants and α is the number of intervals.

## Advantages of the Proposed Method

The procedure can be used for any kind of growth model, and it gives comparatively better estimation accuracy than the existing Ant Colony Optimization.

## Conclusion

An important aspect of reliability engineering is estimating the parameters of a Software Reliability Growth Model. Many of these parameters are difficult to estimate because of their nonlinear nature. In this paper we have presented a software reliability model framework and reviewed several parameter estimation methods, especially Ant Colony Optimization and the modified Simulated Ant Colony Optimization method. Based on the results, the modified Simulated Ant Colony Optimization method gives better estimation accuracy.