The significance of the software testing process and its effect on software quality cannot be overstated. Software testing is a fundamental component of software quality assurance and represents a review of specification, design, and coding. The greater visibility of software systems and the cost associated with software failure are motivating factors for planned, thorough testing.
A number of rules that act as testing objectives are:
* Testing is a process of executing a program with the aim of finding errors.
* A good test case is one that has a high probability of finding an as-yet-undiscovered error.
* A successful test case uncovers a new error.
Software maintenance is an activity that includes enhancements, error corrections, optimization, and deletion of obsolete capabilities. These modifications may cause the software to work incorrectly and may affect other parts of the software. As developers maintain a software system, they periodically regression test it, hoping to find errors caused by their changes.
To do this, developers often create an initial test suite, and then reuse it for regression testing. Regression testing is an expensive maintenance process directed at validating modified software. Regression Test Selection techniques attempt to reduce the cost of regression testing by selecting tests from a program's existing test suite.
The simplest regression testing method, retest all, is a conventional method in which all the tests in the existing test suite are re-run. This method, however, is very expensive and may require an unacceptable amount of time to execute all tests in the suite. An alternative method, regression test selection, reruns only a subset of the initial test suite: instead of rerunning the whole suite, we select a part of it to rerun, which pays off when the cost of selection is less than the cost of running the tests that selection allows us to exclude. Of course, this approach has drawbacks as well: test selection techniques can have significant costs, and they can discard tests that would have disclosed faults, possibly reducing fault detection effectiveness.
To further reduce the time and cost of the testing process, another approach, test case prioritization, can be beneficial for engineers and customers.
In test case prioritization techniques, test cases are executed in an order chosen to maximize some objective function, such as the rate of fault detection.
In section 2 of this paper, we describe different types of regression test selection techniques, discuss the various categories of these techniques pointed out by various authors, and then move into the details of selecting and prioritizing test cases for regression testing.
In that section, we also describe several techniques for prioritizing test cases and evaluate, according to various authors, their ability to improve the rate of fault detection.
In the next section, we describe in particular the regression test selection and test case prioritization problems. Subsequent sections present our analysis and conclusions.
2. Regression testing
During the software development life cycle, regression testing may start in the development phase of a system, after the detection and correction of errors in a program. Many modifications may occur during the maintenance phase, where the software system is corrected, updated, and fine-tuned.
There are three types of modifications, each arising from different types of maintenance. According to , corrective maintenance, commonly called "fixes", involves correcting software failures, performance failures, and implementation failures in order to keep the system working properly. Adapting the system in response to changing data requirements or processing environments constitutes adaptive maintenance. Finally, perfective maintenance covers any enhancements to improve the system processing efficiency or maintainability.
Based on whether the specification is modified, authors identify two types of regression testing: progressive regression testing involves a modified specification, whereas in corrective regression testing the specification does not change.
Corrective regression testing:
* Specification is not changed
* Involves minor modifications to the code (e.g., adding and deleting statements)
* Usually done during development and corrective maintenance
* Many test cases can be reused
* Invoked at irregular intervals

Progressive regression testing:
* Specification is changed
* Involves major modifications (e.g., adding and deleting modules)
* Usually done during adaptive and perfective maintenance
* Fewer test cases can be reused
* Invoked at regular intervals

Table 1: Differences between corrective and progressive regression testing
According to , table 1 lists the major differences between corrective and progressive regression testing.
Regression testing is defined  as "the process of retesting the modified parts of the software and ensuring that no new errors have been introduced into previously tested code".
There are various regression testing techniques given by various researchers: (I) retest all, (II) regression test selection, and (III) test case prioritization. The retest-all technique reuses all tests existing in the test suite and is very expensive compared to the other techniques. In this report our main focus is on regression test selection and test case prioritization.
Let P be a procedure or program, let P' be a modified version of P, and let T be a test suite for P. A typical regression test proceeds as follows:
1. Select T' ⊆ T, a set of tests to execute on P'.
2. Test P' with T', establishing the correctness of P' with respect to T'.
3. If necessary, create T'', a set of new functional or structural tests for P'.
4. Test P' with T'', establishing the correctness of P' with respect to T''.
5. Create T''', a new test suite and test history for P', from T, T', and T''.
Although each of these steps involves important problems, in this report we restrict our attention to step 1 which involves the Regression Test Selection problem.
2.1. REGRESSION TEST SELECTION
The Regression Test Selection technique is less expensive compared to the retest-all technique. Regression Test Selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program.
A variety of regression test selection techniques have been described in the research literature. Authors  describe several families of techniques; we consider the five approaches most often used in practice.
1) Minimization Techniques:
These techniques attempt to select minimal sets of tests from T that yield coverage of modified or affected portions of P. One such technique requires that every program statement added to or modified for P' be executed (if possible) by at least one test in T.
2) Safe Techniques:
These techniques select, under certain conditions, every test in T that can expose one or more faults in P'. One such technique selects every test in T that, when executed on P, exercised at least one statement that has been deleted from P, or at least one statement that is new in or modified for P'.
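As an illustration, the safe selection rule just described can be sketched in a few lines. The data shapes below (per-test statement coverage recorded on P, and a set of changed statement identifiers) are illustrative assumptions, not the representation used by any particular tool.

```python
def select_safe(coverage, changed):
    """Select every test that, when executed on P, exercised at least one
    statement that was deleted from P or is new in / modified for P'.

    coverage: dict mapping test name -> set of statement ids it executed on P
    changed:  set of statement ids affected by the modification
    """
    return {t for t, stmts in coverage.items() if stmts & changed}

# Example: only t1 and t3 exercised the affected statements {4, 9}.
coverage = {"t1": {1, 2, 4}, "t2": {1, 3}, "t3": {2, 9}}
selected = select_safe(coverage, {4, 9})
```

A rule of this shape is safe with respect to the recorded coverage: every test that could traverse a changed statement is retained.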
3) Dataflow-Coverage-Based Techniques:
These techniques select tests that exercise data interactions that have been affected by modifications. One such technique selects every test in T that, when executed on P, exercised at least one definition-use pair that has been deleted from P, or at least one definition-use pair that is new in or has been modified for P'.
4) Ad Hoc / Random Techniques:
When time constraints prohibit the use of a retest-all approach, but no test selection tool is available, developers often select tests based on "hunches", or loose associations of tests with functionality. One simple technique randomly selects a predetermined number of tests from T.
5) Retest-All Technique:
This technique reuses all existing tests. To test P', the technique "selects" all tests in T.
According to , Test Selection techniques are broadly classified into three categories.
1) Coverage techniques:
These consider the test coverage criteria. These find coverable program parts that have been modified and select test cases that work on these parts.
2) Minimization techniques:
These are similar to coverage techniques, except that they select a minimal set of test cases.
3) Safe techniques:
These do not focus on coverage criteria; instead, they select all test cases that produce different output with the modified program as compared to its original version.
Regression test selection identifies the negative impact of modifications applied to software artifacts throughout their life cycle. In traditional approaches, code is modified directly, so code-based selective regression testing is used to identify negative impact of modifications. In model-centric approaches, modifications are first done to models, rather than to code. Consequently, the negative impact to software quality should be identified by means of selective model-based regression testing. To date, most automated model based testing approaches focus primarily on automating test generation, execution, and evaluation, while support for model-based regression test selection is limited .
Code-based regression test selection techniques assume specification immutability, while model-based techniques select abstract test cases based on modifications to the model. Thus, in model-based Regression Test Selection techniques, the existing test suite can be classified into the following three main types:
1) Reusable test cases:
Reusable test cases are test cases from the original test suite that are not obsolete or re-testable. Hence, these test cases do not need to be re-executed.
2) Re-testable test cases:
Test cases are re-testable if they are non-obsolete (model-based) test cases and they traverse modified model elements.
3) Obsolete test cases:
Test cases are obsolete if their input has been modified.
Regression Test Selection techniques may create new test cases that test the program for areas which are not covered by the existing test cases.
Model-based regression test suite selection can utilize Unified Modeling Language (UML) Use Case Activity Diagrams (UCAD). Activity diagrams are commonly employed as a graphical representation of the behavioral activities of a software system; a diagram represents the functional behavior of a given use case. With behavior slicing, we can build an activity diagram that yields qualitative regression tests: each use case is divided into a set of 'units of behavior', where each unit of behavior represents a user action.
An activity diagram typically contains six kinds of nodes:
1. Initial node
2. User Action node
3. System Processing node
4. System Output node
5. Condition node
6. Final node
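The six node kinds above can be captured in a small data model; the class and field names below are illustrative assumptions, not part of the UML standard or any specific UCAD tool.

```python
from enum import Enum, auto

class NodeKind(Enum):
    """The six node kinds of a Use Case Activity Diagram (UCAD)."""
    INITIAL = auto()
    USER_ACTION = auto()
    SYSTEM_PROCESSING = auto()
    SYSTEM_OUTPUT = auto()
    CONDITION = auto()
    FINAL = auto()

# A tiny diagram as nodes plus directed edges; each INITIAL-to-FINAL path
# through the diagram is a candidate abstract test case.
diagram = {
    "nodes": {"start": NodeKind.INITIAL,
              "login": NodeKind.USER_ACTION,
              "verify": NodeKind.CONDITION,
              "show": NodeKind.SYSTEM_OUTPUT,
              "end": NodeKind.FINAL},
    "edges": [("start", "login"), ("login", "verify"),
              ("verify", "show"), ("show", "end")],
}
```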
2.3. TEST CASE PRIORITIZATION
The main purpose of test case prioritization is to rank the execution order of test cases so that faults are detected as early as possible. Prioritization brings two benefits. First, it provides a way to find more bugs under resource-constrained conditions; second, because faults are revealed earlier, engineers have more time to fix these bugs .
Zengkai Ma and Jianjun Zhao  propose a new prioritization index called testing-importance of module (TIM), which combines two prioritization factors: fault proneness and importance of a module. The main advantages of this prioritization approach are twofold. First, the TIM value can be evaluated by analyzing program structure (e.g., the call graph) alone, or by incorporating program structure information together with other available data (e.g., source code changes); therefore, the approach can be applied not only to regression testing but also to non-regression testing. Second, by analyzing program structure, a mapping can be built between fault severity and fault location. Test cases covering important parts of the system are assigned high priority and executed first.
As a result, severe faults are revealed earlier and the system becomes reliable at a faster rate. The main contributions of the authors  are:
* They propose a new approach to evaluate the testing importance for modules in system by combining analysis of fault proneness and module importance.
* They develop a test case prioritization technique, which can provide test cases priority result by handling multiple information (e.g., program structure information, source code changes) and can be applied to both new developed software testing and regression testing.
* They implement Apros, a tool for test case prioritization based on the proposed technique, and perform an experimental study on their approach. The result suggests that Apros is a promising solution to improve the rate of severe faults detection.
The authors consider a sample system consisting of six modules, M1-M6, with call relationships between modules. A test suite includes six test cases, T1-T6, covering M1-M6 respectively, and some modules depend on each other. Using TIM, they determine fault proneness and fault severity for this system, and they conclude the prioritization result (T3, T6, T4, T2, T5, T1) on the basis of analyzing the system's structure. To calculate this result they developed formulas and equations.
They also experimented with two Java programs with JUnit test cases, xml-security and jtopas. They selected three sequential versions of the two Java programs and applied both new-development testing and regression testing, performed experiments to find fault proneness and severe faults, and introduced module importance using a weighting factor.
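The paper's TIM formulas are not reproduced here; as a rough stand-in, the sketch below combines a hypothetical per-module fault-proneness score with call-graph fan-in (how many modules call a module) as a crude importance proxy, and orders the tests by the score of the module each one covers. All names and numbers are illustrative assumptions.

```python
def tim_order(call_graph, fault_proneness, covers):
    """Rank test cases by a crude testing-importance score of the module
    each test covers.

    call_graph:      dict module -> list of modules it calls
    fault_proneness: dict module -> score in [0, 1] (assumed given)
    covers:          dict test -> the module it covers
    """
    # Fan-in: count how many modules call each module.
    fan_in = {m: 0 for m in call_graph}
    for callees in call_graph.values():
        for m in callees:
            fan_in[m] += 1
    # Modules that are both fault-prone and widely depended-on score high.
    score = {m: fault_proneness[m] * (1 + fan_in[m]) for m in call_graph}
    return sorted(covers, key=lambda t: score[covers[t]], reverse=True)

# Illustrative three-module system: M3 is most fault-prone and most called.
tim_ranked = tim_order({"M1": ["M2", "M3"], "M2": ["M3"], "M3": []},
                       {"M1": 0.2, "M2": 0.5, "M3": 0.9},
                       {"T1": "M1", "T2": "M2", "T3": "M3"})
```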
Authors  explore a value-driven approach to prioritizing software system tests with the objective of improving user-perceived software quality. Software testing is a strenuous and expensive process; research has shown that testing activities comprise at least 50% of the total software cost.
Their research on test case prioritization has two goals: (1) to improve customer confidence in software quality in a cost-effective way, and (2) to improve the rate of detection of severe faults during system-level testing of new code and regression testing of existing code.
They present a value-driven approach to system-level test case prioritization called the Prioritization of Requirements for Test (PORT). PORT is based on the following four factors.
1) Requirements volatility
Based on the number of times a requirement has been changed during the development cycle.
2) Customer priority
A measure of the importance of a requirement to the customer.
3) Implementation complexity
A subjective measure of how difficult the development team perceives the implementation of a requirement to be.
4) Fault proneness
Fault proneness of requirements (FP) allows the development team to identify the requirements that have had customer-reported failures.
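The four factors can be combined into a single requirement-level priority. The sketch below uses a weighted sum with equal weights; the weights, factor values, and requirement names are illustrative assumptions, not the weighting scheme used in the PORT study.

```python
# Assumed equal weights; a team would normally tune these.
WEIGHTS = {"volatility": 0.25, "customer_priority": 0.25,
           "complexity": 0.25, "fault_proneness": 0.25}

def port_score(factors):
    """factors: dict with the four factor values, each normalized to [0, 1]."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

# Two hypothetical requirements; tests for higher-scoring requirements run first.
reqs = {
    "R1": {"volatility": 0.2, "customer_priority": 0.9,
           "complexity": 0.4, "fault_proneness": 0.1},
    "R2": {"volatility": 0.8, "customer_priority": 0.3,
           "complexity": 0.6, "fault_proneness": 0.7},
}
ranked = sorted(reqs, key=lambda r: port_score(reqs[r]), reverse=True)
```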
They claim in their research paper that Prioritization of Requirements for Test (PORT) has a great impact on finding severe faults at the system level, and they emphasize customer priority in TCP to improve fault detection.
Today, software organizations often work in a value-neutral manner, assigning the same value to all requirements, use cases, test cases, and defects. To improve customer satisfaction, the authors present a value-driven approach for system-level testing. Current regression test case prioritization techniques typically use structural coverage criteria to select test cases; the authors carry these ideas from structure-level to code-level TCP for both new and regression tests.
This paper has two main objectives: (1) find severe faults earlier, and (2) improve customer confidence in a particular system.
Researchers describe several techniques  for prioritizing test cases and empirically evaluate their ability to improve the rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during regression testing provides earlier feedback on the system under regression test and lets developers begin debugging and correcting faults earlier than might otherwise be possible.
Their results indicate that test case prioritization can significantly improve the rate of fault detection of test suites.
Furthermore, their results highlight tradeoffs between various prioritization techniques.
Test case prioritization can address a wide variety of objectives. In practice, and depending upon the choice of objective, the test case prioritization problem may be intractable: objectives, an efficient solution to the problem would provide an efficient solution to the knapsack problem . Authors consider nine different test case prioritization techniques.
T1: No prioritization
One prioritization "technique" that authors consider is simply the application of no technique; this lets us consider "untreated" test suites.
T2: Random prioritization
In random prioritization, the authors randomly order the tests in a test suite.
T3: Optimal prioritization
An optimal ordering of test cases in a test suite for maximizing that suite's rate of fault detection. In practice, of course, this is not a practical technique, as it requires knowledge of which test cases will expose which faults.
T4: Total branch coverage prioritization
We can determine, for any test case, the number of decisions (branches) in that program that were exercised by that test case. We can prioritize these test cases according to the total number of branches they cover simply by sorting them in order of total branch coverage achieved.
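Under the assumption that per-test branch coverage is available as sets of branch identifiers (an illustrative representation), total branch coverage prioritization reduces to a single sort:

```python
def total_coverage_order(coverage):
    """Order tests by the total number of branches each covers, descending.

    coverage: dict test -> set of branch ids it covers
    """
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

# t2 covers three branches, t1 two, t3 one.
by_total = total_coverage_order({"t1": {1, 2}, "t2": {1, 2, 3}, "t3": {4}})
```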
T5: Additional branch coverage prioritization
Total branch coverage prioritization schedules test cases in the order of total coverage achieved. However, having executed a test case and covered certain branches, more may be gained from subsequent test cases that cover branches not yet covered. Additional branch coverage prioritization therefore iteratively selects the test case that yields the greatest coverage of branches not yet covered, adjusting the coverage information after each selection.
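The feedback loop just described can be sketched as a greedy, set-cover-style pass. Resetting the covered set once full coverage is reached (so the rule can be reapplied to the remaining tests) is one common way to finish the ordering; the data shapes are again illustrative assumptions.

```python
def additional_coverage_order(coverage):
    """Greedy ordering with feedback: always pick the test adding the most
    not-yet-covered branches.

    coverage: dict test -> set of branch ids it covers
    """
    remaining, covered, order = dict(coverage), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            covered = set()  # full coverage reached: reset and reapply
            best = max(remaining, key=lambda t: len(remaining[t]))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# t1 covers the most; t3 then adds branch 4; t2 adds nothing new and goes last.
by_additional = additional_coverage_order(
    {"t1": {1, 2, 3}, "t2": {1, 2}, "t3": {4}})
```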
T6: Total fault-exposing-potential prioritization
Statement- and branch-coverage-based prioritization consider only whether a statement or branch has been exercised by a test case. This consideration may mask a fact about test cases and faults: the ability of a fault to be exposed by a test case depends not only on whether the test case reaches (executes) a faulty statement, but also, on the probability that a fault in that statement will cause a failure for that test case. Although any practical determination of this probability must be an approximation, we wished to determine whether the use of such an approximation could yield a prioritization technique superior in terms of rate of fault detection than techniques based on simple code coverage.
T7: Additional fault-exposing-potential (FEP) prioritization
Analogous to the extensions made to total branch (or statement) coverage prioritization to additional branch (or statement) coverage prioritization, we extend total FEP prioritization to create additional fault-exposing-potential (FEP) prioritization. This lets us account for the fact that additional executions of a statement may be less valuable than initial executions. In additional FEP prioritization, after selecting a test case t, we lower the award values for all other test cases that exercise statements exercised by t.
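A hedged sketch of this idea, assuming each test carries per-statement fault-exposing-potential estimates: the FEP values and the simple halving discount below are illustrative assumptions, not the award-adjustment used in the original study.

```python
def additional_fep_order(fep, discount=0.5):
    """Order tests by award value, lowering the award contribution of
    statements already exercised by previously selected tests.

    fep: dict test -> dict statement id -> fault-exposing-potential estimate
    """
    remaining = {t: dict(stmts) for t, stmts in fep.items()}
    order = []
    while remaining:
        # Award value of a test = sum of FEP over the statements it executes.
        award = {t: sum(stmts.values()) for t, stmts in remaining.items()}
        best = max(award, key=award.get)
        exercised = set(remaining.pop(best))
        for stmts in remaining.values():
            for s in exercised & set(stmts):
                stmts[s] *= discount  # later executions are worth less
        order.append(best)
    return order

# t1's award (0.9 + 0.1) beats t2's (0.8), so t1 runs first.
by_fep = additional_fep_order({"t1": {1: 0.9, 2: 0.1}, "t2": {2: 0.8}})
```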
T8: Total statement coverage prioritization
Total statement coverage prioritization is the same as total branch coverage prioritization, except that test coverage is measured in terms of program statements rather than decisions.
T9: Additional statement coverage prioritization
Additional statement coverage prioritization is the same as additional branch coverage prioritization, except that test coverage is measured in terms of program statements rather than decisions. With this technique too, we require a method for prioritizing the remaining test cases after complete coverage has been achieved, and in this work, we do this using total statement coverage prioritization.
2.3.1. Search Algorithms for Test Case Prioritization
There are many search techniques for test case prioritization, which are being developed and unfolded by various researchers in the field.
1) Greedy algorithm:
Works on the next-best search philosophy. It  minimizes the estimated cost to reach a particular goal. Its advantage is that it is cheap in both execution time and implementation. The cost of this prioritization is O(mn) for a program containing m statements and a test suite containing n test cases.
2) Additional Greedy algorithm:
This algorithm  uses feedback from previous selections: it selects the maximum-weight element from the part not already consumed by previously selected elements. Once complete coverage is achieved, the remaining test cases are prioritized by reapplying the Additional Greedy algorithm. The cost of this prioritization is O(mn²) for a program containing m statements and a test suite containing n test cases.
3) Hill Climbing:
It is one of the popular local search algorithms, with two variations: steepest ascent and next-best ascent. It is easy and inexpensive to execute. However, it has the drawback of evaluating O(n²) neighbors at each step and is unlikely to scale. The steps of the algorithm are explained in .
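A minimal steepest-ascent sketch for test orderings, where a neighbor is any single swap of two positions (which is where the O(n²) neighborhood comes from). The fitness function in the example, which rewards placing high-value tests early, is an illustrative assumption; in practice one would use a coverage- or APFD-based fitness.

```python
import itertools

def hill_climb(order, fitness):
    """Steepest-ascent hill climbing: at each step, evaluate every
    single-swap neighbor of the current ordering and move to the best
    one, stopping when no neighbor improves the fitness."""
    current = list(order)
    while True:
        neighbours = []
        for i, j in itertools.combinations(range(len(current)), 2):
            n = list(current)
            n[i], n[j] = n[j], n[i]
            neighbours.append(n)
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(current):
            return current  # local optimum reached
        current = best

# Illustrative fitness: value of a test divided by its (1-based) position.
value = {"a": 1, "b": 5, "c": 3}
fit = lambda order: sum(value[t] / (i + 1) for i, t in enumerate(order))
best_order = hill_climb(["a", "b", "c"], fit)
```

Because only a local optimum is guaranteed, the result depends on the starting ordering; the genetic algorithm below trades more computation for broader exploration of the search space.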
4) Genetic Algorithms (GAs):
A search technique  based on Darwin's theory of survival of the fittest. The population is a set of randomly generated individuals; each individual is represented by variables/parameters called genes or chromosomes. The basic steps of a genetic algorithm are (1) encoding, (2) selection, (3) crossover, and (4) mutation.
3. Conclusions
In this paper we discussed regression test selection and test case prioritization. Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests); in risk-oriented regression testing, we check the same module functionality as before, but with different tests. Any test can be reused, and so any test can become a regression test. Regression testing naturally combines with all other test techniques; therefore we use test case prioritization inside regression testing. Test prioritization strengthens regression testing by finding more severe faults in earlier stages.
We also discussed different prioritization factors; customer priority has a great impact on PORT. Our view on test case selection is that the first version of a test suite developed by a developer should contain concrete test cases, and that some prioritization should be performed at the same stage. With earlier prioritization of test cases we can reduce cost, time, and effort, and maximize customer satisfaction.
Todd L. Graves, Mary Jean Harrold, Jung-Min Kim, Adam Porter, Gregg Rothermel, "An Empirical Study of Regression Test Selection Techniques", Proceedings of the 20th International Conference on Software Engineering (ICSE '98), 19-25 April 1998, pp. 188-197.
Leung, H.K.N., White, L., "Insights into Regression Testing", Proceedings of the Conference on Software Maintenance, 16-19 Oct. 1989, pp. 60-69.
K.K. Aggarwal and Yogesh Singh, "Software Engineering: Programs, Documentation, Operating Procedures", New Age International Publishers, Revised Second Edition, 2005.
Naslavsky, L., Ziv, H., Richardson, D.J., "A Model-Based Regression Test Selection Technique", IEEE International Conference on Software Maintenance (ICSM 2009), 20-26 Sept. 2009, pp. 515-518.
Gorthi, R.P., Pasala, A., Chanduka, K.K.P., Leong, B., "Specification-Based Approach to Select Regression Test Suite to Validate Changed Software", 15th Asia-Pacific Software Engineering Conference (APSEC '08), 3-5 Dec. 2008, pp. 153-160.
Zengkai Ma, Jianjun Zhao, "Test Case Prioritization based on Analysis of Program Structure", 15th Asia-Pacific Software Engineering Conference (APSEC '08), 3-5 Dec. 2008, pp. 471-478.
Srikanth, H., Williams, L., Osborne, J., "System Test Case Prioritization of New and Regression Test Cases", 2005 International Symposium on Empirical Software Engineering, 17-18 Nov. 2005, 10 pp.
Rothermel, G., Untch, R.H., Chengyun Chu, Harrold, M.J., "Test Case Prioritization: An Empirical Study", IEEE International Conference on Software Maintenance (ICSM '99), 30 Aug.-3 Sept. 1999, pp. 179-188.
Zheng Li, Mark Harman, Robert M. Hierons, "Search Algorithms for Regression Test Case Prioritization", IEEE Transactions on Software Engineering, vol. 33, no. 4, April 2007.