
Factors Affecting Web Applications Maintenance


Chapter 1

1.1 Introduction

Software engineering [PRE01] is the discipline concerned with producing industrial-quality software: the methods used to analyze, design, and test computer software, the management techniques used to control and monitor software projects, and the tools that support these processes, methods, and techniques. The software development life cycle focuses on activities such as feasibility study, requirements analysis, design, coding, testing, and maintenance.

The feasibility study addresses the technical, economic, and behavioral feasibility of the project. Requirements analysis [DAV93] focuses on identifying the needs of the system and producing the Software Requirements Specification (SRS) document [JAL04], which describes all data, functional, and behavioral requirements, constraints, and validation requirements for the software.

Software design plans a solution to the problem specified by the SRS document; it is the step that moves from the problem domain to the solution domain. The output of this phase is the design document. Coding translates the design of the system into code in a programming language. Testing is the process of detecting defects and minimizing the risk associated with residual defects. The activities carried out after delivery of the software comprise the maintenance phase.

1.2 Evolution of Software Testing Discipline

The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. The crisis of quality, reliability, and high cost thus began long before most of today's software testers were born.

The attitude towards software testing [BEI90] has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered an activity separate from debugging.

In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster, and more cost-effective software. There has also been growing interest in software safety, protection, and security, and hence an increased acceptance of testing as a technical discipline and as a career choice.

To answer the question "What is testing?", we can turn to the well-known definition of Myers [MYE79]: testing is the process of executing a program with the intent of finding errors. According to Humphrey, software testing is the execution of a program to find its faults. Testing is the process of proving that the software works correctly [PRA06].

Software testing is a crucial aspect of the software life cycle; in some form it is present at each phase of any software development or maintenance model. The importance of software testing and its impact on software cannot be overestimated. Software testing is a fundamental component of software quality assurance and represents a review of specification, design, and coding. The greater visibility of software systems and the cost associated with software failure are motivating factors for planning thorough testing. It is not uncommon for a software organization to spend 40-50% of its effort on testing.

During testing, the software engineer produces a series of test cases intended to "rip apart" the software. Testing is the one step in the software process that the developer can see as destructive rather than constructive. Software engineers are typically constructive people, and testing requires them to overcome preconceived notions of correctness and deal with conflicts when errors are identified.

A successful test is one that finds a defect. This sounds simple enough, but there is much to consider when doing software testing. Besides finding faults, we may also be interested in testing performance, safety, fault tolerance, or security. Testing often becomes a question of economics: for large projects, more testing usually reveals more bugs, and the question becomes when to stop testing and what level of residual bugs is acceptable. This is the question of "good enough" software.

Testing is the process of verifying that a product meets all requirements, and a test is never complete. When testing software, the goal should never be a product completely free from defects, because that is impossible. According to Peter Nielsen, the average is 16 faults per 1000 lines of code when the programmer has tested the code and believes it to be correct. In a larger project with millions of lines of code, it is impossible to find all the faults present. Far too often, products are released on the market with poor quality; errors are then uncovered by users, at a stage when the cost of removing them is high.

1.3 Objectives of Testing

Glen Myers [MYE79] states a number of rules that can serve well as testing objectives:

  • Testing is a process of executing a program with the intent of finding an error.
  • A good test is one that has a high probability of finding an as yet undiscovered error.
  • A successful test is one that uncovers an as yet undiscovered error.
  • The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort.

Secondary benefits include

  • Demonstrate that Software functions appear to be working according to specification.
  • That performance requirements appear to have been met.
  • Data collected during testing provides a good indication of Software reliability & some indication of Software quality.
  • Testing cannot show the absence of defects; it can only show that defects are present.

1.4 Software Testing & Its Relation with Software Life Cycle

Software testing should be thought of as an integral part of the Software process & an activity that must be carried out throughout the life cycle.

Each phase in the Software lifecycle has a clearly different end product such as the Software requirements specification (SRS) documentation, program unit design & program unit code. Each end product can be checked for conformance with a previous phase & against the original requirements. Thus, errors can be detected at each phase of development.

  • Validation & Verification should occur throughout the Software lifecycle.
  • Verification is the process of evaluating each phase end product to ensure consistency with the end product of the previous phase.
  • Validation is the process of testing Software, or a specification, to ensure that it matches user requirements.

Software testing is that part of validation & verification associated with evaluating & analysing program code. It is one of the two most expensive stages within the Software lifecycle, the other being maintenance. Software testing of a product begins after the development of the program units & continues until the product is obsolete.

Testing & fixing can be done at any stage in the life cycle. However, the cost of finding & fixing errors increases dramatically as development progresses.

Changing a Requirements document during the first review is inexpensive. It costs more when requirements change after the code has been written: the code must be rewritten. Bug fixes are much cheaper when programmers find their own errors. Fixing an error before releasing a program is much cheaper than sending new disks, or even a technician to each customer’s site to fix it later. It is illustrated in Figure 1.1.

The types of testing required during the various phases of the software lifecycle are described below:

Requirements

Requirements must be reviewed with the client; rapid prototyping can refine requirements & accommodate changing requirements.

Specification

The specifications document must be checked for feasibility, traceability, completeness, & absence of contradictions & ambiguities.

Specification reviews (walkthroughs or inspections) are especially effective.

Design

Design reviews are similar to specification reviews, but more technical.

The design must be checked for logic faults, interface faults, lack of exception handling, & non-conformance to specifications.

Implementation

Code modules are informally tested by the programmer while they are being implemented (desk checking).

Thereafter, formal testing of modules is done methodically by a testing team. This formal testing can include non-execution-based methods (code inspections & walkthroughs) & execution-based methods (black-box testing, white-box testing).

Integration

Integration testing is performed to ensure that the modules combine correctly to achieve a product that meets its specifications. Particular care must be given to the interfaces between modules.

The appropriate order of combination must be determined as top-down, bottom-up, or a combination thereof.

Product Testing

The functionality of the product as a whole is checked against its specifications. Test cases are derived directly from the specifications document. The product is also tested for robustness (error-handling capabilities & stress tests).

All source code & documentation are checked for completeness & consistency.

Acceptance Testing

The software is delivered to the client, who tests it on the actual hardware, using actual data instead of test data. A product cannot be considered to satisfy its specifications until it has passed an acceptance test.

Commercial off-the-shelf (or shrink-wrapped) Software usually undergoes alpha & beta testing as a form of acceptance test.

Maintenance

Modified versions of the original product must be tested to ensure that changes have been correctly implemented.

Also, the product must be tested against previous test cases to ensure that no inadvertent changes have been introduced. This latter consideration is termed regression testing.

Software Process Management

The Software process management plan must undergo scrutiny. It is especially important that cost & duration estimates be checked thoroughly.

If left unchecked, errors can propagate through the development lifecycle & amplify in number & cost. The cost of detecting & fixing an error is well documented & is known to be more costly as the system develops. An error found during the operation phase is the most costly to fix.

1.5 Principles of Software Testing

Software testing is an extremely creative and intellectually challenging task. The following are some important principles [DAV95] that should be kept in mind while carrying out software testing [PRE01] [SUM02]:

Testing should be based on user requirements: This is in order to uncover any defects that might cause the program or system to fail to meet the client’s requirements.

Testing time & resources are limited: Avoid redundant tests.

It is impossible to test everything: Exhaustive tests of all possible scenarios are impossible, because of the many different variables affecting the system & the number of paths a program flow might take.

Use effective resources to test: This represents use of the most suitable tools, procedures & individuals to conduct the tests. Only those tools should be used by the test team that they are confident & familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the developers.

Test planning should be done early: This is because test planning can begin independently of coding & as soon as the client requirements are set.

Test for invalid & unexpected input conditions as well as valid conditions: The program should generate correct messages when an invalid test is encountered & should generate correct results when the test is valid.

The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found.

Testing should begin at the module level: The focus of testing should be on the smallest programming units first, expanding later to other parts of the system.

Testing must be done by an independent party: Testing should not be performed by the person or team that developed the Software since they tend to defend the correctness of the program.

Assign best personnel to the task: Because testing requires high creativity & responsibility only the best personnel must be assigned to design, implement, & analyze test cases, test data & test results.

Testing should not be planned under the implicit assumption that no errors will be found.

Testing is the process of executing Software with the intention of finding errors.

Keep Software static during test: The program must not be modified during the implementation of the set of designed test cases.

Document test cases & test results.

Provide expected test results if possible: A necessary part of test documentation is the specification of expected results, even when providing them is difficult.

1.6 Software Testability & Its Characteristics

Testability is the ease with which software (or a program) can be tested [PRE01] [SUM02]. The following are some key characteristics of testability:

  • The better the software works, the more efficient the testing process.
  • What you see is what you test (WYSIWYT).
  • The better it is controlled, the more we can automate or optimize the testing process.
  • By controlling the scope of testing we can isolate problems & perform smarter retesting.
  • The less there is to test, the more quickly we can test it.
  • The fewer the changes, the fewer the disruptions to testing.
  • The more information we have, the smarter we will test.

1.7 Stages in Software Testing Process

Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules that are composed of procedures & functions. The testing process should therefore proceed in stages where testing is carried out incrementally in conjunction with system implementation.

The most widely used testing process consists of five stages that are illustrated in Table 1.1.

Errors in program components may come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process. The iterative testing process is illustrated in Figure 1.2 and described below:

Unit Testing: Unit testing is code-oriented testing. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.
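
To make this concrete, the following is a minimal sketch of a unit test using Python's unittest module; the function under test, apply_discount, is a hypothetical example standing in for any independently testable component:

    import unittest

    def apply_discount(price, percent):
        """Hypothetical component under test: reduce price by percent."""
        if price < 0 or not (0 <= percent <= 100):
            raise ValueError("invalid price or percent")
        return price * (1 - percent / 100.0)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.0, 0), 99.0)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Note that the component is exercised entirely in isolation, without any other system components, which is the defining property of this testing stage.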

Module Testing: A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures & functions. A module encapsulates related components so it can be tested without other system modules.

Sub-system (Integration) Testing: This phase involves testing collections of modules that have been integrated into sub-systems. This is design-oriented testing and is also known as integration testing.

Sub-systems may be independently designed and implemented. The most common problems arising in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on detecting interface errors by rigorously exercising these interfaces.

System Testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems & system components. It is also concerned with validating that the system meets its functional & non-functional requirements.

Acceptance Testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system's requirements definition (user-oriented), because real data exercises the system in different ways than the test data.

Acceptance testing may also reveal requirement problems where the system facilities do not really meet the user’s needs (functional) or the system performance (non-functional) is unacceptable.

1.8 The V-model of Testing

To test an entire software system, tests on different levels are performed. The V-model [FEW99], shown in Figure 1.3, illustrates the hierarchy of tests usually performed in software development projects. The left part of the V represents the documentation of an application: the requirement specification, the functional specification, the system design, and the unit design.

Code is written to fulfill the requirements in these specifications, as illustrated at the bottom of the V. The right part of the V represents the test activities performed during development to ensure that the application corresponds to its requirements.

Unit tests are used to verify that all functions and methods in a module work as intended. When the modules have been tested, they are combined, and integration tests are used to verify that they work together as a group. The unit and integration tests complement the system test. System testing is done on a complete system to validate that it corresponds to the system specification; it includes checking whether all functional and non-functional requirements have been met.

Unit, integration, and system tests are developer focused, while acceptance tests are customer focused. Acceptance testing checks that the system contains the functionality requested by the customer in the requirement specification. Customers are usually responsible for the acceptance tests, since they are the only persons qualified to make the judgment of approval. The purpose of the acceptance tests is that, after they are performed, the customer knows which parts of the requirement specification the system satisfies.

1.9 The Testing Techniques

There are three widely used testing techniques; the testing types described above are performed based on the following techniques:

1.9.1 Black-Box Testing Technique

Black box testing (Figure 1.4) is concerned only with testing against the specification. It cannot guarantee that the complete specification has been implemented; black box testing discovers faults of omission, indicating that part of the specification has not been fulfilled. It is used for testing based solely on analysis of the requirements (specification, user documentation).

In black box testing, test cases are designed using only the functional specification of the software, i.e., without any knowledge of its internal structure. For this reason, black box testing is also known as functional testing. Black box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Functional testing typically exercises code with valid or nearly valid input for which the expected output is known; this includes concepts such as boundary values.

Performance tests evaluate response time, memory usage, throughput, device utilization, and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error-handling capabilities. Reliability tests monitor the system's response to representative user input, counting failures over time to measure or certify reliability.

Black box testing refers to analyzing a running program by probing it with various inputs. It requires only a running program and does not make use of source code of any kind. In the security paradigm, malicious input can be supplied to the program in an effort to break it; if the program breaks during a particular test, a security problem may have been discovered.

Black box testing is possible even without access to binary code; that is, a program can be tested remotely over a network. All that is required is a running program somewhere that accepts input. If the tester can supply input that the program consumes (and can observe the effect of the test), then black box testing is possible. This is one reason that real attackers often resort to black box techniques. Black box testing is not an alternative to white box techniques; it is a complementary approach that is likely to uncover a different class of errors than the white box approaches.

Black box testing tries to find errors in the following categories:

  • Incorrect or missing functions
  • Interface errors
  • Errors in data structures or external database access
  • Performance errors, and
  • Initialization and termination errors.

By applying black box approaches we produce a set of test cases that fulfill the following requirements:

  • Test cases that reduce, by a count greater than one, the number of additional test cases that must be designed to achieve reasonable testing
  • Test cases that tell us something about the presence or absence of classes of errors.

The methodologies used for black box testing are discussed below.

1.9.1.1 Equivalence Partitioning

Equivalence partitioning is a black box testing approach that splits the input domain of a program into classes of data from which test cases can be produced. An ideal test case single-handedly uncovers a class of errors that might otherwise require many test cases to be executed before the general error is observed. Equivalence partitioning tries to define a test case that uncovers classes of errors.

Test case design for equivalence partitioning is founded on an evaluation of equivalence classes for an input condition [BEI95]. An equivalence class depicts a set of valid or invalid states for the input condition. Equivalence classes can be defined based on the following guidelines [PRE01] (a short sketch follows the list):

  • If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
  • If an input condition needs a specific value, one valid and two invalid equivalence classes are defined.
  • If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
  • If an input condition is Boolean, one valid and one invalid class are defined.
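
As an illustration of these rules, the sketch below (in Python; the validate_age function and its 18-60 range are hypothetical) derives one representative test value from each equivalence class for a range input condition:

    def validate_age(age):
        """Hypothetical function whose specification accepts ages 18..60."""
        return 18 <= age <= 60

    # A range input condition yields one valid and two invalid classes;
    # one representative value per class suffices for this technique.
    partitions = [
        (35, True),    # valid class: 18 <= age <= 60
        (10, False),   # invalid class: age below the range
        (75, False),   # invalid class: age above the range
    ]

    for value, expected in partitions:
        assert validate_age(value) == expected, "failed for %d" % value

The point of the technique is visible in the test set: three values stand in for the entire input domain, one per equivalence class.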

1.9.1.2 Boundary Value Analysis

A great many errors occur at the boundaries of the input domain, and for this reason boundary value analysis (BVA) was developed. Boundary value analysis is a test case design approach that complements equivalence partitioning. BVA produces test cases from the output domain as well [MYE79].

Guidelines for BVA are close to those for equivalence partitioning [PRE01] (a sketch follows the list):

  • If an input condition specifies a range bounded by values a and b, test cases should be produced with values a and b, just above and just below a and b, respectively.
  • If an input condition specifies a number of values, test cases should be produced that exercise the minimum and maximum numbers.
  • Apply the guidelines above to output conditions as well.
  • If internal program data structures have prescribed boundaries, produce test cases to exercise that data structure at its boundary.
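
Continuing the hypothetical validate_age example from above (a range bounded by a = 18 and b = 60), a BVA test set concentrates on the boundary values themselves and the values immediately around them:

    def validate_age(age):
        """Hypothetical function whose specification accepts ages 18..60."""
        return 18 <= age <= 60

    # Test at a and b, and just below and just above each boundary.
    boundary_cases = [
        (17, False), (18, True), (19, True),   # around the lower bound a
        (59, True), (60, True), (61, False),   # around the upper bound b
    ]

    for value, expected in boundary_cases:
        assert validate_age(value) == expected, "failed for %d" % value

A common off-by-one error (for example, writing 18 < age instead of 18 <= age) would escape the equivalence-class representatives but is caught immediately by the values at and around the boundaries.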

1.9.2 White-Box Testing Technique

White box testing (Figure 1.5) is testing against the implementation: it is based on analysis of the internal logic (design, code, etc.) and discovers faults of commission, indicating that part of the implementation is faulty. Designing white box test cases requires thorough knowledge of the internal structure of the software, and white box testing is therefore also called structural testing. White box testing is performed to reveal problems with the internal structure of a program.

A common goal of white box testing is to ensure that the test cases exercise every path through a program. A fundamental strength shared by all white box testing strategies is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases.

White box testing involves analyzing and understanding source code. Sometimes only binary code is available, but if a binary is decompiled to obtain source code which is then studied, this can be considered a kind of white box testing as well. White box testing is typically very effective in finding programming and implementation errors in software. In some cases this activity amounts to pattern matching and can even be automated with a static analyzer.

White box testing is a test case design approach that employs the control architecture of the procedural design to produce test cases. Using white box testing approaches, the software engineer can produce test cases that:

  • Guarantee that all independent paths in a module have been exercised at least once
  • Exercise all logical decisions on their true and false sides
  • Execute all loops at their boundaries and in their operational bounds
  • Exercise internal data structures to maintain their validity.

There are several methodologies used for white box testing. We discuss some important ones below.

1.9.2.1 Statement Coverage

The statement coverage methodology aims to design test cases that force the execution of every statement in a program at least once. The principal idea governing the statement coverage methodology is that unless a statement is executed, we have no way of determining whether an error exists in that statement. In other words, the statement coverage criterion [RAP85] is based on the observation that an error in one part of a program cannot be discovered if the part of the program containing the error and generating the failure is never executed. However, executing a statement once, and for just one input value, and observing that it behaves properly for that input value is no guarantee that it will behave correctly for all input values.
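
A minimal sketch of the idea, using a hypothetical classify function: one test executes most of the statements, but a second input is needed before every statement has run at least once:

    def classify(x):
        label = "non-negative"
        if x < 0:
            label = "negative"   # executed only when x < 0
        return label

    # classify(5) executes every statement except the one inside the if;
    # adding classify(-5) achieves 100% statement coverage. Even then,
    # correct behavior for these two inputs guarantees nothing about
    # the remaining inputs.
    assert classify(5) == "non-negative"
    assert classify(-5) == "negative"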

1.9.2.2 Branch Coverage

In branch coverage testing, test cases are designed such that the different branch conditions are given true and false values in turn. Branch testing guarantees statement coverage and is thus a stronger testing criterion than statement coverage testing [RAP85].
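
The sketch below illustrates the criterion on a hypothetical grant_loan function: two test cases drive the single decision to both its true and false outcomes, and in doing so also execute every statement:

    def grant_loan(income, score):
        if income > 30000 and score > 600:   # decision D1
            return True
        return False

    # D1 evaluates true in the first test and false in the second, so
    # branch coverage is achieved; every statement is also executed,
    # illustrating that branch coverage subsumes statement coverage.
    assert grant_loan(50000, 700) is True    # D1 true
    assert grant_loan(20000, 700) is False   # D1 false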

1.9.2.3 Path Coverage

The path coverage based testing strategy requires designing test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path is defined in terms of the control flow graph (CFG) of the program.

1.9.2.4 Loop Testing

Loops are fundamental constructs in virtually all algorithms. Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Simple loops, concatenated loops, nested loops, and unstructured loops are the four different types of loops [BEI90], as shown in Figure 1.6.

Simple Loops: The following set of tests should be applied to a simple loop, where n is the maximum number of allowable passes through the loop (a driver sketch appears at the end of this subsection):

  • Skip the loop entirely.
  • Only one pass through the loop.
  • Two passes through the loop.
  • m passes through the loop, where m < n.
  • n-1, n, and n+1 passes through the loop.

Nested Loops: Beizer [BEI90] suggests the following approach for nested loops:

  • Start at the innermost loop. Set all other loops to minimum values.
  • Conduct the simple loop test for the innermost loop while holding the outer loops at their minimum iteration parameter values.
  • Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops at typical values.
  • Continue until all loops have been tested.

Concatenated Loops: These can be tested using the approach for simple loops if each loop is independent of the others. However, if the loop counter of loop 1 is used as the initial value for loop 2, then the nested loop approach should be used.

Unstructured Loops: This class of loops should be redesigned to reflect the use of structured programming constructs.
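
A sketch of the simple loop tests, assuming a hypothetical sum_first function whose loop makes at most n = 10 passes: the test driver requests 0, 1, 2, m (with m < n), n-1, n, and n+1 passes:

    N = 10  # hypothetical maximum number of allowable passes

    def sum_first(values, k):
        """Sum at most the first k values (the loop under test)."""
        total = 0
        for i in range(min(k, len(values))):
            total += values[i]
        return total

    data = list(range(N))
    for passes in [0, 1, 2, 5, N - 1, N, N + 1]:   # m = 5, with m < n
        expected = sum(data[:min(passes, N)])
        assert sum_first(data, passes) == expected, "failed at %d" % passes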

1.9.2.5 McCabe’s Cyclomatic Complexity

The McCabe’s Cyclomatic Complexity [MCC76] of a program defines the number of independent paths in a program. Given a control flow Graph G of a program, the McCabe’s Cyclomatic Complexity V(G) can be computed as:

V(G) = E - N + 2

where E is the number of edges and N is the number of nodes in the control flow graph.

The cyclomatic complexity value of a program defines the number of independent paths in the basis set of the program and provides a lower bound on the number of test cases that must be conducted to ensure that all statements are executed at least once. Knowing the number of test cases required does not make it easy to derive them; it only gives an indication of the minimum number of test cases required.

The following sequence of steps needs to be undertaken to derive the path coverage based test cases for a program; a worked sketch follows the list:

  • Draw the CFG.
  • Calculate Cyclomatic Complexity V(G).
  • Calculate the basis set of linearly independent paths.
  • Prepare a test case that will force execution of each path in the basis set.
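
A small worked example (the sign function below is hypothetical) shows the steps end to end: the CFG has N = 6 nodes and E = 7 edges, so V(G) = 7 - 6 + 2 = 3, and the basis set of three linearly independent paths yields three test cases:

    def sign(x):
        if x > 0:               # node 1 (decision)
            return "positive"   # node 2
        elif x < 0:             # node 3 (decision)
            return "negative"   # node 4
        return "zero"           # node 5 (node 6 is the exit node)

    # Edges: 1->2, 1->3, 3->4, 3->5, 2->6, 4->6, 5->6, so E = 7, N = 6
    # and V(G) = E - N + 2 = 3: at least three test cases are needed,
    # one per linearly independent path in the basis set.
    assert sign(4) == "positive"    # path 1-2-6
    assert sign(-4) == "negative"   # path 1-3-4-6
    assert sign(0) == "zero"        # path 1-3-5-6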

1.9.2.6 Data Flow based Testing

The data flow testing method chooses test paths of a program based on the locations of definitions and uses of variables in the program. Various data flow testing approaches have been examined [FRA88] [NTA88] [FRA93]. For data flow testing, each statement in a program is assigned a unique statement number, and it is assumed that each function does not alter its parameters or global variables. For a statement with S as its statement number,

DEF(S) = {X | statement S contains a definition of X}

USE(S) = {X | statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is left empty and its USE set is based on the condition of statement S. The definition of a variable X at statement S is live at statement S' if there exists a path from statement S to S' that does not contain any other definition of X.

A definition-use chain (or DU chain) of variable X has the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X in statement S is live at statement S'.

One basic data flow testing strategy is that each DU chain be covered at least once. Data flow testing strategies are helpful for choosing test paths of a program that includes nested if and loop statements.
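
The sketch below annotates a small hypothetical function with its DEF and USE sets and lists some of its DU chains; note that covering the chain [total, S1, S6] requires a test in which the loop body never executes:

    def scale(values, factor):
        total = 0                    # S1: DEF(S1) = {total}
        i = 0                        # S2: DEF(S2) = {i}
        while i < len(values):       # S3: USE(S3) = {i, values}, DEF empty
            total += values[i] * factor
                                     # S4: DEF(S4) = {total},
                                     #     USE(S4) = {total, values, i, factor}
            i += 1                   # S5: DEF(S5) = {i}, USE(S5) = {i}
        return total                 # S6: USE(S6) = {total}

    # Example DU chains: [total, S1, S4], [total, S4, S6], [i, S2, S3],
    # [i, S5, S3], and [total, S1, S6]. The last chain is live only when
    # the loop is skipped, so covering every DU chain needs both tests:
    assert scale([1, 2, 3], 2) == 12   # loop executes: covers S1-S4, S4-S6
    assert scale([], 2) == 0           # loop skipped: covers S1-S6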

1.9.3 Grey-Box Testing Technique

Grey box testing [BIN99] designs test cases using both responsibility-based (black box) and implementation-based (white box) approaches. To test a web application completely, one needs to combine the two approaches, white box and black box testing; grey box testing is therefore commonly used for testing web-based applications.
