Performance: a critical quality attribute

1. Introduction

Performance is a critical quality attribute of every software system. Developing a complex system with demanding performance criteria requires close attention to performance from the earliest steps of development.

As Clements notes, "Performance is largely a function of the frequency and nature of inter-component communication, in addition to the performance characteristics of the components themselves, and hence can be predicted by studying the architecture of a system." [3]. A poor choice of system architecture can therefore affect any future application and cause performance defects, regardless of the algorithms or components used. Such a deficiency may force a redesign of the whole system, so performance analysis in the first steps of development is unavoidable.

This paper presents an efficient approach to overcoming software performance problems. It considers the use of the Software Performance Engineering (SPE) process as the best way to optimize performance, and it discusses the advantages of the proactive approach to performance on which SPE is based, compared to the reactive approach. It also outlines the general steps of the SPE process. The next section explains why performance is a necessary quality. The section after that defines the SPE approach, and the following one compares the reactive ("fix-it-later") approach with the proactive approach (SPE). The subsequent sections discuss the characteristics and measures of software performance and present the SPE process; the last section covers conflicts and tradeoffs between performance and other quality attributes.

2. The necessity of performance

As we know, performance is a critical quality of software. Performance problems cost the software industry huge economic losses annually, taking into account lost income, decreased productivity, material costs, and loss of customer trust.

Nowadays, development organizations are asked to implement ever more complex software solutions with minimal resources. This means they must upgrade applications to incorporate new infrastructure and to improve response time, throughput, or both.

Failing to meet performance objectives can lead to the business failure of a project. Such a performance failure can be caused by many factors, such as:

  • Inexperienced developers
  • Lack of familiarity with the technology
  • An inappropriate schedule

However, the main cause of performance failures is the use of a reactive approach to performance during development (the reactive approach is described further in section 4).

A good example of a "performance failure" due to poor performance management is when the goal is to complete a transaction in 5 seconds, but after the software is implemented and tested, the best response time achieved is more than 30 seconds.

3. Overview of SPE

SPE, or Software Performance Engineering, is the field that studies the performance management of applications. First defined by Smith in 1990, software performance engineering (SPE) is a method that provides a systematic, quantitative approach to constructing software systems that meet performance objectives [1]. Using SPE, we can detect problems early in development and use quantitative methods to determine the hardware needed to meet the software's requirements.

The method divides the performance analysis into two parts: the software model (SM) and the machine or system model (MM). Applying the method can be laborious, and the results often differ from what was expected; understanding the software architecture is therefore very important to achieve performance objectives more quickly and at lower cost. The two types of models provide information for architecture evaluation.

The software execution model is similar to UML models of software: it represents the software as an execution graph (EG) [5]. The graph is composed of nodes connected by edges, where the nodes correspond to the functional components of the software and the edges represent the control flow. Nodes can be basic, cyclic, conditional, or fork and join nodes. A basic node takes a given input and, after some processing, generates output data. A cyclic node is a loop in the program execution. A conditional node is an exclusive choice for the rest of the program. Fork and join nodes allow concurrency between sequences of operations that must be executed in parallel. In general, software models are sufficient to identify performance problems due to poor architectural decisions [5].
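
To make the idea concrete, the sketch below evaluates a tiny execution graph in Python. The node types follow the description above, but the costs, loop counts, and branch probabilities are illustrative assumptions rather than values from the SPE literature; a real software model would derive them from the design.

    # Minimal sketch of a software execution model (execution graph).
    # Costs, repetitions, and probabilities are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Basic:
        cost: float              # processing time of one functional step

    @dataclass
    class Loop:
        body: List               # sub-graph executed repeatedly
        repetitions: float       # mean number of iterations

    @dataclass
    class Branch:
        alternatives: List[Tuple[float, List]]  # (probability, sub-graph)

    @dataclass
    class ForkJoin:
        branches: List[List]     # sub-graphs executed in parallel

    def mean_time(nodes) -> float:
        """Estimate the mean execution time of a sequence of EG nodes."""
        total = 0.0
        for node in nodes:
            if isinstance(node, Basic):
                total += node.cost
            elif isinstance(node, Loop):
                total += node.repetitions * mean_time(node.body)
            elif isinstance(node, Branch):
                total += sum(p * mean_time(sub) for p, sub in node.alternatives)
            elif isinstance(node, ForkJoin):
                # parallel branches finish when the slowest one does
                total += max(mean_time(b) for b in node.branches)
        return total

    # Example scenario: validate input, then process 10 records,
    # 80% of which hit a fast path (e.g., a cache).
    scenario = [
        Basic(0.002),
        Loop(body=[Branch([(0.8, [Basic(0.001)]), (0.2, [Basic(0.020)])])],
             repetitions=10),
    ]
    print(f"estimated mean time: {mean_time(scenario):.4f} s")  # ~0.0500 s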

If the software model shows no problems, we then solve the system model, which takes the results obtained by solving the software model as its input and provides the following outputs [5]:

  • refinements of the performance requirements
  • more precise metrics that account for resource contention
  • sensitivity of performance to the parameters of the workload composition
  • classification of resources
  • comparative data for improving performance
  • the effects of scalability on performance
  • the effects of the software on the performance objectives of other systems

The system model uses a queuing network view of the system, the Extended Queuing Network Model (EQNM). The EQNM consists of the system components, the system topology (the connections between components), and the parameters of the individual components (which are supplied by the analysis of the SM).
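
Solving an EQNM normally requires a queuing-network solver, but the flavour of the computation can be shown with the classic single-queue (M/M/1) formulas, where the service time would come from the SM and the arrival rate from the workload. The numbers below are illustrative assumptions, not results from the paper.

    # Minimal sketch of the queuing arithmetic behind a system model.
    # A full EQNM covers many interconnected queues; a single M/M/1
    # server shows how SM-derived parameters become predictions.
    def mm1_metrics(arrival_rate: float, service_time: float):
        """Classic M/M/1 results for one resource (e.g., a CPU or disk)."""
        utilization = arrival_rate * service_time           # rho = lambda * S
        if utilization >= 1.0:
            raise ValueError("queue is unstable: utilization >= 100%")
        response_time = service_time / (1.0 - utilization)  # R = S / (1 - rho)
        queue_length = utilization / (1.0 - utilization)    # N = rho / (1 - rho)
        return utilization, response_time, queue_length

    # Illustrative workload: 40 transactions/s, 20 ms of CPU each.
    rho, r, n = mm1_metrics(arrival_rate=40.0, service_time=0.020)
    print(f"utilization={rho:.0%}  response={r*1000:.1f} ms  queue={n:.1f}")
    # utilization=80%  response=100.0 ms  queue=4.0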

Figure 1 in the appendix shows the performance balance a software system should maintain between the system model (MM) and the software model (SM).

4. Reactive vs proactive approaches to performance

Before the SPE approach to analyzing software performance, organizations used the "fix-it-later" approach, a software development philosophy attributed to Robert Fuller in 1980 [3]. It is a reactive approach in which performance problems are not detected until the testing phase. Those problems were then often addressed by adding hardware, refining the software, or both.

Using a reactive approach was often the cause of software performance failures during development. Some organizations used it because it seemed faster and less costly: they ignored performance analysis until a problem appeared during testing, and only then tried to "refine" the software to meet the performance goals. However, by the time the testing stage is reached, achieving these criteria may be infeasible. The proactive approach was created to resolve this problem.

The following quote shows the need for the proactive approach:

"Changes in the system architecture to improve performance should as a rule be postponed until the system is being (partly) built. Experience shows that one frequently makes the wrong guesses, at least in large and complex systems, when it comes to the location of the bottlenecks critical to performance. To make correct assessments regarding necessary performance optimization, in most cases, we need something to measure ..." [Jacobson et al. 1999][6].

The proactive approach reduces the number of possible performance incidents and problems while increasing end-user satisfaction. It is a preventive approach that anticipates critical performance problems and provides techniques to identify and deal with them early in the software development process. With this approach, software failures caused by the late discovery of performance problems can be avoided [2].

5. Measures to characterize the performance of a system

As with every project, there are objectives to meet and constraints to respect. SPE provides the information needed to build software that meets performance requirements on time and within budget, offering an engineering approach for quantifying software performance. There are two main dimensions to software performance: responsiveness and scalability.

Responsiveness is a measure of the time the system takes to respond to a particular action; the measurement may concern the speed of an individual response or the number of events the system can process within a given duration. Scalability is the ability of a system to continue to meet its response time or throughput objectives as demand for its functions increases, that is, to handle growing workloads flexibly or to be expanded without difficulty [3].

For example, from the user's point of view, the performance of a web application can be defined as the answer to the question "how long does the page take to load?". Two measures are essential to quantify the performance of a web application:

  • Response time
  • Throughput

Response time is the time it takes for a user's operation to complete: for the validation of a transaction, it is the time between the click on the validation button and the display of the next page.

Throughput is the number of transactions that can occur within a given time, usually measured in transactions per second (TPS). Before starting test execution, however, you must be clear about what kind of performance is expected from the website. For example, you want answers to questions like the following (a small measurement sketch follows the list):

  • How long does a transaction on the Web take?
  • How long must a user wait for a page to load?
  • How many users should the website support?
  • What types of user traffic do you expect: are there periods of low and high activity?
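
As an illustration, the following sketch measures average response time and throughput for a page using only the Python standard library. The URL and request count are hypothetical, and a realistic test would add concurrency, user think times, and varied data.

    # Minimal sketch of measuring response time and throughput for a
    # web page. URL and request count are hypothetical placeholders.
    import time
    import urllib.request

    URL = "http://example.com/"     # hypothetical page under test
    REQUESTS = 20

    latencies = []
    start = time.perf_counter()
    for _ in range(REQUESTS):
        t0 = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()             # response time includes the full body
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    print(f"avg response : {sum(latencies)/len(latencies)*1000:.0f} ms")
    print(f"max response : {latencies[-1]*1000:.0f} ms")
    print(f"throughput   : {REQUESTS/elapsed:.1f} requests/s")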

Performance measurement activities generally fall into two classes: performance testing and production monitoring. Performance tests are carried out during the development, implementation, and maintenance phases, while monitoring is mainly done in production.

6. SPE process

From the previous sections we conclude that performance must be measured and tested throughout the development cycle. It is recommended to include performance measures from the very first steps, which means studying the frequency and nature of inter-component communication, the performance characteristics of the components themselves, and the basic architecture of the system [3].

The SPE process includes the following steps [7][3]:

i. Identify the performance test environment: if possible, the test environment should be identical to the production environment. For this, we must understand:

  • The purpose of the application
  • The expected behavior of users
  • The logical architecture of the application (e.g., n-tier)
  • The physical architecture of the application (e.g., web servers, databases, etc.)
  • The network architecture of the application

ii. Identify the acceptance criteria for performance (a small sketch of such a check follows this list) by:

  • Determining the objectives of the performance tests (upgrading, tuning, etc.)
  • Estimating the target values for resource use and the limits of tolerance (for example, 75% CPU, 1000 transactions/hour, etc.)
  • Deducing the metrics to use (CPU usage, response time, memory usage, etc.)
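
As a minimal sketch, the acceptance criteria can be encoded as explicit thresholds and checked mechanically. The thresholds below mirror the examples just given (75% CPU, 1000 transactions/hour); the measured values are hypothetical.

    # Minimal sketch of encoding and checking acceptance criteria.
    # Thresholds mirror the examples above; measurements are hypothetical.
    ceilings = {
        "cpu_utilization": 0.75,    # must stay at or below 75%
        "response_time_s": 5.0,     # must stay at or below 5 s
    }
    floors = {
        "throughput_tph": 1000.0,   # must reach at least 1000 tx/hour
    }

    measured = {"cpu_utilization": 0.68, "response_time_s": 4.2,
                "throughput_tph": 1140.0}

    for metric, limit in ceilings.items():
        verdict = "PASS" if measured[metric] <= limit else "FAIL"
        print(f"{metric}: {measured[metric]} (limit {limit}) -> {verdict}")
    for metric, minimum in floors.items():
        verdict = "PASS" if measured[metric] >= minimum else "FAIL"
        print(f"{metric}: {measured[metric]} (minimum {minimum}) -> {verdict}")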

iii. Define scenarios (use cases): start by identifying the critical use cases that are important to the operation of the system and, for each one, the key performance scenarios that are executed frequently or that are critical to the perceived performance of the system. Then identify the behaviors of users and their common mistakes (during testing it is important to simulate common user errors), and also identify the system response times (maximum, minimum, average) [7].

iv. Build performance models: execution graphs are used to represent the software processing steps in the performance model. The sequence diagram representations of the key performance scenarios are translated into execution graphs.

v. Configure the test environment: configure the test tools and the execution environment of the application. In this step we determine the software resource requirements and add the computer resource requirements.

vi. Run the tests: in this step, test results are validated by verifying that the tests actually work and by checking that there are no problems that could distort the results (network, disk, etc.). System responsiveness is measured and baselines are established to evaluate the improvements brought about by changing a single parameter (memory, JDBC connections, etc.) [3]. In particular (a baseline-comparison sketch follows the list):

  • Tests must be significant
  • Do not repeat the same transaction with the same data
  • Do not generate overly complex tests
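
The sketch below illustrates the baseline idea with randomized test data, so the same transaction is never replayed with identical inputs. The timing function is a stand-in for a real timed call, and all figures are hypothetical.

    # Minimal sketch of a baseline comparison after changing a single
    # parameter, with randomized inputs on every iteration.
    import random
    import statistics

    def run_transaction(customer_id: int) -> float:
        """Stand-in for one timed transaction; returns seconds."""
        return random.uniform(0.8, 1.2)   # replace with a real timed call

    def run_suite(n: int = 50) -> list:
        # vary the input data so no transaction repeats identical data
        return [run_transaction(random.randint(1, 10_000)) for _ in range(n)]

    baseline = run_suite()                # before the change
    # ... change exactly one parameter (e.g., pool size), then re-run:
    candidate = run_suite()

    b, c = statistics.mean(baseline), statistics.mean(candidate)
    print(f"baseline {b:.3f} s  candidate {c:.3f} s  delta {(c - b) / b:+.1%}")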

vii. Analyze the results: this is both the most important and the most difficult part of performance engineering. The analysis should determine which requirements must be respected and which can be ignored, allowing the system to meet its performance requirements. Analyzing the results determines whether the previously set performance acceptance criteria have been met and, if not, what the problems are and whose responsibility it is to solve them. A report interpreting the results is then written [7].

viii. Validate the models: model validation is the last step in the performance engineering process. It determines whether the model predictions reflect the software's performance criteria set in the previous steps, which means answering the question "Are we building the right model?" [7]. It is therefore particularly important to detect any model deviation as soon as possible.

7. Conflicts and tradeoffs

The task of achieving performance objectives is not done in isolation. Many other critical software qualities must be respected in parallel with the performance objectives, chiefly availability, security, usability, and modifiability. Conflicts can arise among these quality objectives when architectural decisions have contrasting effects on the quality attributes. For example, the active redundancy tactic for increasing availability has a negative impact on system performance [7].

Even if it is difficult to design an architecture with no conflicts between the different quality attributes, finding a compromise between them is still possible, and identifying such compromises need not prevent the quality objectives from being met. Obtaining precise quality requirements and prioritizing them is often the most difficult part of the process. Software architecture is a representation of the key properties of a software system, and the key to successfully designing one lies mainly in finding the optimal combination of architectural features, components, and modules that meets the functional requirements while respecting the critical quality attributes.

Development environments, increasingly heterogeneous technologies and languages, external maintenance of systems: all of this makes the relationships between the different stakeholders more and more complex for software development companies, and such complexity makes it hard to reach a level of service that respects measures of availability, performance, scalability, security, and other aspects of software quality. The use of a proactive approach such as SPE to manage the software quality attributes makes it possible to find a compromise between the different stakeholders with respect to the software's critical qualities, and at minimal cost: a software problem is roughly 100 times less expensive to correct when found in the specifications than when found in production.

8. Conclusion

In recent years, many models have been proposed for predicting the different characteristics of software quality (performance, reliability, usability, availability, and so on). However, many of them, if not all, have limited applicability: in most cases these models were obtained from empirical studies on particular data sets, so it becomes very difficult for an organization to choose the model that best applies to its projects.

This paper has presented an engineering approach to performance validation: the Software Performance Engineering approach. The paper first described why performance is often one of the critical objectives of software, then defined SPE as a proactive approach and presented its advantages over reactive approaches. The different characteristics of performance were then described, and the steps of the SPE process were presented. The last section discussed the conflicts and tradeoffs that can arise between performance and other quality attributes while applying the SPE process.

9. References

  1. Smith, C.U., Performance Engineering of Software Systems, Addison-Wesley, Reading, MA, 1992.
  2. Williams, L.G. and Smith, C.U., "The Business Case for Software Performance Engineering", March 2002.
  3. Clements, P.C., "Coming Attractions in Software Architecture", Technical Report CMU/SEI-96-TR-003, Software Engineering Institute, Carnegie Mellon University, February 1996.
  4. Smith, C.U., "Software Performance Services", L&S Computer Technology, Inc., http://www.perfeng.com/speis.htm
  5. "Software Performance Engineering", in UML for Real: Design of Embedded Real-Time Systems, L. Lavagno, G. Martin, and B. Selic (eds.), Kluwer, 2003.
  6. Jacobson, I., Booch, G., and Rumbaugh, J., The Unified Software Development Process, Addison-Wesley Object Technology Series, 1999.
  7. "Beyond Performance Testing, Part 2: A Performance Engineering Strategy", http://www.perftestplus.com/resources/BPT2.pdf