Improving the Software Development Process Using Testability Tactics

3589 words (14 pages) Essay in Computer Science

23/09/19


Abstract:

Building stable software applications is becoming more and more important as software becomes pervasive in daily life. Software must be adequately tested to give confidence in its ability to perform as expected. Testing becomes harder as the size and complexity (lines of code) of an application grow beyond a tester's ability to manipulate it, so the next step is to make the testing task easier and more effective; in other words, to improve the testability of the software. Software testability is the extent to which a software system or a unit under test supports its own testing; it is the tendency of software code to reveal existing defects and faults during testing. To support and improve testability, practitioners and researchers have proposed many techniques and metrics over the last several years. Testability can be improved by applying tactics that improve a tester's ability to manipulate the software, observe its behavior, and interpret the results of test execution.

1.  Introduction

1.1   Background

Building reliable software is an important issue considering that software applications are now used in all kinds of environments and in every industry, including settings where human life depends on the application functioning correctly. Testing plays an important role in making any software more reliable. Software is considered reliable and stable if it has a low probability of failure while in use, where a failure is any deviation from the application's expected behavior. Failures occur only when faulty code in the software is executed. Faults can be identified through formal proofs (static analysis) or through selective or exhaustive testing. Formal proofs are complicated to perform, and exhaustive testing is not feasible because of the large number of execution paths that exist even in small applications. Selective testing, also known as dynamic testing, is the most common method of improving confidence in the reliability of software.

Software testing is a fundamental activity for ensuring the quality of software systems. However, not all software systems are equally testable. In the traditional waterfall software development life cycle, testing was done after a software system had already been built. More recently, agile development methodologies and processes recommend that testing be done at various stages throughout the development cycle.

In test-driven development (TDD), a more recently defined software development methodology, developers start by writing unit tests before the functional code is implemented. After creating the unit tests, the developer writes the functional code necessary to make all the newly created unit tests pass. When all unit tests pass and the developer cannot think of any more to write, the development work is considered complete.
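
The TDD cycle described above can be sketched in a few lines of Python. The function and its behavior here are purely illustrative, not taken from the essay: the test is written first, and then only enough functional code is written to make it pass.

```python
# TDD sketch (names are illustrative): the test exists before the code.

def test_slugify():
    # Step 1: write the test first, against code that does not exist yet.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Hello World  ") == "hello-world"

def slugify(title):
    # Step 2: write the minimal functional code that makes the test pass.
    return title.strip().lower().replace(" ", "-")

# Step 3: run the tests; when they all pass and no new tests come to
# mind, this increment of development is considered complete.
test_slugify()
print("tests passed")
```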

Testing may uncover failures, but it does not necessarily help locate where the issues in the application lie. The size of an application also affects testing: as applications get larger, testing becomes more expensive. Recent research estimates that the cost of software testing can range from 40% to 80% of total development costs. Software testing can therefore be viewed as an economic problem, driven either by a predefined reliability target or by resource constraints.

1.2  Software Testability

Software testability is the extent to which a software artifact (a software system, module, requirements document, or design document) supports testing in a given test context. If the testability of the artifact is high, finding faults in the system is easier. A lower degree of testability results in increased test effort, and thus in less testing performed in a given fixed amount of time, and thus less chance of finding software defects.

ISO standard 25010:2011 defines software testability as the "degree of effectiveness and efficiency with which test criteria can be established for a system, product or component and tests can be performed to determine whether those criteria have been met." This definition covers two major aspects: facilitation of testing, and facilitation of revealing faults (test effectiveness). Testability can be thought of as a characteristic or property of a piece of software that makes it easier to test its functionality; its key characteristics are controllability and observability.

Besides being a characteristic of the software itself, testability can be defined in terms of the software development process, that is, relative to a particular testing phase such as integration testing. Robert V. Binder [5] identifies six major factors that contribute to testability in the software development life cycle: characteristics of the design documentation, characteristics of the implementation, presence of a test suite, presence of test tools, built-in test capabilities, and the capability/maturity of the software development process.

Software structure and behavior can be documented in the form of requirements specifications, detailed design models, pseudo-algorithms and architectural views. The clarity and brevity of this documentation can facilitate testability. Applications are implemented in a variety of programming languages, and the features available in those languages make testing easier or more difficult depending on how they are used.

Building test capabilities into an application means adding certain features, not necessarily part of the functional specification, specifically to ease testing. A test suite is a collection of test cases together with plans for using them. A test suite can make testing much easier by grouping test cases by feature or functionality. The suite is valuable because it can be reused for regression testing every time the system is modified, and it helps identify important tests for automation. The many open source tools now available help execute test scripts, log state, record execution traces and performance information, and report test results, all of which makes testing much easier to manage.

1.3  Software Testability Tactics

The goal of tactics for testability is to ease testing when an increment of software development is completed. There are two categories of tactics for testability. The first category deals with adding controllability and observability to the software application. The second deals with limiting complexity in the system’s design.

Control and Observe System State

Controllability is concerned with the ease of manipulating a software module in terms of feeding values to its inputs and, consequently, placing the component in a desired/expected state. The inputs come from user interaction with the software’s user interface or from other interactions of components within or external to an application.

Observability is the ability to view how software components react to the inputs fed in, and to watch changes to the internal state of the application. Typically, software outputs provide observability; however, some erroneous internal states or interactions do not surface as failures and are difficult to re-create and resolve. Observability can be improved in many ways, including logging facilities, tracing facilities, code instrumentation, and assertions.
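
As a small illustration of the logging approach, the sketch below uses Python's standard logging module to expose an internal intermediate value that would otherwise never surface in the function's output. The function and logger name are illustrative assumptions, not from the essay.

```python
import logging

# Configure logging so internal state is visible during test runs.
logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("pricing")  # hypothetical module name

def apply_discount(price, pct):
    # Log the inputs and the computed internal state: a tester can now
    # observe the intermediate value, not just the final return value.
    log.debug("apply_discount(price=%s, pct=%s)", price, pct)
    discounted = price * (1 - pct / 100)
    log.debug("internal state: discounted=%s", discounted)
    return round(discounted, 2)

print(apply_discount(200.0, 15))  # logs intermediates, prints 170.0
```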

The control-and-observe-system-state category of testability tactics exposes internal information about the application. These tactics cause a component to maintain some sort of state information, allow testers to assign a value to that state information, and/or make that information accessible to testers on demand. Some of these tactics are discussed below.

  1. Specialized interfaces

Specialized interfaces control or capture the values of variables for a software module, either through a test harness or through normal execution. Examples include set and get methods for important variables, modes, or attributes; a report method that returns the full state of the object; and a reset method that sets the internal state (for example, all the attributes of a class) to a specified internal state. Specialized testing interfaces and methods need to be clearly identified and kept separate from the normal access methods and interfaces, so that they can be removed if not required.
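
A minimal sketch of such an interface in Python follows; the class and its state are illustrative assumptions, not from any system in the essay. The testing methods are grouped and name-prefixed so they stay clearly separate from the normal API.

```python
class Thermostat:
    """Illustrative production class with a specialized test interface."""

    def __init__(self):
        self._target = 20
        self._mode = "idle"

    # --- normal interface ---
    def set_target(self, degrees):
        self._target = degrees
        self._mode = "heating" if degrees > 20 else "idle"

    # --- specialized testing interface (kept separate, removable) ---
    def _test_report(self):
        """Report method: return the full internal state on demand."""
        return {"target": self._target, "mode": self._mode}

    def _test_reset(self, target=20, mode="idle"):
        """Reset method: force the object into a specified state."""
        self._target, self._mode = target, mode

t = Thermostat()
t.set_target(25)
assert t._test_report() == {"target": 25, "mode": "heating"}
t._test_reset()
assert t._test_report()["mode"] == "idle"
```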

  2. Record/Playback

Faults can be difficult to re-create. Recording all information crossing an interface allows the same execution to be played back when a fault occurs. Record/playback refers to both capturing the information crossing an interface and using it as input for subsequent testing.
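
A simple sketch of this tactic, assuming a hypothetical component with method calls as its interface: a proxy records every call crossing the interface, and the recorded tape can later be replayed against a fresh instance to re-create the same state.

```python
class RecordingProxy:
    """Records every call crossing a component's interface."""

    def __init__(self, target):
        self._target = target
        self.tape = []  # recorded (method, args) entries

    def call(self, method, *args):
        # Capture the call, then forward it to the real component.
        self.tape.append({"method": method, "args": list(args)})
        return getattr(self._target, method)(*args)

    def playback(self, fresh_target):
        # Replay the recorded calls to re-create the same state.
        for entry in self.tape:
            getattr(fresh_target, entry["method"])(*entry["args"])

class Counter:  # stand-in for the component under test
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n

rec = RecordingProxy(Counter())
rec.call("add", 3)
rec.call("add", 4)

replayed = Counter()
rec.playback(replayed)   # replayed now holds the same state: value == 7
```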

  3. Localize state storage

Ideally, all the state information of a system is stored in a single location; in practice, the state is often buried or distributed. State can be fine-grained, even bit-level, or coarse-grained, representing broad abstractions or overall operational modes. The choice of granularity depends on how the states will be used in testing. A good way to localize state storage is to use a state machine.
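
The state-machine idea can be sketched as follows; the states and events are illustrative. All operational state lives in one attribute, so a test can both observe it and set it directly to any coarse-grained mode.

```python
# All valid transitions in one place (illustrative states and events).
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

class Machine:
    def __init__(self):
        self.state = "idle"  # the single, localized piece of state

    def fire(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)

m = Machine()
for event in ["start", "pause", "start", "stop"]:
    m.fire(event)
assert m.state == "idle"

# A test can place the machine directly into any operational mode:
m.state = "paused"
assert m.state == "paused"
```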

  4. Abstract data sources

Just as controlling a program's state makes it easier to test, so does easily controlling its input data. Data plays an important role at any interface, and testing the code against all the different sets of possible data increases application reliability. Abstracting the interfaces lets you substitute test data more easily. A common example is a database connection: if you have a database of customer transactions, you can design your architecture so that it is easy to point your test system at other test databases, or even at files of test data, without having to change your functional code. This practice is commonly used in organizations to point code at test and production environments.
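
The customer-transactions example can be sketched like this, with illustrative class and method names: the functional code depends only on an abstract source, so a canned test source can be substituted for the production database without changing it.

```python
class CustomerSource:
    """Abstract interface over where customer transactions come from."""
    def transactions(self, customer_id):
        raise NotImplementedError

class ProductionDb(CustomerSource):
    def transactions(self, customer_id):
        # Would query the real database; unreachable in tests.
        raise RuntimeError("production database not available in tests")

class TestDataSource(CustomerSource):
    """Points the same functional code at canned test data instead."""
    def transactions(self, customer_id):
        return {"c1": [100, -40], "c2": []}.get(customer_id, [])

def balance(source, customer_id):
    # Functional code is unchanged whichever source is plugged in.
    return sum(source.transactions(customer_id))

print(balance(TestDataSource(), "c1"))  # 60
```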

  5. Sandbox

Sandboxing refers to creating an instance of the application for testing without the overhead of reverting the application back to a normal state afterwards. A separate environment is set up to enable experimentation, isolated from the real world and unconstrained by any worry about having to undo the consequences of the experiment. A common form of sandboxing is to virtualize resources; using a sandbox, you can build a version of a resource whose behavior is entirely under your control.
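
A classic resource to virtualize is the system clock. The sketch below, with illustrative names, swaps the real clock for a sandboxed one whose behavior is fully under the tester's control, so time-dependent logic can be tested instantly and nothing has to be undone afterwards.

```python
import time

class SystemClock:
    """The real resource: wall-clock time."""
    def now(self):
        return time.time()

class SandboxClock:
    """Virtualized clock: entirely under the tester's control."""
    def __init__(self, start=0.0):
        self._t = start
    def now(self):
        return self._t
    def advance(self, seconds):
        self._t += seconds  # jump forward without actually waiting

def session_expired(clock, started_at, ttl=3600):
    return clock.now() - started_at > ttl

clock = SandboxClock(start=1000.0)
assert not session_expired(clock, started_at=1000.0)
clock.advance(4000)  # simulate more than an hour passing
assert session_expired(clock, started_at=1000.0)
```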

  6. Executable assertions

Assertions validate that a state or variable holds its desired value. They are hand-coded and inserted at desired locations to indicate where and when a program is in a faulty state. Assertions are defined in terms of specific data declarations, and they must be placed where those data values are referenced or modified.
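
In Python this maps directly onto the built-in assert statement. The function below is an illustrative example: the assertions sit exactly where the data is modified, so a faulty state is flagged at the point it occurs rather than somewhere downstream.

```python
def withdraw(balance, amount):
    # Executable assertions placed where the data is referenced/modified,
    # flagging a faulty state the moment it arises.
    assert amount > 0, "withdrawal amount must be positive"
    new_balance = balance - amount
    assert new_balance >= 0, "balance must never go negative"
    return new_balance

print(withdraw(100, 30))  # 70
```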

Limit Complexity

When multiple systems interact with one another, we usually call the software complex. Complex software is harder to test because there will be integration issues and because, by the definition of complexity, its operating state space is very large, making it more difficult to re-create an exact state. Many of the faults found in complex software are due to integration issues. Testing is not just about making the software fail but about finding the fault that caused the failure so that it can be removed.

  1. Limit structural complexity.

Ways to limit the structural complexity of software include avoiding or resolving cyclic dependencies between components/modules, reducing dependencies between components, and encapsulating dependencies on the external environment. In an object-oriented architecture, simplify the inheritance hierarchy: limit the number of classes from which a class is derived, and the number of classes derived from a class.

Identify whether the application requires complete data consistency at all times. Systems that require complete data consistency at all times are often more complex than those that do not. If the requirements allow it, consider building the system under the "eventual consistency" model, in which the data does not reach a consistent state immediately. This makes the system design simpler, and therefore easier to test. Similarly, in a layered style we can test the lower layers first, then test the higher layers with confidence in the lower ones; this architectural style leads to testability.

  2. Limit nondeterminism

Another important aspect is limiting behavioral complexity. Nondeterminism is a particularly pernicious form of complex behavior. This tactic involves finding all the sources of nondeterminism, such as unconstrained parallelism, and weeding them out as far as possible. Some sources of nondeterminism are unavoidable, for instance in multi-threaded systems.
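
One common, easily removed source of nondeterminism is an unseeded random number generator. The sketch below (an illustrative example, not from the essay) shows how injecting a seed makes runs reproducible, so a failing test can be re-created exactly.

```python
import random

def shuffle_deck(seed=None):
    # Using a dedicated, seedable RNG instance removes one source of
    # nondeterminism: with a fixed seed, every run is identical.
    rng = random.Random(seed)
    deck = list(range(10))
    rng.shuffle(deck)
    return deck

# Two runs with the same seed produce the same order, so a failure
# observed in one run can be reproduced in the next.
assert shuffle_deck(seed=42) == shuffle_deck(seed=42)
```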

2.  Case Study: Smart Home System

2.1   Overview Architecture

The Smart Home System is designed to perform basic operations, such as playing music and setting the room lighting according to the preferences of the person identified entering the room. The project was designed to research how hardware and software can be integrated more efficiently. Infrared sensors detect a human entering the room, and on detection an image is taken by the Raspberry Pi camera. After the image is captured, it is processed with Python OpenCV to detect the human in the image and remove the remaining noise. The image is then converted into a Base64 string and sent to the webserver. A REST API connects the backend with the frontend (the Raspberry Pi). The backend receives the image from the frontend and sends it to the Google AutoML API, Google's machine learning API, to identify the person's name. The Google AutoML model is trained with more than 1000 images of each person using the application, and the API returns the person's name along with a confidence level. The detected name is then passed back to the frontend along with that person's preferences, and the Raspberry Pi performs actions based on them. All user preferences are stored in Firestore, a Google Cloud real-time database. The website is developed in React, which uses JavaScript as its programming language.
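
The Base64 encoding step in this flow can be sketched as below. The JSON field names and sensor identifier are assumptions for illustration, not taken from the project; the sketch shows only the general encode/decode round trip between frontend and backend.

```python
import base64
import json

def build_detection_payload(image_bytes, sensor_id):
    # Frontend side: Base64-encode the captured image and wrap it in a
    # JSON body for the REST API (field names are assumptions).
    return json.dumps({
        "sensor": sensor_id,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_detection_payload(body):
    # Backend side: recover the original image bytes before passing
    # them on for classification.
    data = json.loads(body)
    return base64.b64decode(data["image_b64"]), data["sensor"]

payload = build_detection_payload(b"\x89PNG...fake", "livingroom-1")
image, sensor = decode_detection_payload(payload)
assert image == b"\x89PNG...fake" and sensor == "livingroom-1"
```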

2.2   Applying Testability Tactics

Control and Observe State

In this system, we applied common control-and-observe-state tactics. A specialized interface was developed so that the backend can be tested in isolation: only an image is sent in, the result is observed, and integration testing with the frontend is done afterwards. Record/playback is used to test the website, and various assertions are implemented to help validate the health of the website. The website is deployed with the help of Google IAM, so it can be reverted to an older version with a single click. A sandbox was created for development purposes, where there is no worry about reverting versions if something goes wrong.

 Limit Complexity

We isolated the frontend and backend to limit complexity. Components were identified: capturing the image, integrating the sensor with the Raspberry Pi, the website frontend, connecting the system to the Google AutoML API, and so on. We also implemented error-handling conditions that help keep the system from landing in a nondeterministic state.

2.3   Recommendations and Concluding Remarks

Applying testability tactics is very useful for systems in which hardware and software interact with each other; it also uncovers last-minute surprises and unusual system behavior. I recommend following these testability tactics at the architecture level.

3.  Case Study: Common Wealth Management Onboarding System

3.1   Overview Architecture

The Common Wealth Management Onboarding System lets Chase customers open a digital brokerage account for online trading of stocks, ETFs, mutual funds, etc. via chase.com. Multiple systems and webservices talk with each other to create records for new or existing clients, and different systems store different client data. A client starts the flow on chase.com, and page-wise data is stored in BWS, which lets the client retrieve existing applications. When the client submits the application, the Vator middle layer (a SOAP service) takes it to IAAWF, where the operations team can view all of the client's information and approve or reject the application. If the application is approved, IAAWF calls the EAP service to create a record in the NACS database, and the BMO job process is triggered, which sends the account to CIS and IVault for record-keeping purposes. If the application is rejected, no call is made to EAP or CIS. The BMO job also notifies BWS, which sends an e-mail to the client regarding the approval or rejection of the application; if approved, the account becomes visible on chase.com. A KYC database is maintained to uniquely identify each customer, and its information is updated as necessary.

3.2   Applying Testability Tactics

Control and Observe State

In this system, integration plays a vital role because many systems talk with each other to update information, so a testable architecture is key. End-to-end testing is difficult in lower environments because each component has a different development team and a different set of requirements. Specialized interfaces were developed so that if one component is not in place, the application can be tested against stubs of it. Only the required data is transmitted from one component to another; the rest is abstracted away. A sandbox is created for every major release, which is useful for testing in the development environment. Observing the state of the application is important here, and data must not be lost in transition. All of these components talk via webservices, where field mapping is a major concern, and each component's automated testing is separate, with its own validation and assertion checks.
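
The stubbing approach can be sketched as follows. The method name and canned response below are illustrative assumptions, not details of the real EAP service; the point is only that a stand-in with a fixed response lets the surrounding flow be tested when the real downstream component is unavailable.

```python
class EapServiceStub:
    """Stand-in for a downstream service that is unavailable in lower
    environments (method name and response are assumptions)."""
    def create_record(self, application):
        return {"status": "CREATED", "account_id": "TEST-0001"}

def approve_application(application, eap):
    # The approval flow itself can now be tested without the real
    # service: the stub supplies a predictable, canned response.
    response = eap.create_record(application)
    return response["status"] == "CREATED"

assert approve_application({"client": "c-123"}, EapServiceStub())
```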

Limit Complexity

Each component/module has its own development and testing team, which helps each team focus on a small piece of code. Best practices are followed so that, in case of failure, the system will not end up in a nondeterministic state.

3.3   Recommendations and Concluding Remarks

Following testability tactics really helps for larger applications in which multiple components interact with each other. I recommend following these tactics for any large system with multiple interactions.

4.  Conclusion and Future Work

4.1 Conclusion

Testability is concerned with the ease of testing software; it is defined by the keywords controllability, observability and limited complexity. Software that has these three properties is said to be testable. Controllability and observability become operational through strategies used in the software development process to make testing easier. Since architecture is one of the more stable assets in a software development process, considering testability at the architectural level can ensure a longer-term focus on testability. Architectural patterns need to be compared at the design level against a choice of testing strategies, requirements need to be reviewed, enhanced and refined by testers, and detailed designs need to be analyzed for inappropriate relations between classes.

4.2 Future Work

For design and implementation, having built-in test capabilities enhanced by a testing framework would be greatly advantageous for testability. Built-in testing involves instrumenting the production code of the software with some test code, which improves the controllability and observability of the software. Techniques for built-in testing include using assertions, having set and reset methods, and having logging facilities in the application. Another approach is a built-in test infrastructure responsible for exercising tests; this infrastructure should provide facilities that allow third-party test tools to access the software internals, execute test commands, and so on. The application owner only needs to make sure that it does not affect the performance and behavior of the application.

5.  References:

  • Bruntink and van Deursen [Bruntink 06], on the impact of structure on testing.
  • Len Bass, Paul Clements, and Rick Kazman. Software architecture in practice. Addison-Wesley Professional, 2nd edition, April 2003.
  • Benoit Baudry, Yves Le Traon, Gerson Sunyé, and Jean-Marc Jézéquel. Measuring and improving design patterns testability. In Proceedings of the 9th International Symposium on Software Metrics (METRICS '03), pages 50–59. IEEE Computer Society, 2003.
  • K. Beck and E. Gamma. JUnit: A cook's tour. Java Report, August 1999.
  • Kent Beck and Cynthia Andres. Extreme Programming Explained: Embrace Change (2nd Edition). Addison-Wesley Professional, 2004.
  • Robert V. Binder. Design for testability in object-oriented systems. Commun. ACM, 37(9):87–101, 1994.
  • Robert V. Binder. Testing object-oriented systems: models, patterns, and tools. Addison-Wesley Longman Publishing Co., Inc., 1999. 
  • Rex Black. Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional. John Wiley & Sons, Inc., 2007.
  • Richard A. DeMillo, Richard J. Lipton, and Frederick G. Sayward. Hints on test data selection: Help for the practicing programmer. IEEE Computer, 11(4):34–41, April 1978.
  • Richard A. DeMillo and A. J. Offutt. Constraint-based automatic test data generation. IEEE Transactions on Software Engineering, 17(9):900–910, September 1991.
  • R. S. Freedman. Testability of software components. IEEE Transactions on Software Engineering, SE-17(6):553–564, June 1991.
  • D. Gries. The Science of Programming. Springer-Verlag, 1981.
  • Richard G. Hamlet. Probable correctness theory. Information Processing Letters, pages 17–25, April 1987.
  • Jeffrey M. Voas. Quality time: How assertions can increase test effectiveness. IEEE Software, 14(2):118–122, 1997.
