

Testing is one of the most important activities in a software project, and it is equally important in the field of software reuse. Although reusable software components are individually tested to establish security evidence before being reused, they should still be tested each time they are imported into a new program. Neglecting the testing of reusable components increases the percentage of security risk, and the program may fail or be damaged as a result. For instance, Ariane 5 implemented an SRI (inertial reference system) that had performed well and been tested on Ariane 4 [1]. However, less than one minute after launch, the rocket exploded and the programme ended in failure [2]. Investigating the failure from the recorded data, Lions, Lubeck et al. (1996) stated in the Inquiry Board report that the problem originated in the inertial reference system. According to the failure analysis, nominal behaviour was expected for the first 36 seconds after launch; the failure occurred roughly 0.7 seconds after that point [1]. The Ariane 5 explosion shows that reusable software components need to be tested in every program that uses them, no matter how successfully they have been used or tested elsewhere; otherwise even small residual security risks can cause the program to fail.

However, another issue arises in software reuse testing: the effectiveness of retesting the implementations or functions of reusable components in new programs, because repeatedly running the same test cases may not meaningfully reduce the security risk of the program. Black-box testing exercises the functions of a software component, and these do not change unless the component itself is modified. Input-output relationship analysis is one technique used to effectively reduce the number of test cases needed for black-box testing [3, 4]. During the analysis, test cases whose input domains are unchanged can be identified and skipped, avoiding inefficient retesting. Documenting the reduced test cases then allows other testers to reuse them without reducing program security. Recording and formally documenting the testing results and information of a reusable component is convenient for other testers when the component is plugged into other programs.

Some small software engineering organizations do not follow maturity models for their testing capabilities, so the security quality of the software they reuse cannot be guaranteed. However, second-party and third-party testing organizations can test reusable components using industry-standard testing procedures. Although there are several types of industry standard, they provide similar procedures for testing reusable software components to ensure that the required level of security quality is reached [5].


Vulnerability is one of the challenges in software component reuse that affects the quality of software security. Software vulnerabilities exist when failures occur in a program's implementation or specification. Most reusable software components are well developed and tested, so the components themselves contain few flaws. Beizer reported that even good-quality source code averages one to three errors per hundred lines [6]. However, integrating well-developed reusable components into a large and complex program can still increase the number of flaws, such as buffer overflows. Different programs store differently sized data into a buffer; when the data exceeds the buffer's capacity, the extra data is written into adjacent memory. This leads to incorrect results and weakens the security features of the program. Some tools that address the buffer overflow problem are discussed in section ().

Reusable software components are commonly used and well known by programmers. Unfortunately, attackers and hackers are also familiar with their source code; they understand these components well and deliberately probe them for weaknesses. The more widely a component is reused, the higher the chance that vulnerabilities in the reused code will be triggered.

Code audits are processes that can automatically detect program vulnerabilities. However, audits can only expose general vulnerabilities, support only a limited set of programming languages, and are expensive, demanding many hours of labour [7].

When developing software, a good habit for reducing program vulnerability is to record and document all previously encountered vulnerabilities. However, the taxonomies of these recorded vulnerabilities are hard to differentiate [8]. A tool named Vulture was introduced in the "Mining Software Archives" project at Saarland University [9]. Vulture works by mapping a database of previous vulnerabilities onto the source code of each software component in the program. The vulnerability history is drawn from collection tools such as Bugzilla, which gather vulnerability information. From Vulture's analysis, it can be predicted whether recorded vulnerabilities exist in the program's reusable components [9]. However, there may be unknown vulnerabilities that the tool cannot currently discover, which suggests further work on mining components for unknown vulnerabilities.

In the Mozilla case study, figure (1) shows an analysis result produced by Vulture. The size of each box represents the size of the corresponding component, and a white box indicates no vulnerabilities. Coloured boxes indicate that vulnerabilities exist in the component: the darker the box, the more vulnerabilities are matched against the existing vulnerability database.

Software Programming Language

Several programming languages are in use, such as Java, C/C++, and Pascal. Reusable software components may not be available in every language, and different languages have their own structures, types, declarations, and control flow. For example, conflicts arise even when converting between two similar languages such as C++ and Java: Java has a built-in boolean type but no pointers, whereas C++ relies on pointer types and historically lacked a dedicated Boolean type [10]. Thus importing a reusable component written in one language into a program written in another carries risk. Specific language converters have been developed, such as a Fortran-to-C converter [11] and a Java-to-C converter [12]. However, these converters do not generalize to arbitrary languages, and there are no statistics demonstrating their strength.

In the software industry, a large number of reusable components are developed in type-unsafe programming languages [7]. Such languages cannot detect and prevent common memory errors, which leads to buffer overflows and array-boundary flaws [13]. However, there are tools that detect such vulnerabilities in type-unsafe languages, mostly focusing on C; BOON is one tool for detecting buffer overflows in C [14].

Another challenge in software component reuse is vocabulary. When the main program and the reusable components do not share a common vocabulary, communication between the new user and the original programmer suffers. A new user may misunderstand the purpose of a reusable component because its vocabulary is misleading [15].

Vulnerability detection tools

DaCosta et al. (2003) [7] observe that functions handling inputs and outputs carry a high risk of vulnerabilities; they name these Front Line Functions (FLFs). Vulnerabilities in FLFs can be detected by tools, because such vulnerabilities occur commonly and are recorded in vulnerability databases. Well-developed component-based software currently contains few common flaws, since tools such as Vulture, introduced by Neuhaus et al. (2007), and the FLF Finder automatically predict or detect vulnerabilities by matching the program's component code against databases that record and document the history and details of known vulnerabilities. When a match is found, Vulture reports the likelihood of vulnerability for the component. Vulture also produces a diagram showing how components with security vulnerabilities are distributed and located in the program: the darker a box (each box standing for a component), the higher the density of vulnerabilities it contains [8]. The FLF Finder, by contrast, has no facility for displaying the distribution of security vulnerabilities across the program. Both tools effectively detect known vulnerabilities that are publicly documented in an ad hoc database, but unexplored or unknown vulnerabilities whose information is absent from the database still cannot be discovered by either tool. The FLF Finder focuses specifically on vulnerabilities arising from function inputs and outputs, which narrows its detection area, whereas Vulture covers a wider area. Both tools rank the risk level of vulnerabilities. However, the example given in [8] shows that the predicted ranking differs from the actual ranking generated from the bug reports (see table).

Vulture and the FLF Finder are not the earliest tools to address vulnerabilities in software programs; several others are listed in [7]. Although these tools are not as powerful as Vulture and the FLF Finder, they can also analyse code to detect weaknesses in various areas. The FLF Finder combines and builds on five tools: Flawfinder, the Rough Auditing Tool for Security (RATS), It's The Software, Stupid - Security Scanner (ITS4), Secure Programming Lint (Splint), and Cqual. ITS4 introduced five risk levels (low risk, moderate risk, risky, very risky, and most risky) to convey the seriousness of risky functions. Flawfinder and RATS imitated ITS4 and were created at about the same time; the two development teams were unaware of each other until both tools were released, after which they agreed to combine their work into more advanced tools. Flawfinder and RATS detect known vulnerabilities in a program well. However, most of these tools are limited in use because they only work in the C/C++ language environment. In my opinion, this motivates further work on a general vulnerability detection tool that works across common language environments.

The tools discussed above detect vulnerabilities in individual software components, and both articles only address the prevention of flaws in individual components. However, flaws in a program exist not only inside each component but also in the interactions across the connections between components. One testing technique, fault injection, can be applied to component security testing [16]. The test model covers six aspects: User Interface, Memory, File System, API, Network, and Register Information (figure). Faults from these six fields are injected into the component under test to check whether any exceptional behaviour occurs. In this approach the tested unit can be an individual component or a set of interconnected components, so some interaction vulnerabilities may be detected. However, there is no evidence of what percentage of interaction vulnerabilities can be detected, which suggests further research on detecting vulnerabilities between components in a program. Also, the fault injection model does not provide a feature that differentiates detailed levels of component security. In these articles there is no standard risk-level definition to identify the importance of vulnerabilities; ITS4 introduced the idea of risk levels, but it has not been adopted by other detection tools. A clear risk-level definition enclosed with each component would annotate the importance of its vulnerabilities.


  1. Lions, J. L., Lubeck, L., Fauquemberque, J., Kahn, G., Kubbat, W., Leveday, S., Mazzini, L., Merle, D. & O'Halloran, C. July 1996, 'ARIANE 5 Flight 501 Failure', Report by the Inquiry Board.
  2. Filho, E. July 2006, 'Component Testing', Reuse in Software Engineering Group, viewed 28th September 2009.
  3. Schroeder, P. J., Faherty, P. & Korel, B. September 2002, 'Generating Expected Results for Automated Black-Box Testing', Proceedings of the 17th IEEE International Conference on Automated Software Engineering, pp. 139-148.
  4. Schroeder, P. J. & Korel, B. September 2000, 'Black-Box Test Reduction Using Input-Output Analysis', ACM Transactions on Software Engineering and Methodology, pp. 173-177.
  5. Councill, W. T. July/August 1999, 'Third-Party Testing and the Quality of Software Components', IEEE Software, Vol. 16, No. 4, pp. 55-57.
  6. Beizer, B. 1990, 'Software Testing Techniques', International Thomson Computer Press.
  7. DaCosta, D., Dahn, C., Mancoridis, S. & Prevelakis, V. 2003, 'Characterizing the "Security Vulnerability Likelihood" of Software Functions', Proceedings of the International Conference on Software Maintenance (ICSM'03), pp. 266-274.
  8. Neuhaus, S., Zimmermann, T., Holler, C. & Zeller, A. 2007, 'Predicting Vulnerable Software Components', Proceedings of the 14th ACM Conference on Computer and Communications Security, pp. 529-540.
  9. Neuhaus, S., Zimmermann, T. & Zeller, A. March 2009, Mining Software Archives, Saarland University, viewed 30th September 2009, <>.
  10. Terekhov, A. A. & Verhoef, C. November/December 2000, 'The Realities of Language Conversions', IEEE Software, Vol. 17, No. 6, pp. 111-124.
  11. Feldman, S. I. October 1990, 'Availability of f2c - a Fortran to C Converter', ACM SIGPLAN Fortran Forum, Vol. 9, No. 2, pp. 21-22.
  12. Shaylor, N. May 1997, JCC - A Java to C Converter, Adelaide University, viewed 28th September 2009.
  13. Firesmith, D. G. January-February 2003, 'Engineering Security Requirements', Journal of Object Technology, Vol. 2, No. 1, pp. 1-16.
  14. Cowan, C. January-February 2003, 'Software Security for Open-Source Systems', IEEE Security & Privacy, Vol. 1, No. 1, pp. 38-45.
  15. Monroe, R. T. & Garlan, D. 1996, 'Style-Based Reuse for Software Architectures', Proceedings of the 4th International Conference on Software Reuse (ICSR '96), pp. 84-93.
  16. Chen, J. F., Lu, Y. S. & Xie, X. D. 2007, 'Testing Approach of Component Security Based on Fault Injection', 2007 International Conference on Computational Intelligence and Security, pp. 763-767.