Software Architecture Design Approach
Paper Type: Free Essay | Subject: Computer Science | Wordcount: 3494 words | Published: 22nd June 2018
- Rizwan Umaid Ali
1 Generate and Test as a Software Architecture Design Approach
1.1 About the Writer
Len Bass from the Software Engineering Institute, CMU. Published in European Conference on Software Architecture 2009.
1.2 Introduction
Software architecture design has become a fundamental component of the software development life cycle. As with the other components of the life cycle, testing the design of the architecture is important and relates directly to the overall quality of the software application.
1.3 Problem
The goal is to make software architecture design a decision process that can test the design hypothesis, test its quality, and identify issues and rank them by priority. The process develops test cases at each step of the design process. The result is a sequential process in which each design is developed and tested, thus improving the overall design quality of the software system.
1.4 Design Hypothesis
Most designs are created in the context of an existing system, even when the system is created from scratch rather than modified. Considering this, our initial hypothesis can come from the following sources:
- The system we will modify or the new functionality we will add.
- A functionally similar system.
- A framework designed to provide services which will help in design process.
- A collection of legacy/open-source applications.
1.5 Establish Test Cases
Once we have our initial hypothesis, we have to determine how to decide whether the design satisfies the quality benchmark expected of the application. For this we establish test cases, drawing on three sources:
- Identify perspectives which can be used to generate test cases.
- Identify architecturally significant requirements.
- View specific use cases. A number of use cases can be derived by thinking about specific architectural views.
1.6 Test Procedure
Given the test cases for the design hypothesis, the following methods can be used to test the design and detect its shortcomings.
- Analytic models using quality attributes.
- Simulations of how the design will support the test cases.
- A prototype of the initial design. This needs more effort but gives the best results.
1.7 Test Result and Next Hypothesis
The test result will either show that the design hypothesis passes all tests and fulfills the quality requirements, or that there are shortcomings. The quality attributes these shortcomings relate to should be identified first. We can use two approaches to alter the design.
- Apply architectural patterns to the problems detected.
- Use architectural tactics to address specific quality attributes.
The updated (next) hypothesis goes through the above process iteratively until a design with the required quality is achieved or the time allocated for the design process runs out.
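The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's notation: the design representation (a dict of quality-attribute scores), the test cases, and the "tactic" function are all stand-ins of my own.

```python
def generate_and_test(design, test_cases, apply_tactic, budget):
    """Iterate: test the design hypothesis, alter it, repeat until it
    passes every test or the design-time budget runs out."""
    for _ in range(budget):
        failures = [name for name, test in test_cases.items() if not test(design)]
        if not failures:
            return design, True      # all tests pass: quality benchmark reached
        design = apply_tactic(design, failures[0])
    return design, False             # time allocated for design ran out

# Toy usage: a real tactic would restructure the design; here it merely
# raises the failing attribute's score.
design = {"performance": 0.4, "modifiability": 0.9}
tests = {
    "performance":   lambda d: d["performance"] >= 0.8,
    "modifiability": lambda d: d["modifiability"] >= 0.8,
}
bump = lambda d, attr: {**d, attr: d[attr] + 0.2}
final, ok = generate_and_test(design, tests, bump, budget=10)
```

The point of the sketch is the control flow: each failed test names a quality attribute, which selects the pattern or tactic applied to produce the next hypothesis.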
1.8 Conclusion
This paper presents a software architecture design process where we will test, validate and update our design until it reaches the quality benchmark.
The architect of the software system can use this process to identify shortcomings and make decisions for alternative design structures.
2 SecArch: Architecture-level Evaluation and Testing for Security
2.1 About the Writer
Sarah Al-Azzani and Rami Bahsoon from University of Birmingham. Published in Software Architecture (WICSA) and European Conference on Software Architecture (ECSA) in 2012.
2.2 Introduction
Software architecture models or views are evaluated to detect problems early in the software development lifecycle. At this stage we can detect critical security vulnerabilities and get a chance to improve quality at very low cost. This paper presents a methodology for detecting security vulnerabilities caused by implied scenarios and race conditions.
2.3 Problem
Incorporating multiple views of an architecture, studying the communications between them, and providing ways to analyze security concerns in concurrent systems. This is done by comparing complete and incomplete system models using two methods:
- one for detecting implied scenarios using behaviour models,
- and one for detecting race conditions using scenario diagrams.
2.4 Scenario-based specifications
Scenario-based specifications are based on procedural flow through components. Each scenario explains a partial view of the concurrent system. The scenario-based model has the following three properties:
- the composition of scenarios from multiple component views of the software system,
- the possible continuations between multiple scenarios, and
- the hidden implied scenarios.
2.5 Implied Scenarios
Implied scenarios can be formed by dynamically combining two different scenarios and providing an architectural flow for them in a state representation. Below is an example of a behaviour model that combines two different scenarios. It uses an incremental algorithm for detecting inconsistent implied scenarios from sequence models.
Figure 1 behavior model example
2.6 Detecting Race Conditions
We can apply race-condition scenarios to the above model and identify security vulnerabilities. Below are the three possible cases.
- Race Condition 1: disabling the server during authentication.
- Race Condition 2: the user commits to buy an item while the server is being disabled.
- Race Condition 3: the server is disabled while the user is logging off.
Below are sequence diagrams for these three race conditions.
Figure 2 Race Conditions
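The core of this kind of analysis is enumerating the possible interleavings of two partial scenarios and flagging the orderings that violate an assumption. A minimal sketch, with event names of my own invention rather than the paper's models, for Race Condition 1:

```python
def interleavings(a, b):
    """All order-preserving merges of two scenario event sequences."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

# Two partial scenarios: a user authenticating, an admin disabling the server.
user_scenario  = ["request_login", "auth_response"]
admin_scenario = ["disable_server"]

def is_race(trace):
    # Race Condition 1: the server is disabled mid-authentication
    return (trace.index("request_login")
            < trace.index("disable_server")
            < trace.index("auth_response"))

races = [t for t in interleavings(user_scenario, admin_scenario) if is_race(t)]
```

Of the three possible interleavings, exactly one places the disable event between the login request and its response, which is the vulnerable ordering the sequence diagrams above depict.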
2.7 Conclusion
This paper presented an incremental architecture evaluation method that merges behavior models with structural analysis for improved detection of inconsistencies. We examined the concept of implied scenarios and detection of race conditions.
The writers also compared the proposed method with current industry practices and tested it on industry projects, finding that it gives better results. Future work will focus on generating test cases to perform live testing on the system under test.
3 Towards a Generic Architecture for Multi-Level Modeling
3.1 About the Writer
Thomas Aschauer, Gerd Dauenhauer, Wolfgang Pree from University of Salzburg. Published in European Conference on Software Architecture 2009.
3.2 Introduction
Software architecture modeling frameworks are essential for representing architectures, their views, and the viewpoints those views are derived from.
Conventional modeling approaches such as UML are not expressive enough to describe both the models and the meta-models (which define the models) of an architecture.
3.3 Problem
Conventional modeling techniques use general-purpose meta-models, which are not sufficient for modern software models. Model-driven architecture needs a more generic approach to describe multi-level architectures.
3.4 Model-Driven Engineering and Parameter Generation
Model-driven engineering (MDE) is a method for managing the complexity of developing large, software-intensive systems. In MDE, models are the main artifacts describing the system under design. This paper aims at developing a framework for model-driven generation of automation-system configuration parameters using a testbed platform.
The configuration parameters for the automation system can be generated automatically when a testbed model includes hardware and software components.
Figure 3 Testbed configuration MDE
3.5 Presented Prototypical implementation
The example below illustrates the modeling approach presented in this paper.
Component is an example of the fixed meta-model elements represented as code in the environment. Different types of engines can be created either by instantiating Component, or by cloning the initial Engine and copying it to the new engine.
In the example, Engine has two attributes, Inertia and MaxSpeed. In the prototypical approach each element is an instance and must provide values for these attributes. Diesel and Otto represent two kinds of engines; since they are cloned from Engine, they receive copies of the attributes Inertia and MaxSpeed, as well as their values. Italic script marks such copied attributes; grey text indicates that the attribute values are kept unchanged.
Figure 4 Meta-models example
In Figure 4 DType represents a family of diesel engines. D1 finally is a concrete, physically existing member.
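The prototypical (clone-based) instantiation chain of Figure 4 can be sketched as follows. The attribute values are invented for illustration; the paper's actual framework and element names are not reproduced here.

```python
import copy

class Element:
    """Prototypical model element: new elements are made by cloning an
    existing one, receiving copies of its attributes and their values."""
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)

    def clone(self, name, **overrides):
        child = copy.deepcopy(self)
        child.name = name
        child.attributes.update(overrides)  # copied values may be refined
        return child

# Engine -> Diesel -> DType -> D1, mirroring the cloning chain in Figure 4
engine = Element("Engine", Inertia=None, MaxSpeed=None)
diesel = engine.clone("Diesel", Inertia=0.6, MaxSpeed=4500)
dtype  = diesel.clone("DType")            # a family of diesel engines
d1     = dtype.clone("D1", Inertia=0.62)  # a concrete, physically existing member
```

Each level may either keep the copied value (grey text in the figure) or override it, which is exactly what `clone` with keyword overrides expresses.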
3.6 Conclusion
This paper presented applications of multi-level modeling in the domain of testbed automation systems, explained why conventional modeling is insufficient for the authors' MDE requirements, and showed how multi-level modeling can solve the representation issues. The authors presented an approach to represent models in much more detail with simple notations.
4 Automated reliability prediction from formal architectural descriptions
4.1 About the Writer
João M. Franco, Raul Barbosa and Mário Zenha-Rela from the University of Coimbra, Portugal. Published in Software Architecture (WICSA) and European Conference on Software Architecture (ECSA) in 2012.
4.2 Introduction
Quality attributes (i.e., non-functional requirements such as performance, safety or reliability) of software architectures are assessed during the design phase, so that early decisions are validated and the quality requirements are achieved.
4.3 Problem
These quality requirements are most often checked manually, which is time-consuming and error-prone due to the overwhelming complexity of designs.
The writers present a new approach to assess the reliability of software architectures. It consists in extracting and validating a Markov model from the system specification written in an Architecture Description Language (ADL).
4.4 Reliability Prediction Process
Many different methods for reliability prediction are known, each targeting different failure behaviours and different reliability assessment methods. The writers present the following process for reliability prediction.
- Identify the architecture, its modules, and their interactions.
- Specify the probability of failure of each module as a percentage.
- Combine the architecture with the failure behaviour. Below is an example of a batch-sequential style state model using the Markov model.
Figure 5 Markov model example
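For a batch-sequential style the Markov model is acyclic, so system reliability is the probability of being absorbed in the success state. A minimal sketch, with module names and failure probabilities invented for illustration:

```python
# Hypothetical batch-sequential architecture: each module hands control to
# the next with its own success probability; any failure absorbs into FAIL.
TRANSITIONS = {
    "Parse":    [("Validate", 0.99), ("FAIL", 0.01)],
    "Validate": [("Store", 0.95), ("FAIL", 0.05)],
    "Store":    [("OK", 0.90), ("FAIL", 0.10)],
}

def reliability(state):
    """Probability of eventually reaching the OK (success) state."""
    if state == "OK":
        return 1.0
    if state == "FAIL":
        return 0.0
    return sum(p * reliability(nxt) for nxt, p in TRANSITIONS[state])

r = reliability("Parse")  # = 0.99 * 0.95 * 0.90 for this serial chain
```

For a serial chain this reduces to the product of the per-module success probabilities; the Markov formulation matters when the architecture has branches or loops between components.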
4.5 Validation of the Process
The validation of the process presented by the writer was done in two steps:
- Validity of Reliability Prediction
- Validity with different architectural styles.
The validation results were compared to previous research studies and found to be similar, indicating that the mathematical models are accurate.
5 In Search of a Metric for Managing Architectural Technical Debt
5.1 About the Writer
Robert L. Nord and Ipek Ozkaya from the Software Engineering Institute, CMU (with P. Kruchten and M. Gonzalez-Rojas). Published in Software Architecture (WICSA) and European Conference on Software Architecture (ECSA) in 2012.
5.2 Introduction
Technical debt is a trade-off between short-term and long-term value. Taking shortcuts to optimize the delivery of features in the short term incurs debt, analogous to financial debt, that must be paid off later to optimize long-term success. This paper demonstrates an architecture-focused, measurement-based approach to calculating technical debt by describing an application under development.
5.3 Problem
Technical debt analysis relies heavily on system evaluation. An organization that has to evolve its system must make sure that future development will not increase its debt and will have a lower cost. In this paper the writers develop a metric that assists in strategically managing technical debt.
5.4 Architecture Debt Analysis
We analyze technical debt along two different paths with different priorities.
Path #1: Deliver soon.
To deliver a working version of the system quickly, the plan calls for making the minimum required effort at the beginning.
Path #2: Reduce rework and enable compatibility.
This path requires an investment in infrastructure during the first deliveries.
The cost comparison of both paths is illustrated in the table below.
| Path #1    | Implementation Cost | Rework Cost |
|------------|---------------------|-------------|
| Release 4  | 35                  | 9           |
| Release 3  | 27                  | 10          |
| Release 2  | 19                  | 10          |
| Release 1  | 35                  | 0           |
| Total Cost | 116                 | 29          |

| Path #2    | Implementation Cost | Rework Cost |
|------------|---------------------|-------------|
| Release 4  | 9                   | 0           |
| Release 3  | 9                   | 0           |
| Release 2  | 9                   | 0           |
| Release 1  | 67                  | 0           |
| Total Cost | 94                  | 0           |
Table 1 Cost Comparison
We can calculate the total cost T as a function of the implementation cost Ci and the rework cost Cr:
T = F(Ci, Cr)
For simplicity we assume the function simply sums the two costs. We can now compare the total cost with the cumulative value.
| Path #1 |                                | Release 1 | Release 2 | Release 3 | Release 4 |
|---------|--------------------------------|-----------|-----------|-----------|-----------|
|         | Cumulative value               | 36        | 81        | 135       | 197       |
|         | % of total value               | 18%       | 41%       | 68%       | 100%      |
|         | Cost (Ci + Cr)                 | 35        | 64        | 101       | 145       |
|         | % of total implementation cost | 37%       | 68%       | 108%      | 155%      |

| Path #2 |                                | Release 1 | Release 2 | Release 3 | Release 4 |
|---------|--------------------------------|-----------|-----------|-----------|-----------|
|         | Cumulative value               | 36        | 81        | 135       | 197       |
|         | % of total value               | 18%       | 41%       | 68%       | 100%      |
|         | Cost (Ci + Cr)                 | 67        | 76        | 85        | 94        |
|         | % of total implementation cost | 71%       | 81%       | 90%       | 100%      |
Table 2 Cost comparison with cumulative cost
5.5 Modeling Rework
An important challenge in Agile software development is to value long-term goals over short-term ones. Taking an architectural design decision today generally costs less than refactoring the design in future implementations.
An organization should have the following perspective on its technical debt.
- Focusing only on short-term goals puts the organization in technical jeopardy once the debt can no longer be handled.
- Shortcuts can bring short-term success until the rework costs start to accrue and the cost and timeline become unmanageable.
- Architectural decisions require active follow-up and continuous cost analysis, to make sure that the design decision will have a positive impact on future development costs.
5.6 Conclusion
From this research we conclude that future development of a well-designed application has lower cost and is less uncertain. Technical debt is therefore lower if the architecture is well defined and fulfills the quality attribute requirements.
6 Research Topic: Testing Software Architectural Changes and Adapting Best Practices to Achieve the Highest Quality in a Quantifiable Manner
6.1 Introduction
We have looked into testing methodologies, design processes, and the possible technical debt of a software architecture. We now look at how technical debt is affected if, due to future requirements, the architecture has to be changed.
6.2 Proposed Research Problem
We first estimate the technical debt of the existing software architecture and software system. We then use design changes and code changes to estimate technical debt and quality attributes. The prediction is based on comparisons with similar change bursts that occurred in the architecture. The views of the software architecture will be used. This is applicable in Agile development.
6.3 Types of changes
We can classify each type of change in the architecture by analyzing its overall impact on the architecture and the possibility that it creates technical debt. We also assign a propagation value to each type of debt so that its estimated severity can be quantified.
- Small architectural change in one or some views.
  - Low technical debt increase (0.10)
- Addition of new architecture (architecture for newly added functionality).
  - Medium technical debt increase (0.30)
- Small changes in several views.
  - High technical debt increase (0.60)
- Massive architectural change in several views.
  - High technical debt increase (0.80)
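The classification above can be captured as a lookup table so the debt increase of a proposed change set can be scored. A minimal sketch; the class labels and the summing rule are my own illustration of the proposal, not an established metric:

```python
# Proposed propagation values from the classification above;
# the dictionary keys are illustrative labels of my own.
PROPAGATION = {
    "small_change_few_views":    0.10,
    "new_architecture_added":    0.30,
    "small_changes_many_views":  0.60,
    "massive_change_many_views": 0.80,
}

def estimated_debt_increase(changes):
    """Crude aggregate: sum the propagation values of all proposed changes."""
    return sum(PROPAGATION[c] for c in changes)

score = estimated_debt_increase(["small_change_few_views",
                                 "new_architecture_added"])
```

A summed score like this could then be compared before and after an architectural update, in line with the comparison-based approach proposed in the next section.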
6.4 Proposed Solution
After analyzing the research papers and the book 'Software Architecture in Practice', I can give the following points on how the technical debt of a new architecture can be managed.
- Compare the updated architecture with the original and see how the updates have increased the technical debt.
- Apply the same test cases that were used on the initial software architecture.
- See how quality attributes have increased or decreased after the update.
6.5 Reduction of Technical Debt
To reduce technical debt after architectural changes, the following strategies can be adopted.
6.5.1 Refactoring
- Apply architectural patterns to improve several quality attributes.
- Use architectural tactics to address specific quality attributes.
6.5.2 Retaining existing Architecture Models
- Continue using the existing architectural patterns.
- Identify the modifiability tactics already in use and stick to those tactics.
7 References
[1] L. Bass. Generate and Test as a Software Architecture Design Approach. In 2009 Joint Working IEEE/IFIP Conference on Software Architecture (WICSA) and European Conference on Software Architecture (ECSA), pages 309-312, 2009.
[2] S. Al-Azzani and R. Bahsoon. SecArch: Architecture-level Evaluation and Testing for Security. In 2012 Joint Working IEEE/IFIP Conference on Software Architecture (WICSA) and European Conference on Software Architecture (ECSA), pages 51-60, Aug. 2012.
[3] T. Aschauer, G. Dauenhauer, and W. Pree. Towards a Generic Architecture for Multi-Level Modeling. In 2009 European Conference on Software Architecture (ECSA), pages 121-130, 2009.
[4] J. Franco, R. Barbosa, and M. Zenha-Rela. Automated Reliability Prediction from Formal Architectural Descriptions. In 2012 Joint Working IEEE/IFIP Conference on Software Architecture (WICSA) and European Conference on Software Architecture (ECSA), pages 302-309, Aug. 2012.
[5] R. Nord, I. Ozkaya, P. Kruchten, and M. Gonzalez-Rojas. In Search of a Metric for Managing Architectural Technical Debt. In 2012 Joint Working IEEE/IFIP Conference on Software Architecture (WICSA) and European Conference on Software Architecture (ECSA), pages 91-100, 2012.