Weighted Method Per Class Information Technology Essay


Nowadays, a quality engineer can choose from a large number of object-oriented metrics. The problem is not a lack of metrics but the selection of those metrics that meet the specific needs of each software project; a quality engineer faces the task of choosing an appropriate set of metrics for their measurements. Some object-oriented metrics exploit the knowledge gained from metrics used in structured programming and adapt those measurements to the needs of object-oriented programming. Others have been developed specifically for object-oriented programming, and it would be pointless to apply them to structured programming. The above figure shows the hierarchical structure of the metrics.


CK METRICS MODEL:

Chidamber and Kemerer defined the so-called CK metrics suite [13]. The CK metrics have generated a significant amount of interest and are currently the most well-known suite of measurements for OO software [17]. Chidamber and Kemerer proposed six metrics, discussed below.

Weighted Method per Class (WMC)

WMC measures the complexity of a class. The complexity of a class can, for example, be calculated as the sum of the cyclomatic complexities of its methods. A high WMC value indicates that the class is more complex than one with a low value.
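As a rough illustration, WMC can be sketched as a simple sum of per-method complexities. This is a minimal sketch; the function name and the complexity values are illustrative, and in practice the values would come from a cyclomatic-complexity tool:

```python
# Minimal sketch: WMC as the sum of per-method complexities.
# The complexity values would normally come from a cyclomatic-complexity tool.
def wmc(method_complexities):
    """Weighted Methods per Class: sum of the complexities of a class's methods.

    If every complexity is taken as 1, WMC reduces to a plain method count.
    """
    return sum(method_complexities)

# A class whose three methods have cyclomatic complexities 1, 3 and 2:
print(wmc([1, 3, 2]))  # 6
```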

Depth of Inheritance Tree (DIT)

The DIT metric is the length of the maximum path from a class node to the root of the inheritance tree; it measures how far down the inheritance hierarchy a class is declared. The following figure shows the DIT values for a simple class hierarchy. DIT reflects the complexity of the behaviour of a class, the complexity of its design, and its potential for reuse.

Fig. The value of DIT for the class hierarchy

Thus a system with many inheritance layers can be hard to understand. On the other hand, a large DIT value indicates that many methods might be reused.

In short:

(DIT) assesses how deep in a class hierarchy a class is. This metric indicates the potential for reuse of a class and its probable ease of maintenance. A class with a small DIT has much potential for reuse (i.e. it tends to be a general, abstract class).
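Under single inheritance, DIT is just the number of parent links followed from a class to the root. A minimal sketch, assuming the hierarchy is given as a child-to-parent map (the class names are illustrative):

```python
# Sketch: DIT from a child -> parent map, assuming single inheritance.
def dit(cls, parent):
    """Depth of Inheritance Tree: number of edges from cls up to the root."""
    depth = 0
    while cls in parent:   # root classes have no parent entry
        cls = parent[cls]
        depth += 1
    return depth

parent = {"C11": "C1", "C111": "C11"}  # C1 is the root
print(dit("C111", parent))  # 2
print(dit("C1", parent))    # 0
```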

Number of Children (NOC)

This metric measures how many subclasses inherit the methods of the parent class. As shown in the figure above, class C1 has three children: subclasses C11, C12, and C13. The size of NOC approximately indicates the level of reuse in an application: as NOC grows, reuse increases. On the other hand, as NOC increases, the amount of testing also increases, because more children in a class indicate more responsibility. So NOC represents the effort required to test the class, as well as reuse.

In short:

(NOC) is a simple count of the classes associated with a given class through an inheritance relationship. It can be used to assess the potential influence a class has on the overall design. Classes with many children are a frequently occurring sign of bad design, and NOC helps detect such classes.
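NOC can be read off the same child-to-parent map used for DIT: count how many classes name the given class as their parent. A minimal sketch using the C1/C11/C12/C13 example from the text:

```python
# Sketch: NOC counted from a child -> parent map.
def noc(cls, parent):
    """Number of Children: immediate subclasses of cls."""
    return sum(1 for p in parent.values() if p == cls)

parent = {"C11": "C1", "C12": "C1", "C13": "C1"}
print(noc("C1", parent))   # 3, as in the figure described above
print(noc("C11", parent))  # 0
```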

Coupling between objects (CBO)

The idea behind this metric is that an object is coupled to another object if the two objects act upon each other. A class is coupled with another if the methods of one class use the methods or attributes of the other class. An increase in CBO indicates that the reusability of a class will decrease. Thus, the CBO value for each class should be kept as low as possible.
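Since coupling counts in either direction, CBO for a class is the number of distinct other classes it uses or is used by. A minimal sketch, assuming a map from each class to the set of classes its methods reference (the class names are hypothetical):

```python
# Sketch: CBO from a map of which classes each class's methods reference.
def cbo(cls, uses):
    """Coupling Between Objects: distinct other classes cls is coupled to,
    counting coupling in either direction."""
    outgoing = uses.get(cls, set())
    incoming = {c for c, targets in uses.items() if cls in targets}
    return len((outgoing | incoming) - {cls})

uses = {"Order": {"Customer", "Invoice"}, "Invoice": {"Order"}}
print(cbo("Order", uses))    # 2 (coupled to Customer and Invoice)
print(cbo("Invoice", uses))  # 1 (coupled to Order only)
```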

Response for a Class (RFC)

RFC is the number of methods that can be invoked in response to a message to a class. Pressman [20] states that as RFC increases, the effort required for testing also increases, because the test sequence grows. As RFC increases, the overall design complexity of the class increases and the class becomes harder to understand; lower values, on the other hand, indicate greater polymorphism. The value of RFC is typically between 0 and 50 for a class, although in some cases it can reach 100; it varies from project to project.

In short:

(RFC) is defined as a count of the set of methods that can be potentially executed in response to a message received by an instance of the class.
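Taking only first-level calls (the usual simplification), the response set is the class's own methods plus the methods they invoke directly. A minimal sketch; the method names and call map are illustrative:

```python
# Sketch: RFC = |own methods plus methods they call directly| (first level only).
def rfc(own_methods, calls):
    """Response For a Class: own methods plus methods invoked by them."""
    response = set(own_methods)
    for m in own_methods:
        response |= calls.get(m, set())
    return len(response)

calls = {
    "Stack.push": {"List.append"},
    "Stack.pop": {"List.remove", "List.last"},
}
print(rfc(["Stack.push", "Stack.pop"], calls))  # 5
```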

Lack of Cohesion in Methods (LCOM)

This metric uses the notion of degree of similarity of methods. LCOM measures the amount of cohesiveness present, how well a system has been designed, and how complex a class is [23]. LCOM is the count of method pairs whose similarity is zero, minus the count of method pairs whose similarity is not zero. Raymond [23] discusses, for example, a class C with three methods M1, M2, and M3. Let I1 = {a, b, c, d, e}, I2 = {a, b, e}, and I3 = {x, y, z}, where Ii is the set of instance variables used by method Mi. Two method pairs share no instance variables (I1 ∩ I3 = ∅ and I2 ∩ I3 = ∅), while one pair shares at least one (I1 ∩ I2 = {a, b, e}). So LCOM = 2 - 1 = 1. [13] states, "Most of the methods defined on a class should be using most of the data members most of the time." If LCOM is high, methods may be coupled to one another via attributes, and the class design will be complex. So designers should keep cohesion high, that is, keep LCOM low.

In short:

(LCOM) is the difference between the number of method pairs whose similarity is zero and the number of method pairs whose similarity is not zero. LCOM judges cohesiveness among class methods: low LCOM indicates high cohesiveness, and vice versa.
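The worked example above can be reproduced directly. A minimal sketch of the Chidamber-Kemerer LCOM, taking each method as the set of instance variables it uses (the variable names follow the example in the text):

```python
from itertools import combinations

# Sketch: LCOM = (#method pairs sharing no variables) - (#pairs sharing some),
# floored at 0 as in the original definition.
def lcom(instance_vars):
    p = q = 0
    for a, b in combinations(instance_vars, 2):
        if a & b:       # the pair shares at least one instance variable
            q += 1
        else:           # the pair is disjoint
            p += 1
    return max(p - q, 0)

I1, I2, I3 = {"a", "b", "c", "d", "e"}, {"a", "b", "e"}, {"x", "y", "z"}
print(lcom([I1, I2, I3]))  # 2 - 1 = 1, matching the example above
```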

III. MOOD METRICS MODEL – (METRICS FOR OBJECT ORIENTED DESIGN)

The MOOD metrics set refers to basic structural mechanisms of the OO paradigm: encapsulation (MHF and AHF), inheritance (MIF and AIF), polymorphism (PF), and message-passing (CF). All are expressed as quotients. The set includes the following metrics:

Method Hiding Factor (MHF)

MHF is defined as the ratio of the sum of the invisibilities of all methods defined in all classes to the total number of methods defined in the system under consideration. The invisibility of a method is the percentage of the total classes from which this method is not visible.

Attribute Hiding Factor (AHF)

AHF is defined as the ratio of the sum of the invisibilities of all attributes defined in all classes to the total number of attributes defined in the system under consideration.

Method Inheritance Factor (MIF)

MIF is defined as the ratio of the sum of the inherited methods in all classes of the system under consideration to the total number of available methods (locally defined plus inherited) for all classes.

Attribute Inheritance Factor (AIF)

AIF is defined as the ratio of the sum of inherited attributes in all classes of the system under consideration to the total number of available attributes (locally defined plus inherited) for all classes.
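The inheritance factors are plain quotients, so MIF can be sketched on a toy class model; AIF is analogous with attributes in place of methods. This is an assumption-laden sketch: real MOOD tooling would extract the local and inherited counts from source code rather than take them as given:

```python
# Sketch: MIF = sum of inherited methods / sum of available (local + inherited)
# methods over all classes. The per-class counts here are hand-supplied.
def mif(classes):
    inherited = sum(c["inherited"] for c in classes)
    available = sum(c["inherited"] + c["local"] for c in classes)
    return inherited / available if available else 0.0

classes = [
    {"local": 4, "inherited": 0},  # a root class defining 4 methods
    {"local": 2, "inherited": 4},  # a subclass reusing the root's 4 methods
]
print(mif(classes))  # 4 / 10 = 0.4
```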

Polymorphism Factor (PF)

PF is defined as the ratio of the actual number of different polymorphic situations for class Ci to the maximum number of possible distinct polymorphic situations for class Ci.

Coupling Factor (CF)

CF is defined as the ratio of the actual number of couplings not imputable to inheritance to the maximum possible number of couplings in the system.

V. QMOOD (QUALITY MODEL FOR OBJECT-ORIENTED DESIGN)

The QMOOD (Quality Model for Object-Oriented Design) is a comprehensive quality model that establishes a clearly defined and empirically validated model to assess OOD quality attributes, such as understandability and reusability, and relates them through mathematical formulas to structural OOD properties such as encapsulation and coupling. The QMOOD model consists of six equations that establish relationships between six OOD quality attributes (reusability, flexibility, understandability, functionality, extendibility, and effectiveness) and eleven design properties.

All of these properties are measurable directly from UML class diagrams.

VI. OTHER OO METRICS

Chen et al. [9] proposed the following metrics: 1. CCM (Class Coupling Metric), 2. OXM (Operating Complexity Metric), 3. OACM (Operating Argument Complexity Metric), 4. ACM (Attribute Complexity Metric), 5. OCM (Operating Coupling Metric), 6. CM (Cohesion Metric), 7. CHM (Class Hierarchy of Method), and 8. RM (Reuse Metric). Metrics 1 through 3 are subjective in nature; metrics 4 through 7 involve counts of features; and metric 8 is a Boolean (0 or 1) indicator metric.

References

[9] Chen, J-Y., Lum, J-F.: "A New Metrics for Object-Oriented Design." Information and Software Technology 35, 4 (April 1993): 232-240.

[13] Chidamber, Shyam, Kemerer, Chris F.: "A Metrics Suite for Object-Oriented Design." M.I.T. Sloan School of Management E53-315, 1993.

[17] Harrison, R., Samaraweera, L.G., Dobie, M.R., Lewis, P.H.: "Comparing Programming Paradigms: An Evaluation of Functional and Object-Oriented Programs." Software Engineering Journal, vol. 11, pp. 247-254, July 1996.

[20] Pressman, Roger S.: "Software Engineering." Fifth edition, ISBN 0077096770.

[23] Raymond, J.A., Alex, D.L.: "A Data Model for Object-Oriented Design Metrics." Technical Report, 1997, ISBN 0836 0227.

[24] Alexander et al.: "Mathematical Assessment of Object-Oriented Design Quality." IEEE Transactions on Software Engineering, vol. 29, no. 11, November 2003.

Code and Design Metrics for Object-Oriented Systems

Lindroos

Object-oriented design and development have become popular in today's software development environment. The benefits of object-oriented software development are now widely recognized [AlC98]. Object-oriented development requires not only different approaches to design and implementation; it also requires different approaches to software metrics. Metrics for object-oriented systems are still a relatively new field of study. Traditional metrics such as lines of code and cyclomatic complexity [McC76, WEY88] have become standard for procedural programs [LIK00, AlC98].

The metrics for object-oriented systems are different due to the different approach in program paradigm and in object-oriented language itself. An object-oriented program paradigm uses localization, encapsulation, information hiding, inheritance, object abstraction and polymorphism, and has different program structure than in procedural languages. [LIK00]

Software metrics are often categorized into project metrics and design metrics [LoK94]. Project metrics are used to predict project needs, such as staffing levels and total effort. They measure the dynamic changes that have taken place in the state of the project, such as how much has been done and how much is left to do. Project metrics are more global and less specific than design metrics. Unlike design metrics, project metrics do not measure the quality of the software being developed.

Design metrics are measurements of the static state of the project design at a particular point in time. These metrics are more localized and prescriptive in nature. They look at the quality of the way the system is being built. [LoK94]

Design metrics can be divided into static metrics and dynamic metrics [SyY99]. Dynamic metrics have a time dimension and their values tend to change over time; thus dynamic metrics can only be calculated on the software as it executes. Static metrics remain invariant and are usually calculated from the source code, design, or specification.

Why is it important to measure object-oriented metrics?

The intent of the proposed metrics is to help object-oriented developers and managers foster better designs, more reusable code, and better estimates. The metrics should be used to identify anomalies as well as to measure progress. The numbers are not meant to drive the design of a project's classes or methods, but rather to help focus effort on potential areas of improvement. The metrics can help each of us improve the way we develop software. The metrics, as supported by tools, make us think about how we subclass, write methods, use collaboration, and so on. [LoK94] They help the engineer recognize parts of the software that might need modification and re-implementation. The decision about which changes to make should not rely only on the metric values [SyY99].

The metrics are guidelines, not rules, and they should be used to support the desired motivations. The intent is to encourage more reuse through better use of abstractions and division of responsibilities, and better designs through the detection and correction of anomalies. Positive incentives, training and mentoring, and effective design reviews improve the probability of achieving better results with object-oriented metrics. [LoK94]

Software should be designed for maintenance [AlC98]. The design evaluation step is an integral part of achieving a high-quality design. The metrics should help improve the total quality of the end product, which means that quality problems can be resolved as early as possible in the development process. It is well known that the earlier problems are resolved, the less they cost the project in terms of time-to-market, quality, and maintenance.

Code and design metrics suite

Metric 1: Weighted Methods per Class (WMC)

WMC is the sum of the complexities of the methods of a class. Consider a class C1 with methods M1, ..., Mn defined in the class, and let c1, ..., cn be the complexities of those methods [ChK94]. Then:

WMC = c1 + c2 + ... + cn

WMC measures size as well as the logical structure of the software. The number of methods and their complexity are predictors of how much time and effort is required to develop and maintain the class [SyY99, ChK94]. The larger the number of methods in a class, the greater the potential impact on inheriting classes, and consequently more effort and time are needed for maintenance and testing [YSM02]. Furthermore, classes with a large number of complex methods are likely to be more application specific, limiting the possibility of reuse. Thus WMC can also be used to estimate the usability and reusability of the class [SyY99]. If all method complexities are taken to be unity, WMC equals the Number of Methods (NMC) metric [YSM02].

Metric 2: Depth of Inheritance Tree (DIT)

The depth of a class within the inheritance hierarchy is the maximum length from the class node to the root of the tree, measured as the number of ancestor classes. The deeper a class is in the hierarchy, the greater the number of methods it is likely to inherit, making its behaviour more complex to predict. Deeper trees constitute greater design complexity, since more methods and classes are involved; on the other hand, the deeper a particular class is in the hierarchy, the greater the potential reuse of inherited methods. [ChK94] For languages that allow multiple inheritance, the longest path is usually taken [YSM02].

A large DIT also affects understandability and testability [LIK00, SyY99]. Inheritance decreases complexity by reducing the number of operations and operators, but this abstraction of objects can make maintenance and design difficult.

Metric 3: Number of Children (NOC)

The number of children metric equals the number of immediate subclasses below a class in the class hierarchy. The greater the number of children, the greater the reuse, since inheritance is a form of reuse. The greater the number of children, however, the greater the likelihood of improper abstraction of the parent class; a class with a large number of children may indicate misuse of subclassing. The number of children gives an idea of the potential influence a class has on the design, and a class with a large number of children may require more testing of the methods in that class. [ChK94] In addition, a class with a large number of children must be flexible in order to provide services in a large number of contexts [YSM02].


Metric 4: Coupling between object classes (CBO)

CBO for a class is a count of the number of other classes to which it is coupled. CBO relates to the notion that an object is coupled to another object if one of them acts on the other, i.e., the methods of one use the methods or instance variables of the other. Excessive coupling between object classes is detrimental to modular design and prevents reuse. The more independent a class is, the easier it is to reuse in another application. To improve modularity and promote encapsulation, inter-object class couplings should be kept to a minimum. [ChK94] Direct access to foreign instance variables has generally been identified as the worst type of coupling [SyY99].

The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore maintenance is more difficult. A measure of coupling is useful to determine how complex the testing of various parts of a design is likely to be. The higher the inter-object class coupling, the more rigorous the testing needs to be.

Metric 5: Response For a Class (RFC)

The response set of a class is the set of methods that can potentially be executed in response to a message received by an object of that class. RFC measures both external and internal communication; in particular, it includes methods called from outside the class, so it is also a measure of the potential communication between the class and other classes. [ChK94, AlC98] RFC is a more sensitive measure of coupling than CBO since it considers methods instead of classes [YSM02].

If a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated since it requires a greater level of understanding required on the part of the tester. The larger the number of methods that can be invoked from a class, the greater the complexity of the class. A worst-case value for possible responses will assist in appropriate allocation of testing time. [ChK94]

Metric 6: Lack of Cohesion in Methods (LCOM)

The LCOM is a count of the number of method pairs whose similarity is 0 minus the count of method pairs whose similarity is not zero. The larger the number of similar methods, the more cohesive the class, which is consistent with traditional notions of cohesion that measure the inter-relatedness between portions of a program. If none of the methods of a class display any instance behavior, i.e., do not use any instance variables, they have no similarity and the LCOM value for the class will be zero. [ChK94]

Cohesiveness of methods within a class is desirable, since it promotes encapsulation. Lack of cohesion implies classes should probably be split into two or more subclasses. Any measure of disparateness of methods helps identify flaws in the design of classes. Low cohesion increases complexity; thereby it increases the likelihood of errors during the development process. [ChK94]

Evaluation of metrics / Application of OO metrics

Chidamber and Kemerer, who introduced the basic suite for collecting object-oriented code and design metrics, tested the metrics suite with two projects. The metrics proposed in their paper were collected using automated tools developed for this research at two different organizations, referred to here as Site A and Site B. [ChK94]

Site A is a software vendor that uses object-oriented design in their development work and has a collection of different C++ class libraries. Metrics data from 634 classes from two C++ class libraries that are used in the design of graphical user interfaces (GUI) were collected. Both these libraries were used in different product applications for rapid prototyping and development of windows, icons and mouse based interfaces. Reuse across different applications was one of the primary design objectives of these libraries. These typically were used at Site A in conjunction with other C++ libraries and traditional C-language programs in the development of software sold to UNIX workstation users.

Site B is a semiconductor manufacturer and uses the Smalltalk programming language for developing flexible machine control and manufacturing systems. Metrics were collected on the class libraries used in the implementation of a computer aided manufacturing system for the production of VLSI (Very Large Scale Integration) circuits. Over 30 engineers worked on this application, after extensive training and experience with object orientation and the Smalltalk environment. Metrics data from 1459 classes from Site B were collected.

The data from the two commercial projects and subsequent discussions with the designers at those sites led to several interesting observations that may be useful for managers of object-oriented projects. Designers may tend to minimise inheritance hierarchies, forsaking reusability through inheritance for simplicity of understanding. This potentially reduces the extent of method reuse within an application. However, even in minimal class hierarchies it is possible to extract reuse benefits, as evidenced by the class with 87 methods at Site A that had a total of 43 descendants. This suggests that managers need to proactively manage reuse opportunities and that this metrics suite can aid the process.

Another demonstrable use of these metrics is in uncovering possible design flaws or violations of design philosophy. As the example of the command class with 42 children at Site A demonstrates, the metrics help to point out instances where sub classing has been misused. This is borne out by the experience of the designers interviewed at one of the data sites where excessive declaration of sub classes was common among engineers new to the object-oriented paradigm. These metrics can be used to allocate testing resources. As the example of the interface classes at Site B (with high CBO and RFC values) demonstrates, concentrating test efforts on these classes may have been a more efficient utilization of resources.

Another application of these metrics is in studying differences between different object-oriented languages and environments. As the RFC and DIT data suggest, there are differences across the two sites that may be due to the features of the two target languages. However, despite the large number of classes examined (634 at Site A and 1459 at Site B), only two sites were used in this study, and therefore no claims are offered as to any systematic differences between C++ and Smalltalk environments. [ChK94]

Basili's, Briand's and Melo's paper 'A Validation of Object-Oriented Design Metrics as Quality Indicators' [BBM96] presents the results of a study in which they empirically investigated the suite of object-oriented design metrics introduced in Chidamber and Kemerer's 'A Metrics Suite for Object Oriented Design' [ChK94]. In their study, they collected data about faults found in object-oriented classes. Based on these data, they verified how much fault-proneness is influenced by internal (e.g., size, cohesion) and external (e.g., coupling) design characteristics of object-oriented classes. The results showed that five of the six Chidamber and Kemerer metrics are useful for predicting class fault-proneness during the high- and low-level design phases of the life cycle; the only metric that was not appropriate in their study was LCOM. In addition, the Chidamber and Kemerer metrics proved to be better predictors than the best set of 'traditional' code metrics, which can only be collected during later phases of the software process. [BBM96]

This empirical validation provides evidence that most of Chidamber and Kemerer's object-oriented metrics can be useful quality indicators. Furthermore, most of these metrics appear to be complementary indicators that are relatively independent of each other. The results motivate further investigation and refinement of Chidamber and Kemerer's object-oriented metrics. [BBM96]

Conclusion

Metric data provides quick feedback for software designers and managers. Collecting and analyzing the data can predict design quality. If appropriately used, it can lead to a significant reduction in the costs of the overall implementation and improvements in the quality of the final product. The improved quality, in turn, reduces future maintenance effort. Using early quality indicators based on objective empirical evidence is therefore a realistic objective [BMB99]. In my opinion, it is motivating for developers to get early and continuous feedback about the quality of the design and implementation of the product they develop, and thus a chance to improve its quality as early as possible. It can be a pleasant challenge to improve one's own design practices based on measurable data.

It is unlikely that universally valid object-oriented quality measures and models can be devised that would suit all languages, all development environments, and all kinds of application domains. It should also be kept in mind that metrics are guidelines, not rules: they give an indication of the progress a project has made and of the quality of its design [LoK94].

[AlC98] Alkadi Ghassan, Carver Doris L.: Application of Metrics to Object-Oriented Designs, Proceedings of IEEE Aerospace Conference, Volume 4, pages 159 – 163, March 1998.

[ChK94] Chidamber Shyam R., Kemerer Chris F.: A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, Volume 20, Number 6, pages 476 – 493, June 1994.

[LIK00] Shuqin Li-Kokko: Code and Design Metrics for Object-Oriented Systems, Helsinki University of Technology, 9 pages, 2000.

[LoK94] Lorenz Mark, Kidd Jeff: Object-Oriented Software Metrics: A Practical Guide. P T R Prentice Hall, Prentice-Hall, Inc. A Pearson Education Company, 146 pages, 1994.

[SyY99] Tarja Systä, Ping Yu: Using OO Metrics and Rigi to Evaluate Java software, University of Tampere, Department of Computer Science, Series of Publications A A-1999-9, 24 pages, July 1999.


Product metrics

Product metrics, also known as quality metrics, measure system quality. You can of course describe quality in many different ways, most popularly through the so-called "ilities." In Object-Oriented Metrics (Prentice Hall, 1994), Brian Henderson-Sellers describes a number of such categories: reliability, availability, maintainability, understandability, modifiability, testability, and usability.

We generally use product metrics for providing

• guidelines that suggest local and specific prescriptive action for improving the quality of different system components,

• comparisons between existing systems, and

• comparisons between new systems and other known systems.

Keep in mind that quality metrics do not correlate well to a project's overall size or status measurements. System quality is a critical concern, however, and quality metrics do provide valuable insight into specific ways to enhance system quality.

CATEGORIES OF OO METRICS

In addition to specifying process and product metrics, it is useful to group OO metrics into four categories:

• System size. Knowing, for example, how many function calls and objects to anticipate in a system can help you make more accurate estimates.

• Class or method size. Though measured in various ways, small, simple classes and methods are typically better than large, complex ones.

• Coupling and inheritance. The number and types of these relationships indicate the interdependence of classes. Clear, simple relationships are preferable to numerous, complex ones.

• Class or method internals. This metric reveals how complex classes and methods are and how well you've documented them in your code comments.

Unfortunately, system size metrics have no standard values against which you might compare your own system. Size depends entirely on the amount of functionality you build into your system.

Other metrics, however, do have standard values. For instance, a method’s size is fairly consistent across systems. So you might want to provide some guidance to your team about how large the methods should be. You also might want to have in place an upper limit for method size, above which you would inspect methods to determine whether and how they might be shortened.
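The upper-limit check described above amounts to filtering methods by line count. A minimal sketch; the threshold of 24 lines and the method names are assumptions for illustration, not recommended values:

```python
# Sketch: flag methods whose LOC exceeds a team-chosen upper limit.
LIMIT = 24  # an assumed threshold; each team would pick its own

def oversized(method_loc, limit=LIMIT):
    """Return the names of methods whose line count exceeds the limit."""
    return [name for name, loc in method_loc.items() if loc > limit]

# Hypothetical per-method line counts for one class:
print(oversized({"parse": 40, "init": 8, "render": 25}))  # ['parse', 'render']
```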

For OO metrics other than system size, it also makes sense to talk about system level averages. For example, ask yourself whether your methods are, on average, larger than those of comparable systems. Averages provide an indication of the overall system quality and can signal trends that may affect system quality.

Class and method size

You can usually consider class or method size metrics to be equivalent to design or quality metrics, because unusually large classes or methods may indicate ill-conceived abstractions or overly complex implementations. Class or method size measurements that differ substantially from average values are generally good candidates for inspection or rework. Class and method size metrics include:

• LOC and function calls per class/method. These metrics are similar to the LOC system size metrics but focus on individual classes and methods.

• Number of methods per class and public method count per class. The number of methods per class indicates the total level of functionality implemented by a class. The number of public methods indicates the amount of behavior exposed to the outside world and provides a view of class size and how complex the class might be to use.

• Number of attributes per class and number of instance attributes per class. The number of attributes in a class indicates the amount of data the class must maintain in order to carry out its responsibilities. Attributes can either be instance attributes, which are unique to each instance of an object, or class variables, which have the same value for all members of the class.

Coupling and inheritance

Coupling and inheritance metrics help measure the quality of an object model. More specifically, they help reveal the degree to which interobject dependencies exist. Ideally, objects should be independent, which makes it easy to transplant an object from one environment to another and reuse existing objects when you build new systems. The reality, of course, is that objects have interdependencies, and reusing them is rarely as simple as cutting and pasting. All too often, the number of dependencies is so large that understanding and moving the entire group of objects is more expensive than rewriting objects from scratch.

• Class fan-in. Fan-in metrics measure the number of classes that depend on a given object. If you have to couple your objects, you'll want to use fan-in, since it centralizes dependencies, as illustrated in Figure 1.

• Class fan-out. Fan-out metrics measure the number of classes on which a given class depends. You should avoid using the fan-out technique, since it represents a situation in which you spread dependencies across the system, as illustrated in Figure 2.

• Class inheritance level. The inheritance depth of a class is the number of its direct ancestors. An unnecessarily deep class hierarchy adds to complexity and can represent a poor use of the inheritance mechanism.

• Number of children per class. This metric measures the number of direct descendants of a particular class, which can indicate unnecessarily complex hierarchies.

C

Nowadays, a quality engineer can choose from a large number of object-oriented metrics. The question posed is not the lack of metrics but the selection of those metrics which meet the specific needs of each software project. A quality engineer has to face the problem of selecting the appropriate set of metrics for his software measurements. A number of object-oriented metrics exploit the knowledge gained from metrics used in structured programming and adjust such measurements so as to satisfy the needs of object-oriented programming. On the other hand, other object-oriented metrics have been developed specifically for object-oriented programming and it would be pointless to apply them to structured programming. The above figure shows the hierarchical structure of the metrics.

CK METRICS MODEL:

Chidamber and Kemerer define the so-called CK metric suite [13]. CK metrics have generated a significant amount of interest and are currently the most well-known suite of measurements for OO software [17]. Chidamber and Kemerer proposed six metrics; the following discussion presents them.

Weighted Method per Class (WMC)

WMC measures the complexity of a class. The complexity of a class can, for example, be calculated from the cyclomatic complexities of its methods. A high WMC value indicates that a class is more complex than one with a low value.

Depth of Inheritance Tree (DIT)

The DIT metric is the length of the maximum path from a class node to the root of the inheritance tree. It therefore measures how far down the inheritance hierarchy a class is declared. The following figure shows the value of DIT for a simple class hierarchy. DIT represents the complexity of the behaviour of a class, the complexity of its design, and its potential for reuse.

Fig. The value of DIT for the class hierarchy

A system with many inheritance layers can thus be hard to understand. On the other hand, a large DIT value indicates that many methods might be reused.

In short:

DIT assesses how deep in the class hierarchy a class is. This metric indicates a class's potential for reuse and its probable ease of maintenance. A class with a small DIT has much potential for reuse (i.e., it tends to be a general, abstract class).
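As a concrete illustration, DIT can be computed by walking each class's parent links. The sketch below is a minimal Python version, assuming the hierarchy has already been extracted into a hypothetical `parents` map (class names C1, C11, etc. follow the figure). It counts a root class as depth 0; some tools count the root as 1, so conventions vary.

```python
def dit(cls, parents):
    """Depth of Inheritance Tree: length of the longest path from
    `cls` up to a root class (a class with no parents)."""
    ps = parents.get(cls, [])
    if not ps:
        return 0  # a root class; some tools count this as 1 instead
    return 1 + max(dit(p, parents) for p in ps)

# Hierarchy from the figure: C1 is a root; C11, C12, C13 inherit from C1.
hierarchy = {"C1": [], "C11": ["C1"], "C12": ["C1"], "C13": ["C1"]}
print(dit("C1", hierarchy))   # 0
print(dit("C12", hierarchy))  # 1
```

Taking the maximum over all parents also handles multiple inheritance, where the longest path is usually the one reported.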

Number of Children (NOC)

This metric measures how many subclasses inherit the methods of the parent class. As shown in the figure above, class C1 has three children: subclasses C11, C12, and C13. The size of NOC approximately indicates the level of reuse in an application: if NOC grows, reuse increases. On the other hand, as NOC increases, the amount of testing also increases, because more children indicate more responsibility. NOC thus represents both reuse and the effort required to test the class.

In short:

NOC is a simple count of the classes associated with a given class through an inheritance relationship. It can be used to assess the potential influence a class has on the overall design. Classes with many children are considered a frequently occurring bad design habit, and NOC helps detect such classes.

Coupling between objects (CBO)

The idea behind this metric is that two objects are coupled if they act upon each other. A class is coupled with another if the methods of one class use the methods or attributes of the other. As CBO increases, the reusability of a class decreases, so the CBO value for each class should be kept as low as possible.
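A minimal sketch of counting CBO, assuming the use relationships between classes have already been extracted into a list of (user, used) pairs; the class names here are hypothetical:

```python
def cbo(cls, uses):
    """CBO: count of other classes coupled to `cls`, where coupling is
    a use relationship in either direction."""
    coupled = set()
    for user, used in uses:
        if user == cls:
            coupled.add(used)
        elif used == cls:
            coupled.add(user)
    return len(coupled)

# Hypothetical (user, used) pairs extracted from a design
edges = [("Order", "Customer"), ("Order", "Product"), ("Invoice", "Order")]
print(cbo("Order", edges))  # 3: coupled to Customer, Product, Invoice
```

Counting both directions reflects the "act upon each other" wording above; some tools count only outgoing uses.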

Response for a Class (RFC)

RFC is the number of methods that can be invoked in response to a message sent to an object of a class. Pressman [20] states that as RFC increases, the effort required for testing also increases, because the test sequence grows. A high RFC also increases the overall design complexity of the class and makes it harder to understand. Lower values, on the other hand, indicate greater polymorphism. The value of RFC typically ranges from 0 to 50 for a class, though in some cases it can reach 100; it varies from project to project.

In short:

RFC is defined as the count of the set of methods that can potentially be executed in response to a message received by an instance of the class.

Lack of Cohesion in Methods (LCOM)

This metric uses the notion of degree of similarity of methods. LCOM measures the amount of cohesiveness present: how well a system has been designed and how complex a class is [23]. LCOM is the count of method pairs whose similarity is zero minus the count of method pairs whose similarity is not zero. Raymond [24] discusses the following example: a class C with three methods M1, M2, and M3. Let I1 = {a, b, c, d, e}, I2 = {a, b, e}, and I3 = {x, y, z}, where Ii is the set of instance variables used by method Mi. Here I1 ∩ I2 = {a, b, e} is non-empty, while I3 is disjoint from both I1 and I2. So there are two method pairs with no shared instance variables, (M1, M3) and (M2, M3), and one pair that shares at least one instance variable, (M1, M2). Thus LCOM = 2 - 1 = 1. [13] states that "most of the methods defined on a class should be using most of the data members most of the time". If LCOM is high, methods may be coupled to one another only via attributes, and the class design will be complex. Designers should therefore keep cohesion high, that is, keep LCOM low.

In short:

LCOM is the difference between the number of method pairs whose similarity is zero and the number of method pairs whose similarity is not zero. LCOM judges the cohesiveness among class methods: low LCOM indicates high cohesiveness, and vice versa.
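The worked example above can be checked with a short sketch. The code below implements the pairwise definition, using the common convention of flooring negative values at zero, and reproduces LCOM = 1 for the sets I1, I2, I3:

```python
from itertools import combinations

def lcom(attr_sets):
    """LCOM (CK version): method pairs sharing no instance variables,
    minus pairs sharing at least one; floored at zero by convention."""
    p = q = 0
    for a, b in combinations(attr_sets, 2):
        if a & b:
            q += 1  # the pair shares at least one instance variable
        else:
            p += 1  # the pair is disjoint
    return max(p - q, 0)

# The sets from the text: I1, I2, I3 for methods M1, M2, M3
i1, i2, i3 = {"a", "b", "c", "d", "e"}, {"a", "b", "e"}, {"x", "y", "z"}
print(lcom([i1, i2, i3]))  # 1, i.e. 2 disjoint pairs minus 1 sharing pair
```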

III. MOOD METRICS MODEL – (METRICS FOR OBJECT ORIENTED DESIGN)

The MOOD metrics set refers to basic structural mechanisms of the OO paradigm: encapsulation (MHF and AHF), inheritance (MIF and AIF), polymorphism (PF), and message passing (CF). The metrics are expressed as quotients. The set includes the following metrics:

Method Hiding Factor (MHF)

MHF is defined as the ratio of the sum of the invisibilities of all methods defined in all classes to the total number of methods defined in the system under consideration. The invisibility of a method is the percentage of the total classes from which this method is not visible.

Attribute Hiding Factor (AHF)

AHF is defined as the ratio of the sum of the invisibilities of all attributes defined in all classes to the total number of attributes defined in the system under consideration.

Method Inheritance Factor (MIF)

MIF is defined as the ratio of the sum of the inherited methods in all classes of the system under consideration to the total number of available methods (locally defined plus inherited) for all classes.

Attribute Inheritance Factor (AIF)

AIF is defined as the ratio of the sum of inherited attributes in all classes of the system under consideration to the total number of available attributes (locally defined plus inherited) for all classes.

Polymorphism Factor (PF)

PF is defined as the ratio of the actual number of different polymorphic situations for a class Ci to the maximum number of possible distinct polymorphic situations for Ci.

Coupling Factor (CF)

CF is defined as the ratio of the actual number of couplings not imputable to inheritance to the maximum possible number of couplings in the system.
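To make one of these quotients concrete, the sketch below computes MIF for a small Python hierarchy by comparing locally defined methods with all methods available through each class's ancestry. It is a rough approximation under stated simplifications (dunder methods are ignored and `object` itself is skipped), not a faithful MOOD tool:

```python
def mif(classes):
    """MIF: inherited methods over available (local + inherited)
    methods, summed across all classes.  A rough sketch only."""
    inherited = available = 0
    for c in classes:
        local = {n for n, v in vars(c).items() if callable(v)}
        methods = set()
        for base in c.__mro__[:-1]:  # walk the hierarchy, skip `object`
            methods |= {n for n, v in vars(base).items()
                        if callable(v) and not n.startswith("__")}
        inherited += len(methods - local)
        available += len(methods)
    return inherited / available if available else 0.0

class Base:
    def f(self): pass
    def g(self): pass

class Child(Base):
    def h(self): pass

# Base: 0 inherited of 2; Child: 2 inherited of 3, so MIF = 2/5
print(mif([Base, Child]))  # 0.4
```

AIF follows the same pattern with attributes instead of methods.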

V. QMOOD (QUALITY MODEL FOR OBJECT-ORIENTED DESIGN)

The QMOOD (Quality Model for Object-Oriented Design) is a comprehensive quality model that establishes a clearly defined and empirically validated way to assess OOD quality attributes such as understandability and reusability, and relates them, through mathematical formulas, to structural OOD properties such as encapsulation and coupling. The QMOOD model consists of six equations that establish relationships between six OOD quality attributes (reusability, flexibility, understandability, functionality, extendibility, and effectiveness) and eleven design properties.

All of these properties are measurable directly from UML class diagrams.

VI. OTHER OO METRICS

Chen et al. [9] proposed the following metrics: 1. CCM (Class Coupling Metric), 2. OXM (Operating Complexity Metric), 3. OACM (Operating Argument Complexity Metric), 4. ACM (Attribute Complexity Metric), 5. OCM (Operating Coupling Metric), 6. CM (Cohesion Metric), 7. CHM (Class Hierarchy of Method), and 8. RM (Reuse Metric). Metrics 1 through 3 are subjective in nature; metrics 4 through 7 involve counts of features; and metric 8 is a Boolean (0 or 1) indicator metric.

Refs

[9] Chen, J-Y., Lum, J-F.: "A New Metrics for Object-Oriented Design." Information and Software Technology 35, 4 (April 1993): 232-240.

[13] Chidamber, Shyam R., Kemerer, Chris F.: "A Metrics Suite for Object-Oriented Design." M.I.T. Sloan School of Management E53-315, 1993.

[17] Harrison, R., Samaraweera, L.G., Dobie, M.R., Lewis, P.H.: "Comparing Programming Paradigms: An Evaluation of Functional and Object-Oriented Programs." Software Eng. J., vol. 11, pp. 247-254, July 1996.

[20] Pressman, Roger S.: "Software Engineering." Fifth edition, ISBN 0077096770.

[23] Raymond, J.A., Alex, D.L.: "A Data Model for Object Oriented Design Metrics." Technical Report 1997, ISBN 0836 0227.

[24] Alexander et al.: "Mathematical Assessment of Object-Oriented Design Quality." IEEE Transactions on Software Engineering, vol. 29, no. 11, November 2003.

Code and Design Metrics for Object-Oriented Systems

Lindroos

Object-oriented design and development has become popular in today's software development environment. The benefits of object-oriented software development are now widely recognized [AlC98]. Object-oriented development requires not only different approaches to design and implementation; it also requires different approaches to software metrics. Metrics for object-oriented systems are still a relatively new field of study. Traditional metrics such as lines of code and cyclomatic complexity [McC76, WEY88] have become standard for traditional procedural programs [LIK00, AlC98].

The metrics for object-oriented systems are different due to the different approach in program paradigm and in object-oriented language itself. An object-oriented program paradigm uses localization, encapsulation, information hiding, inheritance, object abstraction and polymorphism, and has different program structure than in procedural languages. [LIK00]

Software metrics are often categorized into project metrics and design metrics [LoK94]. Project metrics are used to predict project needs, such as staffing levels and total effort. They measure the dynamic changes that have taken place in the state of the project, such as how much has been done and how much is left to do. Project metrics are more global and less specific than design metrics. Unlike design metrics, project metrics do not measure the quality of the software being developed.

Design metrics are measurements of the static state of the project design at a particular point in time. These metrics are more localized and prescriptive in nature. They look at the quality of the way the system is being built. [LoK94]

Design metrics can be divided into static metrics and dynamic metrics [SyY99]. Dynamic metrics have a time dimension, and their values tend to change over time; thus dynamic metrics can only be calculated on the software as it executes. Static metrics remain invariant and are usually calculated from the source code, design, or specification.

Why is it important to measure object-oriented metrics?

The intent of the proposed metrics is to help object-oriented developers and managers foster better designs, more reusable code, and better estimates. The metrics should be used to identify anomalies as well as to measure progress. The numbers are not meant to drive the design of the project's classes or methods, but rather to help us focus our efforts on potential areas of improvement. The metrics can help each of us improve the way we develop software. The metrics, as supported by tools, make us think about how we subclass, write methods, use collaboration, and so on. [LoK94] They help the engineer recognize parts of the software that might need modification and re-implementation. The decision about which changes to make should not rely only on the metric values [SyY99].

The metrics are guidelines and not rules, and they should be used to support the desired motivations. The intent is to encourage more reuse through better use of abstractions and division of responsibilities, and better designs through detection and correction of anomalies. Positive incentives, improvement training and mentoring, and effective design reviews increase the probability of achieving better results from using object-oriented metrics. [LoK94]

Software should be designed for maintenance [AlC98]. The design evaluation step is an integral part of achieving a high-quality design. The metrics should help improve the total quality of the end product, which means that quality problems can be resolved as early as possible in the development process. It is well known that the earlier problems are resolved, the less they cost the project in terms of time-to-market, quality, and maintenance.

Code and design metrics suite

Metric 1: Weighted Methods per Class (WMC)

WMC is the sum of the complexities of the methods of a class. Consider a class C1 with methods M1, ..., Mn defined in the class, and let c1, ..., cn be the complexities of those methods [ChK94]. Then:

WMC = c1 + c2 + ... + cn

WMC measures size as well as the logical structure of the software. The number of methods and the complexity of the methods involved are predictors of how much time and effort is required to develop and maintain the class [SyY99, ChK94]. The larger the number of methods in a class, the greater the potential impact on inheriting classes; consequently, more effort and time are needed for maintenance and testing [YSM02]. Furthermore, classes with a large number of complex methods are likely to be more application specific, limiting the possibility of reuse. Thus WMC can also be used to estimate the usability and reusability of the class [SyY99]. If all method complexities are taken to be unity, then WMC equals the Number of Methods (NMC) metric [YSM02].
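The formula reduces to a simple sum. The sketch below uses hypothetical cyclomatic-complexity values; note how unit complexities collapse WMC into a plain method count, as the text observes:

```python
def wmc(method_complexities):
    """WMC = c1 + c2 + ... + cn: the sum of the complexities of a
    class's methods."""
    return sum(method_complexities)

# Hypothetical cyclomatic complexities for a class's four methods
print(wmc([1, 3, 2, 5]))  # 11
# With unit complexities WMC collapses into a plain method count:
print(wmc([1, 1, 1, 1]))  # 4
```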

Metric 2: Depth of Inheritance Tree (DIT)

The depth of a class within the inheritance hierarchy is the maximum length from the class node to the root of the tree, measured by the number of ancestor classes. The deeper a class is in the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to predict its behaviour. Deeper trees constitute greater design complexity, since more methods and classes are involved. The deeper a particular class is in the hierarchy, the greater potential reuse of inherited methods. [ChK94] For languages that allow multiple inheritances, the longest path is usually taken [YSM02].

A large DIT also affects understandability and testability [LIK00, SyY99]. Inheritance decreases complexity by reducing the number of operations and operators, but this abstraction of objects can make maintenance and design more difficult.

Metric 3: Number of Children (NOC)

The number of children metric equals the number of immediate subclasses below a class in the class hierarchy. The greater the number of children, the greater the reuse, since inheritance is a form of reuse. However, the greater the number of children, the greater the likelihood of improper abstraction of the parent class; a class with a large number of children may indicate misuse of subclassing. The number of children gives an idea of the potential influence a class has on the design, and a class with many children may require more testing of the methods in that class. [ChK94] In addition, a class with a large number of children must be flexible in order to provide services in a large number of contexts [YSM02].
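NOC is a one-line count over a parent map; the hypothetical `parents` dictionary below maps each class name to the list of its direct parents:

```python
def noc(cls, parents):
    """NOC: the number of immediate subclasses of `cls`."""
    return sum(1 for ps in parents.values() if cls in ps)

# Illustrative hierarchy: C11, C12, C13 all inherit directly from C1
hierarchy = {"C1": [], "C11": ["C1"], "C12": ["C1"], "C13": ["C1"]}
print(noc("C1", hierarchy))   # 3
print(noc("C11", hierarchy))  # 0
```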

Metric 4: Coupling between object classes (CBO)

CBO for a class is a count of the number of other classes to which it is coupled. CBO relates to the notion that an object is coupled to another object if one of them acts on the other, i.e., methods of one use methods or instance variables of the other. Excessive coupling between object classes is detrimental to modular design and prevents reuse. The more independent a class is, the easier it is to reuse in another application. In order to improve modularity and promote encapsulation, inter-class couplings should be kept to a minimum. [ChK94] Direct access to a foreign instance variable has generally been identified as the worst type of coupling [SyY99].

The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore maintenance is more difficult. A measure of coupling is useful to determine how complex the testing of various parts of a design is likely to be. The higher the inter-object class coupling, the more rigorous the testing needs to be.

Metric 5: Response For a Class (RFC)

The response set of a class is the set of methods that can potentially be executed in response to a message received by an object of that class. RFC measures both external and internal communication; in particular, it includes methods called from outside the class, so it is also a measure of the potential communication between the class and other classes. [ChK94, AlC98] RFC is a more sensitive measure of coupling than CBO, since it considers methods instead of classes [YSM02].

If a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated, since it requires a greater level of understanding on the part of the tester. The larger the number of methods that can be invoked from a class, the greater the complexity of the class. A worst-case value for possible responses will assist in appropriate allocation of testing time. [ChK94]
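A sketch of computing the response set one call level deep, as in the original definition; the method names and call map below are hypothetical:

```python
def rfc(own_methods, calls):
    """RFC = |RS|: a class's own methods plus every method they call,
    one call level deep as in the original definition."""
    response_set = set(own_methods)
    for m in own_methods:
        response_set.update(calls.get(m, []))
    return len(response_set)

# Hypothetical class with two methods that call into other classes
calls = {"open": ["Log.write", "File.lock"], "close": ["Log.write"]}
print(rfc(["open", "close"], calls))  # 4
```

Using a set means a method called from several places, like `Log.write` here, is counted only once.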

Metric 6: Lack of Cohesion in Methods (LCOM)

LCOM is the count of method pairs whose similarity is 0 minus the count of method pairs whose similarity is not zero. The larger the number of similar methods, the more cohesive the class, which is consistent with traditional notions of cohesion that measure the inter-relatedness between portions of a program. If none of the methods of a class display any instance behavior, i.e., none use any instance variables, they have no similarity and the LCOM value for the class will be zero. [ChK94]

Cohesiveness of methods within a class is desirable, since it promotes encapsulation. Lack of cohesion implies classes should probably be split into two or more subclasses. Any measure of disparateness of methods helps identify flaws in the design of classes. Low cohesion increases complexity; thereby it increases the likelihood of errors during the development process. [ChK94]

Evaluation of metrics / Application of OO metrics

Chidamber and Kemerer, who introduced the basic suite of object-oriented code and design metrics, tested the suite on two projects. The metrics proposed in their paper were collected using automated tools developed for their research at two different organizations, referred to here as Site A and Site B. [ChK94]

Site A is a software vendor that uses object-oriented design in their development work and has a collection of different C++ class libraries. Metrics data from 634 classes from two C++ class libraries that are used in the design of graphical user interfaces (GUI) were collected. Both these libraries were used in different product applications for rapid prototyping and development of windows, icons and mouse based interfaces. Reuse across different applications was one of the primary design objectives of these libraries. These typically were used at Site A in conjunction with other C++ libraries and traditional C-language programs in the development of software sold to UNIX workstation users.

Site B is a semiconductor manufacturer and uses the Smalltalk programming language for developing flexible machine control and manufacturing systems. Metrics were collected on the class libraries used in the implementation of a computer aided manufacturing system for the production of VLSI (Very Large Scale Integration) circuits. Over 30 engineers worked on this application, after extensive training and experience with object orientation and the Smalltalk environment. Metrics data from 1459 classes from Site B were collected.

The data from two different commercial projects, and subsequent discussions with the designers at those sites, led to several interesting observations that may be useful for managers of object-oriented projects. Designers may tend to minimise inheritance hierarchies, forsaking reusability through inheritance for simplicity of understanding. This potentially reduces the extent of method reuse within an application. However, even in minimal class hierarchies it is possible to extract reuse benefits, as evidenced by the class with 87 methods at Site A that had a total of 43 descendants. This suggests that managers need to proactively manage reuse opportunities and that this metrics suite can aid this process.

Another demonstrable use of these metrics is in uncovering possible design flaws or violations of design philosophy. As the example of the command class with 42 children at Site A demonstrates, the metrics help to point out instances where sub classing has been misused. This is borne out by the experience of the designers interviewed at one of the data sites where excessive declaration of sub classes was common among engineers new to the object-oriented paradigm. These metrics can be used to allocate testing resources. As the example of the interface classes at Site B (with high CBO and RFC values) demonstrates, concentrating test efforts on these classes may have been a more efficient utilization of resources.

Another application of these metrics is in studying differences between different object-oriented languages and environments. As the RFC and DIT data suggest, there are differences across the two sites that may be due to the features of the two target languages. However, despite the large number of classes examined (634 at Site A and 1459 at Site B), only two sites were used in this study, and therefore no claims are offered as to any systematic differences between C++ and Smalltalk environments. [ChK94]

Basili's, Briand's and Melo's paper 'A Validation of Object-Oriented Design Metrics as Quality Indicators' [BBM96] presents the results of a study in which they empirically investigated the suite of object-oriented design metrics introduced in Chidamber's and Kemerer's 'A Metrics Suite for Object Oriented Design' [ChK94]. In their study, they collected data about faults found in object-oriented classes. Based on these data, they verified how much fault-proneness is influenced by internal (e.g., size, cohesion) and external (e.g., coupling) design characteristics of object-oriented classes. The results showed that five out of six of Chidamber's and Kemerer's object-oriented metrics are useful for predicting class fault-proneness during the high- and low-level design phases of the life cycle. The only metric that was not appropriate in their study was LCOM. In addition, Chidamber's and Kemerer's object-oriented metrics proved to be better predictors than the best set of 'traditional' code metrics, which can only be collected during later phases of the software process. [BBM96]

This empirical validation provides evidence demonstrating that most of Chidamber’s and Kemerer’s object-oriented metrics can be useful quality indicators. Furthermore, most of these metrics appear to be complementary indicators, which are relatively independent from each other. The obtained results provide motivation for further investigation and refinement of Chidamber’s and Kemerer’s object-oriented metrics. [BBM96]

Conclusion

Metric data provides quick feedback for software designers and managers. Collecting and analyzing the data can predict design quality. If appropriately used, it can lead to a significant reduction in the costs of the overall implementation and to improvements in the quality of the final product. The improved quality, in turn, reduces future maintenance effort. Using early quality indicators based on objective empirical evidence is therefore a realistic objective [BMB99]. In my opinion, it is motivating for developers to get early and continuous feedback about the quality of the design and implementation of the product they develop, and thus a possibility to improve the quality of the product as early as possible. It can be a pleasant challenge to improve one's own design practices based on measurable data.

It is unlikely that universally valid object-oriented quality measures and models can be devised that would suit all languages, all development environments, and all kinds of application domains. It should also be kept in mind that metrics are only guidelines and not rules; they give an indication of the progress a project has made and of the quality of its design [LoK94].

[AlC98] Alkadi Ghassan, Carver Doris L.: Application of Metrics to Object-Oriented Designs, Proceedings of IEEE Aerospace Conference, Volume 4, pages 159 – 163, March 1998.

[ChK94] Chidamber Shyam R., Kemerer Chris F.: A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, Volume 20, Number 6, pages 476 – 493, June 1994.

[LIK00] Shuqin Li-Kokko: Code and Design Metrics for Object-Oriented Systems, Helsinki University of Technology, 9 pages, 2000.

[LoK94] Lorenz Mark, Kidd Jeff: Object-Oriented Software Metrics: A Practical Guide. P T R Prentice Hall, Prentice-Hall, Inc. A Pearson Education Company, 146 pages, 1994.

[SyY99] Tarja Systä, Ping Yu: Using OO Metrics and Rigi to Evaluate Java software, University of Tampere, Department of Computer Science, Series of Publications A A-1999-9, 24 pages, July 1999.


Product metrics

Product metrics, also known as quality metrics, measure system quality. You can of course describe quality in many different ways, most popularly through the so-called "ilities." In Object-Oriented Metrics (Prentice Hall, 1994), Brian Henderson-Sellers describes a number of such categories: reliability, availability, maintainability, understandability, modifiability, testability, and usability.

We generally use product metrics for providing

• guidelines that suggest local and specific prescriptive action for improving the quality of different system components,

• comparisons between existing systems, and

• comparisons between new systems and other known systems.

Keep in mind that quality metrics do not correlate well to a project's overall size or status measurements. System quality is a critical concern, however, and quality metrics do provide valuable insight into specific ways to enhance system quality.

CATEGORIES OF OO METRICS

In addition to specifying process and product metrics, it is useful to group OO metrics into four categories:

• System size. Knowing, for example, how many function calls and objects to anticipate in a system can help you make more accurate estimates.

• Class or method size. Though measured in various ways, small, simple classes and methods are typically better than large, complex ones.

• Coupling and inheritance. The number and types of these relationships indicate the interdependence of classes. Clear, simple relationships are preferable to numerous, complex ones.

• Class or method internals. This metric reveals how complex classes and methods are and how well you've documented them in your code comments.

Unfortunately, system size metrics have no standard values against which you might compare your own system. Size depends entirely on the amount of functionality you build into your system.

Other metrics, however, do have standard values. For instance, a method’s size is fairly consistent across systems. So you might want to provide some guidance to your team about how large the methods should be. You also might want to have in place an upper limit for method size, above which you would inspect methods to determine whether and how they might be shortened.

For OO metrics other than system size, it also makes sense to talk about system level averages. For example, ask yourself whether your methods are, on average, larger than those of comparable systems. Averages provide an indication of the overall system quality and can signal trends that may affect system quality.

Class and method size

You can usually consider class or method size metrics to be design or quality metrics, because unusually large classes or methods may indicate ill-conceived abstractions or overly complex implementations. Class or method size measurements that differ substantially from average values are generally good candidates for inspection or rework. Class and method size metrics include:

• LOC and function calls per class/method. These metrics are similar to the LOC system size metrics but focus on individual classes and methods.

• Number of methods per class and public method count per class. The number of methods per class indicates the total level of functionality implemented by a class. The number of public methods indicates the amount of behavior exposed to the outside world and provides a view of class size and of how complex the class might be to use.

• Number of attributes per class and number of instance attributes per class. The number of attributes in a class indicates the amount of data the class must maintain in order to carry out its responsibilities. Attributes can either be instance attributes, which are unique to each instance of an object, or class variables, which have the same value for all members of the class.
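These counts are easy to gather mechanically. The Python sketch below counts the methods defined directly on a hypothetical class and, of those, the public ones, using the convention that a leading underscore marks a non-public name:

```python
import inspect

def method_counts(cls):
    """Return (total methods, public methods) defined directly on a
    class; a leading underscore marks a non-public name."""
    methods = [n for n, v in vars(cls).items() if inspect.isfunction(v)]
    public = [n for n in methods if not n.startswith("_")]
    return len(methods), len(public)

class Account:  # hypothetical example class
    def __init__(self):
        self._balance = 0
    def deposit(self, amount):
        self._balance += amount
    def _audit(self):
        pass

print(method_counts(Account))  # (3, 1): __init__ and _audit are non-public
```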

Coupling and inheritance

Coupling and inheritance metrics help measure the quality of an object model. More specifically, they help reveal the degree to which interobject dependencies exist. Ideally, objects should be independent, which makes it easy to transplant an object from one environment to another and reuse existing objects when you build new systems. The reality, of course, is that objects have interdependencies, and reusing them is rarely as simple as cutting and pasting. All too often, the number of dependencies is so large that understanding and moving the entire group of objects is more expensive than rewriting objects from scratch.

• Class fan-in. Fan-in metrics measure the number of classes that depend on a given object. If you have to couple your objects, you'll want to use fan-in, since it centralizes dependencies, as illustrated in Figure 1.

• Class fan-out. Fan-out metrics measure the number of classes on which a given class depends. You should avoid using the fan-out technique, since it represents a situation in which you spread dependencies across the system, as illustrated in Figure 2.

• Class inheritance level. The inheritance depth of a class is the number of its direct ancestors. An unnecessarily deep class hierarchy adds to complexity and can represent a poor use of the inheritance mechanism.

• Number of children per class. This metric measures the number of direct descendants of a particular class, which can indicate unnecessarily complex hierarchies.
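Fan-in and fan-out can both be derived from a single dependency map. In the sketch below, the hypothetical `depends_on` dictionary maps each class to the set of classes it uses; the class names are illustrative:

```python
def fan_in_out(cls, depends_on):
    """Fan-in: how many classes depend on `cls`.
    Fan-out: how many classes `cls` depends on."""
    fan_out = len(depends_on.get(cls, set()))
    fan_in = sum(1 for used in depends_on.values() if cls in used)
    return fan_in, fan_out

# Hypothetical dependency map: class -> set of classes it uses
deps = {"View": {"Model"}, "Controller": {"Model", "View"}, "Model": set()}
print(fan_in_out("Model", deps))  # (2, 0): heavy fan-in, no fan-out
print(fan_in_out("View", deps))   # (1, 1)
```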

