Abstract. Object-oriented frameworks offer reuse at a high design level, promising several benefits to the development of complex systems. This paper seeks to 1) define the concepts of object-oriented techniques, along with OO issues, development techniques, and the concepts of object-oriented programming, and to introduce UML as a standard and key tool for object-oriented design; and 2) examine frameworks from the perspective of object-oriented techniques, with the aim of defining a reasonable fit between object-oriented technology and frameworks. Finally, some future horizons for object-oriented technology and frameworks are presented.
Computing power and network bandwidth have increased dramatically over the past decade. However, the design and implementation of complex software remains expensive and error-prone. Much of the cost and effort stems from the continuous re-discovery and re-invention of core concepts and components across the software industry. In particular, the growing heterogeneity of hardware architectures and diversity of operating system and communication platforms make it hard to build correct, portable, efficient, and inexpensive applications from scratch. Object-oriented (OO) techniques and frameworks are promising technologies for reifying proven software designs and implementations in order to reduce the cost and improve the quality of software. A framework is a reusable, "semi-complete" application that can be specialized to produce custom applications. In contrast to earlier OO reuse techniques based on class libraries, frameworks are targeted for particular business units (such as data processing or cellular communications) and application domains (such as user interfaces or real-time avionics). Frameworks like MacApp, ET++, Interviews, ACE, Microsoft's MFC and DCOM, JavaSoft's RMI, and implementations of OMG's CORBA play an increasingly important role in contemporary software development.
II. Object oriented concepts and techniques
The concept of objects and instances in computing had its first major breakthrough with the PDP-1 system at MIT, which was probably the earliest example of 'capability-based' architecture. Another early example was Sketchpad, created by Ivan Sutherland in 1963; however, this was an application and not a programming paradigm. Objects as programming entities were introduced in the 1960s in Simula 67, a programming language designed for performing simulations, created by Ole-Johan Dahl and Kristen Nygaard of the Norwegian Computing Center in Oslo. (They were working on ship simulations and were confounded by the combinatorial explosion of how the different attributes of different ships could affect one another. The idea occurred to them of grouping the different types of ships into different classes of objects, each class of objects being responsible for defining its own data and behavior.) Such an approach was a simple extrapolation of concepts earlier used in analog programming. On analog computers, mapping from real-world phenomena/objects to analog phenomena/objects (and conversely) was (and is) called 'simulation'. Simula not only introduced the notion of classes, but also of instances of classes, which is probably the first explicit use of those notions. The ideas of Simula 67 influenced many later languages, especially Smalltalk and derivatives of Lisp and Pascal.
The Smalltalk language, which was developed at Xerox PARC (by Alan Kay and others) in the 1970s, introduced the term object-oriented programming to represent the pervasive use of objects and messages as the basis for computation. Smalltalk creators were influenced by the ideas introduced in Simula 67, but Smalltalk was designed to be a fully dynamic system in which classes could be created and modified dynamically rather than statically as in Simula 67. Smalltalk and with it OOP were introduced to a wider audience by the August 1981 issue of Byte magazine.
In the 1970s, Kay's Smalltalk work had influenced the Lisp community to incorporate object-based techniques which were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (like LOOPS and Flavors introducing multiple inheritance and mixins), eventually led to the Common Lisp Object System (CLOS, a part of the first standardized object-oriented programming language, ANSI Common Lisp), which integrates functional programming and object-oriented programming and allows extension via a Meta-object protocol. In the 1980s, there were a few attempts to design processor architectures which included hardware support for objects in memory but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.
Object-oriented programming developed as the dominant programming methodology during the mid-1990s, largely due to the influence of C++. Its dominance was further enhanced by the rising popularity of graphical user interfaces, for which object-oriented programming seems to be well-suited. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP). Some feel that association with GUIs (real or perceived) was what propelled OOP into the programming mainstream.
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and such. The approach is unlike Smalltalk, and very unlike C++.
Object-oriented features have been added to many existing languages during that time, including Ada, BASIC, Fortran, Pascal, and others. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code.
More recently, a number of languages have emerged that are primarily object-oriented yet compatible with procedural methodology, such as Python and Ruby. Probably the most commercially important recent object-oriented languages are Visual Basic.NET (VB.NET) and C#, both designed for Microsoft's .NET platform, and Java, developed by Sun Microsystems. VB.NET and C# both support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language.
Just as procedural programming led to refinements of techniques such as structured programming, modern object-oriented software design methods include refinements such as the use of design patterns, design by contract, and modeling languages (such as UML).
The term OOPS, which refers to an object-oriented programming system, was common in early development of object-oriented programming.
III. Fundamental concepts and features
Class. Defines the abstract characteristics of a thing (object), including the thing's characteristics (its attributes, fields or properties) and the thing's behaviors (the things it can do, or methods, operations or features). One might say that a class is a blueprint or factory that describes the nature of something. For example, the class Dog would consist of traits shared by all dogs, such as breed and fur color (characteristics), and the ability to bark and sit (behaviors). Classes provide modularity and structure in an object-oriented computer program. A class should typically be recognizable to a non-programmer familiar with the problem domain, meaning that the characteristics of the class should make sense in context. Also, the code for a class should be relatively self-contained (generally using encapsulation). Collectively, the properties and methods defined by a class are called members.
Object. A pattern (exemplar) of a class. The class Dog defines all possible dogs by listing the characteristics and behaviors they can have; the object Lassie is one particular dog, with particular versions of the characteristics. A Dog has fur; Lassie has brown-and-white fur.
Instance. One can have an instance of a class; the instance is the actual object created at runtime. In programmer jargon, the Lassie object is an instance of the Dog class. The set of values of the attributes of a particular object is called its state. The object consists of state and the behavior that's defined in the object's class.
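The class/object/instance relationship above can be sketched in a few lines of Python (a minimal illustration; the class and attribute names simply follow the Dog/Lassie example):

```python
class Dog:
    """A class: the blueprint describing attributes and behaviors."""
    def __init__(self, breed, fur_color):
        self.breed = breed          # attribute (part of the object's state)
        self.fur_color = fur_color  # attribute (part of the object's state)

    def bark(self):                 # behavior (method)
        return "Woof!"

# Lassie is one particular instance of the Dog class, created at runtime.
lassie = Dog(breed="Collie", fur_color="brown-and-white")
print(lassie.fur_color)  # → brown-and-white
print(lassie.bark())     # → Woof!
```

The pair (`breed`, `fur_color`) held by `lassie` is exactly what the text calls the object's state.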
More on Classes, Metaclasses, Parameterized Classes, and Exemplars
There are two broad categories of objects: classes and instances. Users of object-oriented technology usually think of classes as containing the information necessary to create instances, i.e., the structure and capabilities of an instance are determined by its corresponding class. There are three commonly used (and different) views on the definition for "class":
- A class is a pattern, template, or blueprint for a category of structurally identical items. The items created using the class are called instances. This is often referred to as the "class as a `cookie cutter'" view. As you might guess, the instances are the "cookies."
- A class is a thing that consists of both a pattern and a mechanism for creating items based on that pattern. This is the "class as an `instance factory'" view; instances are the individual items that are "manufactured" (created) using the class's creation mechanism.
- A class is the set of all items created using a specific pattern. Said another way, the class is the set of all instances of that pattern.
We should note that it is possible for an instance of a class to also be a class. A metaclass is a class whose instances themselves are classes. This means when we use the instance creation mechanism in a metaclass, the instance created will itself be a class. The instance creation mechanism of this class can, in turn, be used to create instances -- although these instances may or may not themselves be classes.
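Python makes the metaclass idea concrete, since its classes are themselves objects. The sketch below is illustrative (the names `Meta`, `Dog`, and `created_by` are invented for the example):

```python
# A metaclass is a class whose instances are themselves classes.
# In Python, `type` is the built-in metaclass; subclassing it lets us
# customize the class-creation mechanism.
class Meta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        cls.created_by = "Meta"  # every class produced by Meta gets this
        return cls

# Using the instance-creation mechanism of the metaclass yields a class...
Dog = Meta("Dog", (), {})
# ...and that class's own creation mechanism yields non-class instances.
lassie = Dog()

print(isinstance(Dog, Meta))    # → True  (Dog is an instance of a metaclass)
print(isinstance(lassie, Dog))  # → True  (lassie is a non-class instance)
```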
A concept very similar to the metaclass is the parameterized class. A parameterized class is a template for a class wherein specific items have been identified as being required to create non-parameterized classes based on the template. In effect, a parameterized class can be viewed as a "fill in the blanks" version of a class. One cannot directly use the instance creation mechanism of a parameterized class. First, we must supply the required parameters, resulting in the creation of a non-parameterized class. Once we have a non-parameterized class, we can use its creation mechanisms to create instances.
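Python's generics give a rough approximation of a parameterized class (C++ templates are the classic example; here the `Stack` name and its operations are invented for illustration):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

# Stack is the "fill in the blanks" template. We cannot meaningfully use it
# until the parameter T is supplied.
class Stack(Generic[T]):
    def __init__(self):
        self._items: list = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

# Supplying the parameter yields a usable (non-parameterized) class...
IntStack = Stack[int]
# ...whose creation mechanism we can now use to make instances.
s = IntStack()
s.push(1)
s.push(2)
print(s.pop())  # → 2
```

Note that in Python the type parameter is advisory (checked by tools like mypy rather than at runtime), whereas in languages such as C++ or Ada the parameterized class genuinely cannot be instantiated before its parameters are supplied.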
In this paper, we will use the term "class" to mean metaclass, parameterized class, or a class that is neither a metaclass nor a parameterized class. We will make a distinction only when it is necessary to do so. Further, we will occasionally refer to "non-class instances." A non-class instance is an instance of a class, but is itself not a class. An instance of a metaclass, for example, would not be a non-class instance.
In this paper, we will sometimes refer to "instantiation." Instantiation has two common meanings:
- as a verb, instantiation is the process of creating an instance of a class, and
- as a noun, an instantiation is an instance of a class.
Some people restrict the use of the term "object" to instances of classes. For these people, classes are not objects. However, when these people are confronted with the concepts of metaclasses and parameterized classes, they have difficulty resolving the "problems" these concepts introduce. For example, is a class that is an instance of a metaclass an object -- even though it is itself a class? In this paper, we will use the term "object" to refer to both classes and their instances. We will only distinguish between the two when needed.
Black Boxes and Interfaces
Objects are "black boxes." Specifically, the underlying implementations of objects are hidden from those that use the object. In object-oriented systems, it is only the producer (creator, designer, or builder) of an object that knows the details about the internal construction of that object. The consumers (users) of an object are denied knowledge of the inner workings of the object, and must deal with an object via one of its three distinct interfaces:
- The "public" interface. This is the interface that is open (visible) to everybody.
- The "inheritance" interface. This is the interface that is accessible only by direct specializations of the object. (We will discuss inheritance and specialization later in this chapter.) In class-based object-oriented systems, only classes can provide an inheritance interface.
- The "parameter" interface. In the case of parameterized classes, the parameter interface defines the parameters that must be supplied to create an instance of the parameterized class.
Another way of saying that an item is in the public interface of an object is to say that the object "exports" that item. Similarly, when an object requires information from outside of itself (e.g., as with the parameters in a parameterized class), we can say that the object needs to "import" that information.
It is, of course, possible for objects to be composed of other objects. Aggregation is either:
- The process of creating a new object from two or more other objects, or
- An object that is composed of two or more other objects.
For example, a date object could be fashioned from a month object, a day object, and a year object. A list of names object, for example, can be thought of as containing many name objects.
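The date example can be sketched directly (a minimal illustration; the component classes are deliberately bare):

```python
# Component objects: each wraps one part of the aggregate.
class Month:
    def __init__(self, value): self.value = value

class Day:
    def __init__(self, value): self.value = value

class Year:
    def __init__(self, value): self.value = value

# Aggregation: a Date object fashioned from a month, a day, and a year object.
class Date:
    def __init__(self, month, day, year):
        self.month, self.day, self.year = month, day, year

    def __str__(self):
        return f"{self.month.value}/{self.day.value}/{self.year.value}"

d = Date(Month(2), Day(14), Year(1993))
print(d)  # → 2/14/1993
```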
A monolithic object is an object that has no externally-discernible structure. Said another way, a monolithic object does not appear to have been constructed from two or more other objects. Specifically, a monolithic object can only be treated as a cohesive whole. Those outside of a monolithic object cannot directly interact with any (real or imagined) objects within the monolithic object. A radio button in a graphical user interface (GUI) is an example of a monolithic object.
Composite objects are objects that have an externally-discernible structure, and the structure can be addressed via the public interface of the composite object. The objects that comprise a composite object are referred to as component objects. Composite objects meet one or both of the following criteria:
- The state of a composite object is directly affected by the presence or absence of one or more of its component objects, and/or
- The component objects can be directly referenced via the public interface of their corresponding composite object.
It is useful to divide composite objects into two subcategories, because the rules for designing heterogeneous composite objects are different from the rules for designing homogeneous composite objects:
- A heterogeneous composite object is a composite object that is conceptually composed of component objects that are not all conceptually the same. For example, a date (made up of a month object, a day object, and a year object) is a heterogeneous composite object.
- A homogeneous composite object is a composite object that is conceptually composed of component objects that are all conceptually the same. For example, a list of addresses is a homogeneous composite object.
Specialization and Inheritance
Aggregation is not the only way in which two objects can be related. One object can be a specialization of another object. Specialization is either:
- The process of defining a new object based on a (typically) more narrow definition of an existing object, or
- An object that is directly related to, and more narrowly defined than, another object.
Specialization is usually associated with classes. It is usually only in the so-called "classless" object-oriented systems that we think of specialization for objects other than classes.
Depending on their technical background, there are a number of different ways in which people express specialization. For example, those who are familiar with an object-oriented programming language called Smalltalk refer to specializations as "subclasses" and to the corresponding generalizations of these specializations as "superclasses." Those with a background in the C++ programming language use the term "derived class" for specialization and "base class" for corresponding generalizations.
It is common to say that everything that is true for a generalization is also true for its corresponding specialization. We can, for example, define "checking accounts" and "savings accounts" as specializations of "bank accounts." Another way of saying this is that a checking account is a kind of bank account, and a savings account is a kind of bank account. Still another way of expressing this idea is to say that everything that was true for the bank account is also true for the savings account and the checking account.
In an object-oriented context, we speak of specializations as "inheriting" characteristics from their corresponding generalizations. Inheritance can be defined as the process whereby one object acquires (gets, receives) characteristics from one or more other objects. Some object-oriented systems permit only single inheritance, a situation in which a specialization may only acquire characteristics from a single generalization. Many object-oriented systems, however, allow for multiple inheritance, a situation in which a specialization may acquire characteristics from two or more corresponding generalizations.
Our previous discussion of the bank account, checking account, and savings account was an example of single inheritance. A telescope and a television set are both specializations of "device that enables one to see things far away." A television set is also a kind of "electronic device." You might say that a television set acquires characteristics from two different generalizations, "device that enables one to see things far away" and "electronic device." Therefore, a television set is a product of multiple inheritance.
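The television example maps directly onto multiple inheritance in a language that supports it. A Python sketch (class and method names are invented for illustration):

```python
class FarSeeingDevice:
    """Generalization: device that enables one to see things far away."""
    def see_far(self):
        return "viewing something far away"

class ElectronicDevice:
    """A second, independent generalization."""
    def power_on(self):
        return "powered on"

# Single inheritance: a telescope specializes only one generalization.
class Telescope(FarSeeingDevice):
    pass

# Multiple inheritance: a television set acquires characteristics
# from both generalizations.
class TelevisionSet(FarSeeingDevice, ElectronicDevice):
    pass

tv = TelevisionSet()
print(tv.see_far())   # → viewing something far away
print(tv.power_on())  # → powered on
```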
We usually think of classes as being complete definitions. However, there are situations where incomplete definitions are useful, and classes that represent these incomplete definitions are equally useful. For example, in everyday conversation, we might talk about such items as bank accounts, insurance policies, and houses. In object-oriented thinking, we often isolate useful, but incomplete, concepts such as these into their own special classes.
Abstract classes are classes that embody coherent and cohesive, but incomplete, concepts, and in turn, make these characteristics available to their specializations via inheritance. People sometimes use the terms "partial type" and "abstract superclass" as synonyms for abstract class. While we would never create instances of abstract classes, we most certainly would make their individual characteristics available to more specialized classes via inheritance.
For example, consider the concept of an automobile. On one hand, most people know what an automobile is. On the other hand, "automobile" is not a complete definition for any vehicle. It would be quite accurate to describe "automobile" as the set of characteristics that make a thing an automobile, in other words, the "essence of automobile-ness."
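Most object-oriented languages let us state this "never instantiate, only specialize" rule explicitly. A Python sketch using the standard `abc` module (the `Automobile`/`Sedan` names follow the example above; the attributes are invented):

```python
from abc import ABC, abstractmethod

# "Automobile" embodies a coherent but incomplete concept: we never create
# instances of it directly, but its characteristics are available to
# specializations via inheritance.
class Automobile(ABC):
    wheels = 4  # a shared characteristic, inherited by all specializations

    @abstractmethod
    def drive(self):
        """Every concrete automobile must supply this behavior."""

class Sedan(Automobile):
    def drive(self):
        return "driving"

car = Sedan()
print(car.wheels)  # → 4  (inherited from the abstract class)
# Automobile() would raise TypeError: the abstract class cannot be instantiated.
```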
The public interface of an object typically contains three different categories of items:
- operations (sometimes referred to as "method selectors," "method interfaces," "messages," or "methods"),
- constants, and
- exceptions.
An operation in the public interface of an object advertises a functional capability of that object. For example, "deposit" would be an operation in the public interface of a bank account object, "what is current temperature" would be an operation in the public interface of a temperature sensor object, and "increment" would be an operation in the public interface of a counter object.
The actual algorithm for accomplishing an operation is referred to as a method. Unlike operations, methods are not in the public interface for an object. Rather, methods are hidden on the inside of an object. So, while users of bank account objects would know that they could make a deposit into a bank account, they would be unaware of the details as to how that deposit actually got credited to the bank account.
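The operation/method distinction can be seen in a small sketch (the internal ledger is an invented implementation detail, chosen precisely because callers never see it):

```python
class BankAccount:
    def __init__(self):
        self._ledger = []  # hidden implementation detail; not part of the interface

    # "deposit" is the operation advertised in the public interface.
    # The body below is the method: the hidden algorithm that accomplishes it.
    def deposit(self, amount):
        self._ledger.append(amount)

    # "what is my balance" is likewise a public operation.
    def balance(self):
        return sum(self._ledger)

acct = BankAccount()
acct.deposit(100)
acct.deposit(50)
print(acct.balance())  # → 150
```

Users know they can make a deposit, but remain unaware that it is credited by appending to a ledger; the method could be swapped for any other algorithm without changing the interface.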
We refer to the operations in the public interface of an object as "suffered operations." Suffered operations are operations that meet two criteria: they are things that happen to an object, and they are in the public interface of that object. For example, we can say that a bank account "suffers" the operation of having a deposit made into it. The bank account can also "suffer" the operation of being queried as to its current balance. Some people also refer to suffered operations as "exported operations."
There are three broad categories of suffered operations:
- A selector is an operation that tells us something about the state of an object, but cannot, by definition, change the state of the object. An operation that tells us the current balance of a bank account is an example of a selector operation.
- A constructor is an operation that has the ability to change the state of an object. For example, an operation in the public interface to a mailbox object that added a message to the mailbox would be a constructor operation. (Please note that some people restrict the definition of the term "constructor" to those operations that cause instances of a class to come into existence.)
- In the context of a homogeneous composite object, an iterator is an operation that allows its users to visit (access) each of the component objects that make up the homogeneous composite object. If we have a list of addresses, for example, and we wish to print the entire list, an iterator would allow us to visit each address object within the list and then, in turn, to print each address.
Iterators can be further divided into two broad categories: active (open) iterators and passive (closed) iterators. Active iterators are objects in their own right. Passive iterators are implemented as operations in the interface of the object over which they allow iteration. Passive iterators are further broken down into selective iterators and constructive iterators. Passive selective iterators do not allow their users to change the object over which the iteration takes place. Passive constructive iterators do allow users to change the object over which iteration takes place.
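The active/passive distinction can be sketched for the address-list example (a minimal illustration; the class and method names are invented):

```python
class AddressList:
    """A homogeneous composite object: a list of address strings."""
    def __init__(self, addresses):
        self._addresses = list(addresses)

    # Passive (closed) iterator: an operation in the interface of the
    # composite object itself. The list drives the iteration.
    def each(self, visit):
        for address in self._addresses:
            visit(address)

    # Active (open) iterator: returns a separate iterator object in its
    # own right; the caller drives the iteration step by step.
    def __iter__(self):
        return iter(self._addresses)

addrs = AddressList(["12 Oak St", "9 Elm Ave"])
addrs.each(print)      # passive: visits and prints every address
it = iter(addrs)       # active: an object we advance ourselves
print(next(it))        # → 12 Oak St
```

As written, `each` is a passive selective iterator: the visiting function observes each component but cannot alter the list it iterates over.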
We can also describe suffered operations as primitive or composite. A primitive operation is an operation that cannot be accomplished simply, efficiently, and reliably without direct knowledge of the underlying (hidden) implementation of the object. As an example, we could argue that an operation that added an item to a list object, or an operation that deleted an item from a list object were primitive operations with respect to the list object.
Suppose that we wanted to create a "swap operation," an operation that would swap in a new item in a list, while at the same time swapping out an old item in the same list. This is not a primitive operation since we can accomplish this with a simple combination of the delete operation (deleting the old item) followed by the add operation (adding the new item). The swap operation is an example of a composite operation. A composite operation is any operation that is composed, or can be composed, of two or more primitive operations.
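The swap example can be written down directly (a sketch; the list class and its operation names are invented):

```python
class ItemList:
    def __init__(self):
        self._items = []

    # Primitive operations: they need direct knowledge of the hidden list.
    def add(self, item):
        self._items.append(item)

    def delete(self, item):
        self._items.remove(item)

    # Composite operation: expressible entirely as a combination of the
    # primitives above, with no knowledge of the hidden implementation.
    def swap(self, old, new):
        self.delete(old)  # swap out the old item
        self.add(new)     # swap in the new item

    def contents(self):
        return list(self._items)

lst = ItemList()
lst.add("a")
lst.add("b")
lst.swap("a", "c")
print(lst.contents())  # → ['b', 'c']
```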
Sometimes objects need help in maintaining their characteristics. Suppose, for example, that we wanted to create a "generic ordered list" object. An ordered list is a list that must order its contents from the smallest to the largest. Specifically, every time we add an item to our ordered list, that item would have to be placed in its proper position with respect to all the other items already in the list. By "generic," we mean a template that can be instantiated with the category (class) of items we wish to place in the ordered list.
It would not be unreasonable to implement this object as a parameterized class. Obviously, one of the parameters would be the category of items (e.g., class) that we desired to place in the list. For example, we could instantiate (make an instance of) the generic ordered list with a "name class," resulting in the creation of an "ordered list of names class."
There is a problem, however. Given that we could instantiate the generic ordered list with just about any category of items, how can we be sure that the ordered lists will know how to properly maintain order -- no matter what we use to instantiate the generic ordered list? Suppose, for example, that we wanted an ordered list of "fazoomas." How could the generic list class tell if one fazooma was greater than or less than another fazooma?
A solution would be for the generic ordered list to require a second parameter, a parameter over and above the category of items (class) that we desired to place in the list. This second parameter would be a "<" (less than) operation that worked with the category of items to be placed in the list. In the case of our ordered list of fazoomas, this second parameter would be a "<" that works with fazoomas.
The "<" that worked with fazoomas is an example of a required operation. A required operation is an operation that an object needs to maintain its outwardly observable characteristics, but which the object cannot supply itself. Some people refer to required operations as "imported operations."
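The required "<" operation can be modeled as a parameter supplied to the list, here as a callable passed at construction time (a sketch; the class name and the use of a constructor argument rather than a true class parameter are simplifications):

```python
# The generic ordered list imports a "<" operation as its second parameter,
# so it can keep any category of item -- even "fazoomas" -- in order.
class OrderedList:
    def __init__(self, less_than):
        self._less_than = less_than  # required (imported) operation
        self._items = []

    def add(self, item):
        # Place the item in its proper position relative to the others,
        # using only the imported "<" to compare items.
        i = 0
        while i < len(self._items) and self._less_than(self._items[i], item):
            i += 1
        self._items.insert(i, item)

    def contents(self):
        return list(self._items)

# Supplying the required "<" that works with plain numbers:
nums = OrderedList(lambda a, b: a < b)
for n in (3, 1, 2):
    nums.add(n)
print(nums.contents())  # → [1, 2, 3]
```

For a hypothetical fazooma class, the caller would supply a `less_than` that knows how to compare two fazoomas; the list itself never needs that knowledge.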
In addition to suffered operations, the public interface of an object can also contain constants. Constants are objects of constant state. Imagine that we want to create a "bounded list of addresses class." A bounded list is a list that has a fixed maximum number of elements. A bounded list can be empty, and it can contain fewer than the maximum number of elements. It can even contain the maximum number of elements, but it can never contain more than the defined maximum number of elements.
Assume that we place a constant in the public interface of our bounded list of addresses. This constant represents the maximum number of elements that can be placed in the bounded list. Assume also that there is a suffered operation that will tell us how many elements (addresses, in our example) are currently in the bounded list. We can now determine how much room is available in the bounded list by inquiring how many addresses are already in the list, and then subtracting this from the previously-defined constant.
In some cases, as with the bounded list example above, constants are provided more for convenience than necessity. In other cases, such as in the case of encryption algorithms needing a "seed value," constants are an absolute requirement.
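The bounded-list arithmetic above can be sketched as follows (the class name, the choice of maximum, and the overflow behavior are illustrative assumptions):

```python
class BoundedAddressList:
    MAX_ELEMENTS = 3  # constant exported in the public interface

    def __init__(self):
        self._addresses = []

    def add(self, address):
        if len(self._addresses) >= self.MAX_ELEMENTS:
            raise OverflowError("bounded list is full")
        self._addresses.append(address)

    # Suffered operation telling us how many elements are currently present.
    def count(self):
        return len(self._addresses)

lst = BoundedAddressList()
lst.add("12 Oak St")
# Room remaining = exported constant minus the current count.
print(lst.MAX_ELEMENTS - lst.count())  # → 2
```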
A third category of items that can be found in the public interface of objects is exceptions. Exceptions have two different definitions:
- an event that causes suspension of normal application execution, and
- a set of information directly relating to the event that caused suspension of normal application execution.
Exceptions can be contrasted with an older, less reliable technology: "error codes." The idea behind error codes was fairly simple. You would request that an application, or part of an application, accomplish some work. One of the pieces of information that would be returned to the requester would be an error code. If all had gone well, the error code would typically have a value of zero. If any problems had occurred, the error code would have a non-zero value. It was also quite common to associate different non-zero values of an error code with specific errors.
Error codes suffered from two major problems:
- No one was forced to actually check the value of returned error codes.
- Changes (additions, deletions, and modifications) in the meanings of the special values assigned to error codes were not automatically passed on to interested parties. Tracking the effects of a changed error code value often consumed a significant amount of resources.
To understand how exceptions directly address both of these issues, we first need to understand how exceptions typically work:
- Exceptions may be defined by the environment or by the user.
- When an exceptional (but not unforeseen) condition occurs, an appropriate exception is activated. (People use different terms to express the activation of an exception. The most common is "raise." Less commonly, people use the terms "throw" or "activate.") This activation may be automatic (controlled by the environment) or may be expressly requested by the designer of the object or application.
- Once the exception is activated, normal application execution stops and control is transferred to a locally defined exception handler, if one is present. If no locally defined exception handler is present or if the exception handler is not equipped to handle the exception, the exception is propagated to the next higher level of the application. Exceptions cannot be ignored. An exception will continue to be sent to higher levels of the application until it is either turned off or the application ceases to function.
- An exception handler checks to see what type of exception has been activated. If the exception is one that the handler recognizes, a specific set of actions is taken. Executing a set of actions in response to an exception is known as "handling the exception." Handling an exception deactivates the exception; the exception will not be propagated any further.
Examples of exceptional conditions include trying to remove something from an empty container, directing an elevator on the top floor to "go up," and attempting to cause a date to take on an invalid value like "February 31, 1993."
Unlike error codes, exceptions cannot be ignored. Once an exception has been activated, it demands attention. In object-oriented systems, exceptions are placed in the public interfaces of objects. Changes in the public interfaces of objects very often require an automatic rechecking of all other objects that invoke operations in the changed objects. Thus, changes in exceptions result in at least a partially automated propagation of change information.
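The raise/propagate/handle cycle described above can be sketched with the empty-container example (the exception and class names are invented for illustration):

```python
# An exception declared as part of the public interface of a mailbox object.
class EmptyContainerError(Exception):
    pass

class Mailbox:
    def __init__(self):
        self._messages = []

    def add(self, message):
        self._messages.append(message)

    def remove(self):
        if not self._messages:
            # Activate (raise) the exception; unlike an error code,
            # the caller cannot silently ignore it.
            raise EmptyContainerError("remove from empty mailbox")
        return self._messages.pop()

box = Mailbox()
try:
    box.remove()                        # exceptional condition occurs
except EmptyContainerError as exc:      # locally defined exception handler
    print("handled:", exc)              # handling deactivates the exception
```

Had the `except` clause been absent, the exception would have propagated upward until some higher level handled it or the program terminated, which is exactly the "cannot be ignored" property the text contrasts with error codes.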
Object Coupling and Object Cohesion
Engineers have known for centuries that the less any one part of a system knows about any other part of that same system, the better the overall system. Systems whose components are highly independent of each other are easier to fix and enhance than systems where there are strong interdependencies among some or all of the components. Highly independent system components are possible when there is minimal coupling among the components, and each component is highly cohesive.
Coupling is a measure of the strength of the connection between any two system components. The more any one component knows about another component, the tighter (worse) the coupling is between those two components. Cohesion is a measure of how logically related the parts of an individual component are to each other, and to the overall component. The more logically related the parts of a component are to each other the higher (better) the cohesion of that component.
The objects that make up an object-oriented system exhibit object coupling and object cohesion. Object coupling describes the degree of interrelationships among the objects that make up a system. The more any one object knows about any other object in the system, the tighter (worse) the coupling is between those two objects.
To construct systems from objects, we must couple (to some degree) the objects that comprise the system. This is necessary object coupling. However, if in the design of an individual object, we give that object direct knowledge of other specific objects, we are unnecessarily coupling the objects. Unnecessary object coupling reduces both the reusability of individual objects, and the reliability of the systems that contain unnecessarily coupled objects.
Object cohesion, on the other hand, is a measure of how logically related the components of the external view of an object are to each other. For example, if we are told that a date object is comprised of a month object, a day object, a year object, and the color blue, we should recognize that the color blue is not appropriate, and lowers the cohesion of the date object. We want our objects to be as cohesive as possible for two reasons. First, objects with low cohesion are more likely to be changed, and are more likely to have undesirable side effects when they are changed. Second, objects with low cohesion are seldom easily reusable.
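These coupling ideas can be sketched in Java. All names below (Printer, Report, BufferPrinter) are invented for illustration: Report is coupled to its printer only through an interface, the necessary coupling, rather than carrying direct knowledge of any one concrete printer class, which would be unnecessary coupling.

```java
// Hypothetical example; every name here is invented for illustration.
interface Printer {
    void print(String text);
}

// One concrete printer; Report never needs to know it exists.
class ConsolePrinter implements Printer {
    public void print(String text) { System.out.println(text); }
}

// Another implementation, handy for inspecting what was printed.
class BufferPrinter implements Printer {
    final StringBuilder out = new StringBuilder();
    public void print(String text) { out.append(text); }
}

// Report's only coupling is to the Printer interface, so any
// implementation can be substituted without changing Report.
class Report {
    private final Printer printer;
    Report(Printer printer) { this.printer = printer; }
    void publish(String body) { printer.print(body); }
}
```

Had Report constructed a ConsolePrinter itself, it would have direct knowledge of a specific object, exactly the unnecessary coupling described above.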
Systems of Objects
In constructing object-oriented models and object-oriented applications, one quickly finds that single classes and single instances are not enough. You need some way of creating and dealing with larger assemblies of objects. A system of objects is defined as two or more interacting or interrelated, non-nested objects. (We exclude simple aggregations of composite objects from our definition of systems of objects.)
Systems of objects fall into two general categories:
- kits, which are collections of items (classes, metaclasses, parameterized classes, non-class instances, other kits, and/or systems of interacting objects) all of which support a single, large, coherent, object-oriented concept, such as computer graphics windows or insurance policies. There may indeed be some physical connection among some of the members of a given kit. However, kits are "granular." While all the components of a kit are logically related, there are very few physical connections that bind them together.
- systems of interacting objects, which are collections of items (classes, metaclasses, parameterized classes, non-class instances, kits, and/or other systems of interacting objects) all of which support a single, large, coherent, object-oriented concept, and in which there must be a direct or indirect physical connection between any two arbitrary objects within the collection. Further, systems of interacting objects have at least one internal, independently executing thread of control. Lastly, systems of interacting objects may exhibit multiple, completely disjoint public interfaces.
Kits resemble libraries. Say, for example, that we had to create a computer application with a graphical user interface. Graphical user interfaces normally contain several different types of windows. It would be very useful if we had a library of windows and window components from which we could construct any window we desired. Windows are objects, and the components of windows (buttons and check boxes) are themselves objects. A collection of windows and window components can be viewed as a kit.
Systems of interacting objects, on the other hand, resemble applications. For example, suppose that we wanted to construct an object-oriented application that controlled the elevators in a particular building. We would assemble elevators, buttons, lamps, panels, and other objects into a working application that would control the elevators. Such an application would not be viewed as a library, but as a highly cohesive whole. The elevator controller application is a system of interacting objects.
A method is one of an object's abilities. In language, methods (sometimes referred to as "functions") are verbs. Lassie, being a Dog, has the ability to bark. So bark() is one of Lassie's methods. She may have other methods as well, for example sit() or eat() or walk() or save_timmy(). Within the program, using a method usually affects only one particular object; all Dogs can bark, but you need only one particular dog to do the barking.
Message passing is the process by which an object sends data to another object or asks the other object to invoke a method. It is also known in some programming languages as interfacing. For example, the object called Breeder may tell the Lassie object to sit by passing a "sit" message which invokes Lassie's "sit" method. The syntax varies between languages, for example: [Lassie sit] in Objective-C. In Java, code-level message passing corresponds to "method calling". Some dynamic languages use double-dispatch or multi-dispatch to find and pass messages.
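In Java, the "sit" message above is simply a method call. A minimal sketch, with invented helper names (commandSit, isSitting) added so the effect is observable:

```java
class Dog {
    private boolean sitting = false;
    void sit() { sitting = true; }            // the method the "sit" message invokes
    boolean isSitting() { return sitting; }
}

class Breeder {
    // Sending the message is, at the code level, calling the method.
    void commandSit(Dog dog) { dog.sit(); }
}
```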
"Subclasses" are more specialized versions of a class, which inherit attributes and behaviors from their parent classes, and can introduce their own.
For example, the class Dog might have sub-classes called Collie, Chihuahua, and GoldenRetriever. In this case, Lassie would be an instance of the Collie subclass. Suppose the Dog class defines a method called bark() and a property called furColor. Each of its sub-classes (Collie, Chihuahua, and GoldenRetriever) will inherit these members, meaning that the programmer only needs to write the code for them once.
Each subclass can alter its inherited traits. For example, the Collie subclass might specify that the default furColor for a collie is brown-and-white. The Chihuahua subclass might specify that the bark() method produces a high pitch by default. Subclasses can also add new members. The Chihuahua subclass could add a method called tremble(). So an individual chihuahua instance would use a high-pitched bark() from the Chihuahua subclass, which in turn inherited the usual bark() from Dog. The chihuahua object would also have the tremble() method, but Lassie would not, because she is a Collie, not a Chihuahua. In fact, inheritance is an "a...is a" relationship between classes, while instantiation is an "is a" relationship between an object and a class: a Collie is a Dog ("a... is a"), but Lassie is a Collie ("is a"). Thus, the object named Lassie has the methods from both classes Collie and Dog.
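The Dog/Collie/Chihuahua relationships described above can be written directly in Java (the particular return values are invented for illustration):

```java
class Dog {
    String furColor = "unspecified";
    String bark() { return "woof"; }            // written once, inherited by every subclass
}

class Collie extends Dog {
    Collie() { furColor = "brown-and-white"; }  // alters an inherited default
}

class Chihuahua extends Dog {
    @Override
    String bark() { return "yip"; }             // overrides the inherited bark()
    String tremble() { return "trembling"; }    // a member Collie does not have
}
```

Lassie, as a Collie, still uses the bark() inherited from Dog, while a chihuahua object uses its own high-pitched override and additionally has tremble().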
Multiple inheritance is inheritance from more than one ancestor class, neither of these ancestors being an ancestor of the other. For example, independent classes could define Dogs and Cats, and a Chimera object could be created from these two which inherits all the (multiple) behavior of cats and dogs. This is not always supported, as it can be hard to implement.
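Java is one of the languages that omits multiple class inheritance; the closest sketch of the Chimera example uses interfaces with default methods, which (since Java 8) let a class inherit behavior from two unrelated ancestors. DogLike and CatLike are invented names standing in for the independent Dog and Cat classes:

```java
interface DogLike {
    default String bark() { return "woof"; }   // behavior inherited from one ancestor
}

interface CatLike {
    default String meow() { return "meow"; }   // behavior inherited from the other
}

// Chimera inherits behavior from both ancestors,
// neither of which is an ancestor of the other.
class Chimera implements DogLike, CatLike { }
```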
Abstraction is simplifying complex reality by modeling classes appropriate to the problem, and working at the most appropriate level of inheritance for a given aspect of the problem.
For example, Lassie the Dog may be treated as a Dog much of the time, a Collie when necessary to access Collie-specific attributes or behaviors, and as an Animal (perhaps the parent class of Dog) when counting Timmy's pets.
Abstraction is also achieved through Composition. For example, a class Car would be made up of an Engine, Gearbox, Steering objects, and many more components. To build the Car class, one does not need to know how the different components work internally, but only how to interface with them, i.e., send messages to them, receive messages from them, and perhaps make the different objects composing the class interact with each other.
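A minimal sketch of the Car composition in Java. The parts' internals (a boolean, an int) are invented stand-ins for real engine and gearbox logic; the point is that Car only sends messages to its components:

```java
class Engine {
    private boolean running = false;    // internal detail Car never inspects directly
    void start() { running = true; }
    boolean isRunning() { return running; }
}

class Gearbox {
    private int gear = 0;
    void select(int g) { gear = g; }
    int currentGear() { return gear; }
}

class Car {
    private final Engine engine = new Engine();    // Car is composed of its parts
    private final Gearbox gearbox = new Gearbox();

    void drive() {              // coordinates the parts through their interfaces
        engine.start();
        gearbox.select(1);
    }

    boolean isMoving() {
        return engine.isRunning() && gearbox.currentGear() > 0;
    }
}
```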
Encapsulation conceals the functional details of a class from objects that send messages to it. For example, the Dog class has a bark() method. The code for the bark() method defines exactly how a bark happens (e.g., by inhale() and then exhale(), at a particular pitch and volume). Timmy, Lassie's friend, however, does not need to know exactly how she barks. Encapsulation is achieved by specifying which classes may use the members of an object. The result is that each object exposes to any class a certain interface — those members accessible to that class. The reason for encapsulation is to prevent clients of an interface from depending on those parts of the implementation that are likely to change in the future, thereby allowing those changes to be made more easily, that is, without changes to clients. For example, an interface can ensure that puppies can only be added to an object of the class Dog by code in that class. Members are often specified as public, protected or private, determining whether they are available to all classes, sub-classes or only the defining class. Some languages go further: Java uses the default access modifier to restrict access also to classes in the same package, C# and VB.NET reserve some members to classes in the same assembly using keywords internal (C#) or Friend (VB.NET), and Eiffel and C++ allow one to specify which classes may access any member.
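A sketch of this in Java, with invented details (an inhale() producing "hff", a rule that a dog cannot be its own puppy): the private members are invisible to callers, so puppies can only be added through Dog's own code, and how a bark happens stays hidden.

```java
import java.util.ArrayList;
import java.util.List;

class Dog {
    private final List<Dog> puppies = new ArrayList<>();  // hidden implementation detail

    void addPuppy(Dog puppy) {          // the only way any other class can add a puppy
        if (puppy != this) puppies.add(puppy);
    }

    int puppyCount() { return puppies.size(); }

    String bark() {                      // callers see the bark, not how it happens
        return inhale() + "-woof";
    }

    private String inhale() { return "hff"; }  // private: invisible outside Dog
}
```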
Polymorphism allows the programmer to treat derived class members just like their parent class's members. More precisely, polymorphism in object-oriented programming is the ability of objects belonging to different data types to respond to calls of methods of the same name, each one according to an appropriate type-specific behavior. One method, or an operator such as +, -, or *, can be abstractly applied in many different situations. If a Dog is commanded to speak(), this may elicit a bark(). However, if a Pig is commanded to speak(), this may elicit an oink(). Each subclass overrides the speak() method inherited from the parent class Animal.
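The speak() example in Java; a method written once against Animal works for any subclass, and the object itself selects the appropriate override at run time (the Trainer class and the return strings are invented for illustration):

```java
abstract class Animal {
    abstract String speak();
}

class Dog extends Animal {
    @Override String speak() { return "bark"; }
}

class Pig extends Animal {
    @Override String speak() { return "oink"; }
}

class Trainer {
    // Written once against the parent type; the actual object decides
    // which speak() runs when the command is given.
    String command(Animal a) { return a.speak(); }
}
```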
Decoupling allows for the separation of object interactions from classes and inheritance into distinct layers of abstraction. A common use of decoupling is to polymorphically decouple the encapsulation, which is the practice of using reusable code to prevent discrete code modules from interacting with each other. However, in practice decoupling often involves trade-offs with regard to which patterns of change to favor. The science of measuring these trade-offs in respect to actual change in an objective way is still in its infancy.
Not all of the above concepts are to be found in all object-oriented programming languages, and so object-oriented programming that uses classes is sometimes called class-based programming. In particular, prototype-based programming does not typically use classes. As a result, a significantly different yet analogous terminology is used to define the concepts of object and instance.
Benjamin C. Pierce and other researchers view as futile any attempt to distill OOP to a minimal set of features. Pierce nonetheless identifies fundamental features that support the OOP programming style in most object-oriented languages:
Dynamic dispatch - when a method is invoked on an object, the object itself determines what code gets executed by looking up the method at run time in a table associated with the object. This feature distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances. Dynamic dispatch enables modular component development while remaining efficient.
Encapsulation - restricting access to an object's internal state (or multi-methods, in which case the state is kept separate)
Subtype polymorphism -- subtyping or subtype polymorphism is a form of type polymorphism in which a subtype is a datatype that is related to another datatype (the supertype) by some notion of substitutability, meaning that program constructs, typically subroutines or functions, written to operate on elements of the supertype can also operate on elements of the subtype. If S is a subtype of T, the subtyping relation is often written S <: T, to mean that any term of type S can be safely used in a context where a term of type T is expected. The precise semantics of subtyping crucially depends on the particulars of what "safely used in a context where" means in a given programming language. The type system of a programming language essentially defines its own subtyping relation, which may well be trivial.
Open recursion - a special variable (syntactically it may be a keyword), usually called this or self, that allows a method body to invoke another method body of the same object. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.
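Open recursion in Java: describe() below was written in Animal, yet through the late-bound this it ends up calling a sound() defined later, in a subclass. The names and return strings are invented for illustration.

```java
class Animal {
    String sound() { return "..."; }

    String describe() {
        // 'this' is late-bound: if the object is really a Dog,
        // Dog.sound() runs, although describe() predates Dog.
        return "I say " + this.sound();
    }
}

class Dog extends Animal {
    @Override String sound() { return "woof"; }
}
```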
IV. Object oriented techniques and UML
UML 2.2 has 14 types of diagrams divided into two categories. Seven diagram types represent structural information, and the other seven represent general types of behavior, including four that represent different aspects of interactions. These diagrams can be categorized hierarchically as shown in the following class diagram:
UML does not restrict UML element types to a certain diagram type. In general, every UML element may appear on almost all types of diagrams; this flexibility has been partially restricted in UML 2.0. UML profiles may define additional diagram types or extend existing diagrams with additional notations.
In keeping with the tradition of engineering drawings, a comment or note explaining usage, constraint, or intent is allowed in a UML diagram.
Structure diagrams emphasize what things must be in the system being modeled:
- Class diagram: describes the structure of a system by showing the system's classes, their attributes, and the relationships among the classes.
- Component diagram: depicts how a software system is split up into components and shows the dependencies among these components.
- Composite structure diagram: describes the internal structure of a class and the collaborations that this structure makes possible.
- Deployment diagram: serves to model the hardware used in system implementations, and the execution environments and artifacts deployed on the hardware.
- Object diagram: shows a complete or partial view of the structure of a modeled system at a specific time.
- Package diagram: depicts how a system is split up into logical groupings by showing the dependencies among these groupings.
Figure 8. Package Diagram
- Profile diagram: operates at the metamodel level to show stereotypes as classes with the «stereotype» stereotype, and profiles as packages with the «profile» stereotype. The extension relation (solid line with closed, filled arrowhead) indicates what metamodel element a given stereotype is extending.
Since structure diagrams represent the structure of a system, they are used extensively in documenting the architecture of software systems.
Behavior diagrams emphasize what must happen in the system being modeled:
- Activity diagram: represents the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.
- State machine diagram: standardized notation to describe many systems, from computer programs to business processes.
- Use case diagram: shows the functionality provided by a system in terms of actors, their goals represented as use cases, and any dependencies among those use cases.
Since behavior diagrams illustrate the behavior of a system, they are used extensively to describe the functionality of software systems.
Interaction diagrams, a subset of behavior diagrams, emphasize the flow of control and data among the things in the system being modeled:
- Communication diagram: shows the interactions between objects or parts in terms of sequenced messages. Communication diagrams combine information taken from class, sequence, and use case diagrams, describing both the static structure and dynamic behavior of a system.
- Interaction overview diagram: a type of activity diagram in which the nodes represent interaction diagrams.
- Sequence diagram: shows how objects communicate with each other in terms of a sequence of messages, and indicates the lifespans of objects relative to those messages.
- Timing diagram: a specific type of interaction diagram in which the focus is on timing constraints.
The Protocol State Machine is a sub-variant of the State Machine. It may be used to model network communication protocols.
V. Limitations of Object Technology
Common themes to OOP problems:
- The real world does not change in a hierarchical way for the most part. You can force a hierarchical classification onto many things, but you cannot force change requests to cleanly fit your hierarchy. Just because a structure is conceptually simple does not necessarily mean it is also change-friendly. And when OO does not use hierarchies, it is messier than the alternatives.
- There are multiple orthogonal aspect grouping candidates and the ones favored by OOP are probably not the best in many or most cases. OO literature is famous for only showing changes that benefit the aspects favored by OO. In the real world, changes come in many aspects, not just those favored or emphasized by OO. Encapsulating by just a single dimension is often a can of worms.
- OOP's granularity of grouping and separation is often larger than actual changes and variations. OOP's alleged solutions to this, such as micro-methods and micro-classes, create code management headaches and other problems.
- OOP designs tend to reinvent the database in application code. In particular, OO generally reinvents navigational databases, which were generally rejected in the 1970's and replaced by relational techniques. It is my opinion that relational theory is generally superior to navigational theory, partly because it is based on set theory while navigational is based on a sea of undisciplined and narrow-situation pointers. Relational can provide more structure, more consistency, cleaner queries, relativistic viewpoints, and automated optimization. Plus, the usage of databases allows multiple tools and languages to share and use attributes (data) without writing explicit access methods for each new request.
- There is no decent, objective, and open evidence that OOP is better. It may just all be subjective or domain-specific. Software engineering is sorely lacking good metrics.
- There is a large lack of consistency in OO business design methodologies. Procedural/relational approaches tend to be more consistent in my experience. (Group code by task, and use database to model noun structures and relations.)
- Many of the past sins that OOP is trying to fix are people and management issues (incentives, training, etc.), and not the fault of the paradigms involved. Until true A.I. comes along, no paradigm will force good code. If anything, OOP simply offers more ways to screw up.
Some of the big problems with inheritance are:
- There are often multiple orthogonal candidates for subclass divisions.
- The features that make up the potential subclass divisions are often recombined in non-tree ways. Catdogs are real in the business world.
- The range of variation often does not fall on existing method boundaries. For example, only 1/3 of a new variation may be different for a given method. In other words, how do you override 1/3 of a method? This may end up requiring altering many sibling methods in a domino-like "polymorphic splitting cascade".
- Inheritance Buildup - Rather than alter the root or base levels of the tree (which risks unforeseen side-effects), the programmers often end up extending the inheritance tree to subclass the changes. Over time, you get a mess. (Or spend your time rearranging code, which has been given the convenient euphemism "refactoring" in some fan circles.)
- Users often maintain real-world hierarchies, such as product categories and accounting codes, via hierarchy edit interfaces, and not programmers writing subclasses. In other words, the hierarchy nodes are stored in a database, and not in program code.
- Hierarchies are often just one of many possible views of relationships. Consider an invoice model with a "header" portion and a "detail" portion for line items.
Object-Oriented Languages

Simula (1967) is generally accepted as the first language to have the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is arguably the canonical example, and the one with which much of the theory of object-oriented programming was developed.
OO languages can be grouped into several broad categories:
- So-called "pure" OO languages, in which everything is treated consistently as an object, from primitives such as characters and punctuation all the way up to whole classes, prototypes, blocks, modules, etc. They were designed specifically to facilitate, even enforce, OO methods. Examples: Smalltalk, Eiffel, Ruby, JADE.
- Languages designed mainly for OO programming, but with some procedural elements. Examples: Java, Python.
- Languages that are historically procedural, but have been extended with some OO features. Examples: C++ (derived from C), Fortran 2003, Perl, COBOL 2002, PHP, ABAP.
- Languages with most of the features of objects (classes, methods, inheritance, reusability), but in a distinctly original form. Examples: Oberon (Oberon-1 or Oberon-2).
- Languages with abstract data type support, but not all features of object-orientation, sometimes called object-based languages. Examples: Modula-2 (with excellent encapsulation and information hiding), Pliant, CLU.
VI. Object-Oriented Frameworks
As we have previously mentioned, object-oriented (OO) techniques and frameworks are promising technologies for reifying proven software designs and implementations in order to reduce the cost and improve the quality of software. In this study, we examine frameworks from the perspective of object-oriented techniques.
Object oriented frameworks are a cornerstone of modern software engineering. Framework development is rapidly gaining acceptance due to its ability to promote reuse of design and source code. Frameworks are application generators that are directly related to a specific domain, i.e., a family of related problems.
As an example, consider building a Graphical User Interface (GUI) tool kit. We might choose to design and implement a single tool kit. On the other hand, if we design the tool kit as a framework, our single design will enable us to generate a collection of tool kits for a variety of GUI applications. Frameworks must generate applications for an entire domain. Consequently, there must be points of flexibility that can be customized to suit the application. For example, one point of extensibility might be the algorithm used to draw graphical elements.
The points of flexibility of a framework are called hot spots. Hot spots are abstract classes or methods that must be implemented. Frameworks are not executable: to generate an executable, one must instantiate the framework by implementing application-specific code for each hot spot. Once the hot spots are instantiated, the framework invokes these classes via callbacks. In a callback, the service-user code declares that it wants to be called on the occurrence of a particular event; the service-provider code then calls back into the service-user code when the event occurs. For this reason, the framework approach is sometimes characterized as "old code calls new code."
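A toy version of this in Java, using the GUI tool-kit example from above. DrawingKit is an invented mini-framework: its render() kernel is fixed, while drawElement() is a hot spot each application must implement, and the kernel calls that new code back.

```java
abstract class DrawingKit {
    // Frozen spot: kernel code, identical in every instantiation of the framework.
    final String render(String[] elements) {
        StringBuilder out = new StringBuilder();
        for (String e : elements) {
            out.append(drawElement(e)).append(';');  // old code calls new code
        }
        return out.toString();
    }

    // Hot spot: the drawing algorithm supplied by the application.
    abstract String drawElement(String element);
}

// One instantiation of the framework.
class AsciiKit extends DrawingKit {
    @Override String drawElement(String element) { return "[" + element + "]"; }
}
```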
Some features of the framework are not mutable and cannot be easily altered. These points of immutability constitute the kernel of a framework, also called the frozen spots of the framework. Frozen spots, unlike hot spots, are pieces of code already implemented within the framework that call one or more hot spots provided by the implementer. The kernel will be the constant and always present part of each instance of the framework.
Think of a framework as an engine. An engine requires power. Unlike a traditional engine, a framework engine has many power inlets. Each of these power inlets is a hot spot of the framework. Each hot spot must be powered (implemented) for the engine (framework) to work. The power generators are the application specific code that must be plugged in to the hot spots. The added application code will be used by the kernel code of the framework. The engine will not run until all plugs are connected.
A framework is a basic conceptual structure used to solve or address complex issues, usually a set of tools, materials, or components. Especially in a software context, the word is used as a name for different kinds of toolsets and component bases, and has since become something of a buzzword.
A software framework is a re-usable design for a software system (or subsystem). A software framework may include support programs, code libraries, a scripting language, or other software to help develop and glue together the different components of a software project. Various parts of the framework may be exposed through an API.
In engineering, architecture, drafting, publishing, and web design, a framework is a logical environment used to frame elements in a precise fashion. Failure of elements to conform to the framework is catastrophic.
The word framework is used as a buzzword, in a variety of contexts. For example, the Java collections framework is not a software framework, but a library.
The primary benefits of OO application frameworks stem from the modularity, reusability, extensibility, and inversion of control they provide to developers, as described below:
- Modularity -- Frameworks enhance modularity by encapsulating volatile implementation details behind stable interfaces. Framework modularity helps improve software quality by localizing the impact of design and implementation changes. This localization reduces the effort required to understand and maintain existing software.
- Reusability -- The stable interfaces provided by frameworks enhance reusability by defining generic components that can be reapplied to create new applications. Framework reusability leverages the domain knowledge and prior effort of experienced developers in order to avoid re-creating and re-validating common solutions to recurring application requirements and software design challenges. Reuse of framework components can yield substantial improvements in programmer productivity, as well as enhance the quality, performance, reliability and interoperability of software.
- Extensibility -- A framework enhances extensibility by providing explicit hook methods that allow applications to extend its stable interfaces. Hook methods systematically decouple the stable interfaces and behaviors of an application domain from the variations required by instantiations of an application in a particular context. Framework extensibility is essential to ensure timely customization of new application services and features.
- Inversion of control -- The run-time architecture of a framework is characterized by an ``inversion of control.'' This architecture enables canonical application processing steps to be customized by event handler objects that are invoked via the framework's reactive dispatching mechanism. When events occur, the framework's dispatcher reacts by invoking hook methods on pre-registered handler objects, which perform application-specific processing on the events. Inversion of control allows the framework (rather than each application) to determine which set of application-specific methods to invoke in response to external events (such as window messages arriving from end-users or packets arriving on communication ports).
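A minimal sketch of inversion of control in Java, with invented names (EventDispatcher, Handler): handler objects pre-register with the dispatcher, and the framework, rather than the application, decides when each hook method is invoked.

```java
import java.util.ArrayList;
import java.util.List;

interface Handler {
    void onEvent(String event);     // hook method invoked by the framework
}

class EventDispatcher {
    private final List<Handler> handlers = new ArrayList<>();

    void register(Handler h) { handlers.add(h); }   // "don't call us, we'll call you"

    // When an event occurs, the framework calls back every registered handler.
    void dispatch(String event) {
        for (Handler h : handlers) h.onEvent(event);
    }
}
```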
Developers in certain domains have successfully applied OO application frameworks for many years. Early object-oriented frameworks (such as MacApp and Interviews) originated in the domain of graphical user interfaces (GUIs). The Microsoft Foundation Classes (MFC) is a contemporary GUI framework that has become the de facto industry standard for creating graphical applications on PC platforms. Although MFC has limitations (such as lack of portability to non-PC platforms), its wide-spread adoption demonstrates the productivity benefits of reusing common frameworks to develop graphical business applications.
Application developers in more complex domains (such as telecommunications, distributed medical imaging, and real-time avionics) have traditionally lacked standard ``off-the-shelf'' frameworks. As a result, developers in these domains largely build, validate, and maintain software systems from scratch. In an era of deregulation and stiff global competition, however, it has become prohibitively costly and time consuming to develop applications entirely in-house from the ground up.
Fortunately, the next generations of OO application frameworks are targeting complex business and application domains. At the heart of this effort are Object Request Broker (ORB) frameworks, which facilitate communication between local and remote objects. ORB frameworks eliminate many tedious, error-prone, and non-portable aspects of creating and managing distributed applications and reusable service components. This enables programmers to develop and deploy complex applications rapidly and robustly, rather than wrestling endlessly with low-level infrastructure concerns. Widely used ORB frameworks include CORBA, DCOM, and Java RMI.
Although the benefits and design principles underlying frameworks are largely independent of the domain to which they are applied, we've found it useful to classify frameworks by their scope, as follows:
- System infrastructure frameworks -- These frameworks simplify the development of portable and efficient system infrastructure, such as operating system and communication frameworks, and frameworks for user interfaces and language processing tools. System infrastructure frameworks are primarily used internally within a software organization and are not sold to customers directly.
- Middleware integration frameworks -- These frameworks are commonly used to integrate distributed applications and components. Middleware integration frameworks are designed to enhance the ability of software developers to modularize, reuse, and extend their software infrastructure to work seamlessly in a distributed environment. There is a thriving market for Middleware integration frameworks, which are rapidly becoming commodities. Common examples include ORB frameworks, message-oriented middleware, and transactional databases.
- Enterprise application frameworks -- These frameworks address broad application domains (such as telecommunications, avionics, manufacturing, and financial engineering) and are the cornerstone of enterprise business activities. Relative to system infrastructure and middleware integration frameworks, enterprise frameworks are expensive to develop and/or purchase. However, enterprise frameworks can provide a substantial return on investment since they support the development of end-user applications and products directly. In contrast, system infrastructure and middleware integration frameworks focus largely on internal software development concerns. Although these frameworks are essential to rapidly create high-quality software, they typically don't generate substantial revenue for large enterprises. As a result, it's often more cost effective to buy system infrastructure and middleware integration frameworks rather than build them in-house.
Regardless of their scope, frameworks can also be classified by the techniques used to extend them, which range along a continuum from whitebox frameworks to blackbox frameworks. Whitebox frameworks rely heavily on OO language features like inheritance and dynamic binding to achieve extensibility. Existing functionality is reused and extended by (1) inheriting from framework base classes and (2) overriding pre-defined hook methods using patterns like Template Method. Blackbox frameworks support extensibility by defining interfaces for components that can be plugged into the framework via object composition. Existing functionality is reused by (1) defining components that conform to a particular interface and (2) integrating these components into the framework using patterns like Strategy and Functor.
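The two extension styles can be contrasted on a tiny invented "greeter" framework: the whitebox variant is extended by overriding a hook in a subclass (as with Template Method), the blackbox variant by plugging a component in through its interface (as with Strategy). All names and strings below are illustrative.

```java
// Whitebox: extended by inheritance; the subclass overrides a pre-defined hook
// and must know something of the base class's internal structure.
abstract class WhiteboxGreeter {
    final String greet(String name) { return salutation() + ", " + name; }
    protected String salutation() { return "Hello"; }   // hook method
}

class LoudGreeter extends WhiteboxGreeter {
    @Override protected String salutation() { return "HEY"; }
}

// Blackbox: extended by composition; any component conforming to the
// Salutation interface can be plugged in without subclassing.
interface Salutation { String word(); }

class BlackboxGreeter {
    private final Salutation salutation;
    BlackboxGreeter(Salutation s) { this.salutation = s; }
    String greet(String name) { return salutation.word() + ", " + name; }
}
```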
Whitebox frameworks require application developers to have intimate knowledge of the frameworks' internal structure. Although whitebox frameworks are widely used, they tend to produce systems that are tightly coupled to the specific details of the framework's inheritance hierarchies. In contrast, blackbox frameworks are structured using object composition and delegation more than inheritance. As a result, blackbox frameworks are generally easier to use and extend than whitebox frameworks.
However, blackbox frameworks are more difficult to develop since they require framework developers to define interfaces and hooks that anticipate a wider range of potential use cases.
Frameworks are closely related to other approaches to reuse, including:
- Patterns -- Patterns represent recurring solutions to software development problems within a particular context. Patterns and frameworks both facilitate reuse by capturing successful software development strategies. The primary difference is that frameworks focus on reuse of concrete designs, algorithms, and implementations in a particular programming language. In contrast, patterns focus on reuse of abstract designs and software micro-architectures.
- Class libraries -- Frameworks extend the benefits of OO class libraries in the following ways:
- Frameworks define ``semi-complete'' applications that embody domain-specific object structures and functionality -- Components in a framework work together to provide a generic architectural skeleton for a family of related applications. Complete applications can be composed by inheriting from and/or instantiating framework components. In contrast, class libraries are less domain-specific and provide a smaller scope of reuse. For instance, class library components like classes for Strings, complex numbers, arrays, and bitsets are relatively low-level and ubiquitous across many application domains.
- Frameworks are active and exhibit ``inversion of control'' at run-time -- Class libraries are typically passive, i.e., they perform their processing by borrowing threads of control from self-directed application objects. In contrast, frameworks are active, i.e., they control the flow of control within an application via event dispatching patterns like Reactor and Observer. The ``inversion of control'' in the run-time architecture of a framework is often referred to as The Hollywood Principle, i.e., ``Don't call us, we'll call you.''
- Components -- Components are self-contained instances of abstract data types (ADTs) that can be plugged together to form complete applications. Common examples of components include VBX controls and CORBA Object Services. In terms of OO design, a component is a blackbox that defines a cohesive set of operations, which can be reused based solely upon knowledge of the syntax and semantics of its interface. Compared with frameworks, components are less tightly coupled and can support binary-level reuse. For example, applications can reuse components without having to subclass from existing base classes.
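The framework-side ``inversion of control'' described above can be illustrated with a minimal event dispatcher; the names (EventLoop, register, dispatch) are hypothetical, loosely modeled on the Reactor pattern:

```python
class EventLoop:
    """Minimal event dispatcher: the framework owns the thread of control
    and calls back into application code (the Hollywood Principle)."""
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def dispatch(self, event, payload):
        # "Don't call us, we'll call you": the framework invokes the
        # application-supplied callbacks, not the other way around.
        return [h(payload) for h in self._handlers.get(event, [])]

loop = EventLoop()
loop.register("connect", lambda peer: "logged " + peer)
loop.register("connect", lambda peer: "greeted " + peer)
```

A class library would leave the loop over handlers to the application; here the framework runs it, which is exactly what makes single-stepping through such code harder, as discussed later.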
Frameworks can be viewed as a concrete reification of families of design patterns that are targeted for a particular application-domain. Likewise, design patterns can be viewed as more abstract micro-architectural elements of frameworks that document and motivate the semantics of frameworks in an effective way. When patterns are used to structure and document frameworks, nearly every class in the framework plays a well-defined role and collaborates effectively with other classes in the framework.
In practice, frameworks and class libraries are complementary technologies. For instance, frameworks typically utilize class libraries like the C++ Standard Template Library (STL) internally to simplify the development of the framework. Likewise, application-specific code invoked by framework event handlers can utilize class libraries to perform basic tasks such as string processing, file management, and numerical analysis.
The relationship between frameworks and components is highly synergistic, with neither subordinate to the other. Frameworks can be used to develop components, whereby the component interface provides a Facade for the internal class structure of the framework. Likewise, components can be used as pluggable strategies in blackbox frameworks. In general, frameworks are often used to simplify the development of infrastructure and middleware software, whereas components are often used to simplify the development of end-user application software. Naturally, components are also effective for developing infrastructure and middleware.
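The Facade relationship just described can be sketched as follows; all class names here are invented for illustration:

```python
# Invented framework-internal classes (the machinery the Facade hides).
class Parser:
    def parse(self, text):
        return text.split()

class Analyzer:
    def count(self, tokens):
        return len(tokens)

class TextMetrics:
    """Facade: a component interface that hides the framework's internal
    class structure; clients reuse it knowing only this interface's
    syntax and semantics, without subclassing anything."""
    def __init__(self):
        self._parser = Parser()
        self._analyzer = Analyzer()

    def word_count(self, text):
        return self._analyzer.count(self._parser.parse(text))
```

A client of TextMetrics never sees Parser or Analyzer, which is what allows the component to be reused (or even shipped in binary form) independently of the framework's inheritance hierarchy.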
When used in conjunction with patterns, class libraries, and components, OO application frameworks can significantly increase software quality and reduce development effort. However, a number of challenges must be addressed in order to employ frameworks effectively. Companies attempting to build or use large-scale reusable frameworks often fail unless they recognize and resolve challenges such as development effort, learning curve, integratability, maintainability, validation and defect removal, efficiency, and lack of standards, which are outlined below:
- Development effort -- While developing complex software is hard enough, developing high quality, extensible, and reusable frameworks for complex application domains is even harder. The skills required to produce frameworks successfully often remain locked in the heads of expert developers. One of the goals of this theme issue is to demystify the software process and design principles associated with developing and using frameworks.
- Learning curve -- Learning to use an OO application framework effectively requires considerable investment of effort. For instance, it often takes 6-12 months to become highly productive with a GUI framework like MFC or MacApp, depending on the experience of developers. Typically, hands-on mentoring and training courses are required to teach application developers how to use the framework effectively. Unless the effort required to learn the framework can be amortized over many projects, this investment may not be cost effective. Moreover, the suitability of a framework for a particular application may not be apparent until the learning curve has flattened.
- Integratability -- Application development will increasingly be based on the integration of multiple frameworks (e.g., GUIs, communication systems, databases) together with class libraries, legacy systems, and existing components. However, many earlier-generation frameworks were designed for internal extension rather than for integration with other frameworks developed externally. Integration problems arise at several levels of abstraction, ranging from documentation issues, to the concurrency/distribution architecture, to the event dispatching model. For instance, while inversion of control is an essential feature of a framework, integrating frameworks whose event loops are not designed to interoperate with other frameworks is hard.
- Maintainability -- Application requirements change frequently. Therefore, the requirements of frameworks often change, as well. As frameworks invariably evolve, the applications that use them must evolve with them.
Framework maintenance activities include modification and adaptation of the framework. Both modification and adaptation may occur on the functional level (i.e., certain framework functionality does not fully meet developers' requirements), as well as on the non-functional level (which includes more qualitative aspects such as portability or reusability).
Framework maintenance may take different forms, such as adding functionality, removing functionality, and generalization. A deep understanding of the framework components and their interrelationships is essential to perform this task successfully. In some cases, the application developers and/or the end-users must rely entirely on framework developers to maintain the framework.
- Validation and defect removal -- Although a well-designed, modular framework can localize the impact of software defects, validating and debugging applications built using frameworks can be tricky for the following reasons:
- Generic components are harder to validate in the abstract -- A well-designed framework component typically abstracts away from application-specific details, which are provided via subclassing, object composition, or template parameterization. While this improves the flexibility and extensibility of the framework, it greatly complicates module testing since the components cannot be validated in isolation from their specific instantiations.
- Inversion of control and lack of explicit control flow -- Applications written with frameworks can be hard to debug since the framework's ``inverted'' flow of control oscillates between the application-independent framework infrastructure and the application-specific method callbacks. This increases the difficulty of ``single-stepping'' through the run-time behavior of a framework within a debugger since the control flow of the application is driven implicitly by callbacks and developers may not understand or have access to the framework code. This is similar to the problems encountered trying to debug a compiler lexical analyser and parser written with LEX and YACC. In these applications, debugging is straightforward when the thread of control is in the user-defined action routines. Once the thread of control returns to the generated DFA skeleton, however, it is hard to trace the program's logic.
Moreover, it is usually hard to distinguish bugs in the framework from bugs in application code. As with any software development, bugs are introduced into a framework from many possible sources, such as failure to understand the requirements, an overly coupled design, or an incorrect implementation. When customizing the components in a framework to a particular application, the number of possible error sources increases.
- Efficiency -- Frameworks enhance extensibility by employing additional levels of indirection. For instance, dynamic binding is commonly used to allow developers to subclass and customize existing interfaces. However, the resulting generality and flexibility often reduce efficiency. For instance, in languages like C++ and Java, the use of dynamic binding makes it impractical to support Concrete Data Types (CDTs), which are often required for time-critical software. The lack of CDTs yields (1) increased storage overhead (e.g., due to embedded pointers to virtual tables), (2) performance degradation (e.g., due to the additional overhead of invoking a dynamically bound method and the inability to inline small methods), and (3) a lack of flexibility (e.g., due to the inability to place objects in shared memory).
- Lack of standards -- Currently, there are no widely accepted standards for designing, implementing, documenting, and adapting frameworks. Moreover, emerging industry standard frameworks (such as CORBA, DCOM, and Java RMI) currently lack the semantics, features, and interoperability to be truly effective across multiple application domains. Often, vendors use industry standards to sell proprietary software under the guise of open systems. Therefore, it's essential for companies and developers to work with standards organizations and middleware vendors to ensure the emerging specifications support true interoperability and define features that meet their software needs.
Over the next several years, we expect that the following framework-related topics will receive considerable attention from researchers and developers:
- Reducing framework development effort -- Traditionally, reusable frameworks have been developed by generalizing from existing systems and applications. Unfortunately, this incremental process of organic development is often slow and unpredictable since core framework design principles and patterns must be discovered ``bottom-up.'' However, since many good framework exemplars now exist, we expect that the next generation of developers will leverage this collective knowledge to conceive, design, and implement higher quality frameworks more rapidly.
- Greater focus on domain-specific enterprise frameworks -- Existing frameworks have focused largely on system infrastructure and middleware integration domains. In contrast, there are relatively few widely documented exemplars of enterprise frameworks for key business domains such as manufacturing, banking, insurance, and medical systems. As more experience is gained developing frameworks for these business domains, however, we expect that the collective knowledge of frameworks will expand to cover an increasingly wide range of domain-specific topics, and that an increasing number of Enterprise application frameworks will be produced. As a result, the benefits of frameworks will become more immediate to application programmers, as well as to infrastructure developers.
- Blackbox frameworks -- Many framework experts favor blackbox frameworks over whitebox frameworks since blackbox frameworks emphasize dynamic object relationships (via patterns like Bridge and Strategy) rather than static class relationships. Thus, it is easier to extend and reconfigure blackbox frameworks dynamically. As developers become more familiar with techniques and patterns for factoring out common interfaces and components, we expect that an increasing percentage of blackbox frameworks will be produced.
- Framework documentation -- Accurate and comprehensible documentation is crucial to the success of large-scale frameworks. However, documenting frameworks is a costly activity, and contemporary tools often focus on low-level method-oriented documentation, which fails to capture the strategic roles and collaborations among framework components. We expect that the advent of tools for reverse-engineering the structure of classes and objects in complex frameworks will help to improve the accuracy and utility of framework documentation. Likewise, we expect to see an increase in the current trend of using design patterns to provide higher-level descriptions of frameworks.
- Processes for managing framework development -- Frameworks are inherently abstract since they generalize from a solution to a particular application challenge to provide a family of solutions. This level of abstraction makes it difficult to engineer their quality and manage their production. Therefore, it is essential to capture and articulate development processes that can ensure the successful development and use of frameworks. We believe that extensive prototyping and phased introduction of framework technology into organizations is crucial to reducing risk and helping to ensure successful adoption.
- Framework economics -- The economics of developing frameworks includes activities such as the following:
- Determining effective framework cost metrics -- which measure the savings of reusing framework components vs. building applications from scratch;
- Cost estimation -- which is the activity of accurately forecasting the cost of buying, building, or adapting a particular framework;
- Investment analysis and justification -- which determines the benefits of applying frameworks in terms of return on investment;
We expect that the focus on framework economics will help to bridge the gap among the technical, managerial, and financial aspects of making, buying, or adapting frameworks .
VII. Overview of Framework Design Methods
In this section, we first outline a set of concepts necessary in a framework design language. We then use the outlined concepts to examine current framework design languages that are based on UML. Finally, we present current framework design processes.
The following five requirements detail the concepts necessary in a design language for frameworks [Bouassida:01]:
1. The framework design notation must provide for a means to describe statically the framework:
- classes and their relations (association, generalization, aggregation);
- core; and
- whitebox and blackbox hot-spots.
2. Within a whitebox hot-spot, the notation must statically guide the user to the potential changes they are expected to introduce. For example, the notation indicates that the user is expected to redefine the code of a method, or the user may add inheriting classes, etc. This criterion facilitates a correct reuse of a framework.
3. The notation must contain concepts for regulating the interactions within a framework by:
- explicitly showing the collaborations between objects instantiated from the framework classes;
- clarifying the object responsibilities, contexts on which the responsibilities depend and how the objects may combine the different responsibilities; and
- being abstract and independent of unessential implementation details that may unnecessarily tie the design to a specific environment and limit the framework generality.
4. The notation must show the framework's aim and potential uses, i.e., it must show scenarios of framework instantiations.
5. The notation must be unambiguous to facilitate the correct comprehension of the framework.
Current Framework Design Languages
One proposed notation models a framework through three UML diagrams: a class diagram enriched with packages, a collaboration diagram, and a use case diagram. The enriched UML class diagram expresses the static structure of a design. However, it does not distinguish between the classes in the core and those in the hot-spots. Although the name of an abstract class appears in italics in UML notation, this is not sufficient to deduce all the hot-spots. The UML collaboration diagram successfully shows the object interactions and responsibilities (sender/receiver). However, working at the message-exchange level can be too detailed, and it does not indicate how and in which context the framework works. The UML use case diagram defines a set of external actors and their possible uses of the system. It could therefore be used to define the framework's aim and possible contexts (requirement 4).
Fontoura et al. propose a UML profile for frameworks, called UML-F, in which a design is expressed by a class diagram and a sequence diagram, both extended by presentation tags (e.g., complete, incomplete), basic modeling tags (e.g., fixed, application, framework), and essential pattern tags (e.g., FacM-Creator, FacM-ConcreteCreator). The added tags are used to mark, essentially, the complete and incomplete parts of the diagrams, the variable parts, and the roles of diagram elements. In this notation, the extended class diagram represents the framework classes and relations. However, according to the tag definitions, this notation only identifies the whitebox hot-spots. In addition, several tags are complementary and thus redundant (e.g., complete and incomplete, application and framework). Furthermore, the combined pattern tags and presentation tags could overload the diagram and impede the understanding of the design. The extended sequence diagram guides the user when adapting framework interactions, and it explicitly shows the object collaborations and responsibilities. However, like standard UML sequence diagrams, it remains at a detailed level.
Sanada presents a UML extension that aims to be comprehensive and well defined. However, most of the proposed extensions have already been defined by Fontoura, and the only difference is the "covariant" constraint, which indicates that adding a subclass to a certain class might result in adding a subclass to another one.
Riehle proposes a role modeling language that adapts the OORAM methodology. The proposed language represents a framework through a class model with an extension-point class set (points of extension), a built-on class set (the framework interface), and a free role type set (the use of the framework by other frameworks). Overall, this notation represents the architecture and collaborations in a framework and describes the framework context. However, it focuses more on framework composition than on framework adaptation. For instance, it does not visually distinguish between extension-point classes and frozen classes in the framework. Therefore, one cannot easily recognize the whitebox and blackbox hot-spots.
Current Framework Design Processes
Current framework design processes can be classified as either bottom-up or top-down. Bottom-up design works well where a framework domain is already well understood, for example, after some initial evolutionary cycles. In this case, the design process starts from a set of existing applications and generalizes them to derive a framework design. On the other hand, top-down design is preferred when the domain has not yet been sufficiently explored. In this case, the design process starts from a domain analysis and then constructs the framework design.
Koskimies and Mossenback propose a two-phase bottom-up framework design process. The first phase, called problem generalization, generalizes a representative application in the framework domain into "the most general" form. In the second phase, called framework design, the generalization levels of the previous phase are considered in reverse order, leading to an implementation for each level. The implementation of the framework at level i requires adding specific classes and applying various design patterns to the framework. The last step in the design phase is to apply the resulting framework to the initial example problem of the generalization phase. This design process lacks guidelines for the problem generalization phase. In addition, both the degree of reuse of the resulting framework and the ease of deriving it depend on how well the original application represents the domain. Furthermore, the resulting framework does not provide reuse guidelines; that is, it neither clearly identifies the framework core and hot-spots nor guides the designer in finding them.
Schmid decomposes the framework design process into three steps:
- design of a class model for an (arbitrary) application in the framework domain;
- analysis and specification of the domain variability and flexibility, i.e., identification of the hot-spots; and
- generalization of the class model by applying a sequence of transformations that incorporate the domain variability.
This design process leaves it to the developer's expertise to identify the hot-spots during the second step.
Pree proposes a framework design process based on combining hot-spots specified as metapatterns, a set of design patterns that describe how to construct frameworks. This design process focuses on hot-spot combination without defining how to determine the hot-spots.
Fontoura et al. propose a design process that considers a set of applications as viewpoints (i.e., perspectives) of the domain. The process informally defines a set of unification rules that describe how the viewpoints can be combined to compose a framework. The result of applying the unification rules is a template-hook model that represents the hot-spots through template and hook methods. After developing the template-hook model, the developer must find which metapattern should be used to model each hot-spot. The resulting framework is an OMT class diagram that does not completely specify the framework; in particular, it neither distinguishes between the two hot-spot types nor specifies the object interactions. In addition, this process does not address semantic issues in the unified applications (e.g., synonyms and homonyms); it assumes that all semantic inconsistencies between the viewpoints have been resolved beforehand.
VIII. Framework Development and Issues
The three major stages of framework development are domain analysis, framework design, and framework instantiation.
Domain analysis attempts to discover the domain's requirements and possible future requirements. In order to capture these requirements, previously published experiences, existing similar software systems, personal experiences, and standards are taken into account. During domain analysis, the hot spots and frozen spots are partially uncovered.
The framework design phase defines the framework's abstractions. Hot spots and frozen spots are modeled (perhaps with Unified Modeling Language (UML) diagrams), and the extensibility and flexibility proposed in the domain analysis are outlined. As mentioned above, design patterns are used in this phase.
Finally, in the instantiation phase, the framework hot spots are implemented, generating a software system. It is important to note that each of these applications will have the framework's frozen spots in common. The framework development process phases are compared to the traditional object-oriented design phases in Figure 16. In this figure, we name the development phases as described in the literature.
As shown in Figure 16, traditional object-oriented development differs from framework development. In object-oriented development, the problem analysis phase, also called inception, studies the requirements of only a single problem. On the other hand, framework development captures the requirements for an entire domain. Furthermore, the final result of traditional object-oriented development is a single executable application, whereas many applications result from the instantiation phase of framework development.
The instantiation phase comprises the construction and transition phases of the traditional development. Thus, separate construction and transition phases are present in each of the framework's instances. For each of the framework's instances there is an implementation effort introduced by these phases.
Even though framework development promises to be very efficient, there are several issues that should be discussed. In the following sections we review seven issues that one must consider when choosing a framework model. These points should be considered carefully; they are neither good nor bad, just tradeoffs. Nonetheless, it is important to keep in mind that object-oriented framework development is a relatively recent approach. Also, one must remember that object-oriented design and implementation practices are themselves recent developments. It is our belief that framework development will evolve and establish itself as the standard approach for many domains, but surely not for all.
The assessment of the framework technology presented here is based on observations compiled by the authors about the development and instantiation of several frameworks for the e-commerce area in our laboratory, the TecComm/LES. One of these frameworks, called V-Market, is presented in , and we encourage the reader to examine this e-commerce framework.
Application Generator Development vs. Application Development
Frameworks generate applications by customization. They are not applications themselves; they are more complex constructs. It is important to keep in mind that the development of a framework will be at least as expensive as the development of a single application, and generally much more expensive. One must carefully analyze the need for the flexibility of a framework when assessing the requirements that must be met for a client or future user; otherwise, a single-use behemoth will be created unnecessarily.
On the other hand, the effort of building application generators can pay itself off through the repeated generation of applications within the proposed domain. When choosing a framework model one must ask: "Will I be creating applications of this same domain more than once?" If the answer is yes, then it is important to assess if the work of creating more than one application will pay off the work of creating an application generator. In brief, be aware of the costs versus the benefits of choosing to develop a framework instead of a custom-made software system.
Consider the design of a system to transform text files from one encoding, such as the ISO-8859-1 Latin character set, to an alternate encoding, such as the e-mail text encoding format Multipurpose Internet Mail Extensions (MIME). Should this system be built as a framework? The answer is yes if there are plans to convert ISO-8859-1 into other formats (e.g., UUENCODE), or even to convert ASCII to MIME, UUENCODE, or possible future formats. In the first case, one hot spot would be the type of the output text. In the second case, the type of the input text would also be a hot spot. For example, in Figure 17 text written using ISO-8859-1 characters such as "ç" and "í", which do not exist in ASCII, is encoded in MIME and UUENCODE.
But what if this system will only convert ASCII to MIME and there are no plans of further development? In this case, a framework might be an overelaborate approach to the problem. Sadly, most times the choice is not so clear. You will find yourself in gray areas more often than you would like. It is always a good idea to check how similar systems have met the client's requirements. You might even discover that each similar system would be an instance of your framework, but is it still worth building?
Framework Composition
It is common to integrate frameworks to fulfill application requirements. However, Michael Mattson argues that there are at least six common problems that application and framework developers encounter when integrating two or more frameworks. All of these problems derive from a set of five common causes: cohesive behavior, domain coverage, design intention, lack of access to source code, and lack of standards for the framework. These problems are detailed thoroughly in the cited work, along with proposed solutions to each.
If one develops a framework and expects it to be used, framework integration is an inevitable reality. These composition issues must not be taken lightly. Frameworks are often abandoned or aborted because they cannot be easily integrated with other frameworks. Framework integration is not an easy task, and composition must be considered seriously during development.
One way to consider composition when developing frameworks is to maintain a set of APIs that encapsulate the services the framework provides. When composing with such a framework, the application only needs to know which functions to call and with which parameters, ignoring the inner workings of the framework. Another option is to create a mediation layer to convert requests between frameworks. However, if many frameworks are being composed, this approach can prove expensive unless one chooses a unified mediation layer between all the composed elements. In this case, the mediation layer acts as "glue" between the composed elements.
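A minimal sketch of the mediation-layer idea, assuming two invented stand-in frameworks with incompatible interfaces (none of these names come from real products):

```python
class DataFramework:
    """Stand-in for one framework: produces (name, value) tuples."""
    def fetch(self):
        return ("temperature", 21)

class GuiFramework:
    """Stand-in for another framework: renders dicts with a 'label' key."""
    def render(self, widget):
        return "[" + widget["label"] + "]"

class Mediator:
    """Glue layer that converts one framework's output into the form the
    other expects, so neither framework needs to know about the other."""
    def __init__(self, source, sink):
        self.source = source
        self.sink = sink

    def show(self):
        name, value = self.source.fetch()
        return self.sink.render({"label": name + "=" + str(value)})
```

Because all conversion logic lives in the mediator, adding a third framework means writing one more adapter against the unified layer rather than one adapter per pair of frameworks.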
Instantiation and Framework Documentation Issues
A framework can also be classified according to its extensibility; it can be used as a white box or a black box. In white box frameworks, also called architecture-driven frameworks, instantiation is only possible through the creation of new classes. These classes and code can be introduced into the framework by inheritance or composition. One must program the framework and understand it very well in order to produce an instance.
Black box frameworks, in contrast, produce instances using configuration scripts. Following configuration, an instantiation automation tool creates the classes and source code. For example, it is possible to use a graphical wizard that guides the user step by step through a framework instantiation process. The black box approach does not require framework users to learn details of the framework internals. Consequently, these frameworks are also called data-driven frameworks and are generally easier to use. Frameworks that contain both white box and black box characteristics are called gray box frameworks.
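A toy illustration of data-driven instantiation: here the "configuration script" is just an ordered list of component names, and the framework assembles a working application from it without the user writing or subclassing any code (the registry and component names are invented):

```python
# Registry of pluggable components the framework knows how to assemble.
COMPONENTS = {
    "upper": str.upper,
    "reverse": lambda s: s[::-1],
}

def instantiate(config):
    """Build a pipeline application from a configuration script (an
    ordered list of component names) -- no programming required."""
    steps = [COMPONENTS[name] for name in config]

    def application(text):
        for step in steps:
            text = step(text)
        return text

    return application

app = instantiate(["upper", "reverse"])   # data-driven instantiation
```

A graphical wizard would produce exactly such a configuration list behind the scenes; the user never touches the framework's classes.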
White box, black box, and mixed (gray box) approaches all have a steep learning curve. Thus, the ability to create executable systems from a framework will depend on the usability of the instantiation mechanisms and the documentation of the framework. If a framework is poorly documented, few people will use or maintain it over time. Moreover, a new type of documentation must accompany frameworks: a "how-to-extend" guide and/or instantiation tools.
Many documentation guidelines have been proposed for frameworks. The hot-spot cards approach focuses on the framework's flexible points. Another approach is to create cookbooks that discuss how the framework should be implemented and the steps required. These cookbooks contain many "recipes," which describe informally how to solve specific problems while instantiating the framework. A third approach is to map the architectural solutions used throughout the design of the framework. However, documentation of the framework's architecture does not account for all of the framework's facets.
As said before, framework development frequently uses design patterns. Even though these architecture fragments constitute a limited view of a framework, they are well known patterns that can aid the comprehensibility of the framework instantiation process. Consider again the example shown in Figure 17. In this problem domain, one wishes to convert characters between formats using different algorithms (e.g., the ISO-8859-1 to MIME conversion algorithm). An effective solution might create an extensibility point (hot spot) that allows the conversion algorithm to be altered in a "plug and play" fashion. The Strategy design pattern is suited to this application and allows different conversion algorithms to be "plugged in" to the framework without altering previously written code. This example is modeled in UML in Figure 19. This diagram uses the same notation as Figure 16, with a few more elements: the folded-corner box is a comment, and the dashed line indicates which element of the diagram the comment refers to. It is important to notice that Figure 19 can also serve as part of the documentation of the design of the framework.
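The Strategy hot spot described above can be sketched in code. This is an illustrative reconstruction, not the paper's actual design: the class names are invented, and the "conversion" shown is a toy stand-in (escaping non-ASCII characters in an RFC 2047-like style) rather than a real MIME encoder.

```python
# Minimal Strategy-pattern sketch for the character-conversion hot spot.
# Class names and the escaping rule are illustrative assumptions only.
from abc import ABC, abstractmethod

class ConversionStrategy(ABC):
    """Hot spot: each concrete strategy is one conversion algorithm."""
    @abstractmethod
    def convert(self, text: str) -> str: ...

class Latin1ToMime(ConversionStrategy):
    """Toy stand-in for an ISO-8859-1 -> MIME (RFC 2047-style) conversion."""
    def convert(self, text: str) -> str:
        return "".join(c if ord(c) < 128 else "=%02X" % ord(c) for c in text)

class Identity(ConversionStrategy):
    def convert(self, text: str) -> str:
        return text

class Converter:
    """Frozen spot: delegates to whatever strategy is plugged in."""
    def __init__(self, strategy: ConversionStrategy):
        self.strategy = strategy

    def run(self, text: str) -> str:
        return self.strategy.convert(text)

print(Converter(Latin1ToMime()).run("café"))   # non-ASCII character escaped
print(Converter(Identity()).run("café"))       # passed through unchanged
```

A new conversion algorithm is added by writing one new `ConversionStrategy` subclass; `Converter`, the frozen spot, never changes.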
Even though there are many possible and feasible approaches to documenting a framework, there is no clear standard. The safest approach is to use two or more of the approaches discussed above.
Domain Analysis Cost and Experience
As stated above, frameworks are created to generate applications for a specific domain. For this purpose, one of the development phases of frameworks is domain analysis. Unlike the requirements phase of a single software system, domain analysis covers an entire class of problems.
Domain analysis attempts to characterize the size and complexity of a chosen domain. If the domain is too large, it is time consuming to gather and assess information and resources. Furthermore, the development time and cost of the framework will be excessive. In addition, individuals familiar with the domain, prototypes or similar software systems will be necessary. Finding sources of experience that cover a large domain is difficult.
On the other hand, if one chooses a domain that is too narrow, the framework's applicability is reduced and the generated applications will be too similar to justify the effort of building a framework. It is important to distinguish the hot spots that the requirements make necessary from those that are unnecessary or superfluous.
Parallel Evolution of Instances and Frameworks
As a framework matures, it changes and evolves accordingly. This evolution can involve alterations to its architecture, new requirements being met or old ones being dropped, and many other sources of change. Meanwhile, applications generated using the framework will also evolve and change.
How is it possible to deal with both application and framework evolution? Applications based upon frameworks might be orphaned if the framework is changed or discontinued. There is no clear solution, and many developers suffer from the "not invented here" syndrome, refusing to use any framework not built by themselves. The only practical advice for this situation is to study the framework to be used carefully; a well designed and implemented framework will not be as volatile as a poorly created one.
By "well designed" we mean that frameworks should have solid documentation and meet the domain requirements. A framework that maintains the object oriented concept of encapsulation and has a well-defined public interface is likely to remain forward compatible at the interface level across upgrades and revisions. The evaluation of a framework's design and/or implementation is not always straightforward; one must consider the experience of the designer, the complexity of the instantiation process, the requirements met and unmet, and the update road map. An update road map contains the plans for updating the framework, whether it will represent a complete redesign at each new version or a backward compatible smooth update.
Flexibility vs. Complexity and Performance
As stated above, frameworks are built for flexibility and generality, trying to cover a whole domain instead of particular problems. This approach produces an application generator that is more complex and more extensible (through hot spots) than traditional software systems. Extensibility is achieved using inheritance and dynamic binding, common features in object-oriented languages. For this reason, there is a tradeoff between flexibility and performance: dynamic binding introduces an overhead, and its pervasive use throughout the system can become a performance hindrance.
This tradeoff makes it necessary for the framework designer to choose the hot spots carefully, neither under-providing flexibility nor creating a far too generic framework. Even though flexibility is important and useful, it should only be present where it is needed. Otherwise one could devise a "universal deterministic problem solver framework," illustrated in Figure 20. In this hypothetical framework, the hot spots are the problem_not_solved(), try_to_solve_problem() and return_solution() methods. It can be instantiated to find the solution of any problem in the deterministic problem domain, but it is of course useless.
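The futility of such a framework is easy to see in code. The sketch below reconstructs the three hot spots named in the text (Figure 20 itself is not available here, so the frozen-spot loop is an assumption): the framework contributes nothing but a bare control loop, and every instance must supply essentially all of the logic itself.

```python
# Sketch of the "universal deterministic problem solver" described above.
# The frozen-spot loop is an assumed reconstruction from the hot spot names.

class UniversalSolver:
    # Frozen spot: a bare control loop around three hot spots.
    def solve(self):
        while self.problem_not_solved():
            self.try_to_solve_problem()
        return self.return_solution()

    # Hot spots: the entire problem lives here, in the user's code.
    def problem_not_solved(self): raise NotImplementedError
    def try_to_solve_problem(self): raise NotImplementedError
    def return_solution(self): raise NotImplementedError

# "Instantiating" the framework for a trivial problem: count to ten.
class CountToTen(UniversalSolver):
    def __init__(self): self.n = 0
    def problem_not_solved(self): return self.n < 10
    def try_to_solve_problem(self): self.n += 1
    def return_solution(self): return self.n

print(CountToTen().solve())   # → 10; the framework contributed almost nothing
```

Because every hot spot must be filled in from scratch, the "reuse" offered by this maximally generic framework is an empty loop: total flexibility, zero leverage.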
Along with performance issues, the overuse of hot spots in a framework design inevitably leads to complex software systems. Using hot spots to introduce generic solutions adds to the complexity of the framework, and since there are no requirements behind the "extra" hot spots, the added complexity contributes nothing in terms of functionality. It is common for developers to introduce improper hot spots thinking they will make the framework "more powerful." However, this approach leads to the complexity and performance issues described above, and sometimes the extra functionality is itself an inconvenient addition.
Problems with Debugging Framework Instances
Frameworks generate applications that intertwine application-specific code and frozen spot code. Consequently, a debug trace of the application code often leads into framework code. Single-step debugging does not work well because of this blend: frozen spot calls must be distinguished from hot spot calls.
No debugger can automatically distinguish frozen spot code from hot spot code. One possible mitigation is the use of pre- and postconditions in every method of the frozen spot code, serving as executable assertions that ensure these calls are valid. The assertions will then flag any corrupted input or feedback given to the frozen spot code, and tracing becomes much easier. However, this solution does not remove the complexity introduced by the mix of hot spot and frozen spot code. By entangling hot and frozen spots, the framework becomes harder to instantiate, as application developers must change and introduce code carefully in order to avoid corrupting frozen spot code that should not be altered. The cost of maintaining such confusing code is unavoidably high.
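The executable-assertion idea can be sketched as a small contract wrapper around frozen-spot methods. This is an illustrative sketch under stated assumptions: the `contract` decorator and the `Buffer` class are invented for the example, but they show how a violated precondition is caught at the frozen-spot boundary instead of corrupting framework state deep inside a trace.

```python
# Minimal sketch of executable pre-/postconditions guarding frozen-spot
# code, as suggested above. Decorator and class names are illustrative.

def contract(pre=None, post=None):
    """Wrap a frozen-spot method with assertion-based pre/postconditions."""
    def decorate(method):
        def wrapper(self, *args):
            if pre is not None:
                assert pre(self, *args), f"precondition failed in {method.__name__}"
            result = method(self, *args)
            if post is not None:
                assert post(self, result), f"postcondition failed in {method.__name__}"
            return result
        return wrapper
    return decorate

class Buffer:
    """Frozen spot: a bounded buffer whose invariants are checked at runtime."""
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity

    @contract(pre=lambda self, item: len(self.items) < self.capacity,
              post=lambda self, _: len(self.items) <= self.capacity)
    def push(self, item):
        self.items.append(item)

buf = Buffer(capacity=1)
buf.push("ok")            # precondition holds
try:
    buf.push("overflow")  # hot-spot misuse is caught at the boundary
except AssertionError as err:
    print("caught:", err)
```

When a hot-spot implementation feeds bad data into the frozen spot, the failure surfaces at the boundary call with the offending method's name, which is exactly where the debug trace should stop.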
IX. Future of Object Technology
OOP has a weak point: it is conceptualized from the nature of computers. The extent to which a program can capture the real world is restricted, because the program works only in terms of symbols (and a finite set of those symbols). These symbols act as "boxed" surfaces into which "round" or "fluid" real-life things must fit. For example, the most immediate implementation of a CAR object pays little attention to the real-life existence of a tiny rust spot on the front hood. A typical CAR object exposes interfaces that allow the user to call CAR.move or CAR.park, and perhaps even CAR.AC_Turn_On. However, even if CAR.Small_Rust_Spots_On_Hood were available in the definition of the CAR object, other details would still be overlooked. By definition, even if "all" details were considered, they would form a finite description of their real-life counterpart, a car, which has an infinite number of aspects.
The point here is not to motivate the creation of huge objects that attempt to describe every detail of their real-world targets; such an approach is bound to be incomplete, given the holes in any finite set representing an infinite reality. Instead, what is sought here is a quantitative method that describes, in consistent and simple terms, the relationship between computer objects and those of the real world. For example, such a relationship would tell us: is the CAR far from the DOG? How much does the DOG like the CAR? Would 5 miles be considered "far"? How about 1 inch?
Classic logic fails as this quantitative method because it cannot capture the smooth edges of real-life objects. For example, suppose that the DOG object "D" does "not like so much" the CAR object "C". The shortcoming of crisp logic here is that it cannot relate object "D" to object "C" in terms of fondness, because DOG "D" both likes and does not like object "C" simultaneously. Elaborating on this example shows, again, that a huge number of descriptors would be needed in both object definitions in order to cover every detail of the objects DOG and CAR. However, by definition, even such huge objects would fail to relate DOG to CAR completely. In a sense, the relationship of real-life "things" to their computerized, symbolic "objects" is infinite-to-one, because real-life things are infinite and computer objects are finite.
A superset of classic logic can, however, sustain the task. Fuzzy logic, for example, can quantify the relationship between computers' symbolic objects and their real-life counterparts. It is, by definition, the logic that describes the gradation of any element in relation to a known quantity. Typically the element has an attribute, and this attribute is measured on a scale between 0 and 1.0. For example, the color blue could be said to be 0.6 black, and the color white 0.0 black. In this form of thinking, the (yellow) pencil I am holding in my hand is 0.15 black. Classic logic cannot determine how "black" this pencil is: it would say either 0.0 black ("not black") or 1.0 black ("yes, it's black"), and neither describes how much black the pencil is. Fuzzy logic can. By the same token, a car "C" is 0.9999 far from dog "D" when it is 5 miles away, and 0.0001 far when they are 1 inch apart. The question "how far?" now has an answer, as does the question "how dark is the pencil?" (answer: 0.15 black).
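Such graded memberships are straightforward to compute. The sketch below is illustrative only: the two membership functions are assumptions chosen to behave as the text describes (blackness between 0 and 1, "farness" rising smoothly with distance), not standard definitions.

```python
# Toy fuzzy membership functions for the "blackness" and "farness" measures
# described above. Both function shapes are illustrative assumptions.

def blackness(r, g, b):
    """Degree of membership in 'black', from RGB in [0, 255]; 1.0 = pure black."""
    return 1.0 - (r + g + b) / (3 * 255)

def farness(distance_m, scale_m=1000.0):
    """Degree of membership in 'far', rising smoothly from 0 toward 1."""
    return distance_m / (distance_m + scale_m)

print(blackness(0, 0, 0))                  # pure black -> 1.0
print(blackness(255, 255, 255))            # pure white -> 0.0
print(round(farness(8047), 2))             # ~5 miles: strongly "far"
print(round(farness(0.0254), 6))           # 1 inch: barely "far" at all
```

Classic logic would force each of these answers to 0 or 1; the fuzzy membership value preserves the "how much" that the text argues is missing from crisp object relationships.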
The idea of relating objects in a fuzzy way is not, to the best of my knowledge, well tested. But I believe it could be at the core of the future of object-oriented programming. Fuzzy OOP, or just plain "fuzzy logic," could help relate computer-encapsulated "objects" to real-life "things" and could become the power behind new software. Such software would be able to emulate, for example, a set of finite scenarios under a set of restricted preconditions: it would first set forth the quantitative relationships between real-life objects and then manipulate them symbolically with great ease.
The key behind this idea is recognizing the inability of computers (or any finite-set algorithm) to work outside their own symbolic universe, and working toward emulating real-life instances and their relationships more adequately than symbolic classic logic does.
Object-oriented techniques and object-oriented programming (OOP), and in general object encapsulation, manipulation, and code reusability, have been at the core of recent technological advances and implementations. OOP is a concept that drives technology. Although this comes at the price of overhead (in terms of bandwidth, memory, or clock cycles), OOP makes complex tasks easier to implement. This includes tasks with uncertain results, or at least results vast and varied enough to be hard to predict, as well as tasks with unknown preconditions or inputs. The overhead introduced by OOP has become a small tradeoff, as computers continue to increase in power to compensate for programming needs, and it is justified by a marketplace that demands fast programming solutions.
Regarding the framework approach, we conclude that frameworks should be considered when product requirements change rapidly. Frameworks can also be used for incremental development, by implementing simple hot spot code at first and subsequently upgrading it. Frameworks are a recent research and development topic; consequently, there are still many open problems. One such problem is framework documentation: as stated before, at the time of writing there are no official or de facto standards for documenting frameworks. The lack of common ground creates a gap between framework developers, extenders, and users. Other open questions lie in the area of framework economics, which estimates the cost of building frameworks versus applications and the return on investment. There are currently few industrial frameworks (i.e., frameworks being used in commercial and industrial environments).
Object-oriented application frameworks will be at the core of leading-edge software technology in the twenty-first century. As software systems become increasingly complex, object-oriented application frameworks are becoming increasingly important for industry and academia. The extensive focus on application frameworks in the object-oriented community offers software developers an important vehicle for reuse and a means to capture the essence of successful patterns, architectures, components, and programming mechanisms.
The good news is that frameworks are becoming mainstream, and developers at all levels are increasingly adopting and succeeding with framework technologies. However, OO application frameworks are ultimately only as good as the people who build and use them. Creating robust, efficient, and reusable application frameworks requires development teams with a wide range of skills. We need expert analysts and designers who have mastered patterns, software architectures, and protocols in order to alleviate the inherent and accidental complexities of complex software. Likewise, we need expert middleware developers who can implement these patterns, architectures, and protocols within reusable frameworks. In addition, we need application programmers who have the motivation, skills, and training to learn how to use these frameworks effectively. We encourage you to get involved with others working on frameworks by attending conferences, participating in online mailing lists and newsgroups, and contributing your insights and experience.
Finally, newcomers to framework development should consider the issues exposed here carefully. Consider what you need and what you are doing, and be aware that frameworks have their pros and cons: they are not a solution to all problems. Furthermore, if you are considering using an existing framework, study its documentation and verify whether a good-quality "how to instantiate" explanation exists. It is also important to observe the applications that have been generated with it and the amount of effort spent in the process.
A very special thanks goes out to Dr. Peter Rittgen, without whose motivation and encouragement I would not have continued my studies in informatics. Dr. Rittgen is the one professor who truly made a difference in my views of system development philosophies. It was under his tutelage that I developed a focus and became interested in object-oriented techniques. He provided me with direction and motivation, and became more of a mentor and friend than a professor. It was through his persistence, understanding, and kindness that I completed this paper. I doubt that I will ever be able to convey my appreciation fully, but I owe him my eternal gratitude.
- Forrest Shull, Filippo Lanubile, and Victor R. Basili, Fellow, IEEE," Investigating Reading Techniques for Object Oriented Frameworks Learning", URL : http://citeseerx.ist.psu.edu, accessed March 8th, 2010.
- M. Aksit, B. Tekinerdogan, F. Marcelloni and L. Bergmans," Deriving Object-Oriented Frameworks from Domain Knowledge, Building Application Frameworks: Object-Oriented Foundations of Framework Design" M. Fayad, D. Schmidt, R. Johnson (Eds.), John Wiley & Sons Inc., pp. 169-198, 1999.
- "Object Oriented Framework" http://www.acm.org/crossroads/xrds7-4/frameworks.html
- Birrer, A. and Eggenschwiler, T., "Frameworks in the Financial Engineering Domain: An Experience Report", ECOOP '93 Proceedings, Lecture Notes in Computer Science 707, Springer-Verlag, 1993.
- Booch, G., Jacobson, I., and Rumbaugh, J., "The Unified Modeling Language User Guide", Addison-Wesley, 1998.
- Jacobson, I., Booch, G., and Rumbaugh, J., "The Unified Software Development Process", Addison-Wesley, 1999.
- H. Ben-Abdallah , N. Bouassida, F. Gargouri, A. Ben-Hamadou: "A UML based Framework Design Method", in Journal of Object Technology, vol. 3, no. 8 September-October 2004, pp. 97-119. URL : http://www.jot.fm/issues/issue_2004_09/article1
- Roy H. Campbell and Nayeem Islam "A Technique for Documenting the Framework of an Object-Oriented System", Computing Systems, Vol. 6, No. 4, Fall 1993.
- Mattsson, M., Bosch, J., and Fayad, M.E. "Framework Integration Problems, Causes, Solutions", Communication of the ACM October 1999/Vol.42, No.10.
- Ripper, P., Fontoura, M. F., Neto, A. M., and Lucena, C. J. V-Market: "A Framework for e-Commerce Agent Systems.",World Wide Web, Baltzer Science Publishers, 3(1), 2000.
- "Object Oriented Programming Oversold. ", URL: http://www.cs.loyola.edu/~binkley/772/articles/oopbad.htm
- Mohamed E. Fayad and David S. Hamu "Object-Oriented Enterprise Frameworks: Make vs. Buy Decisions and Guidelines for Selection", The Communications of ACM, 1997, to appear.
- M.F. Fontoura, W. Pree, B. Rumpe: "UML-F: A Modeling Language for Object-Oriented Frameworks," Proceedings of European Conference on Object Oriented Programming (ECOOP 2000), Springer-Verlag, 2000.
- M.F. Fontoura, S. Crespo, C.J. Lucena, P. Alencar, D. Cowan: "Using viewpoints to derive Object-Oriented Frameworks", A case study in the web education domain, Journal of Systems and Software (JSS) Elsevier Science, 54 (3), 2000.
- Gamma, E., Helm, R., Johnson, R., and Vlissides, J., "Design Patterns: Elements of Reusable Object-Oriented Software", Addison-Wesley, 1995.
- Herman Hueni and Ralph Johnson and Robert Engel, "A Framework for Network Protocol Software,'' Proceedings of OOPSLA, Austin, Texas, October 1995.
- John C. Mitchell, "Concepts in programming languages", Cambridge University Press, 2003.
- Ralph Johnson and Brian Foote. "Designing Reusable Classes.'' Journal of Object-Oriented Programming. SIGS, 1, 5 (June/July. 1988), 22-35.
- Koskimies, K. and Mössenböck, H., "Designing a Framework by Stepwise Generalization", 5th European Software Engineering Conference, Barcelona, Lecture Notes in Computer Science 989, Springer-Verlag, 1995.
- Mattsson, M. "Object-Oriented Frameworks," A Survey of Methodological Issues. Technical Report 96-167, Dept. of Software Eng. and Computer Science, University of Karlskrona/Ronneby.
- --, MIME (Multipurpose Internet Mail Extensions) Part Three: Message Header Extensions for Non-ASCII Text. Request for Comments (RFC) 2047, URL: http://www.rfc-editor.org/rfc/rfc2047.txt.
- Pree, W., "Design Patterns for Object-Oriented Software Development", Addison-Wesley, 1995.
- "Rational Software", URL: http://www.rational.com/UML/
- T. Reenskaug , "Working with objects" , Greenwich : Manning, 1996.
- Riehle, D., "Framework Design: A Role Modeling Approach", Dissertation No. 13509, ETH Zurich, 2000. http://www.riehle.org/diss/
- Y. Sanada, R. Adams: Representing Design Patterns and Frameworks in UML-Towards a Comprehensive Approach, Journal of Object Technology, Vol. 1, N°2, July-August 2002.
- Douglas C. Schmidt, "Applying Design Patterns and Frameworks to Develop Object-Oriented Communication Software,'' Handbook of Programming Languages, Volume I, edited by Peter Salus, MacMillan Computer Publishing, 1997.
- Fayad, M. E., Schmidt, D. C., and Johnson, R. E. Building Application Frameworks. Addison-Wesley Pub Co, 1st edition, 1999.
- Douglas C. Schmidt, Mohamed Fayad : Object-Oriented Application Frameworks, URL: http://www.cs.wustl.edu/~schmidt/CACM-frameworks.html
- "Unified Modeling Language", URL : http://en.wikipedia.org/wiki/Unified_Modeling_Language
- "Framework", URL: http://en.wikipedia.org/wiki/Framework
- "Object Oriented Programming", URL: http://en.wikipedia.org/wiki/Object-oriented_programming
- Yigal Rechtman, "Object Oriented Programming... What next?", URL: www.rechtman.com/oop.htm
- Cellular communication refers to modern mobile communication technology that connects small, low-power devices and enables them to communicate over wide distances. "Cellular Communication networks", URL: http://www.eecs.lehigh.edu/~caar/comm-net.pdf
- PARC (Palo Alto Research Center Incorporated), formerly Xerox PARC, is a research and development company in Palo Alto, California with a distinguished reputation for its contributions to information technology.
- N. Bouassida, H. Ben-Abdallah, and F. Gargouri, A. Ben-Hamadou: FUML: a design language for frameworks and its formal specification,International conference on Software Engineering and Formal Methods (SEFM'2003), Australia, Brisbane, 26-29 September, 2003.