Quality-Based Web Service Selection




We have proposed a quality-based Web service architecture (QWSA), which includes a quality server that acts on behalf of the requester to select the desired Web services dynamically. The main contribution of our proposal is the development of a quality matchmaker, which uses a mathematical method to facilitate and assist the requester in discovering and selecting the best available Web services. Microsoft Visual Studio .NET 2003 is used to implement the quality matchmaker and the service selection process by developing a simulation system. We have demonstrated the feasibility of quality-based service selection through a scenario.


Additional Key Words and Phrases: Web services, quality criteria, quality matchmaker, mathematical model.


1.1 Motivation

Web services are a technology that allows applications to communicate with each other over the Internet in a platform- and programming-language-independent manner. Web services achieve system interoperability by exchanging application data and service interactions using XML-based standards (Bray et al.) such as the Simple Object Access Protocol (SOAP) [Gudgin et al. 2003], the Web Service Description Language (WSDL) [Christensen et al. 2001] and Universal Description, Discovery and Integration (UDDI) (Manes).

Web services are becoming increasingly popular, and more businesses are planning to build their future solutions on Web services technology. Due to this rapid growth, a Web service selection method that considers quality criteria is becoming a significant factor in the success of this emerging technology.

However, the current Web service architecture does not offer comprehensive quality support. UDDI is a registry database and service discovery engine that allows the requester to search for Web services based on their functionality. The current selection mechanism in Web service registries [Bellwood et al. 2003] is based only on functional information. Requesters need a selection mechanism that is based on functional information as well as non-functional information, that is, quality criteria such as availability, reliability and reputation. In addition, service discovery and selection are still performed by human clients, which is not desirable when thousands of services are available for selection. Searching for and finding the most suitable service to match the requester's functional and non-functional requirements may be better performed by an automated system. This paper presents our attempt to develop a quality mechanism for Web services.

1.2 Related work and our contributions

Quality has been extensively studied in the area of computer networks and especially the Internet and real-time computing. However, quality in the context of Web services has been a recent research activity.

This research touches on various quality issues in the Web services context. Therefore, relevant previous work on quality Web service architectures and on quality-driven service matchmaking and selection is discussed below.

1.2.1 Quality Web Service Architecture

Different approaches have been introduced in order to extend the current Web service architecture with the quality capabilities, as described below.

Extending the Web Service Architecture with a QoS Broker

Chen et al. [2003a] propose a QoS Web service architecture in which a QoS broker acts as a mediator between the service providers and the service clients, enabling the broker to make the selection instead of the client. The QoS broker consists of four components: a QoS information manager, a QoS negotiation manager, a QoS analyzer and a database. The broker negotiates with the QoS server(s) to ensure that the guaranteed quality of service can be provided to the clients.

Seo et al. [2004] propose a Web Service Quality Broker Architecture that helps the service requesters to find an optimal Web service. They describe the negotiation process by using the Multi-Attribute Utility Theory (MAUT) on the basis of the quality of information for both sides (service requester and service provider) participating in negotiation. The quality model is proposed by classifying the quality attributes into performance, safety, and cost aspects.

Yu and Lin [2004] present a QoS-Capable Web Service Architecture (QCWS) in which a QoS broker acts as a mediator between service providers and clients. The QoS server collects QoS information about servers, makes selection decisions for clients, and negotiates with servers to get QoS commitments. A non-homogeneous resource allocation algorithm (RQ) is used to allocate different amounts of resources to different clients according to their requirements.

Chen et al. [2003b] propose UX (UDDI eXtension), a QoS-aware system that facilitates federated discovery for Web services. The QoS feedback from the service requesters is used to predict the service's performance. The UX server supports wide-area discovery across domains, and its inquiry interface conforms to the UDDI specification. A discovery export policy is proposed that controls how the registered information is exported to UX servers and requesters.

A QoS broker has been introduced as a mediator between the service requesters and providers, and it is used to select the best service in the architectures described above. However, the QoS brokers are not well defined: there is no information about how they discover and select the optimal Web services.

Extending the Web Service Architecture with a QoS Certification

Ran [2003] proposes a new Web Services discovery model in which the functional and non-functional requirements (for example, quality of service) are taken into account for the service discovery. A QoS certifier is introduced in this model that certifies the QoS claims given by the providers, and verifies these claims for the clients. An extension to UDDI's data structure types is proposed for implementing the proposed discovery model.

Serhani et al. [2004] present a broker-based architecture for QoS management of Web services. They propose a QoS broker that is used as a third-party Web service published in the UDDI registry. It is invoked when a user requests a Web service with QoS requirements. The role of the QoS broker is to support QoS provisioning and assurance in delivering Web services. It introduces a new concept, called QoS verification and certification, which is used together with the QoS requirements in the selection process of Web services.

The QoS certification approach is used in both Ran [2003] and Serhani et al. [2004], but with different functions. In Ran [2003], the QoS certifier extends the original UDDI model and verifies the QoS claim for a Web service before registration, whereas in Serhani et al. [2004], the QoS certifier is a module in the QoS broker for certifying Web services and their providers' QoS. The QoS certifier introduced in Ran [2003] is not well defined; it does not describe the details of the certification process as Serhani et al. [2004] do.

From the previous approaches, one cannot find a comprehensive solution for selecting the best available Web service based upon quality criteria. The broker functions are not well defined and no details are provided for the service selection. This paper proposes a quality-based Web service architecture (QWSA) to bridge the gap between the service requester's quality requirements and the service providers' quality specifications. The architecture incorporates a quality server that facilitates and assists the service requester in discovering and selecting the best available Web services. The core component of the quality server is a quality matchmaker, which selects the best service based on a mathematical model.

1.2.2 Quality Service Selection

There are several research activities related to matchmaking, discovery and selection that are based on semantic and computational approaches.

Semantic and Ontology-Based Matchmaking Techniques

Most matchmaking techniques are based on semantic approaches.

Balke and Wagner [2004] propose a cooperative discovery algorithm for selecting a suitable service using the ontology-driven approach DAML-S. The paper notes UDDI's shortcomings: UDDI is limited to keyword matching and does not support any inference to relax descriptions according to user preferences or ontologies.

The above work is based on semantic matching using the DAML-S semantic Web services framework, and the matching does not address QoS issues.

Maximilien and Singh [2004] propose a comprehensive agent-based trust framework for service selection in an open environment. The authors introduce a policy language to capture the service consumer's and provider's profiles, and a QoS ontology as a specification that enables services to be matched both semantically and dynamically. The semantic matchmaking allows the service agent to match consumers to services using the provider's advertised QoS policy for the services and the consumers' QoS preferences; both are expressed using the concepts in the QoS ontology (QoS model). The service selection is based on the user's preferences and business policies, and considers the trustworthiness of the service instances. Consequently, their approach enables applications to be configured dynamically at run time to select the best services with respect to each participant's preferences.

Li and Horrocks [2003] propose a matchmaking process based on DAML-S semantic Web ontology and a Description Logic (DL) reasoner to compare ontology based service descriptions.

Pilioura et al. [2004] propose an infrastructure for Web service publication and discovery (PYRAMID-S), which addresses the UDDI limitations by combining the technologies of Web services, the Semantic Web and peer-to-peer networking. The main contribution of this infrastructure is that Web service publication and discovery are based upon syntactic and semantic information as well as QoS characteristics, which enables result ranking and service selection.

Zhou et al. [2005] propose a QoS ontology called DAML-QoS as a complement to the DAML-S ontology, providing a better QoS metrics model. It is designed for matchmaking purposes. A matchmaking algorithm for QoS property constraints is presented and different matching degrees are described.

Most of the previous research on service discovery, matchmaking and selection is based on semantic service characteristics. Semantic information refers to machine-understandable meaning attached to the concepts of the service description. Our paper, in contrast, proposes a service selection approach using mathematical techniques, based on the requester's quality preferences.

Quality Computation Matchmaking Techniques

Zeng et al. [2004] present two service selection approaches: local optimization and global planning. A Simple Additive Weighting technique is used to select an optimal Web service; users express their preferences regarding QoS by providing values for the weights. They propose a simple QoS model using the examples of price, availability, reliability and reputation.

Liu et al. [2004] present an open, fair and dynamic QoS computation model for Web service selection. They achieve dynamic and fair computation of the QoS values of Web services through secure user feedback and a monitor. Their QoS model is extensible: new domain-specific criteria can be added without changing the underlying computation model. They provide an implementation of a QoS registry based on their extensible QoS model.

Fedosseev [2004] presents a global planning approach used to optimally select component services during the execution of a composite service. This approach is based on quality-of-service (QoS) characteristics of services; different types of quality metrics are introduced, such as QoS:system, QoS:task, quality-of-experience (QoE) and quality-of-business (QoBiz).

We propose a quality service selection technique that is based upon a mathematical model. The Analytic Hierarchy Process (AHP) is used to calculate the quality criteria weights, based on the requester's preferences. The Euclidean distance is used to calculate the distance between the quality requirements and the quality specifications. The service associated with the minimum distance is the best service to select.


2.1 Quality Definition

Quality criteria may have different definitions in different domains. However, in the Web services context, quality criteria can be defined as a set of non-functional criteria [Rajesh and Arulazi 2002], such as availability, performance and reliability, that impact the performance of Web services.

Quality is the measure of how well a particular service performs relative to expectations, as presented to the requester. It determines whether the requester will be satisfied with the service delivered; that is, quality is meeting requirements.

2.2 Quality Criteria Classification

We present a quality criteria classification that is organized into four groups: performance, failure probability, trustworthiness, and cost, as shown in Figure 1. These groups are organized according to their characteristics and include generic criteria. The generic criteria are applicable to all Web services, reusable across domains (in contrast to domain-specific business criteria) and can benefit all service requesters.

Performance: The performance of a Web service measures the speed of completing a service request. It can be measured by:

Capacity: The limit of concurrent requests that the service supports with guaranteed performance.

Response time: The maximum time that elapses from the moment that a web service receives a SOAP request until it produces the corresponding SOAP response [Gouscos et al.2003]. Response time is positively related to capacity [Ran 2003].

Latency: The round-trip time between when the service request arrives and when the request is serviced [Mani and Nagarajan 2003].

Throughput: The number of Web service requests completed in a given time period (Lee et al.); it is the rate at which a service can process requests. Throughput is related negatively to latency and positively to capacity.

Execution (processing) time: The time taken by a Web service to process its sequence of activities [Lee et al.2003].

In general, high performance Web services should provide higher throughput, higher capacity, faster response time, lower latency, and lower execution duration.

Failure Probability: The probability of a Web service being unable to complete a service SOAP request within the maximum response time corresponding to that request [Gouscos et al. 2003]. The failure probability group is composed of:

Availability: The probability that a service is operating when it is invoked. Associated with availability is the time-to-repair (TTR) property, the time taken to repair a service [Mani and Nagarajan 2003]. Availability is related to accessibility and reliability.

Reliability: The probability of a service performing its required functions under stated conditions within a maximum expected time interval [Ran 2003]. It refers to the assured and ordered delivery of messages being sent and received by service requesters and service providers [Mani and Nagarajan 2003]. Reliability is closely related to availability.

Accessibility: The capability of serving a Web service request. A Web service might be available but not accessible because of a high volume of requests [Mani and Nagarajan 2003].

Accuracy: The number of errors produced by the service while completing the work [Ran 2003].

Scalability: The ability to increase the computing capacity of the service provider's computer system, and the system's ability to process more operations or transactions in a given period of time. It is closely related to performance and throughput [Ran 2003].

Trustworthiness: Trust in general is a rational concept involving the trusted and the trusting parties. For example, on the eBay Web site, eBay is a trusted authority that authenticates the sellers in its auctions and maintains their ratings; however, eBay would be unable to authenticate parties that are not subject to its legal contracts covering bidding and selling at its auctions [Singh 2003]. Web services trustworthiness is achieved when the selected Web service components fulfill their requesters' needs or requirements (i.e., functional and non-functional) [Zhang 2005]. Web services trustworthiness can be measured by:

Security: It represents a measure of trustworthiness and can be provided by:

Authentication: Determines the identity of the sender [Salz 2003]. Service requesters need to be authenticated by the service provider before sending information.

Authorization: Determines whether the sender is authorized to perform the operation requested by the message [Salz 2003]; that is, what is the requester permitted to access?

Integrity: It protects the message content from being illegally modified or corrupted [Atkinson et al. 2002].

Confidentiality: Ensures that information is protected against access by unauthorized principals (users or other services) [Mont et al. 2003].

Reputation: The measure of trustworthiness of a service, based on end users' experiences of using it. Different end users may have different opinions on the same service. The reputation can be defined as the average ranking given to the service by the end users; its value is computed as Reputation = (Σi=1..n Ri) / n, where Ri is the i-th end user's ranking of the service's reputation and n is the number of times the service has been graded. Usually, end users are given a range in which to rank Web services; for example, in Amazon.com the range is [0, 5] [Zeng et al. 2003].
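The reputation computation above can be sketched as follows; the function name and the sample rankings are illustrative only, not part of any cited system:

```python
# Reputation = (sum of end-user rankings R_i) / n, where n is the number of
# times the service has been graded; rankings here use a [0, 5] scale as in
# the Amazon example. Minimal illustrative sketch.
def reputation(rankings):
    """Average end-user ranking; 0.0 if the service has never been graded."""
    if not rankings:
        return 0.0
    return sum(rankings) / len(rankings)

print(reputation([4, 5, 3, 4]))  # → 4.0
```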

Cost: The cost charged by the service provider entity to the service client entity for a request that is successfully responded to [Gouscos et al. 2003]. Web service providers either directly advertise the service and its execution price, or they provide means to enquire about it [Zeng et al. 2003]. The cost can be measured by:

Service Cost: The amount of money that a service requester has to pay the service provider to use a Web service, such as a credit-checking service.

Network Transportation/Transaction Cost: The cost involved in each request, invocation and execution of the service. This cost is associated with the hardware and software needed to set up and run the service, as well as to maintain and update the service and its interface [G. et al. 2003].


We propose a quality-based Web service architecture (QWSA), which extends the IBM Web service architecture [Gottshalk et al. 2002] with a quality server. This extension enables publishing and discovering Web services based on the proposed quality criteria classification. The quality server registers quality specifications in its database and enables service discovery and selection based on the quality criteria.

The QWSA, as shown in Figure 2, has four components: service requester, service provider, quality server, and UDDI registry. These components interact with each other; their responsibilities are described below.

1) Service Provider

Service providers describe their services based on their functionality and quality specifications, and publish the Web services based on their functionality (such as the service name, service access point, UDDI classification of the service, etc.) in the current UDDI registry. In contrast, the service providers send the quality specifications of their services to the quality server, which stores them in its database. Service providers separate the service's functionality from its quality specification because current UDDI registries are not designed to accept quality specifications and do not allow the requester to look for Web services based on quality issues.

2) Service Requester

The service requester sends a request, including both the functional requirements and the quality requirements, to the quality server and lets the server select the most suitable Web service on his or her behalf. If the result does not satisfy the requester, he or she can relax the quality-of-service constraints or consider trade-offs between the desired qualities of service. After invoking the service, the requester submits a quality report containing feedback about the service. The quality report is sent to the quality report analyzer for processing.

3) UDDI Registry

UDDI is a registry that allows service providers to publish their services and service requesters to look for Web services based on their functionality, but not on quality issues.

4) Quality Server

The quality server consists of four main components: the quality information manager (QIM), the quality matchmaker, the quality report analyzer and the quality database. The quality server performs the following tasks:

  • Collects quality specifications about Web services provided by the service providers, thereby enabling the service providers to register their quality descriptions.

  • Submits a query to UDDI registry on behalf of the requester for services' functional information such as service name, service URL, service category, etc.
  • Holds up-to-date information on quality specifications currently available for services.
  • Matches the quality specifications against the quality requirements.
  • Makes service selection decisions for the requester, thereby assisting the requester to choose the best available service based on quality criteria.

The quality server components and their functions are described below.

Quality Information Manager

When the service providers publish their Web services with the functional description to UDDI registries, the quality information manager collects the quality specifications of the corresponding published services from the service providers and places them in the quality server's database. The quality specifications are required for quality matchmaking and selection. The quality information manager regularly updates the quality server's database whenever significant changes happen, to keep the server's information consistent and up to date with the UDDI registries, and it regularly checks the available services for new quality specifications. Once an offer expires, it is deleted from the quality server's database.

Quality Matchmaker

The quality matchmaker is the core of the quality server. Before a requester binds to Web services and begins to execute its tasks, the quality matchmaker must first determine whether the service quality desired by the user can be achieved. It discovers and selects the best available Web service on behalf of the requester. When the requester sends a service request including both the functional and quality requirements to the quality server, the quality matchmaker matches the functional requirements against the functional specifications in the UDDI registry and the quality requirements against the quality specifications in the quality database. The quality matchmaker is described in detail in a later section.

Quality Report Analyzer

After the Web service is consumed, the requester sends a quality report, based on his or her judgment of the service (which can be subjective), to the quality report analyzer. The quality report includes information such as service location, invocation date, service execution duration, quality criteria offered, service rank, and comments. An example of a quality report is shown in Table 1.

The quality report analyzer produces statistical information about the service and stores it in the quality server's database as historical quality information. The quality matchmaker uses this quality information for future service matching and selection.

Quality Database

The quality database stores the information retrieved by the quality information manager and the quality report analyzer. The information stored in the quality database includes: the service functional specifications retrieved from the UDDI registry (e.g., service endpoint, URI, function name), the quality specifications retrieved from the service providers (e.g., availability, service price) and the statistical information on each service produced by the quality report analyzer (for example, the reputation criterion).

The quality information stored in the quality database is used by the quality matchmaker to select the best candidate Web services.

Figure 3 illustrates the participating roles (service providers, service requester, UDDI registry and quality server) and their interactions.

Figure 3. Interactions between the four participating roles in the QWSA architecture

The interactions between the roles are as follows:

1. Service providers register their services in the UDDI registry.

2. Service requester sends a quality request to the quality server.

3. When the quality server receives an inquiry from the requester, it searches the UDDI registry for related results.

4. The quality server gets a list of services implementing the requested interfaces and stores them in the quality database.

5. The quality server requests service providers for service descriptions augmented with quality specification, in relation to the list of services stored in the quality server's database.

6. The quality server obtains the result and stores it in the quality database.

7. All the discovered Web services are ranked from the shortest to the farthest distance, using the Euclidean distance technique.

8. The quality server selects the service with the shortest distance as the best available Web service.

9. The quality server sends a list of best services to the service requester.

10. If the requester is satisfied with the result, he or she invokes the service. If the results are not satisfactory, the requester can relax the quality criteria values or consider trade-offs between the desired qualities of service [Thio and Karunasekera 2005; Ran 2003].

11. After the requester invokes the service, he or she sends a quality report to the quality server as feedback; this is stored in the database as historical quality information that can be used in future selections.
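The core of the interaction loop above (functional lookup, then distance-based ranking and selection) can be sketched as follows. All class, attribute and service names here are hypothetical illustrations, not part of the QWSA specification, and the quality vectors are toy data:

```python
# Toy sketch of the quality server's selection role: functional lookup in a
# UDDI-like registry, then ranking candidates by Euclidean distance to the
# requester's quality requirement vector (shortest distance = best match).
from math import dist

class QualityServer:
    def __init__(self, uddi, quality_db):
        self.uddi = uddi              # functional registry: name -> services
        self.quality_db = quality_db  # service -> quality specification vector

    def select(self, service_name, requirement):
        # Steps 3-4: functional search in the UDDI registry.
        candidates = self.uddi.get(service_name, [])
        # Steps 5-9: rank candidates by distance to the quality requirement.
        return sorted(
            (s for s in candidates if s in self.quality_db),
            key=lambda s: dist(self.quality_db[s], requirement),
        )

uddi = {"credit-check": ["A", "B"]}
qdb = {"A": [0.90, 0.2], "B": [0.99, 0.1]}   # e.g. (availability, latency)
server = QualityServer(uddi, qdb)
print(server.select("credit-check", [1.0, 0.0]))  # → ['B', 'A']
```

A real quality server would also store the feedback reports of step 11 and refresh its database from the providers; both are omitted from this sketch.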


The quality service selection in this paper is based on a mathematical model, which uses two methods to select the best Web service. The Analytic Hierarchy Process (AHP) is used to calculate the quality criteria weights based on the service requester's quality preferences. The Euclidean distance method is used, as in [Taher et al. 2005], to measure the distance between the quality requirements specified by the service requester and the quality specifications given by the service provider. The Web service with the minimum Euclidean distance is the best service to select. The mathematical model is described in the following sections.

4.1 Problem definition

Let us denote by S the set of n available Web services with identical functional properties, S = {S1, S2, ..., Sn}. We assume that all services are characterized by the same set of m quality criteria, C = {C1, C2, ..., Cm}.

The performance of any service in terms of each quality criterion can be measured and represented in an m x n performance matrix P = {pij}. Each column of the performance matrix P corresponds to a specific Web service published by the service providers and each row corresponds to a given offered quality criterion, so any element pij of this matrix represents the performance measure of the j-th service in terms of the i-th quality criterion.

Requester requirements with respect to all quality criteria are given as a vector of m elements, where the i-th element represents the required (preferred) value of the service in terms of the i-th criterion. The requester's preferences on the importance of all quality criteria should be assessed and represented as a vector of criteria weights.

The problem is to select the service that best matches the requester's quality requirements, considering the weights of the quality criteria, which are based on the requester's quality preferences.

4.2 Assigning Criteria Weights

Criteria weights can be assigned either directly or indirectly by the service requester. Direct assessment requires a scale, for instance from 1 to 10, where larger scale values represent greater importance of the quality criteria. However, indirect assessment via pair-wise comparisons, as shown below, yields more precise criteria weights, which better correspond to the requester's preferences.

The method of pair-wise comparisons, used in the well-known Analytic Hierarchy Process [Saaty 1977; Saaty 1990], requires a set of comparison judgments to be provided by the requester. Comparing any two criteria Ci and Cj, the requester assigns a numerical value aij, which represents the relative importance (preference) of quality criterion Ci over Cj. Saaty [1990] suggests a nine-point relative measurement scale. If criterion Ci is preferred to Cj, say, three times as strongly, then aij = 3; if both criteria are equally important, then aij = 1. Obviously, the comparison judgments satisfy the reciprocal property aji = 1/aij.

A full set of comparisons for m criteria requires m(m-1)/2 judgments. In this way a positive reciprocal matrix of pair-wise comparisons A = {aij} can be constructed.

The criteria weights are calculated from this matrix as described in [Hajeeh and Al-Othman 2005].

The set of m relative weights is normalized to sum to one,

Σi wi = 1,  wi > 0,  i = 1, 2, ..., m,

therefore the number of independent weights is (m - 1).
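The pair-wise comparison and weight-derivation steps above can be sketched as follows. The normalized-column-average method used here is a common approximation of AHP's principal-eigenvector weights; the exact equation of Hajeeh and Al-Othman [2005] may differ, and the example matrix is invented:

```python
# Approximate AHP criteria weights from a positive reciprocal pair-wise
# comparison matrix A: normalize each column, then average across each row.
def ahp_weights(A):
    m = len(A)
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(m)]
    return [sum(A[i][j] / col_sums[j] for j in range(m)) / m for i in range(m)]

# Example: criterion 1 preferred 3x over criterion 2 and 5x over criterion 3;
# criterion 2 preferred 2x over criterion 3 (lower half holds reciprocals).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])  # → [0.648, 0.23, 0.122]
```

The resulting weights sum to one, so only m - 1 of them are independent, as noted above.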

After constructing the pair-wise comparison matrix and obtaining the criteria weights, the next step is to determine the consistency of the criteria judgments. The Consistency Ratio (CR) is used to measure the consistency of the pair-wise comparison matrix A [Cheng and Li 2001]. Matrix A is consistent if the following condition is satisfied [Saaty 1990]:

aij ajk = aik,  where i, j, k = 1, ..., m.

The consistency can be determined by the measure called the Consistency Ratio (CR), defined as [Hajeeh and Al-Othman 2005]:

CR = CI / RI

where CI is the consistency index and RI is the random index.

The Random Index (RI) value is selected from Table 3. The Consistency Index (CI) is defined as:

CI = (λ - m) / (m - 1)

where λ is the average of the row totals of the weighted comparison matrix divided by the corresponding elements of the weight vector. Its calculation is based on [Hajeeh and Al-Othman 2005], as follows:

1. Calculate the weighted sum vector ws = A w.

2. Divide each element of the weighted sum vector by its respective priority (weight) vector element to obtain:

λ1 = ws1 / w1,  λ2 = ws2 / w2,  ...,  λm = wsm / wm

3. λ is obtained as the average of the above values:

λ = (λ1 + λ2 + ... + λm) / m



If the Consistency Ratio (CR) is high, the requester's preferences are not consistent and not reliable. A Consistency Ratio (CR) of 0.10 or less is considered acceptable.
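The consistency check can be sketched as follows; the RI values are Saaty's published random indices for small m, and the helper names and example data are illustrative:

```python
# CI = (lambda - m) / (m - 1) and CR = CI / RI, with lambda computed from the
# weighted sum vector ws = A w as in the three steps above. Sketch only.
def consistency_ratio(A, w):
    m = len(A)
    ws = [sum(A[i][j] * w[j] for j in range(m)) for i in range(m)]  # ws = A w
    lam = sum(ws[i] / w[i] for i in range(m)) / m                   # average
    ci = (lam - m) / (m - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[m]  # Saaty's RI values
    return ci / ri if ri else 0.0

A = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
w = [0.648, 0.230, 0.122]              # criteria weights derived from A
print(consistency_ratio(A, w) < 0.10)  # → True: the judgments are acceptable
```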

4.3 Selecting the Best Web Service

The proposed solution method is based on the assumption that each criterion has a tendency towards monotonically increasing or decreasing utility, so it is easy to rank all services and locate the best one. Web services should be evaluated on the basis of their closeness to the requester's requirements, taking into consideration the relative weights of the criteria. In mathematical terms, the closeness between two objects can be expressed by their Euclidean distance, which geometrically is the straight-line distance between the two points representing these objects in m-dimensional space. Therefore, the best service is the one with the shortest distance from the given requester quality requirements, while the one with the farthest distance is the worst. All other services can be ranked between these two extremes according to the values of their Euclidean distances.

The proposed methodology for evaluating the most appropriate Web service consists of the following steps:

Step 1: Construct a normalized performance matrix

Since the criteria are measured in different measurement units, the performance matrix P, equation [11], should be converted into a non-dimensional one. This is done by normalizing each element of P as follows:

q_ij = p_ij / sqrt( sum_{k=1..n} p_kj^2 )

This step produces a normalized performance matrix Q = {q_ij}.

Step 2: Construct a weighted normalized performance matrix

The normalized values are then assigned weights with respect to their importance to the requester, given by the vector w = {w_1, w_2, ..., w_m}. When these weights are used in conjunction with the matrix of normalized values Q = {q_ij}, this produces the weighted normalized matrix V = {v_ij}, defined as v_ij = w_j q_ij.

Step 3: Calculate the relative distances

In this step each of the services is measured according to its closeness to the requester quality requirements. The relative Euclidean distances are calculated as follows:

E_i = sqrt( sum_{j=1..m} (v_ij - r_j)^2 ),   i = 1, 2, ..., n

where r_j denotes the weighted normalized value of the requester's quality requirement for criterion j.

Step 4: Rank services in preference order

This is done by comparing the values calculated in Step 3. Obviously, the Web service with the smallest value

E* = min{E_1, E_2, ..., E_n}

gives the closest match to the requester quality requirements and should be selected as the best one.
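Steps 1-4 can be sketched as follows in Python (the paper's implementation uses C#). The vector normalization and the treatment of the requirement vector r follow the assumptions stated above; function names and sample data are illustrative.

```python
import math

def rank_services(P, w, r):
    """Rank services by weighted Euclidean distance to the requirements.

    P -- n x m performance matrix (n services, m criteria)
    w -- m criteria weights
    r -- m requester requirement values, in the same units as P
    Returns (ranking of service indices, distances E).
    """
    n, m = len(P), len(P[0])
    # Step 1: normalized performance matrix Q (divide by each column's norm)
    norms = [math.sqrt(sum(P[k][j] ** 2 for k in range(n))) for j in range(m)]
    Q = [[P[i][j] / norms[j] for j in range(m)] for i in range(n)]
    r_norm = [r[j] / norms[j] for j in range(m)]  # normalize r consistently
    # Step 2: weighted normalized matrix V = {w_j * q_ij}
    V = [[w[j] * Q[i][j] for j in range(m)] for i in range(n)]
    v_req = [w[j] * r_norm[j] for j in range(m)]
    # Step 3: relative Euclidean distances E_i
    E = [math.sqrt(sum((V[i][j] - v_req[j]) ** 2 for j in range(m)))
         for i in range(n)]
    # Step 4: rank in preference order (smallest distance first)
    return sorted(range(n), key=lambda i: E[i]), E

# Hypothetical data: three services, criteria (throughput, availability, price).
P = [[10, 0.9, 5],
     [5, 0.5, 9],
     [8, 0.8, 6]]
ranked, E = rank_services(P, w=[0.5, 0.3, 0.2], r=[10, 0.9, 5])
print(ranked[0])  # -> 0, the service that exactly meets the requirements
```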


Quality matchmaking is defined as a process in which the quality matchmaker matches the quality inquiry against all the quality advertisements stored in the quality server's database. This matchmaking is required in order to find the advertised services that satisfy the quality requirements.

5.1 The Quality Matchmaker

The quality matchmaker is the core component of the quality server. Every service request received by the quality matchmaker is matched against the service specifications stored in the quality server database. If the match is successful, the quality matchmaker returns a ranked set of the desired Web services and selects the appropriate service based on the relevant quality criteria using a mathematical technique.

The quality matchmaker component includes the following sub-components:

  • Interface matchmaking
  • Quality criteria matchmaking
  • Mathematical matchmaking

The roles of each sub-component are described in the following:

1) Interface Matchmaking

The interface matchmaking discovers the Web services whose functionality fits the request requirements. Functionality means an action that either the service or the service requester can perform [Andreozzi et al. 2003]. This step finds all of the services matching the interface by using the find_tModel() API operation on the UDDI registry, which retrieves a list of all relevant description tModels for services that offer the same function. Once a set of tModels matching the specified requirements has been found, the requester can find the corresponding services by using the find_service() operation. This returns a list of all services that implement the description in the chosen tModel [Colgrave et al. 2004]; the quality manager then stores the result in the quality database.

The interface matchmaking is important but not sufficient to achieve requester satisfaction, because many services implement the same functional properties but have different non-functional properties, and the requester needs to differentiate between them. Therefore, a further matchmaking technique is needed that considers the quality criteria.

2) Quality Criteria Matchmaking

The quality criteria matchmaking compares the quality specifications with the quality requirements based on the quality descriptions of the services' behaviors. This step reduces, or filters, the returned list provided by the interface matchmaking above. An exact match occurs when the group quality criteria type (for example, performance, cost) and the sub-criteria type (for example, throughput, availability) are the same for both the quality requirements and the quality specifications.

The quality criteria matchmaking then reduces the returned list further by requiring that the required or preferred value of each quality sub-criterion lies within the range of the offered quality sub-criterion. Further filtering is then needed to choose the optimum Web service from this list.
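The range condition described above can be sketched as a simple filter. This is a Python sketch; the data shapes, names and sample values are illustrative assumptions, not the paper's implementation.

```python
def filter_by_quality(services, requirements):
    """Keep services whose offered range covers every required value.

    services     -- list of dicts: sub-criterion -> (min_offered, max_offered)
    requirements -- dict: sub-criterion -> required (or preferred) value
    """
    matched = []
    for svc in services:
        ok = all(crit in svc and svc[crit][0] <= value <= svc[crit][1]
                 for crit, value in requirements.items())
        if ok:
            matched.append(svc)
    return matched

# Hypothetical offers: only the first service covers both requirements.
services = [{"throughput": (5, 20), "availability": (0.8, 0.99)},
            {"throughput": (1, 4), "availability": (0.5, 0.9)}]
matched = filter_by_quality(services, {"throughput": 10, "availability": 0.95})
print(len(matched))  # -> 1
```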

3) Mathematical Matchmaking

The mathematical matchmaking reduces the last returned list of services by using a mathematical model in order to choose an optimum Web service. It calculates the weights of the quality criteria using the Analytical Hierarchy Process (AHP), then ranks the services using the Euclidean distance technique by calculating the distance between the required quality sub-criteria and the offered quality sub-criteria. The smallest distance means the best match, from which the requester can select the best Web service. Once the services are ranked, the requester can invoke the selected service by using the find_binding() operation.


The Microsoft Visual Studio .NET 2003 software product is used to implement the quality service selection process, which is performed by the quality matchmaker. A Windows application written in C# has been used to build a simulation system called the ''quality service selection system'' (QSSS), based on the quality criteria classification and the mathematical model.

The QSSS provides a user interface that enables the service requester to specify the following: quality criteria preferences (for example, Performance, Failure Probability, Trustworthiness and/or Cost), sub-criteria preferences (for example, Response Time, Availability, Reputation, Service Price), and the quality sub-criteria requirement values (High, Medium, or Low).

The QSSS consists of a class called Utilities and window forms. The class diagram in

6.1 Utility Classes

The Utilities class contains a Matrix class and four methods: FillMatrix(), CalculateWeights(), ConsistencyRatio() and EuclideanDistance().

FillMatrix() method is used to construct a pair-wise comparison matrix that is based on the service requester's quality preferences.

CalculateWeights() method is used to calculate the criteria and the sub-criteria weights from the pair-wise comparison matrix.

ConsistencyRatio() method is used to calculate the Consistency Ratio (CR). The CR measures the degree of consistency of the selected preferences values of the quality criteria. This is considered as a condition for allowing the service requester to continue the selection process.

EuclideanDistance() method is used to calculate the Euclidean distance of the advertised Web services.

6.2 Window Forms

The above methods are called by five Window forms: CriteriaSelection, PreferenceSelection, SubCriteriaSelection, SubPreferenceSelection and RequirementsValue.

The Window forms enable the requester to specify his/her quality preferences and requirements.

The CriteriaSelection form contains the quality criteria groups. It switches to the SubCriteriaSelection form if only one criteria group is selected; otherwise it switches to the PreferenceSelection form.

SubCriteriaSelection form contains the quality sub-criteria within the selected criteria group.

PreferenceSelection form contains the preferences values between the selected criteria group.

SubPreferenceSelection form contains the preferences values for the selected quality sub-criteria.

The RequirementsValue form contains the quality requirement values for the selected sub-criteria. The form sends a query to the Access database to retrieve the list of services together with their matchmaking distances. The service with the minimum distance is the best service to select (see Figure 8).
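The final query issued by the RequirementsValue form can be sketched as follows, using SQLite in place of the Access database; the table name, column names and sample distances are assumptions for illustration.

```python
import sqlite3

# In-memory stand-in for the quality database's matchmaking results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE services (name TEXT, distance REAL)")
conn.executemany("INSERT INTO services VALUES (?, ?)",
                 [("ECS", 0.167), ("Google", 0.41), ("eBay", 0.52)])

# Retrieve the matched services ordered by matchmaking distance; the first
# row is the best service to select.
rows = conn.execute(
    "SELECT name, distance FROM services ORDER BY distance ASC").fetchall()
best = rows[0]
print(best)  # -> ('ECS', 0.167)
```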


7.1 Evaluation of the Quality-Based Web Service Architecture

The proposed quality-based Web service architecture (QWSA) is evaluated by comparing it with the related Web services architectures regarding the following five criteria:

1. Scalability: Is the capability of a system to increase the total throughput and transactions under an increased load when resources or hardware are added.

2. Extensibility: Is the ability to extend a system through the addition of new functionality or through modification of the existing functionality.

3. Conformity to standards: The degree to which the architecture extends either the Web services' core standards or other higher-level standards with quality aspects.

4. Ease of implementation: The ability to implement the system in an easy way.

5. Techniques for selection: Specify the type of the selection technique.

The related Web service architectures are: QoS-Capable Web Service architecture (QCWS) [Chen et al. 2003a]; [Yu and Lin 2004], UDDI eXtension architecture (UX) [Chen et al. 2003b], Web service quality broker architecture [Seo et al. 2004], QoS certifier [Ran, 2003], WS-QoS architecture [Tian et al.2003], [Tian et al.2004] and Web service QoS architecture (WQA) [Farkas and Charaf 2003].

The proposed quality-based Web service architecture (QWSA) is compared with the six related architectures above with regard to the evaluation criteria. It can be seen that the QWSA is the best architecture to select because it satisfies all the evaluation criteria except scalability. The only disadvantage of the QWSA architecture is that it does not support a high number of concurrent requests. However, the architecture is extensible; it can support scalability without major changes to the system infrastructure. Therefore, this disadvantage requires further investigation in the future.

7.2 Evaluation of the Quality-Based Service Selection

The proposed service selection is evaluated by applying the quality service selection system (QSSS) to a use case scenario, as described below.

Scenario 1: Web Service Selection

The requester looks for search engine Web services to search for books. Assume that there are four Web services: the Amazon E-Commerce Web Service (ECS), the Google Web Service, the eBay Web Service and the Yahoo Web Service. Table 5 shows the four Web services with different quality criteria values. The Web service quality criteria are: Throughput, Availability and Price.

In order to evaluate the proposed service selection, two ways are used to select the Web service: manually by using real requesters, and automatically by using the QSSS system. The two ways are described below.

1) Web service selection manually

In order to evaluate the proposed service selection, five requesters are asked to select manually from Table 5 the best Web service based on their quality requirements. The quality requirements are: Throughput has the highest importance, followed by the Price, and finally the Availability. Table 6 shows the five requesters' manual selections of the Web service based on their perspectives.

The manual selection of the Web service is not efficient, and becomes difficult when there is a high number of Web services with different quality criteria values.

Therefore, a way is required that enables the requester to select the best Web service automatically. This paper proposes a quality service selection system (QSSS) that depends on a mathematical model. The QSSS enables the requester to select the best service based on his/her quality requirements in an automated way. The Web service selection using the QSSS system is described below.

2) Web service selection using QSSS

The requester uses the QSSS system to specify the following quality requirements:

  • Throughput is six times more important than the Availability.
  • Throughput is three times more important than the Price.
  • Price is two times more important than the Availability.

The above quality criteria (Throughput, Availability and Price) are selected from the Web service's perspective.

After applying the mathematical model, the weight of the quality criteria is:

W = [0.667 0.111 0.222]

The Throughput criterion is the most important, with the highest priority (0.667), followed by the Price (0.222) and finally the Availability (0.111).
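These weights can be reproduced with the column-normalize-and-average AHP step from Section 4. With the criteria ordered (Throughput, Availability, Price), the three preference statements yield a perfectly consistent pair-wise matrix; the short Python check below is illustrative.

```python
# Pair-wise comparison matrix for (Throughput, Availability, Price):
# Throughput is 6x Availability and 3x Price; Price is 2x Availability.
A = [[1,     6, 3],
     [1 / 6, 1, 1 / 2],
     [1 / 3, 2, 1]]
m = len(A)
col_sums = [sum(A[i][j] for i in range(m)) for j in range(m)]
# Normalize each column, then average each row to obtain the weights.
w = [sum(A[i][j] / col_sums[j] for j in range(m)) / m for i in range(m)]
print([round(x, 3) for x in w])  # -> [0.667, 0.111, 0.222]
```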

The output based on the requester's quality requirements and preferences is shown in Table 7. It can be seen that the Amazon Web service (ECS) is the best one to select because its matching distance is the minimum (0.167). Therefore, ECS is the best Web service that the requester can select. The output displays the quality criteria values for each Web service in order to enable the requester to judge whether the Web service with the minimum distance satisfies his/her requirements. If the result does not satisfy the requester's expectations, he/she can specify other quality requirements.


The Amazon Web service is the best service to select when using the QSSS system, as shown in Table 7. Four of the five real requesters selected the Amazon Web service manually, as shown in Table 6. Therefore, 80% of the requesters manually selected Amazon, which is the best service according to the QSSS system.

This result demonstrates the efficiency of the QSSS system. The comparison illustrates that the best service selected by the proposed QSSS satisfies the requester's quality requirements.

Scenario 2: Product Selection

After selecting the Amazon E-Commerce Service (ECS) as the best Web service in Scenario 1, the requester searches for books related to the Web service field. The book results are shown in Table 8. The requester needs to select the best book with regard to its availability, its seller's reputation and its price. There are two ways to select the best book: manually by real requesters, and automatically by using the QSSS system. The two ways are described below.

1) Manual book selection

In order to evaluate the proposed service selection, five requesters are asked to select manually from Table 8 the top five books based on their quality requirements. The quality requirements are: Availability has the highest importance, followed by the Price, and finally the seller's Reputation. Table 9, Table 10, Table 11, Table 12 and Table 13 show the requesters' manual selections of the top five books based on their perspectives.

The manual selection of the best books is not efficient, and is somewhat difficult if there is a high number of books with different quality criteria values. Therefore, there is a need for a system that allows the requester to select the best book automatically. This paper proposes a quality service selection system (QSSS) that depends upon a mathematical model and the quality matchmaking process (QMP). The QSSS system enables the service requester to select the best book in an automated way, as described below.

2) The book selection using QSSS system

The requester specifies his/her quality requirements using the QSSS system as follows:

  • Failure probability is assigned by the service requester as five times more important than the Trustworthiness.
  • Failure probability is assigned by the service requester as three times more important than the Cost.
  • Cost is assigned by the service requester as two times more important than the Trustworthiness.

The above quality criteria (Availability, Reputation and Price) are selected from the book's perspective, based on the quality criteria classification.

After applying the mathematical model described in Section 4, the weight vector of the quality criteria is:

W = [0.648 0.122 0.23]

The total weight is equal to 1. The most important criterion is the Availability (0.648), followed by the Price (0.23) and finally the Reputation (0.122).

The output based on the requester's quality requirements and preferences is shown in Table 14. The first book, titled "J2EE Web Services" and offered by the provider "alphacraze", is the best book to select because its matching distance is the smallest (0.44). The output displays the quality criteria values for each book in order to enable the requester to judge whether the book with the minimum distance satisfies his/her requirements. The requester can specify other quality requirements if the result is not satisfactory.


The book "J2EE Web Services" from the provider "alphacraze" is the best book to select as proposed by the QSSS system, as shown in Table 14. The five requesters manually selected the top five books from their perspectives, and these selections are compared with the first five books ordered in Table 14 by the QSSS system. For the first requester (Table 9), three of the five selected books are among the first five books in Table 14, a 60% correspondence with the QSSS selection. For the second requester (Table 10), four of the five selected books match, an 80% correspondence. For the third requester (Table 11), four of the five match (80%). For the fourth requester (Table 12), all five match (100%). For the fifth requester (Table 13), four of the five match (80%). The average over the five requesters is (60% + 80% + 80% + 100% + 80%) / 5 = 80%; that is, 80% of the manual book selections correspond to the selection using the QSSS.

This percentage demonstrates the efficiency of the QSSS system. The comparison illustrates that the best book selected by the proposed QSSS satisfies the requester's quality requirements.


In this paper, we have proposed a quality-based Web service architecture (QWSA), which includes a quality server that acts on behalf of the requester to select the desired Web services. The main contribution of our proposal is the development of the mathematical components within the quality matchmaker, which uses a mathematical method to facilitate and assist the requester in discovering and selecting the best available Web services. We have illustrated the feasibility of this method by developing a simulation system called the QSSS, which enables the service requester to specify his/her quality requirements and to select the best service. The efficiency of the proposed quality selection is evaluated through a scenario that compares the result of selecting the Web service manually with the result of using the quality service selection system (QSSS). The comparison shows that the best service according to the QSSS fulfils the requester's quality requirements and satisfies his/her expectations.

The proposed architecture needs to be enhanced in future work. First, further investigation is needed to extend the functionality of the quality server to manage several queries sent concurrently by multiple requesters. Second, the quality server needs to be extended with a notification mechanism that captures the dynamic nature of the quality criteria and notifies the quality manager of any change in the quality criteria, so that the information in the quality database is kept up to date.

The quality criteria classification is a generic classification that can be applied in any domain. Further investigations are needed to develop a quality criteria ontology that can be applied in a specific domain. The quality criteria ontology can be used to match services semantically and dynamically. In the future, we will extend the WSDL and UDDI standards with the quality criteria of the service.


ANDREOZZI, S., MONTESI, D., and MORETTI, R. 2003. Web Services Quality. In: Conference on Computer, Communication and Control Technologies (CCCT03), Orlando.

ATKINSON, B., DELLA-LIBERA, G., and HADA, S. 2002. Web Services Security (WS-Security). Available at: http://www-106.ibm.com/developerworks/webservices/library/ws-secure/.

BALKE, W.-T., and WAGNER, M. 2004. Semantics and discovery: Through different eyes: assessing multiple conceptual views for querying web services. In: Proceedings of the 13th international World Wide Web conference on Alternate track papers & posters, New York, NY, USA.

BELLWOOD, T., CLEMENT, L., and RIEGEN, C. v. 2003. UDDI Spec Technical Committee Specification, Version 3.0. Technical Report. Available at: http://uddi.org/pubs/uddi_v3.htm.

BRAY, T., PAOLI, J., SPERBERG-MCQUEEN, C. M., and MALER, E. 2004. Extensible  Markup  Language (XML) 1.0 (Third   Edition). Available at: http://www.w3c.org/TR/REC-xml.

CHEN, H., YU, T., and LIN, K.-J. 2003a. QCWS: An Implementation of QoS-Capable Multimedia Web Services. In: IEEE Fifth International Symposium on Multimedia Software Engineering (ISMSE'03), 38-45.

CHEN, Z., LIANG-TIEN, C., SILVERAJAN, B., and BU-SUNG, L. 2003b. UX-An Architecture Providing QoS-Aware and Federated Support for UDDI. In: Proceeding of the first International Conference on Web Services (ICWS03), Las Vegas, Nevada, USA.

CHENG, E. W. L., and LI, H. 2001. Information Priority-Setting for Better Resource Allocation Using Analytic Hierarchy Process (AHP). Information Management and Computer Security 9, 2, 61-70.

CHRISTENSEN, E., CURBERA, F., MEREDITH, G., and WEERAWARANA, S. 2001. Web Services Description Language (WSDL) 1.1. Available at: http://www.w3.org/TR/wsdl.

COLGRAVE, J., AKKIRAJU, R., and GOODWIN, R. 2004. External matching in UDDI. In: IEEE International Conference on Web Services (ICWS'04), San Diego, California.

FARKAS, P., and CHARAF, H. 2003. Web Services Planning Concepts. Journal of WSCG 11, 1, 3-7.

FEDOSSEEV, P. 2004. Composition of Web Services and QoS Aspects. Seminar: Data Communication and Distributed Systems in the WS.

G., S., MILLER, J. A., SHETH, A. P., MADUKO, A., and JAFRI, R. 2003. Modeling and Simulation of Quality of Service for Composite Web Services. In: Proceedings of the 7th World Multiconference on Systemics, Cybernetics and Informatics (SCI'03), Orlando, Florida.

GOTTSHALK, K., GRAHAM, S., KREGER, H., and SNELL, J. 2002. Introduction to Web Services Architecture. IBM Systems Journal 41, 2, 170-177.

GOUSCOS, D., KALIKAKIS, M., and GEORGIADIS, P. 2003. An Approach to Modeling Web Service QoS and Provision Price. In: 4th International Conference on Web Information Systems Engineering Workshops (WISEW'03), Roma, Italy, 121-130.

GUDGIN, M., HADLEY, M., MENDELSOHN, N., MOREAU, J.-J., and NIELSEN, H. F. 2003. SOAP Version 1.2 Part 1: Messaging Framework. Available at: http://www.w3c.org/TR/SOAP12-part1.

HAJEEH, M., and Al-OTHMAN, A. 2005. Application of the analytical hierarchy process in the selection of desalination plants. Desalination 174, 1, 97-108.

IBM Web Services Architecture Team, Web Services Architecture Overview. 2000. Available at: http://www-106.ibm.com/developerworks/web/library/w-ovr/?dwzone=web

LEE, K., JEON, J., LEE, W., JEONG, S.-H., and PARK, S.-W. 2003. QoS for Web Services: Requirements and Possible Approaches. Available at: http://www.w3c.or.kr/kr-office/TR/2003/ws-qos/.

LI, L., and HORROCKS, I. 2003. A Software Framework For Matching Based on Semantic Web Technology. In: Proceedings of the International World Wide Web Conference (WWW2003), Budapest, Hungary.

LIU, Y., NGU, A. H., and ZENG, L. Z. 2004. QoS computation and policing in dynamic web service selection. In: International World Wide Web Conference, New York, NY, USA.

MANES, A. 2003. Web Services Standardization: UDDI. Available at: http://www.uddi.org/news.html.

MANI, A., and NAGARAJAN, A. 2002. Understanding Quality of Service for Web Services. IBM DeveloperWorks Technical Paper.

MAXIMILIEN, E. M., and SINGH, M. P. 2004. Toward autonomic web services trust and selection. In: International Conference On Service Oriented Computing, New York, NY, USA, 212-221.

MONT, M. C., HARRISON, K., and SADLER, M. 2003. The HP time vault service: exploiting IBE for timed release of confidential information. In: the 12th international conference on World Wide Web, Budapest, 160-169.

PILIOURA, T., and TSAGATIDOU, A. 2004. PYRAMID-S: A Scalable Infrastructure for Semantic Web Service Publication. In: 14th International Workshop on Research Issues on Data Engineering: Web Services for e-Commerce and e-Government Applications, Boston, Massachusetts.

RAJESH, S., and ARULAZI, D. 2003. Quality of Service for Web Services - Demystification, Limitations, and Best Practice. Available at: http://www.developer.com/services/article.php/2027911.

RAN, S. 2003. A Model for Web Services Discovery With QoS. ACM SIGecom Exchanges 4, 1, 1-10.

SAATY, T. 1977. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology 15, 234-281.

SAATY, T. L. 1990. How to make a decision: The Analytic Hierarchy Process. European Journal of Operational Research 48, 1, 9-26.

SALZ, R. 2003. Securing Web Services. Available at:


SEO, Y.-J., JEONG, H.-Y., and SONG, Y.-J. 2004. A Study on Web Services Selection Method Based on the Negotiation Through Quality Broker: A MAUT-based Approach. In: First International Conference on Embedded Software and Systems (ICESS 2004), Hangzhou, China.