The World Wide Web (WWW) has changed the way people communicate with each other and the way they live their lives; it has also completely changed the way business is run. This development has changed our very idea of what computers are for. Computers used to be thought of as tools for word processing, general computing, spreadsheets, financial logs and reports, presentations, games, and basic multimedia entertainment, and most of that work was done offline. Today, the computer is the main entry point to the information superhighway.
Most of today's Web content is designed for human consumption. People use the Web to search for information, to get in touch with other people, to look for products (ideally good deals or special offers) and eventually buy them from a secure online seller, and to fill in forms or look for jobs.
The activities above are not well supported by currently available software unless the user already has the exact URL (Uniform Resource Locator) of the information they are after. To find the right page we use search engines such as Google, Bing, and Yahoo, and reference sites such as Wikipedia. There is no doubt that the Web would not have been as successful without these search engines. They do, however, have some drawbacks:
Search results can have very low precision. Even if the main relevant pages are retrieved, there is little use for the other 50,000 results.
In some cases the desired results are not retrieved at all.
Current searches are highly sensitive to vocabulary. Often our initial keywords do not match the results we are after because the relevant documents use different terminology from the original query, which is very unsatisfactory.
Results are often single Web pages. If the information we need is spread over several documents, we must issue several queries to collect them, and then manually extract the partial information and piece it together.
Even though clever (index-based) search engines are readily available to everyone, the results they return are only a first refinement: human effort is still needed to reach the desired link. Essentially, the search engine narrows the field and the user does the rest, which can be very time-consuming. The term information retrieval, used in association with search engines, is therefore somewhat misleading; location finder might be a more appropriate term. Moreover, the results of Web searches are not readily accessible to other software tools; search engines are often isolated applications. The main obstacle to providing better support to Web users is that, at present, the meaning of Web content is not machine-accessible. Of course, there are tools that can retrieve texts, split them into parts, and check the spelling, but when it comes to interpreting sentences and extracting useful information for users, the capabilities of current software are still very limited. It is simply difficult to distinguish the meaning of
Mainul goes to Unis.
Mainul does engineering at Unis.
The aim of this project is to give a brief introduction to the Semantic Web and its mechanics, i.e. how it is constructed using various tools, and also to introduce what a Semantic Web service is and how to implement one on mobile devices.
1.2 Work Plan/Project Structure
The work plan devised for the duration of the project is shown below:
Table 1: Project Plan / Gantt chart
Structure of the report:
Background theory /Semantic Web
Familiarising with new tools, i.e. OWL-S, WSDL, SPARQL, and the Tomcat server
Web services /Semantic web services
2.1 Semantic web
What is Semantic Web?
The Semantic Web is a representation of data on the World Wide Web. It is a collaborative effort led by the W3C with participation from a large number of academic researchers and industrial partners. It is based on the Resource Description Framework (RDF), which integrates a variety of applications using XML for syntax and URIs for naming.
Reason for Semantic Web and how to Implement:
There are ways to overcome these unrefined search results. One is to use sophisticated techniques based on Artificial Intelligence (AI). An alternative approach is to represent Web content, or re-structured Web content, in a form that is more easily machine-processable and to use intelligent techniques to take advantage of these representations. One thing to make clear is that the Semantic Web is not a separate Web running in parallel to the current conventional Web, but an extension of it; in fact, the Semantic Web is propagated by the World Wide Web Consortium (W3C), the international standardization body for the Web. Some thirty years on, the inventor of the WWW is realising the true vision of a Web in which information plays a far more important role than it does today. The development of the Semantic Web has a lot of industry momentum, and governments are investing heavily: the U.S. government established "The DARPA Agent Mark-up Language (DAML)" Project, and the Semantic Web is among the key action lines of the European Union's Sixth Framework Programme.
The limitations of the current web:
Keyword- or index-based search has many drawbacks, as discussed earlier.
A great deal of human time and effort is still required; current agent systems do not offer a complete, hands-off solution.
Maintaining information is currently a problem because of inconsistencies in terminology.
Viewing pages across different organisations is complex (i.e. restrictions on who can and cannot view them), and data is sometimes not displayed adequately.
Aims of the Semantic Web:
Knowledge will be organized in conceptual spaces according to its meaning.
Automated tools will support maintenance by checking for inconsistencies and extracting new knowledge.
Keyword search and indexing will be replaced by query answering: the requested knowledge will be retrieved, and the data will be represented and presented in a more human-friendly way.
Web viewing will be much simpler (i.e. restricting the viewing of parts of a page will be well designed).
Business-to-Consumer Electronic Commerce
By adopting the Semantic Web, the following advantages become possible:
Pricing and product information will be extracted correctly, and delivery and privacy policies will be interpreted and compared to the user requirements.
Additional information about the reputation of online shops will be retrieved from other sources, for example, independent rating agencies or consumer bodies.
The low-level programming of wrappers will become obsolete.
More sophisticated shopping agents will be able to conduct automated negotiations, on the buyer's behalf, with shop agents.
2.2 Mechanism of semantic web:
Currently, Web content is formatted for human readers rather than for programs; HTML is the predominant language in which Web pages are written. The Semantic Web instead annotates content with metadata that captures part of the meaning of the data, hence the term semantic in Semantic Web.
In general, an ontology formally describes a domain of discourse. Typically, an ontology consists of a finite list of terms and the relationships between these terms. The terms denote important concepts (classes of objects) of the domain. For example, in a university setting, staff members, students, courses, lecture theatres, and disciplines are some important concepts. The relationships typically include hierarchies of classes. A hierarchy specifies a class C to be a subclass of another class T if every object in C is also included in T. An ontology may also include information such as:
properties (X teaches Y)
value restrictions (only faculty members can teach courses)
disjointness statements (faculty and general staff are disjoint)
specification of logical relationships between objects (every department must include at least ten faculty members)
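These ideas can be sketched in code. The following is an illustrative toy only: all class and property names are made up, a real system would use RDFS/OWL rather than plain tuples, and the point is simply that a subclass hierarchy lets us conclude that every Professor is also a StaffMember.

```python
# Hypothetical sketch: the university ontology above, modelled as plain
# (subject, predicate, object) facts. All names are illustrative.
triples = {
    ("Professor", "subClassOf", "FacultyMember"),
    ("FacultyMember", "subClassOf", "StaffMember"),
    ("FacultyMember", "canTeach", "Course"),
    ("FacultyMember", "disjointWith", "GeneralStaff"),
}

def is_subclass(sub, sup, triples):
    """True if `sub` equals `sup` or reaches it via subClassOf links."""
    if sub == sup:
        return True
    return any(s == sub and p == "subClassOf" and is_subclass(o, sup, triples)
               for (s, p, o) in triples)

# Every Professor is also a StaffMember, by the hierarchy rule in the text.
print(is_subclass("Professor", "StaffMember", triples))  # True
```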
Fig 1: Semantic Web Hierarchy 
In the context of the Web, ontologies provide the shared understanding of a domain that is necessary to overcome differences in terminology. One application's zip code may be the same as another application's area code. Another problem is that two applications may use the same term with different meanings: in university A, a course may refer to a degree (like computer science), while in university B it may mean a single subject (CS 101). Such differences can be overcome by mapping each particular terminology to a shared ontology or by defining direct mappings between the ontologies. In either case, it is easy to see that ontologies support semantic interoperability. Ontologies are also useful for the organization and navigation of Web sites: many Web sites today expose the top levels of a concept hierarchy of terms on the left-hand side of the page, and the user may click on a term to expand its subcategories. Ontologies are useful, too, for improving the accuracy of Web searches. A search engine can look for pages that refer to a precise concept in an ontology instead of collecting all pages in which certain, generally ambiguous, keywords occur; in this way, differences in terminology between Web pages and queries can be overcome. In addition, Web searches can exploit generalization/specialization information. If a query fails to find any relevant documents, the search engine may suggest a more general query to the user; it is even conceivable for the engine to run such queries proactively, to reduce the reaction time in case the user adopts a suggestion. If, on the other hand, too many answers are retrieved, the search engine may suggest some specializations. In Artificial Intelligence (AI) there is a long tradition of developing and using ontology languages, a foundation that Semantic Web research can build upon. At present, the most important ontology languages for the Web are the following:
XML provides a surface syntax for structured documents but imposes no semantic constraints on the meaning of those documents. XML Schema is a language for restricting the structure of XML documents. RDF is a data model for objects ("resources") and the relations between them; it provides a simple semantics for this data model, and these data models can be represented in XML syntax.
2.5 RDF Schema
RDF Schema is a vocabulary description language for describing properties and classes of RDF resources, with a semantics for generalization hierarchies of such properties and classes.
OWL is a richer vocabulary description language for describing properties and classes, such as relations between classes (e.g., disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (e.g., symmetry), and enumerated classes.
Logic is the discipline that studies the principles of reasoning; it goes back to Aristotle. In general, logic offers formal languages for expressing knowledge, and it provides well-understood formal semantics: in most logics, the meaning of sentences is defined without the need to operationalize the knowledge. We often speak of declarative knowledge: we describe what holds without caring about how it can be deduced. Finally, automated reasoning can deduce (infer) conclusions from the given knowledge, thus making implicit knowledge explicit; such reasoning has been studied extensively in AI. Here is an example of an inference. Suppose we know that all professors are faculty members, that all faculty members are staff members, and that R. Webb is a professor. Logically, this information is expressed as follows:
prof(X) → faculty(X)
faculty(X) → staff(X)
prof(R. Webb)
Then we can deduce the following:
prof(X) → staff(X) and, therefore, staff(R. Webb)
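A minimal sketch of this inference, assuming nothing beyond the rules and fact stated above (the predicate and individual names are illustrative), is a simple forward-chaining loop:

```python
# Illustrative sketch: rules as (body, head) pairs over unary predicates,
# plus one known fact about R. Webb.
rules = [("prof", "faculty"), ("faculty", "staff")]
facts = {("prof", "R. Webb")}

# Apply the rules until nothing new can be derived (forward chaining).
changed = True
while changed:
    changed = False
    for body, head in rules:
        for pred, individual in list(facts):
            if pred == body and (head, individual) not in facts:
                facts.add((head, individual))
                changed = True

print(("staff", "R. Webb") in facts)  # True: the implicit fact is now explicit
```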
Logic can be used to uncover ontological knowledge that is implicitly given. By doing so, it can also help uncover unexpected relationships and inconsistencies. But logic is more general than ontology: it can also be used by intelligent agents for making decisions and selecting courses of action. For logic to be useful on the Web it must be usable in conjunction with other data, and it must be machine-processable as well. Hence there is ongoing work on representing logical knowledge and proofs in Web languages. Initial approaches work at the level of XML, but in the future rules and proofs will need to be represented at the level of RDF and ontology languages (i.e. DAML+OIL and OWL).
Agents are pieces of software that work autonomously and proactively. Conceptually they evolved out of the concepts of object-oriented programming and component-based software development.
A personal agent on the Semantic Web will receive tasks and preferences from the person, seek information from Web sources, communicate with other agents, compare information against the user's requirements and preferences, select among the choices, and give answers to the user.
Fig 2: Personal Agent as of today and expected to be in future with advance functionality 
2.9 SPARQL Query Language
SPARQL is an RDF query language and protocol developed within the W3C. Jena provides the ARQ query engine, a complete implementation of the SPARQL query language. The Semantic Web, a knowledge-centric model for the Web's future, supplements human-readable documents and XML message formats with data that can be understood and processed by machines. In the same way that SQL is used on the traditional Web, the SPARQL Protocol and RDF Query Language (SPARQL) plays that role for the Semantic Web. It allows applications to make sophisticated queries against distributed RDF databases, and it is widely supported by many competing frameworks.
SPARQL is built on top of several key technologies, in the same way that HTTP and HTML (the foundations of the World Wide Web) are built on deeper, lower-level systems such as TCP/IP. In 1997, Tim Berners-Lee pointed out that HTML and the World Wide Web were limited: they were not designed for dynamic Web-based applications. HTML and HTTP were just a crucial step towards machine-to-machine understandable communication. The vision is founded on RDF (the Resource Description Framework).
Since RDF can describe anything, it can describe itself, allowing it to be built up in thin layers that add more and more richness. This thin-layers approach is intended for the production of a vocabulary stack. The diagram in Figure 3 shows the layers defined by the W3C. The layers that sit on top of RDF currently include RDFS and OWL. RDFS, the RDF Schema language adds classes and properties to RDF. OWL (the Web Ontology Language) extends RDFS, providing a richer language to define the relationships between classes. A richer language makes it possible to use automated inference engines to create better systems. 
Fig 3: The Semantic Web layer cake: the technology stack for the W3C Web architecture 
Fig 4: SPARQL overview - how it relates to RDF
2.10 Working with SPARQL Query
Looking into PlanetRDF's bloggers.rdf model, it is fairly straightforward: it uses the FOAF and Dublin Core vocabularies to provide a name, a blog title and URL, and an RSS feed description for each blog contributor. Figure 5 shows the basic graph structure for a single contributor.
Fig 5: Basic graph structure for a single contributor in bloggers.rdf
RDF has been called a "meta description language," but that is just one way of saying that it describes things the same way people do. Consider sentences like "The crow eats corn" or "Joni loves Chachi". Each of those sentences has a subject (the crow, Joni), a predicate (eats, loves), and an object (corn, Chachi). In RDF, such subject-predicate-object sentences are called triples, and RDF uses triples to describe anything.
One way to represent these triples visually is as an RDF graph, a collection of RDF statements. A graph is defined in terms of nodes and arcs. In RDF the nodes are resources and the arcs are predicates; that is, the arcs are statements about the relationships between the subject and object nodes. At its core, the RDF spec is all about defining a graph; other things, like serialization formats, are very much of secondary importance. The subject and object elements define nodes in the graph (also known as resources, since they are the targets of URIs). Each predicate defines a relationship between the two nodes that the triple references.
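As a rough illustration of nodes and arcs (plain Python strings standing in for real URIs, with made-up data), a graph is just a set of triples, and following an arc means filtering on subject and predicate:

```python
# Minimal sketch: an RDF-style graph as a set of (subject, predicate, object)
# triples; the names are illustrative, not real URIs.
graph = {
    ("Andrew", "playsInstrument", "guitar"),
    ("Joni", "loves", "Chachi"),
    ("crow", "eats", "corn"),
}

def objects(graph, subject, predicate):
    """Follow the arcs leaving `subject` that are labelled `predicate`."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}

print(objects(graph, "Andrew", "playsInstrument"))  # {'guitar'}
```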
To make the graph available to the rest of the Web, we host the RDF files in a triple store, in other words, a place to store the triples that make up our graph. Once our RDF graph is in a triple store and exposed to the Web, others can query that graph using SPARQL.
Fig 6: A visual representation of an RDF graph with one statement of a predicate relationship between Subject and Object 
Fig 7: graph with a single statement asserting that 'Andrew' is related to 'guitar' by the 'plays Instrument' relationship 
Fig 8: The graph statement of Andrew and his guitar 
In RDF, each node and predicate is identified by a URI. RDF also allows nodes that are not identified by URIs; these are called blank nodes (or blank node identifiers) and are used as temporary, internally visible identifiers for local references.
2.12 RDFS and OWL
RDF was consciously designed to support its own layered extension with more abstract vocabularies. The first such extension was the RDF Vocabulary Description Language, more commonly known as RDF Schema or RDFS. RDFS extends RDF with features to define classes, properties, and inheritance; it is pretty much the full toolkit of the object-oriented designer. OWL extends RDFS to provide an extremely rich toolkit for describing the properties and relationships of a class, and it is particularly well stocked with properties that describe the exact nature of the relationship between two classes. RDF defines a predicate, rdf:type, that declares a resource to be an instance of a type, and RDFS allows a resource that is a class to declare, via rdfs:subClassOf, that it descends from another class.
2.13 Defining an ontology in OWL
An ontology defines the format of the data one will create: it defines classes with properties. Micro-blog entries will be written in fragments of Turtle that conform to the ontology. Once stored in the triple store, the data is available for querying using SPARQL. The results we get back from a SPARQL query are in an XML results format that packages the variables the query matches; we can easily extract the information from the XML and display it on the Web.
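As a hedged sketch of that last step, the following parses a hand-made sample of the SPARQL XML results format using Python's standard library; the variable names and values are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A hand-written sample of the SPARQL XML results format mentioned above;
# the variable names and values are made up for illustration.
xml_results = """<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <head><variable name="date"/><variable name="notes"/></head>
  <results>
    <result>
      <binding name="date"><literal>2010-05-01</literal></binding>
      <binding name="notes"><literal>First journal entry</literal></binding>
    </result>
  </results>
</sparql>"""

NS = {"sr": "http://www.w3.org/2005/sparql-results#"}
root = ET.fromstring(xml_results)

# Extract each result row as a {variable: value} dict.
rows = []
for result in root.findall(".//sr:result", NS):
    row = {b.get("name"): b.find("sr:literal", NS).text
           for b in result.findall("sr:binding", NS)}
    rows.append(row)

print(rows)  # [{'date': '2010-05-01', 'notes': 'First journal entry'}]
```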
First we need to define the journal ontology, shown in the listing below.
Fig 9: A small (but complete) ontology for the journaling system
2.14 SPARQL in action
Having covered RDF and OWL, we will now use a Tomcat server embedded in NetBeans as a SPARQL server, and use its SPARQL query page to produce and test our queries. SPARQL allows us to query for triples from an RDF database (or triple store). Superficially it resembles the Structured Query Language (SQL) used to get data from a relational database; the resemblance is mainly there to help developers who are familiar with databases, because a triple store and a relational database are fundamentally different. A relational database is table based: data is stored in fixed tables, with foreign key relationships defining how rows in the tables relate. A triple store stores only triples, and we can pile triples as high as we like when describing a thing, whereas with a relational database we are confined to the layout of the database.
RDF does not use foreign and primary keys either; it uses URIs, the standard reference format for the World Wide Web. By using URIs, a triple store immediately has the potential to link to any other data in any other triple store, which plays to the distributed strengths of the Web.
Because triple stores are large, amorphous collections of triples, SPARQL queries work by defining a template for matching triples, called a graph pattern. As mentioned in the section on RDF and graphs, the triples in a triple store make up a graph that describes a set of resources. To get data out of the triple store using SPARQL, we define a pattern that matches statements in the graph, expressing questions like: find me the subjects of all the statements that say 'plays guitar'.
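The idea of a graph pattern can be sketched in a few lines of Python. This is an illustrative toy, not SPARQL itself: strings prefixed with '?' play the role of variables, each pattern is assumed to use a variable at most once, and the data is made up:

```python
# Sketch of graph-pattern matching: a '?name' term is a variable, anything
# else must match a triple exactly. Data and names are illustrative.
graph = {
    ("Andrew", "plays", "guitar"),
    ("Sarah", "plays", "violin"),
    ("Andrew", "likes", "jazz"),
}

def match(pattern, graph):
    """Return one {variable: value} binding per triple matching `pattern`."""
    results = []
    for triple in graph:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val   # a variable matches anything
            elif pat != val:
                break                # a concrete term must match exactly
        else:
            results.append(binding)
    return results

# "Find me the subjects of all the statements that say 'plays guitar'."
print(match(("?who", "plays", "guitar"), graph))  # [{'?who': 'Andrew'}]
```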
Fig 10: A typical architecture for a Semantic Web application 
2.15 Writing queries in SPARQL
SPARQL provides four different forms of query: SELECT, ASK, DESCRIBE, and CONSTRUCT. I'll provide a few queries that show the different forms of each query type, with a commentary that illustrates any quirks of syntax, variant forms, and the purpose of the query. These query types share a lot of common features, so in most cases I'll introduce everything in the SELECT form, since that is probably the most frequently used type of query. The SELECT form is used for standard queries, and its results come back in the standard SPARQL XML result format; most of the example queries covered in this section use SELECT. ASK is used to get yes/no answers without providing specifics, and is shown below.
Fig 11: Example SPARQL using ASK 
DESCRIBE is used to extract portions of the ontology as well as the instance data, and CONSTRUCT generates RDF based on the results of the graphs being queried. Here is an example:
Fig 12: Example SPARQL using DESCRIBE and CONSTRUCT 
The syntax of a typical SPARQL query is as follows:
Fig 13: A query to get all notes, ordered by date
Each SPARQL SELECT query consists of a set of parts in order: an optional BASE definition, followed by a set of prefix definitions; then the SELECT part, consisting of SELECT followed by an optional dataset clause that describes which graph to search; then a WHERE clause containing the graph patterns that describe the desired results; and finally a set of solution modifiers: an ORDER BY clause, a LIMIT clause, or an OFFSET clause. Below are the results from the Fig 11 query.
Fig 14: Query answer from the Fig 11 query
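The clause order described above can be sketched by assembling a hypothetical query string; the j: prefix and the journal terms are invented for illustration, not taken from a real endpoint:

```python
# Hypothetical sketch: the parts of a SELECT query, in the order described
# in the text. Prefix, pattern, and modifiers are made-up placeholders.
query = "\n".join([
    "PREFIX j: <http://example.org/journal#>",  # prefix definitions first
    "SELECT ?date ?notes",                      # the SELECT projection
    "WHERE {",                                  # graph pattern
    "  ?e a j:JournalEntry ;",
    "     j:date ?date ;",
    "     j:notes ?notes .",
    "}",
    "ORDER BY ?date",                           # solution modifiers come last
    "LIMIT 10",
])
print(query)
```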
2.16 Searching a triple store
The process of getting results using a graph pattern is quite straightforward. Most triple stores maintain a store, either in memory or in a database, of triples that can be queried by their subject, predicate, or object. SPARQL presents the store with a set of triples in a query graph pattern. Assume, to begin with, that just one triple is in the graph pattern. That query might provide a concrete URI for the subject; if so, the triple store can ignore all of the triples in the store that do not have that subject. From what is left, it can then filter out all of the triples whose predicate does not match the predicate supplied in the graph pattern. Lastly, if the query supplies a concrete object, that can be used to eliminate the triples it does not match.
If a variable is used in the subject, predicate, or object of the query, then rather than eliminating non-matching triples, the triple store retains them all as potential matches for the variable. In the example above, the first triple reads ?e a :JournalEntry . Since a is short for rdf:type, the triple store can ignore all triples other than those with the predicate rdf:type. From what is left it can eliminate those without an object of :JournalEntry. What remains is a pool of possible triples that can furnish results for ?e as the subject.
The query above provided a graph pattern with multiple triples, so the triple store needs to follow the same procedure for the other triples. If a variable appears in multiple places, only combinations of triples with identical values for it can be used; triples that do not fit this scheme are discarded, and the result set is built from the triples that remain. There might be multiple ways to match up the variables and, if so, you will get multiple results. The last step is for the triple store to create a result set based on the variables that were requested in the result set.
At the end of the search, the triple store will have a result set containing (?e, ?notes, ?date), since those were all the variables defined in the query. If the SELECT query had been written as "SELECT ?date ?notes", then ?e would not be returned, even though it was vital to the query.
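The whole matching-and-projection process can be sketched as a toy in plain Python (invented data, a nested-loop join over patterns, and a final projection that drops ?e just as "SELECT ?date ?notes" would; none of this is a real triple store):

```python
# Sketch: join several graph patterns on shared variables, then project
# only the SELECTed variables. Data and names are illustrative.
graph = {
    ("entry1", "type", "JournalEntry"),
    ("entry1", "date", "2010-05-01"),
    ("entry1", "notes", "First entry"),
    ("entry2", "type", "JournalEntry"),
    ("entry2", "date", "2010-05-02"),
    ("entry2", "notes", "Second entry"),
}

def solve(patterns, graph):
    """Accumulate consistent bindings across all patterns (nested-loop join)."""
    solutions = [{}]
    for pattern in patterns:
        next_solutions = []
        for sol in solutions:
            for triple in graph:
                binding = dict(sol)
                ok = True
                for pat, val in zip(pattern, triple):
                    if pat.startswith("?"):
                        if binding.get(pat, val) != val:  # shared vars must agree
                            ok = False
                            break
                        binding[pat] = val
                    elif pat != val:
                        ok = False
                        break
                if ok:
                    next_solutions.append(binding)
        solutions = next_solutions
    return solutions

patterns = [("?e", "type", "JournalEntry"),
            ("?e", "date", "?date"),
            ("?e", "notes", "?notes")]

# Project only ?date and ?notes, as in "SELECT ?date ?notes": ?e drops out.
rows = sorted({(s["?date"], s["?notes"]) for s in solve(patterns, graph)})
print(rows)  # [('2010-05-01', 'First entry'), ('2010-05-02', 'Second entry')]
```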
Fig 15: Booking in date order 
Fig 16: Query results from the query in fig 13
Fig 17: Query using a default prefix 
Fig 18: Query answer for fig 15 
Fig 19: Query using matching skill 
Fig 20: Query using Filter 
3.1 Web service:
What is Web Service?
A program programmatically accessible over standard Internet protocols.
Loosely coupled & reusable components.
Encapsulates discrete functionality.
Adds a new level of functionality on top of the current Web.
Builds applications using loose coupling.
We are currently moving from a Web of data to a Web of functionality. There are two roles in a Web service interaction:
Consumer of a web service &
Provider of a web service
Fig 21: Web services framework 
Web services are very exciting for the business community: whatever type of business one offers can now be offered as a Web service. For example, booking a flight is a business service, but it could equally be offered as a Web service.
3.2 Problems of today's web services:
Descriptions are syntactic only.
All tasks associated with Web service application development - discovery, composition, and invocation - have to be carried out by humans.
Problems of Scalability
The relevant technology stacks are: the dynamic Web services stack (UDDI, WSDL, SOAP); the static Web stack (URI, HTML, HTTP); and the static Semantic Web stack (RDF, OWL).
3.3 Semantic Web services:
Semantic Web Technology
Machine readable data
Semantic Web Services Technology - reusable computational resources, used to automate all aspects of application development through reuse.
OWL-S: Semantic Markup for Web Services 
The Semantic Web enables greater access to content as well as to services on the Web. Users and software agents should be able to discover, invoke, compose, and monitor Web resources offering various services with various properties. OWL-S (formerly DAML-S), developed under the DARPA Agent Markup Language program, makes many of these capabilities possible. In this section I will explain the three components of a service description:
1) The service profile - advertising and discovering services;
2) The process model - gives a detailed description of a service's operation;
3) The grounding - provides details on how to interoperate with a service via messages.
Fig 22: Top level of the service ontology 
3.4.1 The service profile:
Any commercial transaction in a Web services marketplace involves three parties: the service requester, the service provider, and infrastructure components such as registries. The service requester is the buyer, seeking a service to complete its work; the service provider is the seller, providing a service for the benefit of the requester. A requester may, for example, need a news service that reports stock quotes with no delay with respect to the market. The role of the registries is to match the request with the offers of service providers and identify which of them is the best match. Within the OWL-S framework, the Service Profile provides a way to describe the services offered by providers and the services needed by requesters. OWL-S provides one possible representation through the class Profile. An OWL-S Profile describes a service in terms of three basic types of information: what organization provides the service, what function the service computes, and a host of features that specify characteristics of the service.
3.4.2 The Process Model - Services as Processes
To describe the mechanism by which a service operates, the service can be viewed as a process. OWL-S defines PROCESSMODEL as a subclass of SERVICEMODEL, drawing upon well-established work in a wide range of domains: work in AI on standardizing planning languages, work in programming languages and distributed systems, emerging standards in process modelling and workflow technology such as NIST's Process Specification Language (PSL), work on modelling verb semantics and event structure, previous work on action-inspired Web service markup, work in AI on modelling complex actions, and work on agent communication languages.
OWL-S adopts two views of processes. First, a process produces a data transformation from a set of inputs to a set of outputs. Second, a process produces a transition in the world from one state to another. This transition is described by the preconditions and effects of the process. A process can have any number of inputs, representing the information that is, under some conditions, required for the execution of the process. It can have any number of outputs, the information that the process provides, conditionally, after its execution.
Fig 23: Top level of process ontology 
3.4.3 Grounding a Service
The grounding of a service specifies the procedure of how to access the service - details having mainly to do with protocol and message formats, serialization, transport, and addressing.
Concrete messages are specified explicitly in the grounding. The central function of an OWL-S grounding is to show how the inputs and outputs of an atomic process are to be realized concretely as messages that carry those inputs and outputs in some specific transmittable format. The Web Services Description Language (WSDL) is a well-specified language with strong industry sponsorship, and it goes a long way towards providing an initial grounding mechanism for OWL-S. As mentioned earlier, our intent is not to prescribe the grounding approach to be used with all services, but rather a general canonical approach that will be useful in most cases.
3.4.4 Web Services Description Language (WSDL)
WSDL is an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. The operations and messages are described abstractly, and then bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services). WSDL is extensible to allow description of endpoints and their messages regardless of what message formats or network protocols are used to communicate. It may readily be observed that OWL-S' concept of grounding is generally consistent with WSDL's concept of binding.
Relation between OWL-S and WSDL
An OWL-S atomic process corresponds to a WSDL operation. The different types of WSDL operations relate to OWL-S processes as follows: the WSDL request-response operation, one-way operation, notification operation, and solicit-response operation. Because OWL-S is an XML-based language and its atomic process declarations and input and output types fit naturally with WSDL, it is easy to extend existing WSDL bindings, such as the SOAP (Simple Object Access Protocol) binding, for use with OWL-S. Here we indicate briefly how an arbitrary atomic process specified in OWL-S can be given a grounding using WSDL and SOAP, assuming HTTP as the chosen transport mechanism. Grounding OWL-S with WSDL and SOAP involves the construction of a WSDL service description with all the usual parts (types, message, operation, port type, binding, and service constructs).