In this paper we suggest an approach to building an ontology library for an academic portal using Semantic Web technologies: RDF, XML, ontologies and the Simple Protocol and RDF Query Language (SPARQL), together with ASP.NET. OntoStudio, dotNetRDFstore, OntoMat, Ontobroker, SQL Server 2005 and Visual Studio 2008 are also used in this paper to realize the proposed OntoLib and to create its ontology library component.
OntoLib enables the faculty of the academic portal to have their research papers published in one centralized place and makes it easy for them to find the research papers that belong to their colleges. These research papers are added to OntoLib using a Semantic Web model and stored in a knowledge base using an ontology. In this paper we define OntoLib as a Semantic Web component that leads toward an intelligent Web.
Keywords: Ontology, Knowledge Base System, RDF, OntoLib
A knowledge acquisition system is important for any specific domain: it enables experts to formalize domain knowledge, reason over it and share it. It should be noted that such systems suffer from the well-known "knowledge acquisition bottleneck". Cimiano et al. (2005) and Bendaoud et al. (2007) use formal concept analysis to build a hierarchy of concepts from texts; however, it is difficult to update such models. For example, in the field of astronomy, the classification of celestial objects into predefined classes (stars, galaxies, comets, ...) is a very important knowledge base. This classification is done manually according to the properties with which objects appear in texts: an astronomer reads articles dealing with a particular object and tries to determine which class suits it best. So far, more than 3 million objects have been classified in this way in the SIMBAD database, but there are billions of objects left to classify, hence the knowledge acquisition bottleneck.

An ontology is a formal specification of a conceptualization of a domain, shared by a group of people and established according to a certain point of view imposed by the target application. Currently, the Web is essentially syntactic. The structure of documents and resources on the Web is well defined; however, only humans can read their contents, which remain inaccessible to machine processing. Machines can do little more than routine processing or following links from one Web page to another. In other words, almost all Web content is intended to be read by a human user and therefore cannot be handled intelligently by computer programs. The Internet is a space of information shared between humans, and the success of the Web rests essentially on this simplicity. But the amount of information on the Internet is so enormous that people have difficulty finding what they need.
Searching for information on the Web is quite slow and imprecise, because it amounts to manually sorting documents; computers have no reliable method for dealing with semantic information. In addition, there is the further problem of finding the many services available on the Internet. The number of services grows very quickly because our needs increase very rapidly, so the ability of conventional search engines to find the most appropriate service is stretched to its limit. Today, the objective of the third generation of the Web is to improve search engines so that they can retrieve exactly the information needed. More effective methods and mechanisms are desired: better accuracy, support for sharing and reusing knowledge and materials, inference mechanisms, and the association of semantic metadata with documents and knowledge.
In this paper we suggest an approach to building an ontology library (OntoLib) for an academic portal using Semantic Web technologies such as RDF, XML, ontologies and the Simple Protocol and RDF Query Language (SPARQL), together with ASP.NET. OntoStudio, dotNetRDFstore, OntoMat, Ontobroker, SQL Server 2005 and Visual Studio 2008 are used to realize the proposed OntoLib and to create its ontology library component. The paper concentrates on the ontology, presented as the major component on which the Semantic Web relies: to obtain a successful Semantic Web, ontologies must be engineered rapidly and easily while avoiding knowledge acquisition problems. Decher et al. [Decher et al., 2000] discuss the eXtensible Markup Language (XML) and Resource Description Framework (RDF) standards in depth; these standards are used throughout this paper.
Similar studies have reported results and conclusions relevant to OntoLib and academic Web portals. Maedche et al. [Maedche et al., 2003] presented "SEAL - A Framework for Developing Semantic Web Portals", a generic approach for developing semantic portals based on enriching the portal with exploitable semantics. The main goal of their SEAL framework, which originates from Ontobroker, is to allow users to search for knowledge on the Web. Lei et al. [Lei et al., 2005] presented the KMi Semantic Web portal infrastructure. They specified three main components: an automated metadata extraction tool that supports the extraction of high-quality metadata from heterogeneous sources; an ontology-driven question answering tool, which makes use of the domain-specific ontology and the semantic metadata to answer questions posed in natural language; and a semantic search engine, which enhances traditional text-based searching. Their infrastructure comprises a source data layer, an extraction layer, a semantic data layer, a semantic service layer and a presentation layer. Nicola Guarino and Pierdaniele Giaretta (ref) discussed ontologies and knowledge bases in their papers, clearly defining the ontology from both the technological and the philosophical points of view, and analyzing Gruber's definition of an ontology as a specification of a conceptualization. Maedche and Staab [Maedche, 2001] wrote a paper on "Ontology Learning for the Semantic Web".
Definition of Ontology
In the literature, we can find different definitions of the word ontology. An ontology is defined as an explicit specification of a conceptualization. The term conceptualization refers to a system of concepts, i.e. a set of concepts. Explicit specification means that the conceptualization is represented in a language; this language may be a natural language (e.g. English, French) or a formal language (e.g. semantic networks, first-order logic). Ontology as a discipline is designed to characterize the different modes of existence of objects, according to the kinds of objects (natural, artificial, aesthetic, etc.). An ontology is also simply defined as a set of concepts (classes) and the relationships between those concepts, or as an explicit specification of a conceptualization [Gruber, 2003], often considered a reusable and shareable model. A geographical ontology, for example, can be used for exploration and information extraction, and also for the interoperation of GIS [Nadine, 2003]. An ontology can likewise be defined as a common vocabulary for people who need to share information in a specific domain. Different ontologies are used in different domains (geography, biology, ...) to share a common understanding of the structure of information among people or software agents, to analyze domain knowledge and to enable its reuse. In our case, we define an ontology as a description of the concepts in a domain (classes), where the properties of each concept describe its various features and attributes (properties, roles), and slots describe the properties of classes and instances. According to Gruber (1993), an ontology is an explicit specification of a conceptualization, where conceptualization is basically how we express our views through words: the expression of concepts and elements, and the relations between entities.
This definition stresses the application of a common ontology across different applications, as well as the translation of language text or documentation into defined terms. The ontology, or data schema, provides the concepts with a meaning that the machine can understand. It is the tool that connects people with machines and lets them communicate smoothly. What gives the ontology its power is that the conceptualization of the real world is held in a knowledge base and digitalized in a machine-readable language such as the Web Ontology Language (OWL). In OWL, everything in the real world is expressed as an <owl:Thing> composed of properties and instances. Thing is specialized into classes that hold the concepts, which are in turn refined into object properties that describe the properties of the concepts. Each object comprises instances that realize those properties, and the combination of them all composes the knowledge base.
Sharing and reusing knowledge are important features of any Semantic Web application. The ontology is the component that brings this feature to the Semantic Web application, as it is developed on artificial intelligence concepts. It is responsible for sharing a common understanding of the domain between people and machines, which is essential for the interoperability of a Semantic Web application. The components of an ontology are: the classes, which are the concepts or things that describe any object in the world (e.g. Person, Table, Chair); the relations, used to relate classes to one another (e.g. hasName, hasArticle); the functions, which represent relations with a single result; and the instances of the classes (e.g. a particular person or apple).
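The components just listed can be sketched as plain data structures. The following is a minimal illustration in Python; all names (Person, Article, hasArticle, Anne, paper42) are invented for the example and are not part of the OntoLib design:

```python
# Minimal sketch of ontology components as plain Python data structures.
# All class, relation and instance names here are illustrative only.

# Classes: the concepts of the domain.
classes = {"Thing", "Person", "Article"}

# Relations: named links between classes, as (domain, relation, range).
relations = {("Person", "hasName", "Literal"),
             ("Person", "hasArticle", "Article")}

# Instances: individuals, each belonging to a class.
instances = {"Anne": "Person", "paper42": "Article"}

def class_of(instance):
    """Return the class an instance belongs to, if known."""
    return instances.get(instance)

print(class_of("Anne"))  # Person
```

In a real Semantic Web application these structures would of course be expressed in RDF/OWL rather than Python sets, but the correspondence (classes, relations, instances) is the same.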
Tim Berners-Lee, the creator of the Web, has declared that the Semantic Web is the next evolution of the Web: an intelligent Web where information is stored in a form understandable by computers, in order to give users what they are really seeking. Today, only humans have the capacity to understand the information they find and to decide how it relates to what they actually want. By what means? Search engines help us, but they are capable of answering only two questions: which pages contain a given word, and which are the most popular pages on a subject. Indeed, the idea of the Semantic Web is not that computers understand human language or operate in natural language. It is not a vision of an artificially intelligent Web; more simply, it is a Web that includes machine-usable information. And, like the current Web, which is built primarily around URI identifiers, the HTTP protocol and the HTML language, the Semantic Web is also based on URIs, HTTP and the RDF language. The automation capabilities of the current Web are limited because the Internet was designed to publish unstructured documents, so the Web makes access to information difficult. For example, if you want to find a manufacturer of doors and windows to build a house and type the words "gates" and "windows" into Google, the results do not match your intent, because most of them concern Mr. Bill Gates and Microsoft Windows. Why? As everyone knows, HTML is an unstructured language that cannot separate presentation from information, which is why the search engine has no way of understanding the problem. Therefore, one of the goals of the Semantic Web is to refine search on the Internet. To do this, a metadata layer is added to existing information so that computers can exploit it. The Semantic Web relies on three additional steps: first, add metadata to each Web resource; then, certify their authenticity; and finally, fix the early design errors of HTML.
It is found that the use of XML and the Resource Description Framework (RDF) should fix this. With XML, you can structure a document or a resource to indicate: here, this is the door of a manufacturer, or this is Mr. Bill Gates of Microsoft. In this case, a search for a door can exclude results about a person, because the resource indicates manufacturer:object:gates. A search engine could then fetch the XML tree of the documents it indexes and find the branches containing "gates". But is this possible? No, because building such an index would take too long. The solution is to add, to every HTML page, a file describing its content in accordance with a standard structure. The W3C therefore proposes to enrich existing (and future) information with RDF metadata. In a word, the RDF format makes it possible to define metadata that specify the characteristics of information. RDF statements are triples that associate the metadata in groups of three; a triple can be described as three URIs, and basic concepts can be composed by combining triples. So what is the difference from the current Web? The current Web uses links, which are pairs, e.g. the association of "Mr. Tim Berners-Lee" and "W3C". The Semantic Web types this information by adding a third term, "founder": <TimBernersLee> <founder> <W3C>.
From this triple you can see the relationship between Mr. Tim Berners-Lee and the W3C organization; the computer can determine exactly which fact must logically be attached to which other. Currently, there are experimental logic languages such as RDFS, DAML+OIL and OWL. To describe these languages, ontologies are needed. In short, an ontology is the precise description of the terms and relationships of a specific subject (ontology is explained in the section above). Thus computers can understand and work on information through specific, complete ontologies. In this way, the Semantic Web aims to give information a meaning that even computers can understand. A further question then arises: how can we build a machine-to-machine communication system to share information? The answer is the Semantic Web, which combines URI, HTTP and RDF; thanks to it, the process can be fully automated, the transactions tracked precisely, and an abstraction of them provided.

For knowledge representation, machines must have access to structured collections of information and to sets of inference rules that can be used to conduct automated reasoning. The challenge of the Semantic Web is therefore to provide a language that expresses both data and rules about data, so that any knowledge representation system can export its rules onto the Semantic Web. Logic is added to the Web to make it possible to use rules to draw inferences, choose courses of action and answer user questions. How can this be done? Two important Semantic Web technologies already exist: XML and RDF. In short, XML allows us to add an arbitrary structure to our documents without saying anything about the meaning of that structure; RDF, used by the Semantic Web, allows machines to understand documents and semantic data. Arguably, the meaning is given by RDF.
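The difference between an untyped hyperlink (a pair) and a typed RDF statement (a triple) can be sketched with a toy triple store in Python. This is only an illustration of the idea, not real RDF machinery; the second triple is invented for the example:

```python
# A toy triple store: each statement is (subject, predicate, object).
triples = [
    ("TimBernersLee", "founder", "W3C"),       # the typed link from the text
    ("TimBernersLee", "creatorOf", "theWeb"),  # an invented second statement
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Because the link is typed, we can ask WHAT relation holds, not just
# whether two resources are connected.
print(query(subject="TimBernersLee", obj="W3C"))
```

A plain hyperlink could only record that the two resources are connected; the third term is what lets a program reason about the nature of the connection.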
Using RDF in a document assumes the principle that particular things have properties with certain values. For example: Sir Tim Berners-Lee is the founder of the W3C, or I am a student at the IFI, etc. This structure is a natural way to describe most of the data processed by machines. But how can we further describe the information we publish, and interpret the information we receive? Information must be interpreted in a shared context, so we also need formal models for knowledge representation.
Figure 1.1: Shared contexts - ontologies
In this figure, assume that you want to access several databases that give information about people, together with their addresses, and that you want to find the names of the people with a specific postal code. You therefore need to know which fields in each database represent names and which represent postcodes. For example, RDF can specify: (field 5 in the database) (is a field of type) (postal code). Suppose the two databases use different identifiers for the same concept "postal code", in two different contexts. The program then needs to know that these two terms denote the same information when it wants to compare and combine them, and identical meanings are difficult to find across databases. In this case, a set of information called an ontology helps to solve the problem. An ontology represents a formal description of knowledge; we use ontology languages to define that knowledge, that is, to define shared vocabularies of concepts and relationships for the interpretation of information.
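The shared-context problem described above can be sketched as follows. The field names and records are invented for the illustration; the point is that a small shared vocabulary lets two databases with different local field names be queried uniformly:

```python
# Two databases use different local field names for the same concepts.
db_a = [{"name": "Anne", "zip": "973"}]
db_b = [{"person": "Karim", "postcode": "974"}]

# A tiny "ontology": maps each local field name to a shared concept.
ontology = {"name": "person_name", "person": "person_name",
            "zip": "postal_code", "postcode": "postal_code"}

def normalize(record):
    """Rewrite a record's local field names into shared concept names."""
    return {ontology[field]: value for field, value in record.items()}

# After normalization, both databases answer the same query.
people = [normalize(r) for r in db_a + db_b]
names = [p["person_name"] for p in people if p["postal_code"] == "973"]
print(names)  # ['Anne']
```

Real ontology languages (RDFS, OWL) express such mappings declaratively, with formal semantics, rather than as a hand-written dictionary, but the role is the same: agreeing on what the fields mean.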
Semantic Web Architecture
The Semantic Web requires an architecture shared by all the resources exchanged on the Internet. Standards are also needed for:
- the ontologies and the inference mechanisms associated with explicit semantics;
- the semantic metadata of resources and the related ontologies;
- the format of resources or documents;
- the addressing of resources or documents.
There are many semantic markup languages in the Semantic Web stack. First, XML provides an external syntax for structured documents, but imposes no semantic constraints on the meaning of those documents; XML Schema is a language for constraining the structure of XML documents. Then, RDF is a language for creating a data model of objects (or resources) and the relationships among them; it also provides simple semantics for that data model, and the data models can be represented in XML syntax. RDF Schema is a vocabulary for describing the properties and classes of RDF resources, with semantics for generalization hierarchies of such properties and classes. Finally, OWL adds more vocabulary for describing properties and classes, as well as relationships among classes. Arguably, OWL adds the expression of meaning and semantics to XML, RDF and RDF Schema, so that they can represent content understandable by a machine. We will now introduce the three important levels of the Semantic Web: addressing, the syntactic level and the semantic level.
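As an illustration of the layering just described, a small RDF Schema fragment can declare a class hierarchy and a typed property on top of plain RDF/XML. The class and property names below are invented for the example:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <!-- RDFS adds generalization: Researcher is a subclass of Person -->
  <rdfs:Class rdf:ID="Person"/>
  <rdfs:Class rdf:ID="Researcher">
    <rdfs:subClassOf rdf:resource="#Person"/>
  </rdfs:Class>
  <!-- RDFS also types properties: hasArticle links Researcher to Article -->
  <rdfs:Class rdf:ID="Article"/>
  <rdf:Property rdf:ID="hasArticle">
    <rdfs:domain rdf:resource="#Researcher"/>
    <rdfs:range rdf:resource="#Article"/>
  </rdf:Property>
</rdf:RDF>
```

XML alone provides only the syntax of this document; it is the RDF and RDFS vocabularies that give the nesting its meaning, and OWL would add further constructs (e.g. cardinality restrictions, class relationships) on top.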
Building an ontology is an iterative process that consists of several steps. The first step consists in defining the classes of the ontology and arranging them in a taxonomic hierarchy; during this step we should define the relations between the classes and specify the super- and subclasses. The second step defines the slots, describes the allowed values for them, and fills in the slot values for instances. The third step consists in creating a knowledge base by defining individual instances, filling the slots with specific values and adding restrictions to the slots.
To create an ontology, we follow several steps: first, import information from another ontology if needed; second, identify the classes and their descriptions; third, identify the properties; fourth, create the instances and individuals and write the rules; and finally, test the reasoning.
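The building steps above can be sketched programmatically. The following minimal Python sketch mirrors the three phases (taxonomy, slots, knowledge base); every class, slot and instance name in it is invented for illustration:

```python
# Step 1: define classes and arrange them in a taxonomic hierarchy
# (each entry maps a subclass to its direct superclass).
hierarchy = {"Researcher": "Person", "Person": "Thing"}

# Step 2: define slots (properties) and their allowed value types per class.
slots = {"Person": {"hasName": str},
         "Researcher": {"hasArticle": str}}

# Step 3: create the knowledge base by defining individual instances
# and filling their slots with specific values.
kb = {"id1061": {"class": "Researcher",
                 "hasName": "Anne",
                 "hasArticle": "OntoLib paper"}}

def superclasses(cls):
    """Walk the taxonomy from a class up to the root."""
    chain = []
    while cls in hierarchy:
        cls = hierarchy[cls]
        chain.append(cls)
    return chain

print(superclasses("Researcher"))  # ['Person', 'Thing']
```

An ontology editor such as OntoStudio or OntoMat performs these same steps interactively and emits the result in a standard language (e.g. OWL) instead of Python dictionaries.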
The following is an example of an ontology in XML showing the components of such an ontology.
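The original XML listing does not survive in this copy; as a hedged reconstruction, an OWL/XML ontology of the kind described (a class, an object property and an individual, with all names invented for the example) would look like:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Ontology rdf:about=""/>
  <!-- Classes: the concepts of the domain -->
  <owl:Class rdf:ID="Researcher"/>
  <owl:Class rdf:ID="Article"/>
  <!-- An object property relating the two classes -->
  <owl:ObjectProperty rdf:ID="hasArticle">
    <rdfs:domain rdf:resource="#Researcher"/>
    <rdfs:range rdf:resource="#Article"/>
  </owl:ObjectProperty>
  <!-- An individual (instance) of the Researcher class -->
  <Researcher rdf:ID="Anne"/>
</rdf:RDF>
```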
Figure x shows the diagram of the example described above:
Figure : Ontology diagram
Resource Description Framework
The Resource Description Framework (RDF) is a W3C standard model used for data interchange on the Web. It provides the Semantic Web application with interoperability, because RDF is readable by any program and facilitates data merging, no matter which schema is used. Knowledge is stored in this standard by decomposing it into triples. A triple is composed of an object, an attribute and a value; in other words, it is composed of a resource (object), a named property (attribute) and a value for that property (value). RDF allows structured and semi-structured data to be exchanged between applications, using URIs to identify each relationship between the data in a triple.
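The data-merging property mentioned above can be sketched in Python: because every statement is a self-contained (resource, property, value) triple, two graphs combine by simple set union with no schema negotiation. The URIs and property names below are invented for the example:

```python
# Two RDF-style graphs, each a set of (resource, property, value) triples.
graph_a = {("http://example.org/id1", "hasName", "Anne")}
graph_b = {("http://example.org/id1", "hasArticle", "http://example.org/p7"),
           ("http://example.org/id1", "hasName", "Anne")}  # duplicate statement

# Merging is just set union; the duplicate statement collapses automatically
# because the two graphs identify the resource by the same URI.
merged = graph_a | graph_b
print(len(merged))  # 2 distinct statements
```

This is the essence of why RDF merging works regardless of schema: shared URIs, not shared table layouts, are what identify the data.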
Triples can be expressed in three ways: tables, XML files and graphs. The graph view is the easiest to read. For example: Name('http://www.acadimic portal .edu.bh/employee/id1061', 'Anne'). This example has three views: table, XML and graph (see Fig. ...).
Fig. ...: Table view of the RDF example. The XML expression of the example is:
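The XML serialization referred to above does not survive in this copy; as a hedged sketch, the Name triple from the example could be serialized in RDF/XML as follows (the `ap:` namespace is an assumption, and the subject URI has had its stray spaces removed to form a valid URI):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ap="http://www.acadimicportal.edu.bh/schema#">
  <!-- Subject: the employee resource; property: Name; value: "Anne" -->
  <rdf:Description rdf:about="http://www.acadimicportal.edu.bh/employee/id1061">
    <ap:Name>Anne</ap:Name>
  </rdf:Description>
</rdf:RDF>
```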