One approach matches the NL query terms, or search keywords, to the terms that express the concepts or techniques of a Semantic Web document (SWD); this is commonly known as the ontology-concept approach. When this technique is applied in semantic search engines, as in the SWOOGLE search engine, the matching algorithm does not exploit the semantics of the SWD. Matching is performed with purely lexical techniques, i.e. the search keywords are compared with the terms that express the ontology's concepts.
The basic idea of semantic matching is to go beyond the word that expresses a concept: the lexical and syntactic similarity between the query term and the matched term is not the criterion. Instead, the similarity of meaning between two terms is what matters, which makes semantic matching far more precise than keyword search. Consider the query keyword 'book' matched against the document term 'reserve'. A semantic matcher can identify that the intended meaning of 'book' here is the reservation of a ticket. Without such matching, the system may incorrectly assume that 'book' in the query denotes a publication while the document uses 'book' in the sense of a reservation (polysemy).
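The contrast between purely lexical matching and sense-based matching can be sketched in a few lines. The sense inventory below is a tiny hand-made stand-in, not a real lexicon such as WordNet; the terms and senses are illustrative only.

```python
# Minimal sketch of lexical vs. sense-aware matching, using a tiny
# hand-made sense inventory (hypothetical data, not a real lexicon).
SENSES = {
    "book": {"publication", "reservation"},  # polysemous: text vs. booking
    "reserve": {"reservation"},
    "novel": {"publication"},
}

def lexical_match(query, doc_term):
    # Keyword search: only the surface strings are compared.
    return query == doc_term

def semantic_match(query, doc_term):
    # Semantic search: terms match if they share at least one sense.
    return bool(SENSES.get(query, set()) & SENSES.get(doc_term, set()))

# 'book' and 'reserve' do not match lexically, but they share the
# reservation sense, so a sense-aware matcher pairs them.
```

A lexical matcher would never pair 'book' with 'reserve', while the sense-aware matcher does, and it correctly refuses to pair 'novel' with 'reserve'.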
In semantic matching, the query term or keyword is searched by adding a description, i.e. semantics, to both the document and the query term. The query term and its corresponding semantics must be known and uncovered before matching. The query term can be specified formally or informally. When the query or keyword is formally specified, the description of each of its terms is explicitly defined. The query can then be represented as an ontology (a query ontology): the semantic relations between a concept and the neighbouring concepts in the ontology structure explain each term that expresses a concept. Such semantic relations act as synonyms of, or assign meaning to, the query term being searched by semantic matching.
The query term can also be specified informally, which is another way of representing a query as an ontology. Here the semantics of the query are not known and must somehow be uncovered. The biggest challenge is retrieving documents relevant to the query, since the system must guess the exact or most relevant meaning of an informally specified term. Natural-language-processing techniques, or refining the query interactively with the interested user, are used to meet this challenge, as in intelligent search engines such as AskJeeves (Teoma technology).
Alternatively, two techniques can be combined to map the query term to its intended meaning: vector-space indexing techniques such as LSI (Deerwester et al., 1990) and a lexicon such as WordNet (Miller, 1995).
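The vector-space side of this idea can be illustrated with plain cosine similarity over term-frequency vectors. The vocabulary and counts below are invented for illustration and are far simpler than LSI, which additionally applies a low-rank decomposition to the term-document matrix.

```python
import math

# Toy vector-space sketch: documents as term-frequency vectors over a
# fixed vocabulary. All numbers are illustrative, not a real corpus.
VOCAB = ["book", "ticket", "library", "flight"]
DOCS = {
    "travel_doc":  [3, 5, 0, 4],   # "book a ticket/flight" sense
    "reading_doc": [4, 0, 6, 0],   # "book in a library" sense
}

def cosine(u, v):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_match(query_vec):
    # Rank documents by cosine similarity to the query vector.
    return max(DOCS, key=lambda d: cosine(query_vec, DOCS[d]))
```

A query vector mentioning "book", "ticket" and "flight" lands on the travel document even though both documents contain the ambiguous term "book", because the surrounding terms pull the query toward the right sense.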
Knowledge management is the acquisition of the information required about a topic, together with accessing and maintaining (modifying, deleting and adding) that information. It has become an important factor for business at every level in today's competitive world. To sustain themselves, organisations draw greater productivity and create new value by taking a closer look at their internal knowledge. This has become a major process particularly in large, international companies with geographically distributed departments.
Most of the information available on the web is weakly structured; text, audio and video are examples of weakly structured forms. The current web technology cannot be applied effectively because of limitations in the following areas:
Searching information - Most companies use keyword search engines, whose problems I have already listed.
Extracting information - The available intelligent agents cannot extract the relevant information from the returned results in a satisfactory fashion.
Maintaining information - Outdated and inconsistent information fails to be removed.
Uncovering information - New knowledge is extracted from corporate databases using data mining, but this is not very effective on distributed, weakly structured documents.
Viewing information - Restricting who may view which information is difficult over an intranet or the web, whereas it is well understood in the database field.
The aims of the Semantic Web in knowledge management systems are the following:
Knowledge is organised in conceptual spaces according to its meaning.
Knowledge is maintained effectively by automated tools that find inconsistencies and derive new knowledge.
The requested knowledge is found by query answering, and the retrieved knowledge is extracted and presented in a user-friendly way.
Viewing of knowledge can be restricted over the web, an intranet, and corporate databases.
Semantic Web Technologies:
The Semantic Web is a group of methods and technologies that allow machines to understand the meaning of a query term by adding semantics to the documents in the World Wide Web.
For semantic matching, it is most effective when the semantics of a document are specified formally and explicitly in an ontology. For unstructured documents, it is necessary to use advanced ontology-learning techniques to deduce their intended semantics and attach them as annotations to the related documents.
When the structure of the SWD is known a priori so that formal queries can be posed, a situation called semantic homogeneity, Semantic Web query languages such as OWL-QL (Fikes et al., 2003) and RQL (Karvounarakis, 2003) are used to query Semantic Web documents, for example in semantic portals. When the structure of the SWD is unknown, called semantic heterogeneity, querying is done either through a global schema, known as a shared common ontology, or through horizontal mapping techniques across local schemas in a distributed setting, using a peer-to-peer (P2P) approach.
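The kind of pattern-based querying that languages like RQL perform over RDF data can be sketched as matching patterns against a set of subject-predicate-object triples. The triples and pattern convention below are invented for illustration; real query languages offer far richer constructs.

```python
# Minimal sketch of querying a triple store, in the spirit of Semantic
# Web query languages. The data and the None-as-variable convention
# are illustrative, not the syntax of OWL-QL or RQL.
TRIPLES = [
    ("swoogle", "rdf:type",   "SearchEngine"),
    ("owl-ql",  "rdf:type",   "QueryLanguage"),
    ("rql",     "rdf:type",   "QueryLanguage"),
    ("owl-ql",  "dc:creator", "Fikes"),
]

def query(pattern):
    """Match a (subject, predicate, object) pattern against the store;
    None in any position acts as a variable that matches anything."""
    s, p, o = pattern
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

For instance, the pattern (None, "rdf:type", "QueryLanguage") retrieves every resource typed as a query language.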
A typical ontological commitment is to define abstract classes in terms of the properties shared by the specific objects that they combine.
Nowadays, many organisations have started using ontologies for navigation between different websites. Many websites display the top levels of a conceptual hierarchy of terms on the left-hand side of their pages; the user can click on one of the concepts in the hierarchy to see its subcategories.
The main advantage of using ontologies is to improve the quality of web searches by producing accurate results relevant to the search term. With an ontology, a search engine can look up the exact pages that refer to a precise concept, rather than collecting and presenting to the user every page in which the keyword happens to occur.
Another use of ontologies is that, if a query fails to find a relevant document for the user, the search engine may prompt the user for a more general query. Conversely, if too many pages are returned, the search engine may suggest a more specialised query (e.g. an Advanced Search).
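This generalise/specialise behaviour follows directly from the concept hierarchy: broadening a query means moving up to a parent concept, narrowing means moving down to children. The tiny hierarchy below is hypothetical.

```python
# Sketch of ontology-guided query broadening and narrowing.
# The concept hierarchy here is a hypothetical example.
PARENT = {              # child concept -> parent concept
    "laptop":   "computer",
    "desktop":  "computer",
    "computer": "device",
}

# Invert the hierarchy to find the children of each concept.
CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)

def generalize(term):
    # Suggest a broader query when too few results are found.
    return PARENT.get(term)

def specialize(term):
    # Suggest narrower queries when too many results are found.
    return CHILDREN.get(term, [])
```

A search for "laptop" with no hits would be broadened to "computer"; a search for "computer" with too many hits would be narrowed to "laptop" or "desktop".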
Creating properties for the individuals present in an ontology describes the relations between different individuals. Using such ontologies for developing web applications, as the Semantic Web does in contrast to the current web technology, continues a long tradition in Artificial Intelligence. The most important ontology languages for the Web today are the following:
RDF is an ontology language providing a data model for objects, in which the relations between those objects can also be specified. RDF provides simple semantics for this data model, which can be represented in an XML syntax.
Another ontology language is RDF Schema, a vocabulary description language for describing the properties and classes of RDF resources, with semantics for generalisation hierarchies of those properties and classes.
Compared with RDF and RDF Schema, the OWL ontology language is a richer vocabulary description language for describing properties and classes. It can also specify relations between classes (e.g. disjointness), cardinality (e.g. exactly one), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.
Logic is the study of the principles of reasoning by a machine. Once the machine can follow the reasons we have stated, logic is said to be understood by the machine, and it is used for retrieving information according to a query and for drawing conclusions. Logic also provides agents with explanations of those conclusions.
An agent is a piece of software that works autonomously and tends to initiate change rather than merely react to events. Agents developed from the concepts of object-oriented programming and component-based software development. The role of an agent is to collect information and organise it. Semantic Web agents make use of technologies such as metadata, ontologies, and logic. These technologies help an agent extract relevant information from web sources, communicate with other agents, compare the relevant information against the user's query, and make decisions when presenting the desired output to the user.
A Layered Approach:
A Semantic Web application is developed in a sequence of steps, with a separate layer built for each step on top of the previous one. The layered diagram of the Semantic Web describes the key layers of its design and vision.
In this layered approach, XML is the bottom layer. XML is a language that lets the programmer write structured documents with a user-defined vocabulary, and it is particularly appropriate for exchanging documents across the web.
The next layer up contains RDF, which sits on top of the XML layer. RDF is a basic data model for making simple statements about objects, similar to the entity-relationship model; here the objects denote the resources about which the statements are written. The RDF data model has an XML-based syntax but does not depend on XML.
RDF Schema organises web objects into hierarchies using the modelling primitives it provides. Its key primitives are classes and properties, subclass and subproperty relationships, and domain and range restrictions. RDF Schema is based on RDF.
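The core inference these primitives enable can be sketched in a few lines: since subclass relationships are transitive, an instance of a class is also an instance of all its superclasses. The tiny class hierarchy below is a hypothetical example.

```python
# Sketch of RDFS-style reasoning: subClassOf is transitive, so an
# instance of a class is also an instance of every superclass.
# The hierarchy and instance data below are hypothetical.
SUBCLASS_OF = {          # direct subclass edges
    "Novel": "Book",
    "Book":  "Publication",
}
TYPE_OF = {"moby_dick": "Novel"}   # rdf:type statements

def superclasses(cls):
    """All direct and inherited superclasses of cls."""
    result = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        result.append(cls)
    return result

def classes_of(instance):
    """The asserted class plus every inferred superclass."""
    direct = TYPE_OF[instance]
    return [direct] + superclasses(direct)
```

Asking for the classes of the instance yields not only the asserted class Novel but also the inferred classes Book and Publication.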
A primitive language helps in writing ontologies, and in the layered approach of the Semantic Web, RDF Schema is viewed as such a primitive language. However, RDF Schema is not expressive enough to represent the more complex relationships between objects. There is therefore a need for stronger ontology languages that extend RDF Schema.
Next comes the ontology layer, where the current standard Web ontology language has been instantiated with two alternatives: OWL and rule-based languages. These alternatives determine how the development of the Semantic Web proceeds.
DLP is another alternative at the ontology layer; it is the intersection of OWL and Horn logic, and serves as a widespread common basis.
The ontology language is improved further by the logic layer, which allows application-specific declarative knowledge to be stated.
From the lower levels to the higher ones, all proofs are validated in the proof layer. The proof layer also performs the actual deductive process and the representation of proofs in web languages.
Trust in the data is handled in the trust layer. The use of digital signatures, recommendations from trusted agents, and ratings given by consumer bodies makes Semantic Web data trustworthy on the WWW. The Semantic Web will achieve its full potential only when users trust the quality of the operations performed.
Protege OWL API
An API (application programming interface) is an interface supplied by a software program so that other software programs can interact with it. The Protege-OWL API is designed both for developing components that are executed inside the Protege-OWL editor's user interface and for developing stand-alone applications such as Swing applications, servlets, or Eclipse plug-ins.
The Protege-OWL API is an open-source Java library for the Web Ontology Language (OWL) and RDF(S), used in Semantic Web development. The API provides classes and methods to load and save OWL files, and to query and manipulate OWL data models. It also supports reasoning over OWL data models based on Description Logic engines. In addition, the API is optimised for implementing graphical user interfaces.