Accessing The Deep Web Computer Science Essay
The World Wide Web has grown from a few thousand web pages in 1993 to almost 2 billion web pages at present. It is a major source of information sharing, available in many forms: text, images, audio, video, tables, and so on. People access this information through web browsers; a web browser is an application for browsing the web on the internet. Search engines are used to retrieve specific data from this pool of heterogeneous information. In the rest of this chapter I will describe how people search for relevant information, how a search engine works, what a crawler is and how it works, and what the related literature says about the particular problem.
A search engine is a program that searches for information on the internet. The results for a query given by a user are presented as a list on a web page. Each result is a link to a web page containing information relevant to the query; the information can be a web page, an audio or video file, or a multimedia document. Web search engines work by storing information in a database, collected by crawling each link on a given web site. Google is considered the most powerful and heavily used search engine these days. It is a large-scale, general-purpose search engine which can crawl and index millions of web pages every day. It provides a good starting point for information retrieval but may be insufficient for complex information needs that require extra knowledge.
A web crawler is a computer program used to browse the World Wide Web in an automatic and systematic manner. It browses the web and saves the visited data in a database for future use. Search engines use crawlers to crawl and index the web, making information retrieval easy and efficient.
A conventional web crawler can only retrieve the surface web; crawling and indexing the hidden or deep web requires extra effort. The surface web is the portion of the web that can be indexed by conventional search engines. The deep or hidden web is the portion that cannot be crawled and indexed by conventional search engines.
DEEP WEB AND DIFFERENT APPROACHES TO DISCOVER IT
The deep web is the part of the web that is not part of the surface web and lies behind HTML forms or dynamic pages. Deep web content can be classified into the following forms:
Dynamic Content: content that is accessed by submitting input values in a form. Such pages require domain knowledge; without it, navigation is very hard.
Unlinked Content: pages that are not linked from other pages, which may prevent search engines from crawling them.
Private Web: sites that require registration and login information.
Contextual Web: pages whose content varies for different access contexts.
Limited Access Content: sites that limit access to their pages.
Non-HTML/Text Content: textual content encoded in images or multimedia files, which search engines cannot handle.
All of these create a problem for search engines and for the public, because a lot of information is invisible: an ordinary search-engine user may not even know that the most important information is inaccessible to him or her simply because of the above properties of web applications. The deep web is also believed to be a large source of structured data on the web, and retrieving it is a big challenge for the data management community. It is a misconception, however, that the deep web consists only of structured data; while much of its content is structured, it is not the only kind.
Search engines pre-cache web sites and crawl them locally. AJAX applications are event-based, so their events cannot be cached.
The entry point to the deep web is a form. When a crawler finds a form, it needs to guess the data with which to fill out the form [15, 16]. In this situation the crawler needs to react like a human.
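The guessing step above can be sketched as follows. This is a minimal illustration, assuming a hand-built dictionary of candidate values per field name; all names and values here are hypothetical, not part of any real crawler or library.

```python
from itertools import product

# Hypothetical candidate values a crawler might try per known field name.
CANDIDATE_VALUES = {
    "make": ["Toyota", "Honda", "Ford"],
    "author": ["Knuth", "Tanenbaum"],
    "year": ["2008", "2009", "2010"],
}

def guess_submissions(form_fields):
    """Yield one trial form submission per combination of candidate values.

    form_fields: list of field names discovered in an HTML form.
    Fields with no known candidates are left empty, mimicking a crawler
    that only fills in what it can recognise.
    """
    known = [f for f in form_fields if f in CANDIDATE_VALUES]
    pools = [CANDIDATE_VALUES[f] for f in known]
    for combo in product(*pools):
        submission = {f: "" for f in form_fields}
        submission.update(dict(zip(known, combo)))
        yield submission

# Example: a car-search form with a "make" field and a free-text "keyword".
subs = list(guess_submissions(["make", "keyword"]))
# One trial submission per candidate make; "keyword" stays empty.
```

A real crawler would submit each guess and keep only those that return result pages, pruning combinations that yield no data.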
There are many solutions to these problems, but all have their limitations. Some application developers provide a custom search engine, or they expose web content to traditional search engines based on an agreement; this is a manual solution and requires extra contribution from the application developers. Some web developers provide a vertical search engine on their web site, used to search for information specific to that site. Many companies maintain two interfaces for their web site: a dynamic interface for users' convenience and an alternate static view for crawlers. These solutions only discover the states and events of AJAX-based web content and ignore the content behind AJAX forms. This research work proposes a solution to discover the web content behind AJAX-based forms. Google has proposed a solution, but that project is still ongoing.
The process of crawling the web behind AJAX applications becomes more complicated when a form is encountered and the crawler needs to identify the form's domain in order to fill in the data and crawl the page. Another problem is that no two forms have the same structure. For example, a user looking for a car finds a different kind of form than a user looking for a book. These differing form schemas make reading and understanding forms more complicated. To make forms readable and understandable for crawlers, the whole web would have to be classified into small categories, each category belonging to a different domain and each domain sharing a common form schema, which is not possible. Another approach is the focused crawler. Focused crawlers try to retrieve only the subset of pages that contains the most relevant information about a particular topic. This approach leads to better indexing and more efficient searching than the first approach. However, it does not work in situations where a form has a parent form. For example, a student filling in a registration form enters a country name in one field, and the next combo box dynamically loads the city names of that particular country. To crawl the web behind AJAX forms, a crawler needs special functionality.
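The country-to-city dependency described above can be sketched as follows. The lookup table stands in for the AJAX call a real crawler would have to replay after choosing each parent value; the data and function names are purely illustrative.

```python
# Stand-in for the server-side AJAX endpoint that fills the child combo box.
CITY_OPTIONS = {
    "Pakistan": ["Lahore", "Karachi"],
    "Germany": ["Berlin", "Munich"],
}

def load_cities(country):
    """Simulate the AJAX request triggered when a country is selected."""
    return CITY_OPTIONS.get(country, [])

def enumerate_dependent_form(countries):
    """Enumerate every (country, city) pair the form can submit.

    The child field's options depend on the parent field's value, so the
    crawler must select each parent value before it can even see the
    child options.
    """
    submissions = []
    for country in countries:
        for city in load_cities(country):  # child options depend on parent
            submissions.append({"country": country, "city": city})
    return submissions

pairs = enumerate_dependent_form(["Pakistan", "Germany"])
# Two cities per country, four submissions in total.
```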
Traditional web crawlers discover new web pages by starting from known pages in a web directory. The crawler examines a web page, extracts new links (URLs), and then follows those links to discover new pages. In other words, the whole web is a directed graph and a crawler traverses the graph with a traversal algorithm. As mentioned above, an AJAX-based web site behaves like a single-page application, so crawlers are unable to crawl the portion of the web that is AJAX-based. AJAX applications have a series of events and states: each event acts as an edge and each state acts as a node. Crawling states has already been done in [14, 18], but that research left out the portion of the web that lies behind AJAX forms. The focus of this thesis is to crawl the web behind AJAX forms.
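The directed-graph view above can be sketched with a plain breadth-first traversal. The adjacency list below stands in for pages and their outgoing links (or, equally, AJAX states and their events); URLs are illustrative.

```python
from collections import deque

def crawl(start, links):
    """Breadth-first crawl: visit every page reachable from `start`.

    links: dict mapping a URL (or AJAX state) to the URLs it links to.
    Returns pages in the order they were discovered.
    """
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        page = queue.popleft()
        order.append(page)                 # hand the page to the indexer
        for target in links.get(page, []):
            if target not in seen:         # avoid re-crawling
                seen.add(target)
                queue.append(target)
    return order

site = {
    "/": ["/about", "/search"],
    "/about": ["/"],
    "/search": ["/results"],
}
visited = crawl("/", site)
# visited == ["/", "/about", "/search", "/results"]
```

For an AJAX application, `links` would be built by firing each state's events in a browser engine rather than by parsing anchor tags, but the traversal itself is the same.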
Indexing means creating and managing indexes of documents to make searching for and accessing desired data easy and fast. Web indexing is about creating indexes for different web sites and HTML documents; these indexes are used by search engines to make their searching fast and efficient. A major goal of any search engine is to build a database of large indexes. Indexes are based on organized information, such as topics and names, that serves as an entry point leading directly to the desired information within a corpus of documents. If the web crawler's index has space for only so many web pages, then those pages should be the ones most relevant to the particular topic. A good web index can be maintained by extracting all relevant web pages from as many different servers as possible. A traditional web crawler takes the following approach: it uses a modified breadth-first algorithm to ensure that every server has at least one web page represented in the index. Whenever a crawler encounters a new web page on a new server, it retrieves all of its pages and indexes them with relevant information for future use [7, 21]. The index contains the key words in each document on the web, with pointers to their locations within the documents; this index is called an inverted file. I have used this strategy to index the web behind AJAX forms.
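The inverted file described above can be sketched over an in-memory document collection: each word maps to the documents, and the positions within them, where it occurs. The document texts are illustrative.

```python
def build_inverted_index(docs):
    """Build an inverted file from a document collection.

    docs: dict of doc_id -> text.
    Returns word -> {doc_id: [positions]}, i.e. each key word with
    pointers to its locations within the documents.
    """
    index = {}
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            index.setdefault(word, {}).setdefault(doc_id, []).append(pos)
    return index

docs = {
    "d1": "deep web crawling",
    "d2": "web indexing and web search",
}
idx = build_inverted_index(docs)
# idx["web"] == {"d1": [1], "d2": [0, 3]}
```

A production index would add tokenisation, stemming, and on-disk posting lists, but the word-to-locations mapping is the essence of the inverted file.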
The query processor processes the query entered by the user in order to match results from the index file. The user enters a request in the form of a query, and the query processor retrieves some or all of the links and documents from the index file that contain information related to the query, presenting them to the user as a list of results [7, 14]. This is a simple interface that can find relevant information with ease. Query processors are normally built on breadth-first search, which ensures that every server containing relevant information has many web pages represented in the index file. This kind of design is important for users, as they can usually navigate within a server more easily than navigating across many servers. If a crawler identifies a server as containing useful data, users will likely be able to find what they are searching for.
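A toy query processor over such an inverted file might look like this: split the query into words, intersect the posting lists, and return the matching documents. The index layout follows the word-to-{doc_id: positions} shape of an inverted file; ranking is omitted, and the data is illustrative.

```python
def process_query(query, index):
    """Return the set of doc_ids containing every query word (AND semantics).

    index: word -> {doc_id: [positions]}, as in an inverted file.
    """
    words = query.lower().split()
    if not words:
        return set()
    result = None
    for word in words:
        postings = set(index.get(word, {}))    # doc_ids holding this word
        result = postings if result is None else result & postings
    return result

index = {
    "deep": {"d1": [0]},
    "web":  {"d1": [1], "d2": [0]},
}
hits = process_query("deep web", index)
# hits == {"d1"}: the only document containing both query words
```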
RESULT COLLECTION AND PRESENTATION
Search results are displayed to the user in the form of a list. The list contains the URLs and words that match the search query entered by the user. When the user makes a query, the query processor matches it against the index, finds the relevant matches, and displays them all on the result page. Several result collection and presentation techniques are available. One of them is grouping similar web pages based on the rate of occurrence of particular key words across different web pages.
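The grouping idea above can be sketched as follows: assign each result page to the group of the keyword that occurs most often in it. The pages, keywords, and grouping rule are illustrative only, a minimal stand-in for a real clustering technique.

```python
from collections import Counter

def group_results(pages, keywords):
    """Group result pages by their dominant keyword.

    pages: dict of url -> page text.
    keywords: set of key words to group by.
    Returns keyword -> list of urls whose most frequent keyword is that one.
    """
    groups = {k: [] for k in keywords}
    for url, text in pages.items():
        counts = Counter(w for w in text.lower().split() if w in keywords)
        if counts:
            dominant, _ = counts.most_common(1)[0]  # highest occurrence rate
            groups[dominant].append(url)
    return groups

pages = {
    "a.html": "ajax ajax crawling",
    "b.html": "crawling crawling forms",
}
groups = group_results(pages, {"ajax", "crawling"})
# groups == {"ajax": ["a.html"], "crawling": ["b.html"]}
```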
SYSTEM ARCHITECTURE AND DESIGN
EXPERIMENTS AND RESULTS