A Hidden Page Web Crawler Model Computer Science Essay


The traditional search engines available over the internet search dynamically for relevant content over the web. However, a search engine has constraints in gathering the requested data from varied sources while keeping the data relevant. Web crawlers are designed to move only along specific paths of the web and are restricted from other paths because those paths are secured or, at times, restricted owing to the apprehension of threats. It is possible to design a web crawler capable of penetrating the paths of the web that are not reachable by traditional web crawlers, in order to obtain a better solution in terms of data, time and relevancy for a given search query. This paper makes use of a newer parser and indexer to arrive at a novel web crawler and a framework to support it. The proposed web crawler is designed to attend to HTTPS-based websites and web pages that need authentication to view and index. The user fills in a search form, and his or her credentials are used by the web crawler to authenticate with the secure web server. Once indexed, the secure web server falls inside the web crawler's accessible zone.

Keywords: deep web crawler, hidden pages, accessing secured databases, indexing.

Introduction

A web crawler has to take into account an array of parameters in order to execute a search query. The working of a deep web crawler differs from that of a traditional web crawler in several aspects: the web, treated as a graph by the crawler, has to be traversed along different paths, with diverse authentication and permissions needed to enter secure and restricted networks. Doing so is not simple, as it involves structuring and programming the web crawler accordingly. Hidden web content generally falls into one of the categories listed below.

Dynamic web content: the server returns dynamic content in response to a submitted query or a completed form. The primary search attribute for this kind of content is text fields.

Unlinked pages/content: several pages over the web are independent and are not connected by any inbound links, preventing them from being found by search engines. Such pages are said to lack backlinks.

Private pages/web: several sites administered by organisations contain copyrighted material and need a registration to access them. The website may also ask the user to authenticate. Most of these pages are encrypted and may also require a digital signature for the browser to access them.

Context-oriented web: these pages are accessible only from a specific range of IP addresses and are kept on an intranet, although they can also be reached from the internet.

Partial access web: several sites limit access to their pages to prevent search engines from displaying their content, using technical means such as CAPTCHA codes and restrictions on metadata that block the web crawler's entry.

Scripted web content: pages that are accessible only through links provided by web servers or through a namespace provided by the cloud. Some video, Flash content and applets also fall under this category.

Non-HTML content: certain content embedded in image and video files is not handled by search engines.

Beyond these categories of content, there are several other formats of data that are inaccessible to any web crawler. Most internet search happens through the Hyper Text Transfer Protocol (HTTP); the existence of other protocols such as Gopher, FTP and HTTPS also restricts the content that traditional search engines can reach.

This paper deals with techniques by which the above-mentioned information, known as deep content or hidden content to web crawlers, can be included in the search results of a traditional web crawler. The whole web can be categorised into two types: the traditional web and the hidden web [25, 26, 27]. The traditional web is the portion surfaced by a normally deployed, general-purpose search engine. The hidden web holds abundant and important information but cannot be traversed directly by a general-purpose search engine, since it places security constraints on crawlers. An internet survey estimates that there are about 300,000 hidden web databases [28]. Among the qualities of the hidden web are its broad coverage and high-quality content, exceeding all available print data.

Related works

There exist several other web crawlers intended to search hidden web pages; a survey of such web crawlers is presented here in order to identify their limitations and constraints and overcome them in the proposed framework. It has been shown that setting apart noisy and unimportant blocks from web pages can facilitate search and improve the web crawler, and this can even help in searching hidden web pages [3]. The most popular segmentation approaches are DOM-based segmentation [5], location-based segmentation [10] and vision-based page segmentation [4]. That work differentiates the features of a web page as blocks and models them to gain insight into and knowledge of the page, using two methods based on neural networks and SVMs that help the page to be found.

Robust, flexible Information Extraction (IE) systems are available for transforming web pages into algorithm- and program-readable structures, such as a relational database, which help the search engine to search easily [6]. Another line of work addresses the problem of extracting a website skeleton, i.e. the underlying hyperlink structure used to organise the content pages of a given website. The authors propose an automated, bot-like algorithm that discovers the skeleton of a given website. Named the SEW algorithm, it examines hyperlinks in groups and identifies the navigation links that point to pages in the next level of the website structure. The entire skeleton is then constructed by recursively fetching the pages pointed to by the discovered links and analysing them using the same process [7].

The issue of extracting search terms over millions and billions of items of information is examined in [8], which also touches upon scalability and how such approaches can be applied to very large databases. Another paper focuses on current-day crawlers and their inefficiencies in pulling the correct data. Its analysis covers how current-day crawlers retrieve content only from the publicly indexable web, i.e. the pages reachable by following hypertext links, ignoring the pages that require authorisation or prior registration for viewing [9]. The different characteristics of web data, the basic mechanism of web mining and its several types are also summarised, the reasons for using web mining in crawler functionality are well explained, and the limitations of some of the algorithms are listed. That work discusses the use of fields such as soft computing, fuzzy logic, artificial neural networks and genetic algorithms in the creation of a crawler, and gives the reader a view of future designs that can be built with these alternative technologies [11].

The later part of that paper describes the characteristics of web data, the different components and types of web mining, and the limitations of existing web mining methods. The applications that can be built with the help of these alternative techniques are also described. The survey is in-depth and covers systems that aim to dynamically extract information from unfamiliar resources. Intelligent web agents are available that search for relevant information using characteristics of a particular domain, obtained from the user profile, to organise and interpret the discovered information. Several such agents, including Harvest [15], FAQ-Finder [16], Information Manifold [17], OCCAM [18] and ParaSite [19], rely on predefined domain-specific template information and specialise in finding and retrieving specific information.

The Harvest [15] system depends on semi-structured documents to extract information and can search within LaTeX and PostScript files. It is mostly used for bibliography and reference search and is a great tool for researchers, as it searches with key terms such as authors and conference information. In the same way, FAQ-Finder [16] is a great tool for answering frequently asked questions (FAQs) by collecting answers from the web. Other systems, such as ShopBot [20] and the Internet Learning Agent [21], retrieve product information from numerous vendor websites using generic information about the product domain.

The evolving web architecture and the ways in which the behaviour of web search engines has to be altered to obtain the desired results are discussed in [12]. In [13] the authors discuss ranking-based search tools such as PubMed that allow users to submit highly expressive Boolean keyword queries but rank the query results by date only. Their proposed approach is to submit a disjunctive query with all query keywords, retrieve all the returned matching documents, and then rerank them.

In [14], the authors discuss the situation where the user fills up a form in order to get a set of relevant data, a process that becomes tedious in the long run and when the amount of data to be retrieved is huge. In the thesis by Tina Eliassi-Rad, several works that retrieve hidden pages are discussed. Many hidden-page techniques have been proposed, each a unique web crawler algorithm for hidden-page search [23]. An architectural model for extracting hidden web data is presented in [24]. The survey shows that much less work has been carried out on advanced form-based search algorithms that are capable of filling in forms and CAPTCHA codes.

The Approach and Working

Consider a situation where a user searches for the term "ipad". The main focus of a traditional crawler will be to list a set of search results consisting mostly of information about the search term and certain shopping options for "ipad". It might omit several websites with the best offers for the same search term because those sites require a registered user to give authentication credentials before viewing product pricing and review details. The search engine therefore needs to enter such web pages after filling in the username and password, and enabling the web crawler to do so is the primary concern of this paper.

For this purpose, an already available PIW (publicly indexable web) crawler is taken, the automatic form-filling concept is attached to it, and the results are analysed using several different search terms. The proposed algorithm analyses most of the websites and tends to pull out the pages related to the search query. The URLs of the pages are identified and added to the URL repository. The parser then comes into play and looks for any extended URLs from the primary source URL. The analyser works alongside the parser and extracts finite information from each web page: it scans the page for the search terms, analysing every sentence by breaking it up, and retrieves the essential information before presenting the page. The composer then stores the details of the web pages in a database. This is how a typical hidden-page-searching web crawler works.
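
To make this pipeline concrete, the following minimal Python sketch (not the paper's actual implementation) wires together a URL repository, a link-extracting parser and a term-matching analyser. It assumes the third-party requests and BeautifulSoup libraries, and the function name crawl is hypothetical.

```python
import re
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_urls, search_term, max_pages=50):
    """Minimal crawl loop: fetch pages, let the parser extract further
    URLs, and let the analyser record pages mentioning the search term."""
    frontier = deque(seed_urls)            # URL repository
    visited, results = set(), []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue                        # skip unreachable pages
        soup = BeautifulSoup(html, "html.parser")
        # Parser: collect extended URLs found on the primary page.
        for link in soup.find_all("a", href=True):
            frontier.append(urljoin(url, link["href"]))
        # Analyser: keep pages that contain the search term.
        text = soup.get_text(" ", strip=True)
        if re.search(re.escape(search_term), text, re.IGNORECASE):
            results.append(url)
    return results
```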

The analyser looks for the web pages containing the greatest number of terms relevant to the search query. It maintains a counter, which is initialised and then incremented whenever words in the web page match the search terms. The pages with higher counter values are analysed, numbered and presented page-wise as search results.
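
A minimal sketch of this counting scheme, assuming that relevance is simply the number of matching word occurrences (the paper does not give an exact formula):

```python
def relevance_count(page_text, search_terms):
    """Counter described above: +1 for every word in the page
    that matches one of the search terms."""
    terms = {t.lower() for t in search_terms}
    counter = 0
    for word in page_text.split():
        if word.strip(".,;:!?\"'()").lower() in terms:
            counter += 1
    return counter

def rank_pages(pages, search_terms):
    """pages: list of dicts like {"url": ..., "text": ...};
    pages with higher counter values are projected first."""
    return sorted(pages,
                  key=lambda p: relevance_count(p["text"], search_terms),
                  reverse=True)
```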

The proposed web crawler

The traditional working mode of the hidden web crawler is taken as a skeleton, and several improvements are made after identifying its limitations and constraints from the literature survey. The crawler has to be given capabilities to find hidden pages better than the existing hidden crawlers [2]. For this, an extra module has to be added to the existing modules of the hidden crawler. The added module, named the structure module, is capable of filling in authentication forms before entering a web site, if needed. The module enables the crawler to enter a secure (HTTPS) hypertext mark-up page. Almost all e-shopping sites use HTTPS as their transport protocol, so this ability makes it possible to obtain information from such sites, which are not visible to ordinary web crawlers. The web crawler writes down the websites found in a particular domain in text files, enabling easy access. The list separates good and bad pages according to certain attributes of each web page. The proposed web crawler will also be able to crawl through Ajax and JavaScript-oriented pages.
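
As an illustration of what such a structure module might do, the sketch below logs into a hypothetical HTTPS site with the credentials supplied in the search form and then fetches a protected page over the authenticated session. The URLs and form field names are placeholders, not taken from the paper.

```python
import requests

def fetch_protected_page(login_url, page_url, username, password):
    """Authenticate once, then reuse the session cookies to reach
    pages that ordinary crawlers cannot see."""
    session = requests.Session()
    # The field names "username"/"password" are assumptions; a real
    # structure module would read them from the site's login form.
    session.post(login_url,
                 data={"username": username, "password": password},
                 timeout=10)
    response = session.get(page_url, timeout=10)
    response.raise_for_status()
    return response.text

# Example with placeholder URLs:
# html = fetch_protected_page("https://shop.example.com/login",
#                             "https://shop.example.com/offers/ipad",
#                             "alice", "secret")
```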

Design Modules

The design modules for the web crawler prototype are described below.

Analyser

The primary component of the web crawler is the analyser, capable of looking into the web pages. The module follows the structure module, which presents a search form in which the user enters the search term and his or her credentials. The analyser scans each page and keeps the vital information in a text file. The outcome of the analyser phase is a text file consisting of all the website information, which is stored in a log database for further use with other search queries.
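
A minimal sketch of how the analyser's output could be persisted as text records for the log database; the record fields are assumptions, not the paper's actual format.

```python
import json
import time

def log_page(logfile, url, title, matched_terms):
    """Append one analysed page as a JSON line to the text log."""
    record = {
        "url": url,
        "title": title,
        "terms": sorted(matched_terms),
        "crawled_at": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```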

Figure 1: The Web Crawler architecture

Parser and Composer

The primary function of the parser in the proposed approach is to take a document and split it into indexable text segments, allowing it to work with different file formats and natural languages. Mostly, linguistic algorithms are applied as the parser; here we follow a traditional parser algorithm.
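
A simple sketch of such a parser, assuming plain tokenisation with Python's standard library rather than a full linguistic algorithm:

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strip HTML tags, keeping only the visible text of the page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def parse_document(html):
    """Split a document into lower-cased, indexable text segments."""
    extractor = TextExtractor()
    extractor.feed(html)
    text = " ".join(extractor.chunks)
    return [token.lower() for token in re.findall(r"[A-Za-z0-9]+", text)]
```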

The function of the indexer depends on the parser: it builds the indexes necessary to complement the search engine. This component decides the power of the search engine and determines the results for each search word. The proposed indexer has the capability to index terms and words from the secure web as well as the open web; this is where the difference between a normal web crawler and a hidden-page-searching web crawler shows. Google's web indexer is regarded as the best; it uses a ranking algorithm and adjusts the weight of web page terms according to their popularity and how recently they are updated, making it a dynamic indexer.

The proposed web indexer has the capability to match search words within web pages and find results, while also covering secure pages served over the HTTPS protocol.
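
A minimal inverted-index sketch in the same spirit, treating tokens from open and secure (HTTPS) pages alike; popularity-based ranking of the kind attributed to Google's indexer is not modelled here.

```python
from collections import defaultdict

def build_index(documents):
    """documents: {url: [tokens]} -> inverted index {term: set of urls}."""
    index = defaultdict(set)
    for url, tokens in documents.items():
        for token in tokens:
            index[token].add(url)
    return index

def search(index, query_terms):
    """Return the URLs that contain every query term."""
    postings = [index.get(term.lower(), set()) for term in query_terms]
    return set.intersection(*postings) if postings else set()
```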

Result analyser

The result analyser explores the search results and presents them in a GUI-based structure so that the developer can inspect them and make modifications. It works by taking a web page as input and producing all of its HTML tags as output.
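
A small sketch of such a result analyser, taking a web page as input and emitting the HTML tags it contains; the class and function names are illustrative only.

```python
from html.parser import HTMLParser

class TagLister(HTMLParser):
    """Collect every HTML tag of the input page for inspection."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def list_tags(html):
    lister = TagLister()
    lister.feed(html)
    return lister.tags
```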

Implementation: As part of the implementation, an open-source web crawler was identified. Several open-source web crawlers are available. Heritrix [29] is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler. WebSPHINX [30] (Website-Specific Processors for HTML INformation eXtraction) is based on Java and provides an interactive development environment for creating web crawlers. JSpider [31] is a highly configurable and customisable web spider engine written purely in Java. Web-Harvest [32] is an open-source web data extraction tool written in Java that focuses mainly on HTML/XML-based web sites. JoBo [33] is a simple program for downloading complete websites to a local computer.

To implement our specific method, which uses a different search pattern to mine searches via HTTPS, HTTP and FTP and is also capable of getting information from sites that require registration before access, GNU Wget was downloaded and modified. GNU Wget is a freely distributed, GPL-licensed software package for retrieving files via HTTP, HTTPS and FTP. It is a command-line tool.
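
As a rough illustration of the kind of invocation the modified tool builds on, the sketch below drives a stock GNU Wget from Python using only its standard options (--recursive, --level, --http-user, --http-password); the modifications described above are not shown, and the URL in the example is a placeholder.

```python
import subprocess

def wget_fetch(url, user=None, password=None, depth=1):
    """Invoke a stock GNU Wget to mirror a site, optionally with
    HTTP authentication (standard Wget options only)."""
    cmd = ["wget", "--recursive", f"--level={depth}",
           "--no-parent", "--timeout=10"]
    if user and password:
        cmd += [f"--http-user={user}", f"--http-password={password}"]
    cmd.append(url)
    subprocess.run(cmd, check=True)

# Example with a placeholder URL:
# wget_fetch("https://secure.example.com/catalog/", "alice", "secret")
```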

When examined, the tool showed a visible improvement: some of the results included pages retrieved over HTTPS and from a form-authenticated web site. Figure 2 shows the comparison.

Figure 2

Observations and Results:

Results were taken for several keywords to determine how the proposed hidden-page web crawler differs from a traditional web search engine, and a better search was found, with several secure and hidden pages included in the search results. The results showed that the modified version of GNU Wget retrieves pages that the traditional crawler cannot reach.

Conclusion:

As the use of search grows exponentially and individuals and corporations rely on searches for much of their decision making, search engines need newer and wider results, including pages that are rare and useful. The proposed hidden-page web crawler integrates several secure web pages into its indexing and produces better results. In future, the same approach can be applied to mobile search and extended to e-commerce applications.
