The Information Gathering Phase


The information gathering phase focuses on determining the characteristics of the target network in terms of:

• node availability: what hosts or network nodes are active?

• service availability and role: what network applications run on those entities, and what activity does each of them perform?

• network topology: how are the nodes organized?

• perimeter security and access control policies: what are the rules allowing or denying access to the network resources?

• application vulnerabilities: are there security weaknesses or misconfiguration problems on the exposed nodes and services?

The above task requires in-depth knowledge and an extremely clear topological view of the infrastructure to be analyzed. Unfortunately, complete knowledge of the network being protected often exists only in the mind of the network administrator, and some of this information is very difficult, if not impossible, to obtain from the outside, not least for security reasons. Part of this knowledge can be obtained by using a number of active scanning/analysis tools, in addition to the more traditional network management/browsing tools, each of which can only provide a subset of the information about the protected network. For example, the services active on a host can be determined by scanning its ports. In addition, the results obtained from the execution of one tool are often used as the basis for additional analysis, and possibly as input for the execution of other tools. That is, starting from the obtained information on the known hosts and services, more sophisticated vulnerability assessment tools can automatically check them for vulnerable applications, misconfigured services, and flawed operating system versions. Examples of these tools are Nessus [?], Nmap [?], and ISS's Internet Scanner [?].

These tools provide different functionalities, use different means to retrieve information about a network, and store that information in different formats. Coordinating their execution and composing their results is usually a human-intensive task, which the proposed assessment framework has fully automated. In more detail, information gathering is implemented by applying and combining the following techniques:

• Host and Service Detection : A basic network footprint analysis is performed by scanning for accessible hosts through node and port scanning techniques, which determine whether a host is reachable on the network together with the TCP, UDP, or RPC services listening on its ports. Such evidence often implies the existence of an associated running application: a listening port 80/tcp often implies an active Apache web server, and a listening 53/udp can flag the presence of an active DNS server. Several different port scanning techniques (i.e., traditional, sweep, half-open, Xmas, and stealth scans [5]) have been used in order to bypass security tools and filtering policies, and hence to obtain a more complete and accurate picture of which ports are open, closed, or filtered (a minimal connect-scan sketch appears after this list).

• OS Detection : Most of the available techniques for detecting the OS running on a remote system rely on implementation differences between OSs to identify the specific software variant. They work by using a set of network queries and a classification model. Detection is performed by issuing the queries against the remote host, typically by sending carefully crafted network packets, collecting the response packets, and feeding these responses into the classification model. If the different implementations of the software predictably generate different responses to the queries, the classification model can use them to reliably identify the remote OS. Several very common OS detection techniques have been used. The first, called IP stack fingerprinting, determines the remote OS type by comparing variations in OS IP stack implementation behavior. Ambiguities in the RFC definitions of core Internet protocols, coupled with the complexity involved in implementing a functional IP stack, enable multiple OS types (and often revisions between OS releases) to be identified remotely by generating specially constructed packets that invoke differentiable but repeatable behavior between OS types, e.g. to distinguish between Red Hat Linux and Microsoft Windows 7 (a simplified fingerprinting sketch appears after this list). Additionally, the pattern of listening ports discovered using service detection techniques may also indicate a specific OS type; this method is particularly applicable to out-of-the-box OS installations. Common fingerprinting tools such as Nmap provide a database of thousands of reference summary data structures for known OSs that are continuously kept up to date by adding new probes and re-examining the existing ones.

• Topology mapping and policy discovery : Discovering the physical layout and interconnections of network elements is a fundamental prerequisite for any security analysis. The lack of automated solutions for capturing physical topology information from the outside means that network security auditors are forced to request such information from the network management staff of the network under scrutiny and to determine the needed assessment parameters manually. However, it should be considered that every security assessment process is an evolving task requiring periodic and continuous updates.

Given the dynamic nature and the ever-increasing complexity of today's IP networks, keeping this manually determined information up to date is a daunting task. This situation clearly mandates the development of effective, general-purpose solutions for automatically discovering the up-to-date physical topology of an IP network. An additional challenge in the design of such solutions is dealing with the lack of established, industry-wide standards on the topology information maintained locally by each network element, and with the diversity of elements and protocols present in today's multi-vendor IP networks. Many techniques can be used to map a network starting from the commonly available application or infrastructure support services. For example, SNMP (Simple Network Management Protocol) [3] enabled devices are often not configured with security in mind, and can consequently be queried for network availability, usage, and topology data. Similarly, DNS servers can be queried to build lists of registered (and consequently likely active) hosts. Furthermore, routers on (or logically associated with) the target network can often be queried via specific routing protocol queries for known routes. This information can further aid the construction of a conceptual model of the topology of the target network. More sophisticated techniques (known under the term firewalking) can be used to traverse firewalls and packet filtering devices to gather network topology information. These techniques are generally based on sending legitimate packets which are allowed by the in-transit filtering policies (mainly ACK, SYN/ACK, FIN, ...) and evaluating the response. There is no common schema for analyzing the resulting response packets, but they often contain a lot of information about the network in the form of ICMP control messages [11] [2] [8]. For example, a traceroute-like analysis of Time Exceeded, Host Unreachable, and Network Unreachable packet responses can be used to determine both the list of gateway nodes in transit, with their topology gathered in terms of multiple paths from different observation points, and the list of controls/rulesets implementing the access control policies (a traceroute-like sketch appears after this list). Some other tests combined with massive scanning activities (e.g., idle scanning) can also be used to determine the absence of IP-based anti-spoofing filtering rules.

• Application-Layer Vulnerability Information Gathering : Specific scanning tools exploit the application services available on the network, first by identifying and enumerating them (e.g., file transfer protocol or hypertext transfer protocol based services, infrastructure services, etc.) and then by searching for the presence of default accounts, directory traversal attacks, form validation errors, insecure cgi-bin files, demonstration Web pages, and other known vulnerabilities. This can be accomplished through a variety of means operating both at the application level (service behavior and protocol compliance probing, banner grabbing, or exploiting stack-smashing buffer overflows for malicious code execution) and at the network level (packet forgery, hijacking TCP connections, port diversion, and ARP or IP spoofing); a banner-grabbing sketch appears after this list. Testing by exploit involves using a script or program designed to take advantage of a specific vulnerability. However, there is a well-known problem associated with this kind of vulnerability assessment that often prevents it from being used extensively as a basic security practice: the safety of the scanning tools used for information gathering. That is, many scanners can cause adverse effects on the network systems being tested by crashing the involved systems/services (e.g., due to the unpredictable results of exploiting a buffer overflow vulnerability), or, even worse, by leaving permanent damaging side effects and/or undesirable modifications to the system state (e.g., by putting a '+' in the .rhosts file). Consequently, the above analysis must be accomplished by using weakened exploit code whose sole purpose is to probe the target system to demonstrate the presence of a vulnerability, and which should not leave the system itself in a vulnerable or damaged state.
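To illustrate the host and service detection step, the following is a minimal sketch of a TCP connect scan in Python; the target address and port list are hypothetical, and a real tool such as Nmap adds the half-open, Xmas, and stealth variants mentioned above, which require raw sockets.

```python
import socket

def tcp_connect_scan(host, ports, timeout=1.0):
    """Minimal TCP connect scan: a completed handshake marks a port open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Hypothetical target: probe a few well-known service ports.
for port in tcp_connect_scan("192.0.2.10", [21, 22, 25, 53, 80, 443]):
    print(f"{port}/tcp open")
```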
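The OS detection idea can be made concrete with a deliberately simplified single-probe fingerprint: common IP stacks choose different initial TTL values (Linux and most Unix-likes typically 64, Windows 128, many network devices 255), so the TTL of an ICMP echo reply hints at the remote OS family. The sketch below is only a caricature of real stack fingerprinting, which compares many probe dimensions; it assumes the Scapy library is installed and raw-socket privileges are available.

```python
# Toy fingerprint based on the TTL of an ICMP echo reply (a simplification).
from scapy.all import IP, ICMP, sr1  # assumed installed: pip install scapy

INITIAL_TTLS = {64: "Linux/Unix-like", 128: "Windows", 255: "network device"}

def guess_os(host, timeout=2):
    reply = sr1(IP(dst=host) / ICMP(), timeout=timeout, verbose=0)
    if reply is None:
        return "no reply (host down or ICMP filtered)"
    # Round the observed TTL up to the nearest common initial value,
    # compensating for the hops the reply traversed on the way back.
    for initial in sorted(INITIAL_TTLS):
        if reply.ttl <= initial:
            return INITIAL_TTLS[initial]
    return "unknown"

print(guess_os("192.0.2.10"))  # hypothetical target
```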
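The traceroute-like analysis used for topology mapping can be sketched as follows: probes sent with increasing TTL values elicit ICMP Time Exceeded messages from each in-transit gateway, revealing the path toward the target. This Scapy sketch (hypothetical target, raw-socket privileges required) illustrates the principle only; it is not a firewalking tool.

```python
# Minimal traceroute sketch: each ICMP Time Exceeded (type 11) reply
# identifies one gateway on the path toward the target.
from scapy.all import IP, ICMP, sr1

def trace_path(target, max_hops=15):
    hops = []
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            hops.append((ttl, None))        # filtered or rate-limited hop
        elif reply.type == 11:              # Time Exceeded: an in-transit gateway
            hops.append((ttl, reply.src))
        else:                               # echo reply or unreachable: stop here
            hops.append((ttl, reply.src))
            break
    return hops

for ttl, gateway in trace_path("192.0.2.10"):  # hypothetical target
    print(ttl, gateway or "*")
```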
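Finally, banner grabbing, mentioned in the last item, is among the safest application-layer probes: it only reads the greeting a service volunteers on connect (as FTP, SMTP, and SSH servers do), without sending any exploit code. A minimal sketch against a hypothetical target:

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Read whatever greeting the service sends on connect (FTP, SMTP, SSH, ...)."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(256).decode(errors="replace").strip()
    except OSError:
        return None

# The banner often reveals product and version, which can then be matched
# against a vulnerability database in the assessment stage.
for port in (21, 22, 25):
    banner = grab_banner("192.0.2.10", port)
    print(f"{port}/tcp: {banner or 'no banner'}")
```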

The adopted philosophy is to perform each part of the overall analysis with the tool best suited for the specific task. This works best if a tool performs one specific task instead of implementing many different functionalities as a monolithic application. A tool that is able to perform many tasks should at least have parameters that can limit its operation to exactly what is needed. For example, Nmap is able to perform ping scans, port scans, OS fingerprinting, and RPC scans in a flexible and very effective way, so that it can be finely tuned to suit the specific needs of each initial discovery activity. On the other hand, the Metasploit framework, an advanced open-source platform providing a set of application programming interfaces (APIs) for packaging vulnerability exploit code in a fully automated fashion, has been used, together with the OpenVAS open-source remote security scanner, for detecting vulnerable applications and services running on the available hosts and for providing a warning level (to be used for quantitative assessment) for each possible vulnerability. Only open-source applications will be used for these purposes, in order to avoid licensing costs and legal issues.
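As a rough illustration of this orchestration philosophy, the sketch below drives Nmap as a subprocess with its XML output option and parses the open ports so that the result can feed the next tool in the pipeline. The host and port range are hypothetical; integrating OpenVAS or Metasploit results would follow the same compose-and-forward pattern.

```python
# Sketch: run Nmap from Python and feed its XML output to the next stage.
# Assumes the nmap binary is installed on the scanning host.
import subprocess
import xml.etree.ElementTree as ET

def nmap_scan(host, ports="1-1024"):
    xml_out = subprocess.run(
        ["nmap", "-sT", "-p", ports, "-oX", "-", host],  # "-oX -" writes XML to stdout
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(xml_out)
    # Collect (port, service-name) pairs for every port reported as open.
    results = []
    for p in root.iter("port"):
        if p.find("state").get("state") == "open":
            svc = p.find("service")
            results.append((p.get("portid"), svc.get("name") if svc is not None else "?"))
    return results

for port, service in nmap_scan("192.0.2.10"):  # hypothetical target
    print(f"{port}/tcp open ({service})")
```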

5 The Assessment Paradigm

The proposed automatic assessment framework has the objective of characterizing the security of a network by defining several metrics whose combined observation and measurement results in different security levels against which it will be possible to evaluate and compare the security of different infrastructures. The underlying methodology takes into account the specific security criteria associated with the individual evaluation metrics and combines them within a structured hierarchical decision process, assessing the relative importance of these criteria, comparing the alternatives for each of them, and determining an overall ranking associated with the confidence according to which each criterion will provide correct information about the security index describing a particular network component or aspect. The information about the most important components is put together to form a relevance matrix, which is used to generate a vector of fundamental security indexes representing the actual network security level. In order to automatically estimate and analyze the associated metric values, the aforementioned security criteria must be described in a rigorous, controllable way, according to a formal model containing as complete as possible information about the network infrastructure, topology, and deployed services (identified through port or vulnerability detection scans, or through assumptions based on configuration evidence). Such a network security model has to be designed by taking into account the models used by the existing network management and vulnerability scanning tools.

The key to a successful and effective assessment is the availability of quantitative data, as complete and accurate as possible, for the generation of the above network security analysis model. This implies assigning quantitative values to the individual security criteria and mathematically combining them in a way that demonstrates their relative or absolute effects on the overall system security. These effects can be generic numbers that only have bearing on each other, or they can be converted into specific cost or risk values. Such a quantitative analysis can be viewed as a refinement or partitioning of a more complex qualitative security analysis problem: it breaks the qualitative issues down into smaller factors to which easily obtainable quantities can be ascribed. The inherent hierarchical structure of the above analysis, starting from the principle of breaking the process down into individual measurable components, requires a method for accurately estimating the weight, and hence the relevance, of each component in the determination of an accurate and quantified network security degree. AHP (the Analytic Hierarchy Process) is an ideal decision-making methodology for this purpose, since it allows an easy and efficient identification of the security evaluation criteria, their weighting, and their analysis.
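To make the AHP weighting step concrete, the following is a minimal sketch in which a pairwise comparison matrix over three hypothetical security criteria yields the criterion weights via the principal eigenvector, together with Saaty's standard consistency check. The matrix values are illustrative, not measured, and the criteria names are assumptions for the example only.

```python
# Sketch of the AHP weighting step: the principal eigenvector of a pairwise
# comparison matrix gives the relative weight of each security criterion.
import numpy as np

# Hypothetical 1-9 Saaty-scale comparisons among three example criteria:
# service exposure, OS patch level, topology/filtering robustness.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()             # normalized criterion weights

# Consistency ratio CR = CI / RI, with random index RI = 0.58 for a 3x3 matrix;
# CR below 0.1 is conventionally taken as an acceptably consistent judgment.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / 0.58, 3))
```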
