Analysis of Intrusion Detection Systems (IDS)

Introduction

Intrusion detection systems (IDS) were developed in the 1990s, when network hackers and worms appeared, initially to identify and report such attacks. These early systems could not stop attacks; they could only detect them and report them to network personnel.

Intrusion prevention systems combine both capabilities, i.e. threat detection and prevention. The detection process analyzes events for possible threats, while the prevention process stops detected threats and reports them to the network administrator.

1.1 Purpose & Scope

The main purpose of this project is to evaluate the security capabilities of different types of IDPS technologies in maintaining network security. It provides detailed information about the different classes and components of IDPS technologies, for example detection methods, security capabilities, prevention capabilities and IDPS internals. It focuses mainly on the detection techniques and responses offered by these technologies.

1.2 Audience

The information can be useful for computer network administrators and network security personnel who have little prior knowledge of IDPS technologies.

1.3 Project Structure

The project is organized into the following major sections:

  • Section 2 provides a general introduction to IDPS.
  • Section 3 provides detailed information about IDPS technologies: components and architecture, detection methodologies, security capabilities and prevention capabilities.
  • Section 4 covers IDPS internals and incident response.

Introduction to IDPS

This chapter explains the intrusion detection and prevention process, and the uses, functions and different types of IDPS.

Modern computer networks provide fast, reliable and critical information not only to a small group of people but to an ever-expanding group of users. This need led to the development of redundant links, notebook computers, wireless networks and many other technologies. On one side, these new technologies increased the importance and value of access services; on the other, they provide more paths for attack.

In the past, even with firewalls and anti-virus software in place, organizations suffered huge losses to their businesses within minutes, in terms of confidentiality and availability to legitimate clients. These modern threats highlighted the need for more advanced protection systems. Intrusion detection and prevention systems are designed to protect systems and networks from unauthorized access and damage.

An intrusion is an active sequence of related events that deliberately tries to cause harm, such as rendering a system unusable, accessing unauthorized information or manipulating such information. In computer terminology, intrusion detection is the process of monitoring the events in a computer network or host resource and analyzing them for signs of possible incidents, whether deliberate or incidental. The primary functions of an IDPS are identifying incidents, logging information about them, stopping them and preventing them from causing damage. The security capabilities of an IDPS can be divided into three main categories:

  • Detection: identification of malicious attacks on network and host systems
  • Prevention: stopping a detected attack from executing
  • Reaction: immunizing the system against future attacks.

On the basis of the location and type of events they monitor, there are two types of IDPS technologies: host-based and network-based. A network-based IDPS monitors traffic for a particular network segment and analyzes the network and application protocol activity for suspicious events; it is commonly deployed at the borders between networks. A host-based IDPS, on the other hand, monitors the activity of a single host and the events occurring within that host for suspicious activity.

There are two complementary approaches to detecting intrusions: the knowledge-based approach and the behavior-based approach. In the knowledge-based approach, an IDPS looks for specific traffic patterns, called signatures, which indicate malicious or suspicious content. In the behavior-based approach, an intrusion is detected by observing a deviation from the normal or expected behavior of the user or the system.

What is an IDS?

An intrusion detection system (IDS) can be defined as the tools, methods and resources used to identify, assess and report unauthorized or unapproved network activity.

An IDS detects attacks against a network or host and sends logs to a management console, providing information about malicious attacks on the network and host resources. IDSs fall into two main categories:

  • Host-Based Intrusion Detection System (HIDS): A HIDS requires software that resides on the system and can scan all host resources for activity. It logs any activity it discovers to a secure database and checks whether the events match any malicious event record listed in the knowledge base.
  • Network-Based Intrusion Detection System (NIDS): A NIDS usually sits inline on the network and analyzes network packets looking for attacks. A NIDS receives all packets on a particular network segment via one of several methods, such as taps or port mirroring, and carefully reconstructs the streams of traffic to analyze them for patterns of malicious behavior.

The basic process for an IDS is that it passively collects data, then preprocesses and classifies them. Statistical analysis can be performed to determine whether the information falls outside normal activity; if so, it is matched against a knowledge base. If a match is found, an alert is sent. Figure 1.1 outlines this activity.

Fig 1.1 Standard IDS System (components: host system, pre-processing, statistical analysis, signature matching, knowledge base, long-term storage, alert manager, response manager, GUI)
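As a rough, hypothetical sketch of the flow shown in Figure 1.1, the pipeline can be read as: preprocess the event, run a statistical check, match it against the knowledge base, and raise an alert on a match. The event fields, threshold and knowledge-base entry below are invented for illustration and do not correspond to any particular product.

```python
# Hypothetical, simplified IDS pipeline mirroring Figure 1.1:
# preprocess -> statistical analysis -> knowledge-base match -> alert.

KNOWLEDGE_BASE = [
    {"id": 1001, "name": "Suspicious login burst", "field": "failed_logins", "threshold": 5},
]

def preprocess(raw_event):
    # Put the raw event into a canonical format (here: a plain dict).
    return {"host": raw_event.get("host", "unknown"),
            "failed_logins": int(raw_event.get("failed_logins", 0))}

def is_statistical_outlier(event, baseline_mean=1.0, baseline_std=1.0):
    # Flag anything more than three standard deviations from the baseline.
    return abs(event["failed_logins"] - baseline_mean) > 3 * baseline_std

def match_knowledge_base(event):
    # Compare the event against each signature-like record.
    return [sig for sig in KNOWLEDGE_BASE
            if event.get(sig["field"], 0) >= sig["threshold"]]

def analyze(raw_event):
    event = preprocess(raw_event)
    if not is_statistical_outlier(event):
        return None                      # falls inside normal activity
    matches = match_knowledge_base(event)
    if matches:
        return {"alert": matches[0]["name"], "host": event["host"]}
    return None

print(analyze({"host": "10.0.0.5", "failed_logins": 9}))
```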

What is an IPS?

IPS technology has all the capabilities of an intrusion detection system and can also attempt to stop possible incidents. IPS technologies are differentiated from IDSs by one characteristic: the prevention capability. Once a threat is detected, an IPS prevents it from succeeding. An IPS can be host-based (HIPS), which works best at protecting applications, or network-based (NIPS), which sits inline and stops and prevents the attack.

A typical IPS performs the following actions upon the detection of an attack:

  • It terminates the network connection or user session.
  • It blocks access to the target, i.e. an IP address, user account or server.
  • It reconfigures other devices, i.e. a firewall, switch or router.
  • It replaces the malicious portion of an attack to make it benign.
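One concrete way such a device reconfiguration can be realized is by pushing a blocking rule to a host firewall. The sketch below assumes a Linux host with iptables and sufficient privileges; real IPS products typically use vendor-specific APIs instead.

```python
import subprocess

def block_source_ip(ip_address: str) -> None:
    """Insert an iptables rule dropping all traffic from the attacking IP.

    Assumes a Linux host with iptables available and sufficient privileges.
    """
    subprocess.run(
        ["iptables", "-I", "INPUT", "-s", ip_address, "-j", "DROP"],
        check=True,
    )

# Example: block the host that triggered the alert (placeholder address).
# block_source_ip("203.0.113.45")
```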

An IPS typically consists of the following main components:

  • Traffic Normalizer: interprets the network traffic and performs packet analysis and packet reassembly; the traffic is then fed into the detection engine and service scanner.
  • Service Scanner: builds a reference table that classifies the information and helps the traffic shaper manage the flow of information.
  • Detection Engine: performs pattern matching against the reference table.

Figure 1.2 outlines this process:

Fig 1.2 Standard IPS (components: traffic normalizer, system scanner, detection engine, signature matching, reference table, long-term storage, alert manager, response manager, GUI)

Uses of IDPS Technologies

The identification of possible incidents is the main focus of an IDPS; for example, if an intruder has successfully compromised a system by exploiting a vulnerability in it, the IDPS can report this to security personnel. Logging of information is another important function of an IDPS; this information is vital for security staff in the further investigation of an attack. An IDPS can also identify violations of an organization's security policy, whether intentional or unintentional, for example unauthorized access to a host or application.

Identification of reconnaissance activity, which is an indication of an imminent attack, is another major capability of an IDPS; an example is the scanning of hosts and ports in preparation for launching further attacks. In this case, an IDPS can either block the reconnaissance activity or alter the configuration of other network devices.

Functions of IDPS Technologies

The main difference between the different types of IDPS technologies is the type of events they can recognize. The following are some of their main functions:

  • Recording information about observed events; this information can be stored locally or sent to a logging server.
  • Sending alerts is one of the vital functions of an IDPS. Alerts are sent through different methods, e.g. email, SNMP traps or syslog messages (see the sketch after this list).
  • Some IDPSs can change their security profile when a new threat is detected; for example, they might then collect more detailed information about the threat.
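For illustration, alerting over syslog can be as simple as the following sketch using Python's standard SysLogHandler; the server address, port and message are placeholders.

```python
import logging
import logging.handlers

# Send IDPS alerts to a central syslog server (address is a placeholder).
logger = logging.getLogger("idps")
logger.setLevel(logging.WARNING)
handler = logging.handlers.SysLogHandler(address=("logserver.example.com", 514))
logger.addHandler(handler)

logger.warning("IDPS alert: possible port scan from 198.51.100.7")
```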

An IDPS not only performs detection but also performs prevention by stopping the threat from succeeding. The following are some prevention capabilities:

  • It can stop the attack by terminating either the network connection or the user session, or by blocking access to the target host.
  • It can change the configuration of other network devices (firewalls, routers and switches) to block or disrupt the attack.
  • Some IDPSs can change the contents of a malicious IP packet, for example by replacing the header of an IP packet with a new one.

Types of IDPS Technologies

IDPS technologies can be divided into the following two major categories:

  • Network-Based IDPS
  • Host-Based IDPS

Network-Based IDPS

A network-based IDPS monitors network traffic for a particular network segment and analyzes the network and application protocol activity to identify anything suspicious.

A network-based IDPS usually sits inline on the network and analyzes network packets looking for attacks. It receives all packets on a particular network segment, including switched networks, and carefully reconstructs the streams of traffic to analyze them for patterns of malicious behavior. Such systems are equipped with facilities to log their activities and report or alarm on questionable events. The main strengths of network-based IDPS are:

  • Packet Analysis: Network-based IDPSs perform packet analysis, examining the headers of all IP packets for malicious content. This helps in detecting common denial of service (DoS) attacks. An example is the LAND attack, in which both the source and destination addresses and the source and destination ports are the same as those of the target machine; this causes the target machine to open a connection with itself, so that it either performs slowly or crashes (see the sketch after this list). A network-based IDPS can also investigate the payload of an IP packet for specific commands.
  • Real-Time Detection & Response: A network-based IDPS detects attacks in real time, as they occur, and provides a faster response. For example, if a hacker initiates a TCP-based DoS attack, the IDPS can drop the connection by sending a TCP reset.
  • Malicious Content Detection: A network-based IDPS can remove and replace the suspicious portion of an attack. For example, if an email has an infected attachment, the IDPS removes the infected file and permits the clean email.
  • Evidence for Prosecution: A network-based IDPS monitors real-time traffic, and if an attack is detected and captured the hacker cannot remove the evidence, because the captured attack contains not only the data but also information about his or her identity, which helps in prosecution.
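As a minimal sketch of the LAND-attack check described above, the following uses the scapy library to flag packets whose source and destination address and port are identical; the capture filter and required privileges are assumptions, and a production sensor would do far more.

```python
from scapy.all import sniff, IP, TCP

def looks_like_land_attack(pkt) -> bool:
    # A LAND packet has identical source/destination address and port.
    return (pkt.haslayer(IP) and pkt.haslayer(TCP)
            and pkt[IP].src == pkt[IP].dst
            and pkt[TCP].sport == pkt[TCP].dport)

def inspect(pkt):
    if looks_like_land_attack(pkt):
        print(f"Possible LAND attack: {pkt[IP].src}:{pkt[TCP].sport}")

# Requires sufficient privileges to capture traffic on the segment.
sniff(filter="tcp", prn=inspect, store=False)
```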

Host-Based IDPS

A host-based system monitors the characteristics of a single host and the events occurring within that host for suspicious activity. It requires software that resides on the system and monitors network traffic, syslogs, processes, file access and modification, and configuration or system changes. It logs any activity it discovers to a secure database and checks whether the events match any malicious event record listed in the knowledge base. Some of the major strengths of host-based IDPS are as follows:

  • Verification of Attack: A host-based IDPS uses logs which contain events that have actually occurred, so it has the advantage of knowing whether the attack was successful. This type of detection is more accurate and generates fewer false alarms.
  • Monitoring of Important Components: A host-based IDPS monitors key components, for example executable files, specific DLLs and the NT registry, any of which can be used to cause damage to the host or network.
  • System-Specific Activity: A host-based IDPS monitors user and file access activity. It monitors logon and logoff procedures against the current policy, and also monitors file access, for example the opening of a non-shared file.
  • Switched & Encrypted Environments: Host-based IDPSs provide greater visibility into a purely switched environment by residing on as many critical hosts as needed. Encryption is a challenging problem for network-based IDPS but not a major problem for host-based IDPS: if the host in question uses log-based analysis, encryption has no impact on what goes into the log files.
  • Near Real-Time Detection: A host-based IDPS relies on log analysis, which is not true real-time analysis, but it can detect and respond as soon as the log is written to and compared against the active attack signatures.
  • Real-Time Detection & Response: A stack-based IDPS monitors packets as they traverse the TCP/IP stack. It examines inbound and outbound packets in real time to see whether an attack is being executed, and if it detects an attack it can respond to it in real time.

Section 2: IDPS Analysis Schemes

IDPSs perform analysis. This chapter is about the analysis process: what analysis does and the different phases of analysis.

2.2 Analysis

In the context of intrusion detection and prevention, analysis is the organization of the constituent parts of data and their relationships in order to identify any anomalous activity of interest. Real-time analysis is analysis done on the fly as the data travels the path to the network or host. The fundamental goal of intrusion detection and prevention analysis is to improve an information system's security.

This goal can be further broken down:

  • Create records of relevant activity for follow-up.
  • Determine flaws in the network by detecting specific activities.
  • Record unauthorized activity for use in forensics or criminal prosecution of intrusion attacks.
  • Act as a deterrent to malicious activity.
  • Increase accountability by linking the activities of one individual across systems.

2.3 Anatomy of Intrusion Analysis

There are many possible analysis schemes, but in order to understand them the intrusion analysis process can be broken down into the following four phases:

  • Preprocessing
  • Analysis
  • Response
  • Refinement

1. Pre-Processing

Preprocessing is the key function once the data have been collected from an IDPS sensor. The data are organized in some fashion for classification. Preprocessing determines the format the data are put into, which is usually some canonical format, or the data may be placed in a structured database. Once the data are formatted, they are broken down further into classifications.

These classifications depend on the analysis scheme being used. For example, if rule-based detection is being used, the classification will involve rules and pattern descriptors. If anomaly detection is used, a statistical profile is built from different algorithms in which user behavior is baselined over time, and any behavior that falls outside of that classification is flagged as an anomaly.

Upon completion of the classification process, the data are concatenated and put into a defined version or detection template of some object by replacing variables with values. These detection templates populate the knowledge base, which is stored in the core analysis engine.

2. Analysis

Once preprocessing is completed, the analysis stage begins. The data record is compared to the knowledge base and is either logged as an intrusion event or dropped; then the next data record is analyzed. The next phase is response.

3. Response

Once information is logged as an intrusion, a response is initiated. An inline sensor can provide real-time prevention through an automated response. The response is specific to the nature of the intrusion and the analysis scheme used. The response can be performed automatically, or manually after someone has analyzed the situation.

4. Refinement

The final phase is the refinement stage. This is where the fine tuning of the system is done, based on the previous usage and detected intrusions. This gives the opportunity to reduce false-positive levels and to have a more accurate security tool.

Analysis Process By Different Detection Methods

The intrusion analysis process depends entirely on the detection method being used. The following describes the four phases of intrusion analysis for the different detection methods:

Analysis Process By Rule-Based Detection

Rule-based detection is also known as signature detection, pattern matching and misuse detection. It uses pattern matching to detect known attack patterns. The four phases of the intrusion analysis process as applied in a rule-based detection system are as follows:

  • Preprocessing: Data are collected about intrusions, vulnerabilities and attacks and then put into a classification scheme or pattern descriptors. From the classification scheme a behavior model is built and then put into a common format, with fields such as:
  • Signature Name: The given name of the signature
  • Signature ID: The unique ID for the signature
  • Signature Description: A description of the signature and what it does
  • Possible False Positive Description: An explanation of any "false positives" that may appear to be an exploit but are actually normal network activity
  • Related Vulnerability Information: This field holds any related vulnerability information

The pattern descriptors are typically either content-based signatures, which examine the payload and header of a packet, or context-based signatures, which evaluate only the packet headers to identify an alert. Pattern descriptors can be atomic (single) or composite (multiple): an atomic descriptor requires only one packet to be inspected to identify an alert, while a composite descriptor requires multiple packets. The pattern descriptors are then put into a knowledge base that contains the criteria for analysis.
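For illustration only, a pattern descriptor with the fields listed above might be represented as a simple record like the following (all values are invented):

```python
# Hypothetical content-based signature record with the fields listed above.
signature = {
    "name": "Suspicious screensaver attachment",
    "id": "SIG-2001",
    "description": "Email with subject 'free screen savers' and a .exe attachment",
    "false_positive_note": "Legitimate mail discussing screensavers may match",
    "related_vulnerability": None,
    "type": "content-based",   # examines payload as well as headers
    "atomic": True,            # a single packet is enough to raise an alert
}
```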

  • Analysis: The event data are formatted and compared against the knowledge base using a pattern-matching analysis engine, which looks for defined patterns known to be attacks.
  • Response: If the event matches the pattern of an attack, the analysis engine sends an alert. If the event is a partial match, the next event is examined; partial matches can only be analyzed with a stateful detector, which has the ability to maintain state, as many IDSs do. Different responses can be returned depending on the specific event records.
  • Refinement: Refinement of pattern-matching analysis comes down to updating signatures, because an IDS is only as good as its most recent signature update.

Analysis Process By Profile-Based Detection (Anomaly Detection)

An anomaly is something that is different from the norm or that cannot easily be classified. Anomaly detection, also referred to as profile-based detection, creates a profile system that flags any event that strays from the normal pattern and passes this information on to output routines. The analysis process for profile-based detection is as follows:

  • Preprocessing: The first step is collecting the data; the behavior considered normal on the network is baselined over a period of time. The data are put into numeric form and then formatted. The information is then classified into a statistical profile, based on different algorithms, which forms the knowledge base.
  • Analysis: The event data are typically reduced to a profile vector, which is then compared to the knowledge base. The contents of the profile vector are compared to a historical record for that particular user, and any data that fall outside the baseline of normal activity are labeled as a deviation (see the sketch after this list).
  • Response: At this point, a response can be triggered either automatically or manually.
  • Refinement: The profile vector history is typically deleted after a specific time. In addition, different weighting systems can be used to give more weight to recent behavior than to past behavior.
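To make the profile-vector comparison concrete, here is a minimal sketch that baselines one numeric feature over a training period and flags observations falling outside three standard deviations; the feature and the deviation band are illustrative assumptions, not a description of any specific product.

```python
from statistics import mean, stdev

def build_profile(history):
    # Baseline the user's observed values (e.g. MB transferred per day).
    return {"mean": mean(history), "std": stdev(history)}

def is_deviation(profile, observed, n_std=3.0):
    # Anything outside the baseline band is labelled a deviation.
    return abs(observed - profile["mean"]) > n_std * profile["std"]

history = [20, 22, 19, 25, 21, 23, 20]      # training-period observations
profile = build_profile(history)
print(is_deviation(profile, 240))           # True: far outside normal activity
```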

Section 3: IDPS Technologies

This section provides an overview of different technologies. It covers the major components, architecture, detection methodologies & security capabilities of IDPS.

Components

The following are the major components of an IDPS architecture:

Sensors & Agents: Sensors and agents monitor and analyze network traffic for malicious activity.

Sensor: The technologies that use sensors are network-based intrusion detection and prevention systems, wireless intrusion detection and prevention systems, and network behavior analysis systems.

Agent: The term "agent" is used for host-based intrusion detection and prevention technologies.

  • Database Server: The information recorded by the sensors and agents is kept safely in a database server.
  • Console: A console is software that provides an interface for the IDPS users. Console software is installed on the administrator's PC. Consoles are used for configuring, monitoring, updating and analyzing the sensors or agents.
  • Management Server: A centralized device that receives information from sensors and agents and manages that information. Some management servers can also perform analysis on the information provided by sensors and agents, for example correlation of events. A management server can be either appliance-based or software-based.

3.1 Network architecture

IDPS components are usually connected to each other either through the organization's network or through a separate management network. If they are connected through a management network, each agent or sensor has an additional interface, known as the management interface, that connects it to the management network. For security reasons, an IDPS cannot pass any traffic between its management interface and its normal network interface. The other components of an IDPS, i.e. consoles and database servers, are attached only to the management network. The main advantages of this type of architecture are that it hides the IDPS's existence from hackers and intruders and ensures it has enough bandwidth to function under DoS attacks.

Another way to conceal the information and communication is to create a separate VLAN for the IDPS's communication with its management components. This type of architecture does not provide as much protection as a dedicated management network does.

3.2 Security capabilities

IDPSs provide different security capabilities. Common capabilities are information gathering, logging, detection and prevention.

3.2.1 Information gathering

Some IDPSs gather general characteristics of a network, for example information about its hosts. From observed activity they identify the hosts and the operating systems and applications they use.

3.2.2 Logging capabilities

When malicious activity is detected by the IDPS, it performs logging. Logs contain the date and time, event type, rating and prevention action, if one was performed. These data are helpful in investigating the incident. Some network-based IDPSs capture packets, while host-based IDPSs record user IDs. IDPS technologies allow logs to be stored locally and copies to be sent to a centralized logging server, e.g. syslog.

3.2.3 Detection capabilities

The main responsibility of an IDPS is to detect malicious activity. Most IDPSs use a combination of detection techniques. The accuracy and the types of events they detect depend greatly on the type of IDPS. An IDPS gives good results once it is properly tuned; tuning improves accuracy, detection and prevention. The following are some of the tuning capabilities:

  • Thresholds: A threshold is a value that sets the limit between normal and abnormal behavior, for example the maximum number of login attempts; if the attempts exceed the limit, the behavior is considered anomalous.
  • Blacklists & Whitelists: A blacklist is a list of TCP or UDP port numbers, users, applications, file extensions and so on that are associated with malicious activity. A whitelist is a list of discrete entities that are known to be benign; it is mainly used to reduce false positives.
  • Alert Settings: These enable the IDPS to suppress alerts if an attacker generates too many alerts in a short time, and to block all future traffic from that host. Suppressing alerts prevents the IDPS from being overwhelmed. (A hypothetical combined configuration is sketched after this list.)
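A hypothetical tuning configuration combining these settings might look like the following sketch; every value shown is an invented example.

```python
# Hypothetical tuning settings for an IDPS sensor (all values illustrative).
tuning = {
    "thresholds": {"max_failed_logins": 5},           # beyond this is anomalous
    "blacklist": {"ports": [23, 2323], "file_extensions": [".scr", ".pif"]},
    "whitelist": {"hosts": ["10.0.0.10"]},            # known benign, cuts false positives
    "alerting": {"suppress_after": 100, "window_seconds": 60},
}
```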

3.2.4 Prevention Capabilities

IDPSs offer multiple prevention capabilities, and the prevention capability can be configured for each type of alert. Depending on the type of IDPS, some sensors are more intelligent: they have learning and simulation modes that enable them to know when an action should be performed, reducing the risk of blocking benign activity.

3.2.5 Types of Alarms

When an IDPS detects an intrusion it generates some type of alarm, but no IDPS generates 100% true alarms. An IDPS can generate an alarm for legitimate activity and can fail to alarm when an actual attack occurs. These alarms can be categorized as follows:

  • False Alarms: When an IDPS fails to accurately indicate what is actually happening in the network, it generates false alarms. False alarms fall into two main categories:
    • False Positives: These are the most common type of alarm. A false positive occurs when an IDPS generates an alarm based on normal network activity.
    • False Negatives: When an IDPS fails to generate an alarm for an intrusion, this is called a false negative. It happens when the IDPS is programmed to detect an attack but the attack goes undetected.
  • True Alarms: When an IDPS accurately indicates what is actually happening in the network, it generates true alarms. True alarms fall into two main categories:
    • True Positives: A true positive occurs when an IDPS correctly sends an alarm in response to actually detecting an attack in the traffic. A true positive is the opposite of a false negative.
    • True Negatives: A true negative represents a situation in which an IDPS signature does not send an alarm while examining normal user traffic. This is the correct behavior.

Architecture Design

Architecture design is of vital importance for the proper implementation of an IDPS. The considerations include the following:

  • The location of sensors or agents.
  • The reliability of the solution and the measures taken to achieve that reliability, for example using multiple sensors to monitor the same activity as a backup.
  • The number and location of the other IDPS components, for usability, redundancy and load balancing.

The systems with which the IDPS needs to interface, including:

  • Systems to which it provides data, e.g. log servers and management software.
  • Systems through which it initiates prevention responses, e.g. routers, firewalls or switches.
  • The systems used to manage the IDPS components, e.g. network management software.
  • The protection of IDPS communications on the standard network.

3.3 Maintenance & Operation

Most IDPSs are operated and maintained through a graphical user interface called a console. It allows administrators to configure and update the sensors and servers as well as monitor their status. The console also allows users to monitor and analyze IDPS data and generate reports. Separate accounts can be set up for administrators and users.

A command line interface (CLI) is also used by some IDPS products. The CLI is used for local administration, but it can also be used for remote access through an encrypted tunnel.

3.3.1 Common Use of Consoles

Many consoles offer drill-down facilities; for example, if an IDPS generates an alert, the console gives more detailed information in layers. It also gives extensive supporting information to the user, e.g. packet captures and related alerts.

Reporting is an important function of the console. Users can configure the console to send reports at set times. Reports can be transferred or emailed to the appropriate user or host, and users can obtain and customize reports according to their needs.

3.3.2 Acquiring & applying updates

There are two types of updates: software updates and signature updates. Software updates enhance the performance or functionality of the IDPS and fix bugs, while signature updates add detection capabilities or refine existing ones.

Software updates are not limited to any particular component but can cover all or any of them, i.e. sensors, consoles, servers and agents. Most updates are available from the vendor's web site.

Detection Methodologies

Most IDPSs use multiple detection methodologies for broad and accurate detection of threats. The primary detection methodologies are the following:

  • Signature Based Detection
  • Anomaly Based Detection
  • Stateful Protocol Analysis

Signature Based Detection

The term signature refers to a pattern that corresponds to a known threat. In signature-based detection, predefined signatures stored in a database are compared with the network traffic, looking for a series of bytes or a packet sequence known to be malicious; for example, an email with the subject "free screen savers" and an attachment named screensaver.exe, which are characteristics of a known form of malware, or a telnet login attempt with a false username, which is a violation of an organization's security policy.

A signature is a string, part of what an attacking host sends to an intended victim host, that uniquely identifies a particular attack. Input strings are passed on to detection routines that match them against patterns in the IDPS's signature files. This is the simplest detection method, because the current unit of activity, which could be either a packet or a log entry, is compared with the predefined list of signatures using a string comparison.

This is very effective for detecting known threats but ineffective in detecting unknown ones. Signature-based technologies have very little understanding of network and application protocols, and because of this they are ineffective at handling complex data communication. For example, in the case above, if the attacker changes the name of the file to screensaver2.exe instead of screensaver.exe, a signature-based technology using simple string comparison cannot detect it as malware (see the sketch below). For this reason, whenever a new threat is detected (by other means), a new signature has to be created to stop such attacks in future.
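The simple string comparison described here amounts, in essence, to something like the following sketch (the signature strings are invented); note how the renamed attachment slips past the match, illustrating the weakness discussed above.

```python
# Naive string-based signature matching over a packet payload or log entry.
SIGNATURES = [b"screensaver.exe", b"free screen savers"]

def matches_signature(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

print(matches_signature(b"attachment=screensaver.exe"))    # True: known threat
print(matches_signature(b"attachment=screensaver2.exe"))   # False: renamed file evades the signature
```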

Anomaly-Based Detection

Anomaly-based detection is based on the behavior of an event in a system or network. It compares observed events against what is considered normal over a period of time, looking for significant deviations. When unusual behavior occurs, whether in events, state or content, it triggers an alarm. This methodology uses profiles, which are built over a period of time by monitoring the characteristics of typical activity; for example, a profile for a network might show that web activity comprises an average of 20% of network bandwidth, and any greater bandwidth usage would be considered an anomaly.

The initial profile is generated over a period of time called a training period. Profiles set the baseline for the normal, accepted behavior of the network, and any activity that deviates from this baseline is considered anomalous; for example, the normal port used by HTTP is 80, so HTTP traffic on other, non-standard ports would be considered abnormal. Profiles can be either dynamic or static.

  • Dynamic Profiles: Dynamic profiles are developed by monitoring a typical activity over a period of time called training period.
  • Static Profiles: Static profiles are configured manually.

It is difficult for a network administrator to configure profiles manually because of the complexity of network traffic, whereas a dynamic profile can adjust itself as new events are observed. Networks and systems change over time; in such situations static profiles need to be updated manually, while dynamic profiles learn and update themselves accordingly. Dynamic profiles are, however, susceptible to evasion techniques: for example, an attacker can perform a small amount of malicious activity occasionally, then slowly increase the quantity and frequency of that activity.

One key distinction between anomaly detection and other analysis schemes is that anomaly-based schemes define not only activities that are not allowed but also activities that are allowed. In addition, anomaly detection is typically used for its ability to capture both statistical behavior and characteristic behavior: statistics are quantitative, while characteristics are more qualitative. For example, "a server's UDP traffic never exceeds 25 percent of capacity" describes a statistical behavior, and "user X does not normally FTP files outside of the company" describes a characteristic behavior.

Anomaly-based detection methods are quite effective at detecting previously unknown threats because they detect network traffic that is new or unusual, in contrast to signature-based detection.

Stateful Protocol Analysis

Stateful protocol analysis is another comparison technique, which depends on pre-defined universal standards that specify how a particular protocol should behave. It compares predetermined profiles of generally accepted definitions of benign protocol activity for each protocol state against observed events to identify deviations, looking for protocol violations or misuse based on RFC-defined behavior. Stateful protocol analysis relies on vendor-developed universal profiles that specify how particular protocols should and should not be used.

The “protocol analysis” performed by stateful protocol analysis methods includes reasonableness checks for individual commands, such as minimum and maximum lengths for arguments. If a command typically has a username argument, and usernames have a maximum length of 20 characters, then an argument with a length of 1000 characters is suspicious. If the large argument contains binary data, then it is even more suspicious.
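Such a reasonableness check can be expressed very simply; the sketch below uses the example limits from the text (a 20-character username and a 1000-character argument) and is purely illustrative.

```python
MAX_USERNAME_LEN = 20   # maximum length expected by the protocol profile

def username_argument_suspicious(arg: bytes) -> bool:
    if len(arg) > MAX_USERNAME_LEN:
        return True                       # e.g. a 1000-character argument
    if any(b < 0x20 or b > 0x7e for b in arg):
        return True                       # binary data where text is expected
    return False

print(username_argument_suspicious(b"alice"))       # False
print(username_argument_suspicious(b"A" * 1000))    # True
```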

Stateful protocol analysis inspects the headers as well as the contents of an IP packet up to the application layer. Using the information obtained from the various layers, i.e. the network, transport and session layers, stateful protocol analysis decides whether the traffic is legitimate or not. It also examines the state of a connection and stores information in a state table. The following looks at stateful inspection in the context of an IDPS:

Internet Protocols: Some applications create complex patterns of network traffic, for example FTP and web traffic. In order to analyze the "state" of these network connections, the IDPS keeps track of them in a central cache. When the IDPS receives a packet, it is analyzed against the state table to decide whether or not to allow it through to its destination. For example, FTP uses multiple simultaneous network connections: when a user opens a connection to a server on the internet and requests a file, stateful protocol analysis searches for outgoing PORT commands and then adds a cache entry for the anticipated data connection. Since the PORT command contains the address and port information, the connection can be identified.

TCP Connections: A normal TCP connection follows a three-way handshake to set up the connection. In the TCP initiation packet the SYN flag is set and the ACK flag is cleared, but the following packets do not have the same structure because they carry data. These subsequent packets are bi-directional, and stateful inspection monitors them to make sure they are legitimate.

UDP Connections: UDP is considered an unreliable protocol, as its packets do not contain any connection information; the header contains only the source and destination IP addresses, port numbers and message type. In the case of UDP, an entry is created in the cache to build up a virtual connection. The entry contains the IP addresses and port numbers, so for a short period of time packets coming from the same IP addresses and port numbers are allowed (a toy version of this cache is sketched below).
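A toy version of the state table described for UDP might be keyed on the address and port pair, with entries expiring after a short timeout; all names and the timeout value below are illustrative assumptions.

```python
import time

# Hypothetical state table: (src_ip, src_port, dst_ip, dst_port) -> last-seen time.
state_table = {}
UDP_TIMEOUT = 30.0   # seconds a virtual UDP "connection" stays valid

def note_outbound(src_ip, src_port, dst_ip, dst_port):
    # An outgoing UDP request creates a cache entry for the expected reply.
    state_table[(dst_ip, dst_port, src_ip, src_port)] = time.time()

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    # An incoming packet is allowed only if it matches a recent cache entry.
    last_seen = state_table.get((src_ip, src_port, dst_ip, dst_port))
    return last_seen is not None and (time.time() - last_seen) < UDP_TIMEOUT
```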

When the IDPS monitors the behavior of an application protocol, it performs decoding. While decoding it can detect the following anomalies:

  • Running a protocol or service on non-standard ports.
  • Changes in field values.
  • Illegal command usage.

Components of an IDPS

The typical components of an IDPS solution are sensors, agents and the manager (management server and database server):

  • Sensors
  • Agents
  • Manager (Management Server)

Sensors

The sensor is the main data collection component of a network-based IDPS. Sensors are critical in intrusion detection and prevention architectures: they are the starting point of intrusion detection and prevention systems, supplying the initial data about potentially malicious activity. Within a particular network architecture, sensors are usually (but not always) considered the lowest-end components, because they typically do not have very sophisticated functionality; they are usually designed to obtain only certain data and pass them on. The network interface cards (NICs) monitoring the network data are placed into promiscuous mode, which accepts all incoming traffic regardless of its destination.

There are two basic types of sensors:

  • Hardware Based/Appliance Based Sensors
  • Software Based Sensors

Hardware Based or Appliance Based Sensors

Appliance-based or hardware-based sensors are dedicated machines that monitor the network traffic. They comprise specialized processors for optimized performance and are efficient in capturing and analyzing the raw data for possible malicious activity.

Software Based Sensors

These are software sensors that can be installed on hosts. They capture data from packets and print the packet headers that match a particular filter expression. The packet parameters that are particularly useful in intrusion detection and prevention are the time, source and destination addresses, source and destination ports, TCP flags, initial sequence number from the source IP for the initial connection, ending sequence number, number of bytes, and window size.

Previously, the programs most frequently used as sensors were tcpdump and libpcap. tcpdump is an application, while libpcap is a library called by an application. The main function of libpcap is to gather packet data from the kernel of the operating system and then pass it to one or more applications; for example, an Ethernet card may obtain packet data from a network, and the operating system on which libpcap runs will process each packet.

Processing starts with determining what kind of packet it is by removing the Ethernet header to get to the next layer up the stack. The next layer is usually the IP layer; if so, the IP header is removed to determine the protocol at the next layer of the stack, i.e. ICMP, TCP or UDP. If the packet is TCP, the TCP header is also removed and the contents of the packet are then passed on to the next layer up, the application layer. libpcap provides intrusion detection and prevention applications with these data, through a standard interface, so that the applications can analyze the content to look for attack signatures, names of hacking tools and so forth.
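In Python, the scapy library plays a role similar to the libpcap-based sensors described here: it captures frames and exposes each decapsulated layer up the stack. A rough sketch (the capture count and required privileges are assumptions):

```python
from scapy.all import sniff, Ether, IP, TCP

def handle(pkt):
    # Walk down the stack as described above: Ethernet -> IP -> TCP -> payload.
    if pkt.haslayer(Ether) and pkt.haslayer(IP) and pkt.haslayer(TCP):
        ip, tcp = pkt[IP], pkt[TCP]
        payload = bytes(tcp.payload)
        print(ip.src, ip.dst, tcp.sport, tcp.dport, tcp.flags, len(payload))

# Capturing requires a privileged user and an interface in promiscuous mode.
sniff(prn=handle, store=False, count=10)
```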

Sensors Deployment Considerations

Many sensors require that a host be running one or more network interfaces in promiscuous mode. Sensors can be placed outside of exterior firewalls, inside them or both.

Sensors that reside outside the exterior firewall record information about internet attacks. Web servers, FTP servers, external DNS servers and mail servers are often placed outside the firewall, making them much more likely to be attacked than other hosts. Placing these systems within an organization's internal network potentially makes them lesser targets, because being within the internal network at least affords some protection, such as the filtering barriers provided by firewalls and screening routers.

On the other hand, having these servers within the internal network increases the traffic load on the internal network and also exposes the internal network more if any of these servers become compromised; servers placed outside the internal network therefore remain the more vulnerable to attack.

Sensors can be deployed in two modes:

  • Inline Mode
  • Passive Mode

Inline Mode: In this mode, sensors pass the network traffic through themselves in order to analyze it for malicious activity.

The basic reason for putting a sensor inline is to stop attacks by blocking network traffic, just like a firewall.

Inline sensors are usually placed between two networks, for example at connections with external networks or at the borders between different internal subnets that need to be segregated.

Passive Mode: In this mode, the traffic does not pass through the sensor itself; rather, the sensor analyzes a copy of the network traffic for malicious activity. Passive sensors are typically deployed at key locations in the network; for example, a passive sensor can be deployed in a demilitarized zone (DMZ) in order to watch the traffic to a web server. It can watch the network traffic through different methods:

  • Spanning Port: A "spanning port" is a switch port that can see all the traffic passing through the switch, so attaching a sensor to the spanning port enables it to monitor all traffic entering or leaving the switch.
  • Network Tap: A network tap is a direct connection between the sensor and the physical medium. The tap provides the sensor with a copy of all network traffic being carried by the medium.
  • IDS Load Balancer: An IDS load balancer directs network traffic to the monitoring systems (sensors). It receives copies of network traffic from one or more spanning ports or taps and sends these copies to listening devices, e.g. sensors, based on configured rules. Common configurations include the following:

If multiple sensors are deployed in the network to analyze the same activity, the load balancer sends all traffic to each of them.

If there is a high volume of traffic, it dynamically splits the traffic among multiple sensors.

Traffic is sent to individual dedicated sensors on the basis of protocols or IP addresses; for example, one sensor might monitor web activity while another monitors the traffic of a specific subnet.

Security Capabilities of Sensors

The security capabilities of each IDPS depend on the type of technology being used. The main security capabilities of Sensors include the following:

Information Gathering Capabilities

Sensors (in a network-based IDPS) can collect information on hosts and the network activity of those hosts, for example:

  • An IDPS sensor may keep a list of the hosts in a network on the basis of their IP or MAC addresses. This list can be used to identify new hosts on the network.
  • An IDPS sensor can use different techniques to identify the operating systems used by the hosts. For example, it can track the open ports on a host, which can indicate the family of the operating system, or it can analyze packet headers for certain distinguishing characteristics, a technique known as fingerprinting.
  • A sensor can identify the version of an application by keeping track of the ports it uses.
  • To detect any change in the configuration of the network, sensors gather general information about the network traffic.

Logging Capabilities

When events are detected, the IDPS logs data about them. These data are of great importance for further investigation of the incident. The commonly logged data are:

  • IP addresses of source & destination
  • Source & destination ports (in the case of TCP & UDP)
  • Message type (in the case of ICMP)
  • Event or alert type
  • Time stamp

Detection Capabilities

Network-based sensors have a broad range of detection capabilities. An IDPS can use one or all of the following detection mechanisms:

  • Signature based detection
  • Anomaly based detection
  • Stateful protocol analysis

Depending on the mechanisms they use, sensors can detect the following types of events:

  • Application Layer Reconnaissance Attacks
  • Transport Layer Reconnaissance Attacks: packet fragmentation, port scanning, SYN floods
  • Network Layer Attacks: spoofed IP addresses, detected through analysis of IP, ICMP and IGMP
  • Unexpected Application Services: Unexpected application services, for example unauthorized applications running on hosts, can be detected through stateful protocol analysis, and changes in the network can be detected through anomaly-based detection.
  • Policy Violations: Violations of an organization's security policy can be detected through a network-based IDPS, for example unauthorized access to IP addresses or ports, access to inappropriate web sites, and use of forbidden application protocols.

Prevention Capabilities of Sensors

Network-based IDPS sensors not only perform detection; prevention is also an important function of these technologies. The following are their prevention capabilities:

  • Ending the TCP Session (passive): Through "session sniping" a passive sensor can terminate a TCP session by sending TCP reset packets to both endpoints, so that each endpoint thinks the other wants to end the session (see the sketch after this list).
  • Performing Firewalling (inline): When sensors are placed inline, they behave like a firewall; if they identify activity as suspicious, they can drop or reject it.
  • Bandwidth Allocation (inline): If a protocol is being used for malicious activity, an inline sensor can limit the percentage of bandwidth that the protocol may use.
  • Repackaging or Replacing the Content (inline): If an attacker has altered packet content for malicious purposes, an inline sensor can replace the payload in new packets, thus normalizing the traffic.
  • Reconfiguring Other Devices: If an internal host has been compromised, an IDPS sensor can instruct other network devices to block certain types of activity.
  • Using Other Programs: If an IDPS sensor does not support the prevention action desired by the administrators, it can trigger programs specified by them whenever an attack is detected.
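As a rough illustration of session sniping, the scapy sketch below forges a reset toward each endpoint; the addresses, ports and sequence numbers are placeholders, and a real sensor would take them from the observed connection.

```python
from scapy.all import IP, TCP, send

def snipe_session(src_ip, src_port, dst_ip, dst_port, seq, ack):
    # Forge a RST toward each endpoint so both sides tear the session down.
    send(IP(src=src_ip, dst=dst_ip) /
         TCP(sport=src_port, dport=dst_port, flags="R", seq=seq), verbose=False)
    send(IP(src=dst_ip, dst=src_ip) /
         TCP(sport=dst_port, dport=src_port, flags="R", seq=ack), verbose=False)

# Values would come from the packets that triggered the alert (placeholders here).
# snipe_session("10.0.0.5", 50000, "203.0.113.45", 80, seq=1001, ack=2002)
```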

Agents

Agents are the main data collection components of a host-based IDPS. A host-based IDPS has detection software, known as agents, installed on the hosts to monitor the activities of each single host, or dedicated appliances running agent software. Each appliance monitors the network activity coming to and going from a particular host.

The primary function of agents is the analysis of input provided by sensors. An agent can be defined as a group of processes that run independently and are programmed to analyze system behavior, network events or both, to detect anomalous events and violations of an organization's security policy.

Each agent performs a specialized function independently; for example, some agents may examine network traffic and host-based events generally, such as checking whether normal TCP connections have occurred, their start and stop times, the amount of data transmitted, or whether certain services have crashed.

Other agents might look at specific aspects of application layer protocols such as FTP, TFTP, HTTP and SMTP, as well as authentication sessions, to determine whether data in packets or system behavior is consistent with known attack patterns.

The independent running of agents means that if one agent crashes or is impaired in some manner, the others will continue to run normally.

It also means that agents can be added to or deleted from an IDPS as needed. Although each agent runs independently on the particular host on which it resides, agents often co-operate with each other: each agent may receive and analyze only one part of the data regarding a particular system, network or device.

Agents normally share the information they have obtained with each other using a particular communication protocol over the network. When an agent detects an anomaly or policy violation (such as an attempt to gain root access or a massive flood of packets over the network), it immediately notifies the other agents of what it has found. This new information, combined with the information another agent already has, may cause that agent to report that an attack on another host has also occurred.

At a minimum, an Agent needs to incorporate three functions or components:

  • A communication interface to communicate with other components of an IDPS.
  • A listener that waits in the background for data from sensors and messages from other Agents and receives them.
  • A sender that transmits data and messages to other components, such as other agents and the manager component, using established means of communication such as network protocols.

Besides the above, agents can also provide a variety of additional functions. For example, agents can perform correlation analysis on input received from a wide range of sensors. In some implementations the agents themselves generate alerts and alarms. In other implementations, agents access large databases to launch queries for more information about specific source and destination IP addresses associated with certain types of attacks, the times at which known attacks have occurred, the frequencies of scans and other types of malicious activity, and so forth. From this kind of additional information, agents can perform functions such as tracking the specific phases of attacks and estimating the threat that each attack constitutes.

A host-based IDPS often has extensive knowledge of the host's characteristics and configuration, and because of this it can determine whether or not an attack against the host would succeed if not stopped.

Agents Deployment Considerations

An agent can and should be configured for the operating environment in which it runs. In host-based intrusion detection, each agent generally monitors one host, although sometimes sensors on multiple hosts send data to one or more central agents.

Generally, host-based agents are deployed on publicly accessible servers. In network-based intrusion detection, agents are generally placed according to two considerations:

  • Where They Are Most Efficient: Efficiency is related to the particular part of the network where connections to sensors and other components are placed. The more locally co-resident the sensors and agents are, the better the efficiency.
  • Where They Will Be Sufficiently Secure: The threat of subversion of agents is a major issue. Agents are typically much smarter than sensors. If an agent is successfully attacked, not only will the attacker be able to stop or subvert the type of analysis that the agent performs, but the attacker may also be able to glean information that is likely to prove useful in attacking the other components of the IDPS. Compromised agents can thus rapidly become a security liability.

The way agents are typically deployed provides some level of defense against attacks directed at them. Agents (especially in a network-based IDPS) are generally distributed throughout a network or networks, so each agent must be individually discovered and attacked. This substantially increases the work involved in attacking agents, something that is very desirable from a security perspective, since each agent presents a somewhat unique challenge to the attacker.

Agents need to be secured through many of the same measures used to protect sensors: hardening the platform on which they run, ensuring that they can be accessed only by authorized persons, and so on.

Security Capabilities of Agents

The security capabilities of Host-Based IDPS are as follows:

Logging Capabilities

One of the important functions of agents, or the host-based IDPS, is logging data about detected events. The logged data are useful for further investigation of the incident. The commonly logged information includes the following:

  • Date & time
  • Alert type
  • IP addresses & port information
  • Applications, paths & filenames

Detection Capabilities

A host-based IDPS detects different types of events, depending on the detection technology used. Some of the techniques are described below.

Code Analysis: Agents can analyze attempts to execute code, which could be a sign of possible malware activity, using the following techniques:

  • Analysis of Code Behavior: Before the code runs on the host, it is executed in a virtual environment and its behavior is compared against profiles of accepted and unaccepted behavior.
  • Buffer Overflow: Attempts to perform buffer overflows are detected by looking for certain sequences of instructions and for accesses to memory not allocated to the process.
  • System Call Monitoring: Some malicious activities involve the triggering of other applications or processes; agents watch for such attempts.
  • Lists of Applications & Libraries: If a user attempts to load an application or library, an agent checks it against lists of authorized and unauthorized applications and libraries.
  • Analysis of Network Traffic: Agents analyze the network, transport and application layer protocols for malicious activity. In addition, they perform extra processing for some applications, for example email clients.
  • Filtering of Network Traffic: Agents can filter the incoming and outgoing traffic of each application; this traffic filtering capability prevents unauthorized access.
  • Monitoring of the File System: Different techniques are used to monitor the file system, including the following.
  • Checking of File Integrity: This is done by generating message digests or cryptographic checksums for files and comparing them with reference values (see the sketch after this list).
  • Checking of File Attributes: This involves checking file attributes, for example the ownership of and permissions on specific files.
  • File Access Attempts: An agent has policies regarding file access and compares the current attempt, or type of access, with those policies. The attempt could come from a user or an application.
  • Monitoring of Network Configuration: An agent can monitor a host's network configuration and detect whether it has been changed, for example additional ports or additional network protocols being used.
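File-integrity checking of the kind described above can be as simple as comparing stored digests, as in this sketch using Python's standard hashlib; the file paths and reference values are placeholders that would be recorded while the host is known to be clean.

```python
import hashlib

def file_digest(path: str) -> str:
    # Cryptographic checksum of the file contents (SHA-256 here).
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Reference values recorded while the host was in a known-good state (placeholders).
baseline = {"/etc/passwd": "c0ffee...", "/bin/ls": "deadbeef..."}

def integrity_violations(reference):
    # Report every file whose current digest no longer matches the reference.
    return [p for p, ref in reference.items() if file_digest(p) != ref]
```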

Prevention Capabilities of Agents

The prevention capabilities of host-based IDPS agents depend mainly on the detection techniques they use.

  • Analysis of Code: A host-based IDPS uses code analysis to prevent malicious code from being executed, which can stop malware and unauthorized applications. Some also prevent the invoking of shells by network applications, which may enable certain types of attack.
  • Analysis of Network Traffic: By analyzing the network traffic, the agent can stop incoming traffic that might be malicious or outgoing traffic that might be unauthorized. The analysis can also prevent the unauthorized downloading or transfer of files to a host. The network analysis technique may reject or drop the offending traffic.
  • Filtering of Network Traffic: If the activity can be identified by IP address, port, ICMP message type or a policy violation, filtering can stop the unauthorized access.
  • Monitoring of the File System: This capability prevents system files from being accessed, modified or deleted, which can stop malicious activity.

Manager

The final component in the multi-tier architecture is the manager (also known as the server). The fundamental purpose of this component is to provide an executive or master control capability for the IDPS.

Functions

Sensors are usually low-level components, whereas Agents are usually more sophisticated components that, at a minimum, analyze the data they receive from Sensors and possibly from each other.

Although Sensors and Agents are capable of functioning without a master control component, such a component is extremely advantageous in helping all components work in a coordinated manner, and it provides some other valuable functions:

Data Management: An IDPS can gather a massive amount of data. One way to deal with this volume is to compress the data (to conserve disk space), archive it, and then purge it.

Having sufficient disk space for management purposes is a major consideration. One good solution is RAID (redundant array of inexpensive disks), which writes data to multiple disks and provides redundancy in case any one disk fails. Another option is optical media, such as WORM (write once, read many) drives.

The Manager component of an IDPS will also organize the stored data, typically in a relational database. Once a database is designed and implemented, new data can be added on the fly, and queries can be made against the database entries.
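
A minimal sketch of organizing stored event data in a relational database, using Python's built-in sqlite3 module; the table layout and field names are illustrative assumptions, not the schema of any particular IDPS.

    import sqlite3

    conn = sqlite3.connect("idps_events.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS events (
                        id        INTEGER PRIMARY KEY,
                        timestamp TEXT,
                        sensor    TEXT,
                        alert     TEXT,
                        src_ip    TEXT,
                        dst_port  INTEGER)""")

    # New data can be added on the fly...
    conn.execute(
        "INSERT INTO events (timestamp, sensor, alert, src_ip, dst_port) VALUES (?, ?, ?, ?, ?)",
        ("2004-03-01 02:14:07", "dmz-sensor-1", "port scan", "198.51.100.7", 80))
    conn.commit()

    # ...and queries against the database entries can be made.
    for row in conn.execute("SELECT alert, COUNT(*) FROM events GROUP BY alert"):
        print(row)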

Alerting: Another important function that the Manager component can perform is generating alerts whenever events that constitute a high level of threat occur. Agents are designed to provide detection capability, but they are normally not involved in alerting because it is more efficient to do so from a central host. Agents usually send information to a central server that sends alerts whenever predefined criteria are met. This requires that the server not only contain the addresses of operators who need to be notified, but also have an alerting mechanism.

Alerts are sent either via email or via the Syslog facility, and the message content is usually encrypted. The main advantage of the Syslog facility is its flexibility: Syslog can send messages about nearly anything to just about anyone, if desired.
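
As a sketch of central alerting, the fragment below sends messages to the Syslog facility using Python's standard logging module; the remote log host and the threat-level threshold are assumptions for illustration.

    import logging
    import logging.handlers

    # Forward alerts to a (hypothetical) central syslog server on UDP port 514.
    handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
    alerting = logging.getLogger("idps.manager")
    alerting.setLevel(logging.WARNING)
    alerting.addHandler(handler)

    def alert(event, threat_level, threshold=7):
        # Generate an alert only when the predefined criterion is met.
        if threat_level >= threshold:
            alerting.warning("IDPS alert: %s (threat level %d)", event, threat_level)

    alert("repeated failed logins from 198.51.100.7", threat_level=8)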

Event Correlation: An extremely important function of the Manager component is correlating events that have occurred to determine whether they have a common source, whether they were part of a series of related attacks, and so forth.

High-Level Analysis: The Manager component can perform high-level analysis of the events that the intrusion-detection or intrusion-prevention tool discovers. It can track the progression of each attack from stage to stage, starting with the preparatory stage. Additionally, it can analyze the threat that each event constitutes, sending a notification to the alerting facility whenever a threat reaches a certain specified value.

Monitoring other Components: Being centralized, the Manager is ideally placed to perform this function. The Manager can send packets to each Sensor and Agent to determine whether each is up and running. If the Manager component determines that any other component has failed, it can notify its alerting facility to generate an alert.

The Manager can also monitor each host to ensure that logging or auditing is functioning correctly.
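
One simple way a Manager could check that its Sensors and Agents are up is sketched below, assuming each component exposes a TCP management port; the component names, addresses and port number are hypothetical.

    import socket

    COMPONENTS = {                      # hypothetical components and management ports
        "dmz-sensor-1": ("10.0.0.11", 8001),
        "hr-agent-3":   ("10.0.2.33", 8001),
    }

    def is_alive(addr, timeout=2.0):
        # A successful TCP connection suggests the component is up and running.
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    for name, addr in COMPONENTS.items():
        if not is_alive(addr):
            print(f"ALERT: component {name} at {addr[0]} appears to be down")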

Policy Generation & Distribution: Another important function of the Manager component is policy generation and distribution. The term "policy" refers to the settings that affect how the various components of an intrusion-detection or intrusion-prevention system function.

Based on the data that the Manager component receives, it creates and then distributes a policy, or a change in policy, to individual hosts. The policy might tell each host not to accept input from a particular source IP address or not to execute a particular system call. The Manager component is usually in charge of creating, updating and enforcing policy.
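
Purely as an illustration, a policy distributed by the Manager might be expressed as a small structured document such as the one below; the field names are assumptions, not a standard policy format.

    import json

    # A hypothetical policy: refuse input from one source address and
    # forbid one particular system call on every managed host.
    policy = {
        "version": 42,
        "deny_source_ips": ["203.0.113.45"],
        "deny_system_calls": ["execve:/bin/sh"],
    }

    # The Manager would serialize the policy and push it to each host,
    # ideally over an authenticated, encrypted channel.
    print(json.dumps(policy))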

Management Console: The Manager component also provides an interface to users through a management console. It displays critical information (alerts, the status of each component, data in individual packets, audit log data, and so on) and allows the operator to control every part of the IDPS. For example, if a sensor is sending corrupted data, an operator can quickly shut that sensor down.

Manager Deployment Considerations

The most important deployment considerations for the Manager component are ensuring that it runs on high-end hardware (a large amount of physical memory and a fast processor) and a reliable operating system. Redundant servers, in case one fails, are an additional measure that helps assure continuous availability.

The placement of the Manager component in a network should be based on efficiency (the Manager should be located where the distance to the Agents with which it communicates is minimized) and on security.

Manager Security Considerations

Sensors are the most vulnerable to attack, and compromised Agents can cause considerable trouble, but a single successful attack on the management console is generally the worst imaginable outcome. Such an attack can result in the whole multi-tiered architecture becoming compromised or unusable, so hardening the host on which the console runs is indispensable.

Hardening includes measures to prevent denial-of-service attacks and shutting down unnecessary services, and the Manager should not be located in a portion of the network that carries a particularly high level of traffic. The hardware platform on which the Manager component runs should be dedicated to this function.

Unauthorized physical access is always a major concern in any system, but unauthorized access to the Management console is even more critical. Putting suitable physical access controls in place is thus imperative.

Authentication is also a special consideration for the Manager component. Password-based authentication has become increasingly ineffective at keeping out unauthorized users. Finally, providing suitable levels of encryption is critical: all communications between the Manager component and any other component need to be encrypted with strong encryption.

Section 4: Internals of IDPS

An IDPS can be simple or complex. At the simplest level, a packet-capturing program can be used to dump packets to files, and simple commands within scripts can then search for strings of interest within those files. This approach is not practical, however, given the sheer volume of traffic that must be collected, processed and stored for the limited level of analysis it allows.

In a complex IDPS, more sophisticated internal events and processes occur, such as filtering out undesirable input, applying firewall rules, converting certain kinds of incoming data into a format that can be processed more easily, running detection routines on the data, and executing routines such as those that shun certain source IP addresses.

The following sections focus on the flow of information in an IDPS, the detection of exploits, dealing with malicious code, and related topics.

Raw Packet Capture

Internal flow of information starts with raw packet capture. This involves not only capturing the packets, but also passing the data to the next component of the system. In promiscuous mode, the NIC picks up every packet at the point at which it interfaces with network media.

In non promiscuous mode, NIC picks up only packets bound for its particular MAC address, ignoring the others. Non-Promiscuous mode is appropriate for Host-Based intrusion detection & prevention, but not for Network-Based intrusion detection & prevention.

A Network-Based intrusion detection & prevention system normally has two NICs—one for raw packet capture and the second to allow the host on which the system runs to have network connectivity for remote administration.

The IDPS must save the raw packets that are captured so they can be processed and analyzed at some later point. In most cases, the packets are held in memory long enough for initial processing activities to occur and are soon afterwards written to a file or a data structure, to make room in memory for subsequent input, or discarded.
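
A minimal raw-packet-capture sketch in Python is shown below; it assumes Linux and administrative privileges, since AF_PACKET sockets are Linux-specific, and a production IDPS would normally rely on libpcap instead.

    import socket

    ETH_P_ALL = 0x0003   # ask the kernel for frames of every protocol

    # A raw AF_PACKET socket receives whole frames from the NIC (requires root).
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

    captured = []
    for _ in range(10):                          # hold a handful of frames in memory...
        frame, _addr = sock.recvfrom(65535)
        captured.append(frame)

    # ...then write them to a file to make room for subsequent input.
    with open("raw_frames.bin", "wb") as f:
        for frame in captured:
            f.write(len(frame).to_bytes(4, "big") + frame)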

TCPDUMP: TCPDUMP could well be called the first intrusion detection system; it was initially released in 1991.

TCPDUMP is capable of capturing, displaying and storing all forms of network traffic in a variety of output formats. The syntax for the tcpdump command is as follows:

tcpdump [ -adeflnNOpqStvx ] [ -c count ] [ -F file ] [ -i interface ] [ -r file ]
        [ -s snaplen ] [ -T type ] [ -w file ] [ expression ]

The most commonly used options are described in the table below:

Option   Description
-c       capture count packets, then exit
-e       print the link-level header
-i       the name of the network interface to capture data from
-n       don't convert IP addresses or port numbers to names
-O       don't attempt to optimize the generated packet-matching code
-p       don't put the interface in promiscuous mode
-r       read packets from a tcpdump capture file
-s       capture snaplen bytes of data from each packet
-S       print absolute rather than relative TCP sequence numbers
-t       don't print a timestamp
-tt      print the timestamp as a standard Unix timestamp
-v       produce more verbose output
-w       write packets to file, in raw format
-x       print the packet in hexadecimal

Filtering

An IDPS does not necessarily need to capture every packet; it may instead be desirable to filter out certain types of packets. Filtering means limiting the packets that are captured according to some logic based on their characteristics, such as the type of packet, the source IP address range, and others. Especially in very high-speed networks, the rate of incoming packets can be overwhelming and can necessitate limiting the types of packets captured.

Filtering of raw packet data can be done in several ways. The NIC itself may be able to filter incoming packets. The driver for the network card may be able to take BPF (Berkeley Packet Filter) rules and apply them to the card. Alternatively, filtering rules can be specified in the configuration of the driver itself, although this type of filtering is not likely to be as sophisticated as BPF rules.

Another method of filtering raw packet data is using packet filters to choose and record only certain packets, depending on the way filters are configured.

Libpcap, for example, offers packet filtering via the BPF interpreter. The BPF interpreter receives all the packets but decides which of them to pass on to applications. In most operating systems this filtering is done in kernel space; operating systems with the BPF interpreter in the kernel are therefore often the best candidates for IDPS platforms.

Filtering rules can be inclusive or exclusive, depending on the particular filtering program or mechanism. For example, the tcpdump filter rule (port http) or (udp port 111) will capture any packets bound for an HTTP port or for UDP port 111.
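
To make the idea of inclusive filter rules concrete, the sketch below expresses each rule as a predicate over an already-decoded packet (represented here as a dictionary); this is purely illustrative, since real IDPSs apply BPF filters in the kernel before packets reach user space.

    # Each filter rule is a predicate over a decoded packet.
    def http_port(pkt):
        return pkt.get("proto") == "tcp" and 80 in (pkt.get("sport"), pkt.get("dport"))

    def udp_port_111(pkt):
        return pkt.get("proto") == "udp" and 111 in (pkt.get("sport"), pkt.get("dport"))

    def matches(pkt, rules):
        # Inclusive filtering: keep the packet if any rule matches.
        return any(rule(pkt) for rule in rules)

    packets = [
        {"proto": "tcp", "sport": 32768, "dport": 80},
        {"proto": "udp", "sport": 111,   "dport": 32770},
        {"proto": "tcp", "sport": 32769, "dport": 22},
    ]
    kept = [p for p in packets if matches(p, [http_port, udp_port_111])]
    print(kept)   # the SSH packet is filtered out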

Packet Decoding

Packets are subsequently sent to a series of decoder routines that define the packet structure for the layer-two (data link) data (Ethernet, Token Ring, or IEEE 802.11) collected through promiscuous monitoring. The packets are then further decoded to determine whether each is IPv4 (the first nibble of the IP header, the version field, is 4) or IPv6 (the version nibble is 6), whether an IPv4 header carries options (the second nibble, the header length, is 5 when there are no options), as well as the source and destination IP addresses, the TCP or UDP source and destination ports, and so forth.

Packet decoding examines each packet to determine whether it is consistent with the applicable RFCs. The TCP header size plus the TCP data size should, for instance, be consistent with the length field in the IP header. Packets that cannot be properly decoded are normally dropped, because the IDPS will not be able to process them properly.

Some IDS, such as Snort, go even further in packet decoding in that they allow checksum tests to determine whether the packet header contents coincide with the checksum value in the header itself. Checksum verification can be done for one, or any combination of, or all of the IP, TCP, UDP, and ICMP protocols.
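
A minimal decoder for the fixed 20-byte IPv4 header is sketched below, assuming the 14-byte Ethernet header has already been stripped; the field layout follows RFC 791, and option parsing and checksum verification are omitted.

    import socket
    import struct

    def decode_ipv4(packet: bytes):
        # First nibble is the version field: 4 = IPv4, 6 = IPv6.
        version = packet[0] >> 4
        # Second nibble is the header length in 32-bit words: 5 means no options.
        ihl = packet[0] & 0x0F
        if version != 4:
            return None                       # not IPv4; hand off to another decoder
        (_, tos, total_len, ident, frag, ttl, proto, checksum,
         src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        return {
            "header_len": ihl * 4,
            "total_len": total_len,
            "protocol": proto,                # 6 = TCP, 17 = UDP, 1 = ICMP
            "src": socket.inet_ntoa(src),
            "dst": socket.inet_ntoa(dst),
        }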

Storage

Once each packet is decoded, it is often stored either by saving its data to a file or by assimilating it into a data structure while, at the same time, the data are cleared from memory. Storing data to a file is rather simple and intuitive. New data can simply be appended to an existing file or a new file can be opened, and then written to.

Fragment Reassembly

Decoding makes sense out of packets, but this does not solve all the problems that need to be solved for an IDPS to process the packet properly. Packet fragmentation poses another problem for IDPS. A reasonable percentage of network traffic consists of packet fragments with which firewalls, routers, switches and IDPS must deal. Hostile fragmentation, packet fragmentation used to attack other systems or to evade detection mechanisms, can take several forms:

One packet fragment can overlap another, so that when the fragments are reassembled, subsequent fragments overwrite parts of the first one instead of being joined in their natural sequential order. Overlapping fragments are often an indication of an attempted attack or evasion; firewalls, routers and IDPSs that do not know how to deal with packets of this nature are unable to process them further.

Packets may also be improperly sized. In one variation of this condition, the fragments are excessively large, greater than 65,535 bytes in total, and thus likely to trigger abnormal conditions, such as excessive CPU consumption, in the hosts that receive them. Excessively large packets usually represent attempts to produce a DoS; in the "Ping of Death" attack, for example, many oversized packets are sent to victim hosts, causing them to crash. Alternatively, the packet fragments could be excessively short, such as less than 64 bytes. In this case, often called a tiny fragment attack, the attacker fabricates and then sends packets broken into tiny pieces. If the fragments are sufficiently small, part of the header information is displaced into multiple fragments, leaving incomplete headers that network devices and IDPSs may not be able to process. In the case of firewalls and screening routers, the fragments could be passed through to their destination even though the packet, had it not been fragmented, might not have been allowed through. Also, having to reassemble so many small packets can consume a huge amount of memory, causing a DoS.

Still another way of fragmenting packets is to break them up so that a second fragment is contained completely within the first fragment. The resulting offsets create a serious problem for the fragment-reassembly process, causing the host that receives these fragments to crash. This kind of attack is known as a teardrop attack.

A critical consideration in dealing with fragmented packets is whether only the first fragment will be retained, or the first fragment plus the subsequent fragments. Retaining only the first fragment is more efficient: the first fragment contains the information in the packet header that identifies the type of packet, the source and destination IP addresses, and so on, whereas associating the subsequent fragments with the initial fragment requires additional resources, and those fragments are unlikely to contain information of much value to an IDPS.

Fragments reassembly can be performed in a number of ways:

  • The OS itself can reassemble the fragments
  • A utility can perform this function

Stream Reassembly

Stream reassembly means taking the data from each TCP stream and, if necessary, reordering it (on the basis of packet sequence numbers) so that it is in the same order as when it was sent by the transmitting host and as delivered to the receiving host. This requires determining where each stream starts and stops, which is not difficult given that TCP communications between any two hosts begin with a SYN packet and end with either a RST (reset) or a FIN/ACK packet.

Stream reassembly is important when data arrive at the IDPS in a different order from the original one. This is a critical step in preparing data for analysis, because IDPS recognition mechanisms cannot work properly if the data taken in by the IDPS are scrambled. Stream reassembly also facilitates detection of out-of-sequence scanning methods.

Stream reassembly also reveals the directionality of data exchanges between hosts, as well as when packets are missing (in which case an IDPS will report this as an anomaly). The data from the reassembled streams are written to a file or data structure, again either as packet contents or as byte streams, or are discarded.

Stream reassembly can also be done with UDP and ICMP traffic, but both of these protocols are connectionless and sessionless and thus do not have the characteristics that TCP stream-reassembly routines rely on. Some IDPSs turn UDP and ICMP traffic into "pseudo-sessions" by assuming that whenever two hosts exchange UDP or ICMP packets with no pause in transmission greater than 30 seconds, something resembling a TCP session is occurring. The order of the packets can then be reconstructed.
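
A toy illustration of stream reassembly is given below: segments that arrive out of order are re-sorted by sequence number, their payloads concatenated, and gaps reported as anomalies. Sequence-number wrap-around, overlapping retransmissions and timers are deliberately omitted.

    def reassemble(segments):
        # segments: list of (sequence_number, payload) pairs from one TCP stream.
        unique = {}
        for seq, payload in segments:
            unique.setdefault(seq, payload)        # ignore exact retransmissions
        ordered = sorted(unique.items())           # reorder by sequence number
        expected = ordered[0][0]
        for seq, payload in ordered:
            if seq != expected:
                print(f"anomaly: missing data before sequence {seq}")
            expected = seq + len(payload)
        return b"".join(payload for _, payload in ordered)

    out_of_order = [(1005, b"index.html "), (1000, b"GET /"), (1016, b"HTTP/1.0\r\n")]
    print(reassemble(out_of_order))                # b"GET /index.html HTTP/1.0\r\n"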

Stateful Inspection of TCP Sessions

The stateful inspection of network traffic is a virtual necessity whenever the legitimacy of packets traversing the network must be analyzed.

Attackers often try to slip packets they create past firewalls, screening routers and IDPSs by making the fabricated packets (such as SYN/ACK or ACK packets) look like part of an ongoing session, or like one being negotiated via the three-way TCP handshake, even though a session was never established.

Generally, IDPSs perform stateful inspection of TCP traffic. These systems typically use tables in which they record data about established sessions, and then compare packets that appear to be part of a session with the entries in those tables; if no table entry for a given packet can be found, the packet is dropped. Stateful inspection also helps IDPSs that perform signature matching by ensuring the matching is performed only on content from actual sessions. Finally, stateful analysis can enable an IDPS to identify scans in which OS fingerprinting is being attempted: because these scans send a variety of packets that do not conform to RFC 793 conventions, they stand out in comparison to established sessions.
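
A simplified sketch of stateful inspection follows: a session table keyed by the connection 4-tuple is consulted, and packets claiming to belong to a session that was never established are dropped. Sequence-number validation and session teardown, which a real implementation must track, are omitted.

    sessions = set()   # established (src_ip, src_port, dst_ip, dst_port) tuples

    def inspect(pkt):
        # pkt: dict with addresses, ports and TCP flag booleans.
        key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
        if pkt.get("syn") and not pkt.get("ack"):
            sessions.add(key)                 # a new session is being negotiated
            return "accept"
        if key in sessions:
            return "accept"                   # part of a known session
        return "drop"                         # e.g. an unsolicited ACK or SYN/ACK probe

    # A fabricated ACK packet with no corresponding session entry is dropped.
    print(inspect({"src": "203.0.113.9", "sport": 40000,
                   "dst": "192.0.2.10", "dport": 80, "ack": True}))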

Firewalling

The internal flow of information within an IDPS includes filtering packet data according to the set of rules. Filtering is essentially a type of firewalling. But, after stateful inspections of traffic are performed, more sophisticated firewalling based on the results of the inspections can be performed. While the primary purpose of filtering is to drop packet data that are not of interest, the primary purpose of firewalling after successful inspection is to protect the IDPS itself. Attackers can launch attacks that impair or completely disable the capability of the IDPS to detect and protect. The job of the firewall is to weed out these attacks, so attacks against the IDPS do not succeed.

Signature Matching

A signature is a string, forming part of what an attacking host sends to an intended victim host, that uniquely identifies a particular attack. Signature matching means that input strings passed on to the detection routines match a pattern in the IDPS's signature files. The exact way an IDPS performs signature matching varies from system to system; the simplest method is to use fgrep or a similar string-search command to compare each part of the input passed from the kernel to the detection routines against lists of signatures. A positive identification of an attack occurs whenever the string-search command finds a match.
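
A minimal signature-matching sketch in Python: each payload handed to the detection routines is searched for known attack strings. The two signatures shown are illustrative examples only, not entries from a real signature file.

    SIGNATURES = {                      # illustrative signatures only
        b"/etc/passwd": "attempted sensitive-file access",
        b"cmd.exe":     "attempted Windows command execution",
    }

    def match_signatures(payload: bytes):
        # Return a description for every signature found in the payload.
        return [desc for sig, desc in SIGNATURES.items() if sig in payload]

    hits = match_signatures(b"GET /../../etc/passwd HTTP/1.0\r\n")
    if hits:
        print("positive identification:", hits)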

Rule Matching

Rule-based IDPSs hold considerable promise because they are generally based on combinations of indicators of attack, aggregating them to see whether a rule condition has been fulfilled.

Signatures themselves may constitute one possible indicator. In some cases, a signature that invariably indicates an attack may be the only indicator necessary for a rule-based IDPS to issue an alert; in most cases, particular combinations of indicators are required.

For example, an anonymous FTP connection attempt from an outside IP address may not make the system suspicious at all. But if the FTP connection attempt comes within, say, 24 hours of a scan from the same IP address, a rule-based IDPS should become more suspicious. If the FTP connection attempt succeeds and someone goes to the /pub directory and starts entering cd .., cd .., cd .., a rule-based IDPS should raise a high-severity alert, because this is most likely a dot-dot (directory traversal) attack. Rule-based systems are generally much more sophisticated than simple signature matching.

Profile-Based Matching

Information about users' session characteristics is captured in system logs and process listings. Profile routines extract information for each user and write it to data structures that store it. Other routines build statistical norms based on measurable usage patterns. When a user action occurs that deviates too much from the normal pattern, the profiling system flags the event and passes the necessary information on to output routines. For example, if a user normally logs in between 8:00 A.M. and 5:30 P.M. but then one day logs in at 2:00 A.M., a profile-based system is likely to flag this event.
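
A toy profile-based check on login times is sketched below, assuming a per-user history of login hours extracted from system logs; it flags a login whose hour deviates too far from the user's statistical norm.

    import statistics

    # Hypothetical history of login hours (24-hour clock) built from system logs.
    login_history = {"alice": [8, 9, 9, 10, 8, 9, 10, 9, 8, 10]}

    def is_anomalous(user, hour, max_deviations=3.0):
        # Flag a login whose hour is too many standard deviations from the mean.
        history = login_history[user]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
        return abs(hour - mean) / stdev > max_deviations

    print(is_anomalous("alice", 2))    # True: a 2 A.M. login is flagged
    print(is_anomalous("alice", 9))    # False: a 9 A.M. login is normal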

4.2 Malicious Code Detection

Malicious code is so prevalent, and so many different types of malicious code exist, that antivirus software alone cannot deal with the totality of the problem. Accordingly, another important function of intrusion detection and intrusion prevention is detecting the presence of malicious code in systems.

Types of Malicious Code

Viruses: Self-replicating programs that infect files & normally need human intervention to spread.

Worms: Self-replicating programs that spread over the network and can spread independently of humans.

Malicious Mobile Code: Programs downloaded from remote hosts, usually (but not always) written in a language designed for web content (for example, Java, JavaScript or ActiveX).

Backdoors: Programs that circumvent security mechanisms (especially authentication mechanisms)

Trojan Horses: Programs that have a hidden purpose: usually, they appear to do something useful, but instead they perform some malicious function.

User Level Rootkits: Programs that replace or change programs run by system managers and users.

Kernel Level Rootkits: Programs that modify the operating system itself without indication that this has occurred.

Combination Malware: Malicious code that crosses category boundaries.

4.2.1 How Malicious Code Can Be Detected

IDPS generally detect the presence of malicious code in much the same manner as these systems detect attacks in general. This is how these systems can detect malicious code:

  • Malicious code sent over the network is characterized by signatures such as those recognized by antivirus software. IDPS can match network data with signatures, distinguishing strings of malicious code within executables, unless the traffic is encrypted.
  • Rules based on port activation can be applied (a minimal sketch follows this list). If, for example, UDP port 27374 on a Windows system is active, a good chance exists that the deadly SubSeven Trojan horse program is running on that system.
  • Worms often scan for other systems to infect. The presence of scans can thus also be indications of malicious code infections for rules-based IDPS.
  • Tripwire-style tools can detect changes to system files and directories.
  • Symptoms within systems themselves, as detected by host-based IDPS, can indicate that malicious code is present. Examples include the presence of certain files and changes to the registry of Windows systems, in which values can be added to cause malicious code to start whenever the system boots.
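
A rough way for a host-based check to apply the port-activation rule above is to try to bind the suspicious port, as sketched below; a bind failure only suggests, and does not prove, that something is already listening on it.

    import socket

    def udp_port_in_use(port: int) -> bool:
        # If binding fails, some process already holds the port.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.bind(("0.0.0.0", port))
            return False
        except OSError:
            return True
        finally:
            s.close()

    if udp_port_in_use(27374):
        print("ALERT: UDP port 27374 is active; possible SubSeven infection")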

4.3 Output Routines

Once the detection routines in an IDPS have detected some kind of potentially adverse event, the system needs to act: at a minimum it alerts operators that something is wrong, or it goes farther by initiating evasive action so that a machine is no longer subjected to the attack.

Normally, calls within detection routines activate output routines. Most current IDPSs write events to a log that can easily be inspected. Evasive action is generally considerably more difficult to accomplish; however, the following types of evasive action are currently often found in IDPSs:

Output routines can dynamically kill established connections. If a connection appears to be hostile, there is no reason to allow it to continue; in this case an RST packet can be sent to terminate a TCP connection. Sending a RST packet may not always work, however: systems with low-performance hardware, or systems that are overloaded, may be unable to send the RST packet in time. Additionally, ICMP traffic presents a special challenge when it comes to terminating sessions. The best alternatives for stopping undesirable ICMP traffic are the following ICMP options:

  • icmp_host: transmit an "ICMP host unreachable" message to the other host
  • icmp_net: transmit an "ICMP network unreachable" message to the client
  • icmp_port: transmit an "ICMP port unreachable" message to the client

Systems that appear to have hostile intentions can be blocked from further access to a network. Many IDPS are capable of sending commands to firewalls and screening routers to block all packets from designated source IP addresses.
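
One common way of issuing such a blocking command is to insert a drop rule into a packet filter. The sketch below shells out to iptables on Linux and assumes root privileges; the blocked address is an example only.

    import subprocess

    def block_source(ip: str):
        # Append a rule that drops all packets from the given source address.
        subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)

    # Example: shun a host that has been identified as hostile.
    block_source("203.0.113.45")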

A central host that detects attack patterns can recognize a new attack and its manifestations within a successfully attacked system. The central host can then change policy accordingly: it can forbid overflow input from going onto the stack or heap, or prevent recursive file-system deletion commands from being carried out, once commands to do either have been observed on one system. It then sends the changed policy to other systems, keeping them from performing these potentially adverse actions.

4.4 Incident Response

The number of attacks that an organization faces is growing quickly. Deploying an IDPS with no goals or plan for how to respond is just as risky as not having the system in place at all. Tracking and responding to intruders on the network is a very complex task that needs to be planned. There are two ways to deploy an IDPS to detect incidents: attack detection and intrusion detection.

Response Types

The term response is used to refer to any action taken to deal with a suspected attack. In general, there are three types of responses: automated responses, manual responses and hybrid responses.

Automated Responses: Automated responses are actions that happen automatically upon detection of a specific event. For example, a rule could be set up so that if someone connects to an active port and sends a specific attack string, that connection is dropped. Automated responses allow the attack to be stopped immediately and the system to return to a safe state.

There are several automated responses that can be used:

Dropping the connection: This response involves stopping all communication on a port, typically at the firewall; the IDPS instructs the firewall to stop the connection. This is typically done if the activity matches a specific string of a known attack. It is important to make sure the communication is not legitimate, because this response will stop the traffic. It also affects only that single host, and an attacker may simply attack from another host.

Throttling: This technique is used against port scans. Throttling adds a delay in responding to a scan, and as the scanning activity increases, so does the delay.

Shunning: This is the process of identifying an attacker and denying the attacking system any network access or services. This can be done on the attacked host or at any network checkpoint, such as a router or firewall.

Session sniping or RESETs: This technique is used when an attack signature is detected. The IDPS sends forged RST (reset) packets to both ends of the connection to cause it to terminate. This flushes the buffers and tears the connection down, averting the attack. An attacker can sometimes overcome session sniping by setting the PUSH flag on TCP packets, which causes each packet to be pushed to the application as it arrives rather than buffered, which is not normally what happens. Session sniping is not foolproof, but it can achieve moderate success.

Manual Responses: Automated responses are great when they work, but the fact is that humans are still needed to verify and analyze the information. Each attack is different, and humans will consider variables that an automated response cannot. IDPSs are still maturing technologies, and the need for human judgement remains crucial.

Hybrid Responses: Hybrid responses are the most common type of response; for most IDPSs to respond effectively, combined human and technological intervention is needed. A hybrid response is the combination of automated and manual responses. For example, consider the detection of a connection to active port 21 on the network from an unauthorized IP address: the firewall drops the connection as an automated response, and the security staff then checks the logs for the same IP address and similar attacks.
