Firewall Scheme For Dynamic And Adaptive Containment Computer Science Essay



Due to the increasing threat of attacks and malicious activities, the deployment of firewall technology is an important milestone toward securing networks of any complexity and size. Unfortunately, the inherent difficulties in designing and managing firewall policies within modern, highly distributed, dynamic and heterogeneous environments can greatly limit the effectiveness of firewall security. It is therefore desirable to automate as much of the firewall configuration process as possible. Accordingly, this work presents a new, more active and scalable firewalling architecture based on dynamic and adaptive policy management facilities that enable the automatic generation of new rules and policies, ensuring a timely response in detecting unusual traffic activity and identifying unknown potential attacks (0-day). The proposed scheme, structured in a multi-stage modular fashion, is easily applicable in a distributed security environment and does not depend on specific security solutions or hardware/software packages.


In recent times, the role of firewalls in network security has grown wider and more varied than it was several years ago. The deployment of firewalling technology, to enforce segmentation of the risk space into different security domains and implement the security policies associated with each domain, is still the first milestone toward securing large- and medium-scale networks. Firewall systems, often consisting of several devices distributed across the network, filter out unwanted or unauthorized traffic, going to or coming from the secured network segments, on the basis of rules set according to domain-specific security policies and requirements. Security policies specify what is permitted and what is prohibited during normal operations, by defining constraints, limitations and authorizations on data handling and communications. In a complex and rapidly evolving network environment, the increasing complexity of these security policies, together with the difficulty of implementing and maintaining them in large distributed security systems, makes them more error-prone. The risk that security devices and policies lose effectiveness is real and twofold. On the one hand, poorly crafted rules can become a performance bottleneck, for example when less frequently triggered rules are unnecessarily checked very often because of an improper rule ordering. On the other hand, the effectiveness of firewall security may be limited or compromised by poor management of firewall policy rules. One of the more interesting problems is assessing how useful, up-to-date, well-organized and efficient the rules are in reflecting the current characteristics and volume of network traffic. For example, the network traffic trend may show that some rules are outdated or have not been used recently. This may in turn lead one to consider removing, aggregating or reordering them to optimize the firewall policy's effectiveness and efficiency.
Also, classic access control lists strictly based on network traffic observation can often result in conflicts between policies. Such conflicts can cause holes in security, and they can be hard to find when performing only visual or manual inspection. Finally, the examination of server and network logs may invalidate or confirm that firewall policy rules are up to date, consistent with the current network services and compliant with the associated security objectives. In any case, the task of manually managing firewall policy rules becomes very difficult and time-consuming, if not impossible, as the number of filtering rules grows drastically beyond the reasonable scope and scale of a manual process. This enormous task calls for the effective management of firewall security through policy management techniques and tools that enable network administrators to easily generate, validate and optimize firewall rules in an almost totally automatic way. Accordingly, we propose a new, more active and scalable firewalling architecture based on dynamic and automatic policy management facilities, aiming not only at keeping policies efficient and up to date by minimizing (through optimization and reorganization) the associated rule sets, but also at modifying the actual policies by automatically generating new effective rules, needed to cope in real time with the current traffic profiles and extemporaneous security events. The resulting security scheme, structured as a multi-stage modular system, can be easily applied, in a distributed fashion, at several network locations and does not depend on specific security solutions or hardware/software packages. Thanks to its dynamicity and adaptivity in automatically defining new security rules based on actual network events, it would also be very effective against unknown (0-day) viruses, worms or generic security outbreaks. The contribution of this paper is twofold.
Whereas several techniques that can be useful for implementing the above policy management tasks have recently appeared in the literature, to the best of our knowledge this is the first attempt at building a unified architecture integrating all of the component ideas in a consistent framework. In addition, we focused our efforts on the overall architectural and modeling aspects of the system, rather than on specific implementation details. Also, by merging proven network security concepts and schemes with modern adaptive and automatic policy generation and optimization techniques, we address the "missing link" in the network security "big picture", that is, the concept of obtaining reactive and dynamic firewall services that are able to cope in real time with emerging Internet threats and security issues.

Related work

Firewalls have received strong attention in the research community, and many papers related to the issues discussed in this work have focused on individual firewall security aspects, such as the gap between access control requirements and rule sets, the high complexity of rule set design and management, and rule set consistency and redundancy. The process of comparing an access control policy against the firewall rule set is called conformity checking, and can be used before or after consistency checking, since it is a complementary process. This problem has been addressed by some authors using automated and manual approaches [GUTTMAN]. The FANG system [MAYER] can reverse-engineer a model of a policy from firewall configurations. A more recent work [ABEDIN] focuses on the generation of firewall rules as the result of applying data mining techniques to firewall log files. These rules are then generalized via a generalization model, and an anomaly discovery algorithm is applied to them. Our work differs from theirs in many respects: our framework, being based on an abstract model, is more general with respect to the specific firewall used. At the same time, emphasizing system modularity, we extend the categories of data to be analyzed, also including system log files and warnings raised by external IDS/IPS. On the other hand, many research groups have proposed models and languages to model access control policies, with the objectives of simplifying the syntax, abstracting from the details of low-level firewall languages, and completely separating the security policy from the network topology. A good survey of these languages can be found in [DECAPITANI]. Most works introducing models and languages include components dedicated to isolating and identifying inconsistencies and redundancies. They lack, however, distributed conflict removal. In addition, there are graphical tools that aim to ease the creation of rule sets.
One of the most complete ones is Firewall Builder [BUILDER], which creates an object-oriented firewall model and can compile it into many low-level firewall languages. The problem of firewall ACL consistency has been addressed by many works, which propose algorithms that work directly with rule sets. The authors of [HAMED] defined a complete inconsistency model for firewall rule sets. However, their approach can only detect and diagnose inconsistencies between pairs of rules and does not analyze problems arising from combinations of more than two rules. We took the best ideas from the above schemes and models and combined them in a uniform, consistent firewall security framework, by proposing an integrated multi-stage architecture that benefits from all the advantages of automatic generation, optimization and deployment.

State-of-the-art firewall solutions

A firewall is a network element whose purpose is the selective control of flows traversing the boundaries of a secured network, thus implementing a specific security policy. A list of ordered filtering rules specifies the actions to be performed on flows, on the basis of specific conditions to be satisfied by the flows themselves. The matching part of a rule is composed of a set of fields such as protocol type, source and destination IP addresses and ports, or header flags. The filtering fields of a rule indicate the possible values, or ranges thereof, that the corresponding fields in actual network traffic may have for the rule to be applicable. Once all the matching conditions of a rule are met, the filtering actions part of that rule defines what to do with the flow under scrutiny. The action can either be accept, forwarding the packets into or from the secure network, or deny, which causes the packets to be discarded. If not all the clauses in the matching part are satisfied, the next rule is examined, and so on until either a matching rule is found or a default action, usually denial, is performed. Although any field in the IP, UDP or TCP headers can be used in the rule filtering part, the most commonly used matching fields in practice are protocol type, source IP address, source port, destination IP address and destination port, with fields such as TTL and the SYN flag being less frequently used for identifying particular flows.
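The first-match semantics described above can be sketched as follows. The rule layout, field names and wildcard convention (`None` matches anything, a tuple denotes a port range) are illustrative assumptions for the example, not the format of any specific firewall product.

```python
def matches(rule, packet):
    """Return True if the packet satisfies every clause in the rule's matching part."""
    for field, wanted in rule["match"].items():
        value = packet.get(field)
        if wanted is None:                     # wildcard field
            continue
        if isinstance(wanted, tuple):          # (lo, hi) port range
            if not (wanted[0] <= value <= wanted[1]):
                return False
        elif value != wanted:
            return False
    return True

def filter_packet(rules, packet, default="deny"):
    """Apply the action of the first matching rule; fall back to the default action."""
    for rule in rules:
        if matches(rule, packet):
            return rule["action"]
    return default

rules = [
    {"match": {"proto": "tcp", "dst_port": 80}, "action": "accept"},
    {"match": {"proto": "tcp", "dst_port": (1024, 65535)}, "action": "deny"},
]
pkt = {"proto": "tcp", "src_ip": "10.0.0.1", "dst_port": 80}
print(filter_packet(rules, pkt))  # prints "accept"
```

A packet matching no rule falls through to the default denial, mirroring the behavior described above.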

Firewalls can be classified into various types according to their capabilities and the protocol layer at which they act. Firstly, there are packet filter firewalls. Packet filtering focuses mainly on accepting or denying packets. It is not, by itself, a sufficient defense against determined intruders, and is therefore appropriate only as one security measure among others. The main strengths of packet filter firewalls are their speed and flexibility. These systems can be used to secure nearly any type of network communication or protocol, and can be deployed easily into nearly any enterprise network infrastructure. However, they cannot protect the network from elaborate attacks, because they do not examine upper-layer data. For instance, they do not support advanced user authentication schemes and cannot detect network packets in which the OSI layer 3 addressing information has been altered.

Secondly, stateful inspection firewalls add layer-4 awareness to the standard packet filter architecture. These systems share the strengths and weaknesses of packet filter firewalls. The actual stateful inspection technology is relevant only to TCP/IP. Moreover, their use is costly, as the state of each connection must be monitored at all times. Although a stateful inspection firewall is able to add new transport-layer control capabilities within a network, it still handles packet contents only statically: traffic flowing through ports that have been opened is not inspected or controlled any further. To prevent malicious self-propagating worm and virus attacks from entering intranets, dynamic and application-aware filtering of data packets is mandatory.

Accordingly, one of the more recent innovations in stateful firewall technology is the application of deep packet inspection, or DPI. Deep packet inspection can be seen as the integration of Intrusion Detection (IDS) and Intrusion Prevention (IPS) capabilities within the traditional stateful firewall technology. In detail, deep packet inspection is a term used to describe the ability of a firewall to look within the application payload of a packet or traffic stream and make decisions on the significance of that data based on its content. The engine that drives deep packet inspection typically includes a combination of signature-matching technology and heuristic analysis of the data in order to determine the impact of a given communication stream. While the concept of deep packet inspection sounds attractive, it is not simple to achieve in practice. The inspection engine must use a combination of signature-based analysis techniques as well as statistical, or anomaly-based, analysis techniques, both borrowed directly from intrusion detection technologies. In order to identify traffic at the speeds necessary to provide sufficient performance, newer ASICs will have to be incorporated into existing firewall designs. These ASICs, or Network Processor Units (NPUs), provide fast discrimination of content within packets while also allowing for data classification. Firewalls capable of deep packet inspection must maintain not only the state of the underlying network connection but also the state of the application utilizing that communication channel. Moving the inspection of the data into the network firewall gives network administrators greater flexibility in defending their systems from malicious traffic and attacks. Such firewalls do not eliminate the need for Intrusion Detection Systems; they merely collapse the IDS that would otherwise sit directly behind the firewall into the firewall itself.
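As a toy illustration of how the two borrowed IDS techniques combine, the sketch below pairs payload signature matching with a crude statistical (length-anomaly) check. The signatures, the baseline length and the action names are invented for the example and do not reflect any real DPI engine.

```python
import re

# Hypothetical payload signatures; real engines ship thousands of curated ones.
SIGNATURES = [re.compile(rb"cmd\.exe"), re.compile(rb"/etc/passwd")]

def inspect(payload, baseline_len=512):
    """Classify a payload: drop on a signature hit, flag statistical outliers,
    otherwise pass it through."""
    for sig in SIGNATURES:
        if sig.search(payload):
            return "drop"                      # signature-based detection
    if len(payload) > 4 * baseline_len:        # crude anomaly heuristic
        return "flag"
    return "pass"

print(inspect(b"GET /etc/passwd HTTP/1.1"))  # prints "drop"
```

In practice the anomaly component would model far richer statistics (rates, flag distributions, protocol conformance) rather than a single length threshold.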
The need for this technology and this capability in firewalls stems from DoS (Denial of Service) attacks, which can interrupt services by flooding networks or systems with unwanted traffic. Here, a service is denied either because the network or system is overwhelmed or because it goes offline, and the service remains denied until the source of the attack can be identified and traffic from that source is blocked. Deep packet inspection provides some relief from such attacks by moving the detection and response directly to the firewall, which can immediately terminate the attack by cutting the line of communication at a network demarcation point. However, an attacker could spoof attacks from many sources and effectively deny everybody access to the server. In that situation a conventional firewall would be of no help, since it has no way of determining whether a request being sent to a web server is benign or malicious: while it could stop traffic to ports that do not need to be publicly accessible, it is useless against floods on ports that must remain open.

Thirdly, application-proxy gateways/firewalls offer more extensive logging capabilities, are capable of authenticating users directly, and can be made less vulnerable to address spoofing attacks. These systems are, however, not generally well suited for high-bandwidth or real-time applications.

The reference architecture

The reference architecture for implementing the above adaptive firewall solution can be structured into five separate modules operating in a pipeline (see Fig. 1), each implementing a specific task within the proposed security policy enforcement scheme. More specifically, the Analyzer module has the role of extracting information from network traffic and log files by means of data mining techniques. The results become input to the Generator, which integrates the supplied information with data coming from IDS and with manual input associated with security alerts. The third module optimizes the generated rules, whereas the fourth module detects and removes any resulting conflict among rules, preparing the translation and the deployment in a distributed and heterogeneous network environment, which is performed by the final module. The benefits of a modular approach include the possibility of independently implementing and tuning the separate components that realize the required functions. In addition, keeping in mind that some activities in the complete security management lifecycle are much more expensive than others, in particular requiring more computational time, the modular design helps decouple the various activities from one another. Separate thresholds can be set up for the various modules, effectively allowing the system to be fine-tuned to the characteristics, requirements and policies of the operating environment. Another key requirement is the possibility of leveraging multiple sources of information. In particular, we believe that operators must be able to easily specify particular events or behaviors that should be monitored, and the corresponding actions. Such information may come, for instance, from security bulletins or similar sources, leaving open the possibility of integrating the architecture with modules that handle automatic broadcasting of such information.
Ideally, the resulting module chain should be cross-platform and able to run on Unix-like systems. Possible target systems and their corresponding firewall solutions include ipfw and pf on FreeBSD, iptables and ipchains on Linux, and ACLs on Cisco or Cisco-like devices.

The Analyzer module

The Analyzer module is the place where most of the adaptation to the operating environment happens. Its main task is bridging the gap between what is observed in the network, from traffic analysis and network device alerts, and what needs to be written in the security policy rules. Following the approach of [BASHAH], this activity is essentially accomplished through data mining techniques applied to traffic traces and network/system logs. Meta-rules specify parameters such as which log information to collect and search, how often the analysis should be performed, what patterns should be looked for, whether the output of an IDS/IPS system should be considered, and so forth. Note that, in this context, by output of an IDS/IPS system we refer to indications at the warning level, calling attention to some anomalous event that is under way but that may not be marked as downright dangerous. The Analyzer, starting from the above events and matching the traffic observations against specific profiles and known trends, performs Apriori analysis to determine association rules that expose less evident correlations between network activities and measurements. These outputs are fed to the rule Generator module, possibly after human verification, to trigger the production of the appropriate firewall rules across the devices belonging to the whole security system. Such a verification phase is needed to check whether the global behavior of the network architecture and of the single devices will be consistent with the aims of the network administrator.
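A minimal sketch of the kind of association mining the Analyzer could perform is shown below. The event labels and time-window scheme are hypothetical, and for brevity only co-occurring pairs are counted; a full Apriori pass would iterate over itemsets of growing size, pruning by support at each level.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count event pairs that co-occur within the same observation window and
    keep those whose support (fraction of windows) meets the threshold."""
    counts = Counter()
    for events in transactions:
        for pair in combinations(sorted(set(events)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# Each "transaction" is the set of notable events logged in one time window.
windows = [
    {"scan:tcp/6667", "attack:web"},
    {"scan:tcp/6667", "attack:web"},
    {"scan:tcp/6667"},
    {"attack:web"},
]
print(frequent_pairs(windows, 0.5))
```

Here the co-occurrence of a port scan and a web attack in half of the windows survives the support threshold, the sort of "less evident correlation" that would be handed to the Generator, ideally after human verification.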

For example, if it is found that most attacks against a Web server are consistently preceded by anomalous activity on some nonstandard TCP port, then the system may issue a warning about that particular port being used as the control channel of some botnet. These associations should be transformed into tentative rules that must be integrated and made coherent with the global security policy portfolio. The only downside to this type of control is that some associations may involve single IP addresses rather than a particular traffic type (e.g. protocol/port). This case raises the issue of how to clear out stale IP addresses (after the host is no longer a threat) and the possibility of spoofed packets causing a DoS against a specific IP address.

A practical difficulty is that traffic has high variability across different environments and changes wildly over time. To meet this challenge, systems should have some fairly loose thresholds, ensuring tolerance of anomalous behaviors, and should adapt their reference values during their operation.

Clearly, the breadth and depth of this analysis will have an impact on the module footprint in terms of memory and computational resources. In this respect, another important factor that has not received much attention before is the speed of rule updates. In traditional stateless firewalls updates are rarely needed (maybe once a day or even less), so their performance impact is negligible. However, in stateful firewalls (and in home environments) rule updates are required more often. A worst-case home scenario might require a new rule for every new connection, and with some peer-to-peer file sharing applications that might result in dozens (or even hundreds) of rule additions every second.

Finally, the level and granularity of the information that should be reported to the local administrators is also an important parameter that can be used to better adapt the framework to each operating environment.

The rule Generator

An automated generation process is indispensable when no knowledge engineers are available to mine the data manually in order to acquire the necessary deep knowledge. Automatic generation of rules is needed in fields where it is important to assess and validate expert knowledge in a faster and more reliable manner, especially in applications where a lack of reliability is dangerous. Alert-level output from an IDS/IPS, unambiguously indicating that malicious activity is taking place, can be fed directly into this module, since these data are already significant and need no further investigation.

The basic strategy to automatically generate a rule set is to divide the network into two "inside the wall" and "outside the wall" parts. Initially both sides start off with the least possible privileges (deny all). Then all incoming flows targeted at commonly known services are permitted. Flows targeting high port numbers are only allowed as a response to outgoing flows. This quite lax basic configuration can then be refined by the administrator by either individually allowing or denying flows or by specifying wildcards on IP, protocol or port level.
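The baseline "deny all, then open known services" strategy above can be sketched as follows; the rule dictionary format is the same illustrative one used throughout, and the `state: established` attribute standing in for "response to an outgoing flow" is an assumption borrowed from stateful-filter terminology.

```python
def baseline_ruleset(known_services):
    """Build the lax baseline: accept commonly known inbound services, allow
    high ports only for established (response) traffic, deny everything else."""
    rules = []
    for proto, port in known_services:
        rules.append({"match": {"proto": proto, "dst_port": port, "dir": "in"},
                      "action": "accept"})
    # High ports are permitted only as responses to outgoing flows.
    rules.append({"match": {"dir": "in", "dst_port": (1024, 65535),
                            "state": "established"},
                  "action": "accept"})
    rules.append({"match": {}, "action": "deny"})   # default: least privilege
    return rules

rs = baseline_ruleset([("tcp", 80), ("tcp", 25), ("udp", 53)])
print(len(rs))  # prints 5
```

The administrator then refines this skeleton by adding more specific accept/deny rules or wildcards above the default-deny tail.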

The difficulty of writing and modifying a rule set increases with the number of rules. The process of inserting a new rule into the global security policy is performed in three steps. The first step is to identify the firewalls in which this rule should be deployed. This is needed in order to apply the filtering rule only on the relevant sub-domains without creating any inter-firewall anomalies. The second step is determining the security attributes to be checked to implement the filtering rule. The involved attributes may consist of protocol (TCP or UDP), direction (incoming or outgoing), source IP, destination IP, source port, destination port, and action (accept or deny). The third step is to determine the proper order of the new rule in each one of these firewalls, so that no intra-firewall anomaly is created. In this step, the order of the new rule in the local policy is determined based on its relation with the other existing rules. In general, a new rule should be inserted before any rule that is a superset match, and after any rule that is a subset match, of this rule.
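The ordering constraint of the last step can be sketched as below. The subset test is deliberately simplified (exact-value fields with `None` as a wildcard, no ranges), an assumption made only to keep the example short.

```python
def is_subset(a, b):
    """True if every flow matched by rule a is also matched by rule b."""
    return all(bv is None or a["match"].get(f) == bv
               for f, bv in b["match"].items())

def insertion_index(rules, new):
    """A new rule must precede every superset match and follow every subset
    match; return the earliest position satisfying both constraints."""
    first_superset, last_subset = len(rules), -1
    for i, r in enumerate(rules):
        if is_subset(new, r):
            first_superset = min(first_superset, i)
        if is_subset(r, new):
            last_subset = i
    if last_subset + 1 > first_superset:
        raise ValueError("no anomaly-free position for this rule")
    return last_subset + 1

r1 = {"match": {"proto": "tcp", "dst_port": 80, "dst_ip": "10.0.0.5"}, "action": "accept"}
r2 = {"match": {"proto": "tcp", "dst_port": 80, "dst_ip": None}, "action": "deny"}
new = {"match": {"proto": "tcp", "dst_port": 80, "dst_ip": "10.0.0.6"}, "action": "accept"}
print(insertion_index([r1, r2], new))  # prints 0: disjoint from r1, before its superset r2
```

Note that a rule that exactly duplicates an existing one triggers both constraints at the same index and is rejected, mirroring the redundancy check performed by the policy editor.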

Each rule in the firewall policy can be modeled within a single rooted tree, called the policy tree [Al-Shaer]. This tree model provides a simple and comprehensible representation of the filtering rules and at the same time allows for easy discovery of relations and anomalies among the rules. Each node in a policy tree represents a field of the filtering rule, and each branch at this node represents a possible value of the associated field. The root node of the policy tree represents the protocol field, the leaf nodes represent the action field, and the intermediate nodes represent the other 5-tuple filter fields in order. Every tree path starting at the root and ending at a leaf represents a rule in the policy, and vice versa. Rules that have the same field value at a specific node share the branch representing that value, and every rule has an action leaf in the tree. The basic idea for building the policy tree is to insert each filtering rule in the correct tree path. When a rule field is inserted at any tree node, the rule branch is determined by matching the field value against the existing branches. If a branch exactly matches the field value, the rule is inserted in that branch; otherwise a new branch is created. The rule also propagates into subset or superset branches, to preserve the relations between the policy rules.

The policy tree is very useful to keep track of the correct ordering of each new inserted rule. We can start by searching for the correct rule position in the policy tree by comparing the fields of the new rule with the corresponding tree branch values. If the field value is a subset of the branch, then the order of the new rule so far is smaller than the minimum order of all the rules in this branch. If the field value is a superset of the branch, the order of the new rule so far is greater than the maximum order of all the rules in this branch. On the other hand, if the rule is disjoint, then it can be given any order in the policy. Similarly, the tree browsing continues evaluating the next fields in the rule recursively as long as the field value is an exact match or a subset match of the branch. When the action field is reached, the rule is inserted and assigned the order determined in the browsing phase. A new branch is created for the new rule any time a disjoint or superset match is found. If the new rule is redundant because it is an exact match or a subset match and it has the same action of an existing rule, the policy editor rejects it and prompts the user with an appropriate message.
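The tree construction can be sketched with nested dictionaries, one level per filter field. This is a bare skeleton under stated assumptions: `"*"` stands in for a wildcard branch, and the subset/superset propagation and ordering logic described above are omitted for brevity.

```python
# One tree level per filter field, root = protocol, leaves = (order, action).
FIELDS = ["proto", "src_ip", "src_port", "dst_ip", "dst_port"]

def insert(tree, rule, order):
    """Walk (and create) the branch for each field value, then record the
    rule's action leaf with its assigned order."""
    node = tree
    for f in FIELDS:
        key = rule["match"].get(f, "*")        # "*" models a wildcard branch
        node = node.setdefault(key, {})        # reuse exact-match branches
    node.setdefault("_leaves", []).append((order, rule["action"]))

tree = {}
insert(tree, {"match": {"proto": "tcp", "dst_port": 80}, "action": "accept"}, 1)
insert(tree, {"match": {"proto": "tcp", "dst_port": 22}, "action": "accept"}, 2)
# Both rules share the single "tcp" branch at the root, as described above.
print(list(tree.keys()))  # prints ['tcp']
```

Because branches are shared, comparing a new rule against the tree visits each distinct field value once rather than once per rule, which is what makes the pairwise ordering and anomaly checks cheap.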

Thus, as the last step in adding a new rule, the corresponding policy tree instances have to be passed to the optimizer module.

The optimizer module

In this phase, the core optimization operations on the rule lists of the single devices are performed. The aim of these operations is twofold: to reduce the number of rules in every rule list without changing the external behavior of the device, and to optimize filtering performance. Optimization can happen in several places. The first possibility is when rules are added to the firewall. This is a somewhat rare event (compared to filtering packets), so it can use more resources; however, it should not interrupt normal operations for too long. The second place for optimization is rule checking: every time a packet arrives, some algorithm must be used to check the rules. Optimization algorithms should exhibit fast runtimes so that firewalls can keep up with current traffic demands. The downside is that firewalls may be external devices with very little memory, which puts limits on these algorithms, especially when the involved devices are completely hardware-based firewalls taking advantage of specialized processors. Basically, what goes under the broad name of rule optimization methods in the literature can be divided into three groups. The first group contains algorithms that try to optimize away unnecessary rules and perhaps reorder them into a more efficient order; these methods are applied only once, when rules are changed. The second group contains algorithms and methods for the actual packet matching, and the third group contains algorithms for learning what kind of traffic is on the network and reordering the rules accordingly (for second-group algorithms that use ordered rules). In this work we consider all these aspects, but in this section, by optimization we specifically mean only the methods in the first group.
Since in our architecture most of the activities related to the third group are carried out in the first module, we decided to treat the firewall operation and rule lookup methods as static parameters, also for the sake of focusing on vendor independence and immediate applicability to regular, commercially available solutions.

Reducing the number of rules gives performance gains in every case. Rule rearrangement helps only if the matching algorithm performs multiple rule comparisons. Most of these algorithms tend to find the smallest possible group of rules, since full rule comparisons are somewhat expensive.

While volume and frequency analysis of traffic would yield valuable information that could assist in the generation of efficient matching rules, such an analysis would also have the drawback of being massively time- and resource-hungry. All of the traffic must be scrutinized, since at measurement time there is no information about which traffic is authorized and which is not. We argue, instead, that placing such analysis at the optimization stage, thus acting on active firewall rules only, reduces the data size and therefore gains efficiency.

We recommend that fully dynamic optimization not be performed, since the computational effort would be impractical, and adaptively reacting too quickly to extemporaneous traffic conditions may not be a good idea. In fact, real-world traffic changes often and unpredictably, so the benefits of dynamic optimization would not be sufficient to compensate for the computation required. In addition, such a scheme would be exposed to a DoS attack consisting of a sequence of apparently regular traffic flows crafted with the intent of altering the parameters, triggering extremely frequent updates.

We propose, instead, a "dampened" dynamic approach, where rule firing frequency information is available to the optimization module, and separate thresholds govern the triggering of the rule Generator and Optimizer modules. In particular, when the Generator module determines the need for a new rule, it creates and inserts it at the lowest-ordered feasible position in the rule set. As the new rule fires, counters will reflect its application frequency, and hence its importance, and the Optimizer module may decide, when an independent threshold is exceeded, to reorganize the rule space to reflect the changes. The most frequently fired rules will, so to speak, "bubble up" in the rule space.
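The dampened reordering step can be sketched as follows. The threshold value, counter names and rule `id` field are illustrative; note also that sorting purely by hit count preserves semantics only when the rules involved are pairwise disjoint, an assumption a real optimizer would have to verify against the policy tree before committing the new order.

```python
def reorder_by_frequency(rules, counters, threshold):
    """Reorganize only once accumulated hits exceed the dampening threshold,
    then stably sort so the hottest rules bubble up to the front."""
    if sum(counters.values()) < threshold:
        return rules                            # dampening: too early to reorder
    return sorted(rules, key=lambda r: -counters.get(r["id"], 0))

rules = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
hits = {"a": 2, "b": 50, "c": 7}
print([r["id"] for r in reorder_by_frequency(rules, hits, threshold=25)])
# prints ['b', 'c', 'a']
```

Below the threshold the rule set is returned untouched, which is precisely what shields the optimizer from the update-flooding DoS scenario discussed above.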

At the same time, the Optimizer will downgrade the less frequently fired clauses. Eventually, rules that have not been used for too long (according to another threshold determined by the meta-policy), and hence may be considered useless, can be removed, drastically reducing the rule space dimension and, hence, the memory footprint.

In distributed firewall environments, removing a rule from a specific firewall may create an inter-firewall anomaly. For example, if a "deny" rule is removed from the upstream firewall, spurious traffic will flow downstream, whereas if an "accept" rule is removed from the upstream firewall only, the relevant traffic will be blocked there and all the related (exact, subset or superset) downstream rules will be shadowed. When the user decides to remove a rule from a certain firewall, the first step is to identify all the source and destination sub-domains that will be impacted by removing this rule. We use the same technique described for the rule insertion process to determine the network path between every source-destination domain pair relevant to this rule. In the second step, we remove the rule from the firewall policy as follows. If the rule is an "accept" rule, we remove it from the firewalls on all paths from source to destination; otherwise, shadowing and/or spurious-traffic anomalies would be created by removing it only from the upstream and/or downstream firewalls, respectively. If the rule is a "deny" rule, we just remove it from the local firewall, because its removal does not have any effect on the other firewalls in the network.
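The removal policy just described reduces to a small decision, sketched below; the path representation (an ordered list of firewall names, local firewall first) is an assumption made for the example.

```python
def firewalls_to_update(rule, path):
    """Decide where a rule must be removed from: an 'accept' rule must go
    from every firewall on the source-destination path to avoid shadowing,
    while a 'deny' rule can safely be removed from the local firewall alone."""
    if rule["action"] == "accept":
        return list(path)      # partial removal would shadow downstream rules
    return [path[0]]           # local removal; no effect on other firewalls

print(firewalls_to_update({"action": "accept"}, ["fw1", "fw2", "fw3"]))
# prints ['fw1', 'fw2', 'fw3']
```

A real implementation would run this per source-destination domain pair, using the paths computed in the first step of the removal procedure.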

The conflict remover module

Firewall policies can be periodically updated (by inserting, modifying or removing rules) to dynamically accommodate new security requirements and network topology changes. Consequently, rules should be periodically checked against the characteristics of network traffic, to verify that they are still useful, well organized, and consistent with the current traffic shape and volume parameters. In fact, a new filtering rule may not apply to every network sub-domain; therefore, the rule should be placed in the correct firewalls to avoid blocking or permitting the wrong traffic. Errors or inconsistencies in the configuration of security components may lead to weak access control policies, potentially easy for unauthorized parties to evade. Moreover, as the rules in a local firewall policy are ordered, a new rule must be inserted in a particular position to avoid creating intra-firewall anomalies, and the same applies if a rule is modified or removed. Within a single firewall, intra-firewall anomalies [AHMED] occur when the same flow matches more than one local filtering rule. This often results in conflicts between policies, which may in turn provoke security flaws. Such conflicts can be hard to find when performing only visual or manual inspection of numerous rules that may have been written by different people at various times. For example, if one finds out that some rules have not been used recently, that may lead one to consider rule reordering, re-aggregation, or even removal. A common intra-firewall anomaly is known as shadowing: it occurs when a rule never applies because its matching conditions are always covered by rules occurring before it, and thus taken into consideration earlier. Alternatively, if a rule is not shadowed by other rules but has no effect, in the sense that removing it does not change the policy, it is said to be redundant. Furthermore, it is very common to have multiple firewalls installed in the same enterprise network.
This has many network administration advantages. It gives local control to each domain according to the domain's security requirements and applications. For example, some domains might need to block RTSP or multicast traffic, while other domains in the same network might need to receive it. Multi-firewall installation also provides inter-domain security and protection from internally generated traffic. Moreover, because of the decentralized nature of the security policy in distributed firewalls, the potential for anomalies between firewalls increases significantly. Even if no individual firewall policy in the network contains rule anomalies, there can still be anomalies between the policies of different firewalls. For example, an upstream firewall might block traffic that is permitted by a downstream firewall, or vice versa. In the first case, the anomaly is called inter-firewall "shadowing", which is similar in principle to rule shadowing within a single firewall. In the second case, the resulting anomaly is called "spurious traffic", because it allows unwanted traffic to cross portions of the network and increases the network's vulnerability to denial-of-service attacks. In a distributed environment comprising multiple firewalls, different firewalls in the same network path may perform different actions on the same flow, thus giving rise to inter-firewall anomalies. In this case, not only must the relations between rules in a single firewall be analyzed to determine the correct rule ordering, but the relations between rules in different firewalls must also be taken into account to determine the proper placement of a particular rule on a particular firewall. In addition, security devices may be interconnected over an insecure network, and this has to be considered when devising information distribution techniques.
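The two inter-firewall anomalies just described can be illustrated with a minimal sketch. The rule format below (source network, destination network, port, action) and the first-match semantics are simplifying assumptions for illustration, not the representation used by any specific firewall product.

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule format: (src_net, dst_net, port, action).
def first_match(policy, src, dst, port):
    """Return the action of the first rule matching the flow (default deny)."""
    for src_net, dst_net, rule_port, action in policy:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (rule_port is None or rule_port == port)):
            return action
    return "deny"

def classify_pair(upstream, downstream, flow):
    """Classify the inter-firewall relation for one sample flow."""
    up = first_match(upstream, *flow)
    down = first_match(downstream, *flow)
    if up == "deny" and down == "permit":
        return "inter-firewall shadowing"   # upstream blocks wanted traffic
    if up == "permit" and down == "deny":
        return "spurious traffic"           # unwanted traffic crosses the path
    return "consistent"

upstream = [("10.0.0.0/8", "0.0.0.0/0", 554, "deny")]      # blocks RTSP
downstream = [("10.0.0.0/8", "0.0.0.0/0", 554, "permit")]  # permits RTSP
print(classify_pair(upstream, downstream, ("10.1.2.3", "192.0.2.1", 554)))
# -> inter-firewall shadowing
```

Swapping the two policies yields "spurious traffic": the upstream device lets the flow cross the network only for it to be dropped downstream.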

The basic idea that can be adopted for discovering anomalies (see [Al-shaer]) is to determine whether two rules coincide in their policy tree paths. If the tree path of one rule coincides with that of another, there is a potential (matching or redundancy) anomaly that can be classified according to the previous definitions. If the rule paths do not coincide, the rules are disjoint and present no anomalies. When a new rule is introduced, or an existing rule is modified, even if only by changing its order within the policy, the corresponding policy tree should be matched pairwise against all the other existing instances to discover any anomalous situation arising from the actions of the previous modules.
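A simplified pairwise check in the spirit of [Al-shaer] can be sketched as follows. Here a rule's "path" is reduced to its match fields, and one rule covers another when its match space is a superset; the dictionary-based rule format is an assumption made for illustration only.

```python
from ipaddress import ip_network

def covers(r1, r2):
    """True if every packet matched by r2 is also matched by r1."""
    return (ip_network(r2["src"]).subnet_of(ip_network(r1["src"]))
            and ip_network(r2["dst"]).subnet_of(ip_network(r1["dst"]))
            and (r1["port"] is None or r1["port"] == r2["port"]))

def find_anomalies(policy):
    """Report shadowed and redundant rules via order-sensitive pairwise checks."""
    anomalies = []
    for j, later in enumerate(policy):
        for earlier in policy[:j]:
            if covers(earlier, later):
                kind = ("redundant" if earlier["action"] == later["action"]
                        else "shadowed")
                anomalies.append((j, kind))
                break
    return anomalies

policy = [
    {"src": "10.0.0.0/8",  "dst": "0.0.0.0/0", "port": 80, "action": "deny"},
    {"src": "10.1.0.0/16", "dst": "0.0.0.0/0", "port": 80, "action": "permit"},
]
print(find_anomalies(policy))  # -> [(1, 'shadowed')]
```

The second rule is shadowed: the first rule matches a superset of its traffic and denies it, so the permit can never take effect. With identical actions the same coverage relation would instead flag redundancy.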

The conflict remover output should consist of the final rules expressed in a firewall-independent abstract modeling language with the expressive power of existing firewall-specific languages, but with significantly less complexity and specificity. The model represented by this abstract language will then be automatically translated into any of the existing low-level firewall languages by the Deployer module. Hence the final output of the Conflict remover will be expressed in an abstract language (AFPL [POZO] or FLIP [ZHANG]) so that the next module down the pipeline can perform its job on the result. These languages can consistently express stateful and stateless rules, positive and negative rules, overlaps and exceptions, while hiding the inherent complexities, and can be easily compiled to several market-leading firewall languages.

The Deployer module

Once access control rules have been specified, generated, optimized and checked against potential conflicts, they must be deployed to the actual devices. To do this, rules must be translated from the abstract AFPL or FLIP syntax into the appropriate low-level firewall languages. Firewall platforms differ greatly from one vendor to another, and there are noticeable differences even among the available Open Source platforms. These range from differences in the number, type, and syntax of the selectors each platform's filtering algorithm can handle, to large differences in rule-processing algorithms that can affect the design of the ACL. Fortunately, however, the vast majority of filtering actions can be expressed in any of the filtering languages and platforms, the only differences being the number of rules needed and their syntax. Accordingly, our pipelined architecture introduces a final stage, the Deployer module, whose task is to translate the already generated and optimized rule sets into the specific languages of the involved firewalls, adapting the rules to the specific conventions, limitations and characteristics of the target devices. Appropriate interfaces (CLI, SNMP or specific APIs) can be used to communicate securely with the end-side network devices. Clearly, such communication needs to take place in both directions, i.e. for rule configuration and updating as well as for the collection of statistical data.
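As one possible backend among many, such a translation step can be sketched for iptables. The abstract rule format below is a hypothetical stand-in for AFPL/FLIP output, not their actual syntax; only the generated iptables command line follows the real tool's conventions.

```python
# Hypothetical abstract rule format standing in for AFPL/FLIP output.
ACTION_MAP = {"permit": "ACCEPT", "deny": "DROP"}

def to_iptables(rule, chain="FORWARD"):
    """Render one abstract rule as an iptables command line."""
    parts = ["iptables", "-A", chain,
             "-s", rule["src"], "-d", rule["dst"]]
    if rule.get("port") is not None:
        # Assume TCP for port-based rules in this sketch.
        parts += ["-p", "tcp", "--dport", str(rule["port"])]
    parts += ["-j", ACTION_MAP[rule["action"]]]
    return " ".join(parts)

rule = {"src": "10.0.0.0/8", "dst": "192.0.2.0/24",
        "port": 80, "action": "deny"}
print(to_iptables(rule))
# -> iptables -A FORWARD -s 10.0.0.0/8 -d 192.0.2.0/24 -p tcp --dport 80 -j DROP
```

A real Deployer would maintain one such translator per target platform (iptables, pf, vendor CLIs), all consuming the same abstract rule set, so that conflict analysis and optimization happen once, upstream of any vendor-specific syntax.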


Firewalls have long been the front line of defense for secure networks against attacks and unauthorized or malicious activities, filtering out unwanted network traffic coming from or going to the secured network. Although the deployment of firewalls is still the most important step in securing networks, the complexity of designing and managing firewall policies within next-generation optical-speed and highly heterogeneous networks might greatly limit the effectiveness of firewall security. Integrating techniques from different already available security systems and technologies appears to offer interesting possibilities for achieving a more dynamic, adaptive and flexible firewall concept able to cope with the above problems.