Today, it is amazing how the internet generation has turned adults and young people alike into budding computer experts. Although most of us may not know all the mechanics of computer security, we do understand that certain precautions need to be taken to protect ourselves online. One of the most important aspects of computer safety, especially for those who use broadband internet connections, is a firewall. We have all heard of firewalls and suspect we probably need one, but we might not understand exactly what one is or how it works (Anon and Davis 1998).
According to Anon and Davis (1998), a firewall is a system designed to prevent unauthorized access to or from a private network; it basically limits access to a network from another network. A firewall can be implemented in hardware, software, or a combination of both, and either denies or allows outgoing traffic (known as egress filtering) or incoming traffic (known as ingress filtering).
Since their development, various methods have been used to implement firewalls. These methods filter network traffic at one or more of the seven layers of the OSI (Open Systems Interconnection) network model, which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical layers [Online]. In addition, researchers have developed some newer methods, such as protocol normalization and distributed firewalls, which have not yet been widely adopted.
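Filtering at the network and transport layers can be pictured as an ordered rule table evaluated first-match-wins. The following is a minimal, hypothetical sketch; the rules, directions, and port numbers are purely illustrative, not a real firewall configuration:

```python
# A minimal, hypothetical illustration of a filtering rule set:
# an ordered list of rules, evaluated first-match-wins, with a
# default-deny fallback at the end.

RULES = [
    # (direction, protocol, port, action)
    ("ingress", "tcp", 80, "allow"),   # web traffic in
    ("egress",  "tcp", 25, "allow"),   # outgoing mail
    ("ingress", "tcp", 23, "deny"),    # block incoming telnet
]

def evaluate(direction, protocol, port):
    """Return the action of the first matching rule, or 'deny' by default."""
    for rule_dir, rule_proto, rule_port, action in RULES:
        if (rule_dir, rule_proto, rule_port) == (direction, protocol, port):
            return action
    return "deny"  # default-deny policy: anything unmatched is dropped
```

Note the default-deny stance at the end: traffic not explicitly permitted is rejected, which is the usual conservative choice for perimeter filtering.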
Firewalls involve more than the technology to implement them. Specifying a set of filtering rules, known as a policy, is typically complicated and error-prone. High-level languages have been developed to simplify the task of correctly defining a firewall's policy. Once a policy has been specified, the firewall needs to be tested to determine if it actually implements the policy correctly. The predecessors to firewalls for network security were the routers used in the late 1980s to separate networks from one another.
Problems caused by a network that was not properly set up on one side of the router were largely isolated from the network on the other side. In a similar vein, so-called "chatty" protocols on one network (which used broadcasts for much of their configuration) would not affect the other network's bandwidth [Avolio 1999; Schneier 2000].
From these historical examples we can see how the term "firewall" came to describe a device or collection of devices that separates its occupants from potentially dangerous external environments (e.g., the Internet). A firewall is designed to prevent or slow the spread of dangerous events.
In real life a firewall is a construction that stops a fire from spreading through a building. In the world of networking, a firewall is a device that stops (or at least does its best to stop, like a real firewall) unwanted network traffic from passing from one network to another; it is a controlled gateway between one network and another [Anon 1998]. In a typical case the unwanted network traffic consists of break-in attempts, malicious software, and so on.
Currently it is becoming crucial for every company to have an internet connection. The connection can nevertheless be very dangerous: as I have stated above, the internet is full of anonymous parties whose only aim is to get into your network. The simplest solution is to filter all traffic between the company's internal network (be it a LAN, MAN, WAN, or whatever) and the internet. The firewall can monitor and restrict the traffic in various ways, which can help to stop malicious crackers. The firewall can, for instance, simply stop all traffic except that coming from some specific network address.
In an organizational setup, firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. A firewall should be the first line of defense in protecting the availability, integrity, and confidentiality of data in the computing environment. While a company may use packet-filtering routers for perimeter defense and host-based firewalls as an additional line of defense, in the home environment the personal firewall plays a key role by defending the network and individual host perimeters. If the whole network is connected openly to the Internet, anyone on the Internet can try to break into the computer systems in the network. If the passwords are weak, the password files are freely available (even in encrypted form), or some other security risk exists, it is pretty easy to find a working user-id/password combination and do anything within the system. The intrusion could be carried out with telnet, ftp, or a similar protocol (John 1995).
Password security is pretty low in open networks; for instance, telnet and ftp transfer passwords in plain text, and it is very easy to spy on or monitor the network traffic and obtain the password.
In TCP/IP the origin, route, and destination of a data packet are marked on the packet. Firewalls usually provide a security feature to restrict traffic to some specific network addresses. Nonetheless, it is very easy for a malevolent party to masquerade as a trusted network address and set the routing options so that the data is sent back to it: the attacker changes its host's IP address to match the trusted client, and then creates a source route that uses the attacker's network address as the last hop. In this way the attacker can send a request to the server under attack and receive the answer.
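A common defence against this kind of address spoofing is ingress filtering: the firewall drops any packet arriving on the external interface that claims an internal source address, since a legitimate internal address should never appear on the outside. A minimal sketch, assuming a hypothetical internal address range:

```python
import ipaddress

# Hypothetical internal address space; a real deployment would use
# the organization's actual network ranges.
INTERNAL_NET = ipaddress.ip_network("192.168.0.0/16")

def accept_external_packet(source_ip):
    """Reject packets arriving from outside that claim an internal
    source address -- a classic sign of IP spoofing."""
    return ipaddress.ip_address(source_ip) not in INTERNAL_NET
```

A genuine external host passes the check, while a spoofed packet claiming an inside address is dropped at the perimeter.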
SMTP (Simple Mail Transfer Protocol) is very vulnerable to spoofing, because with it, it is very easy to change the sender's identity. It can be used by a third party to send harmful information.
Domain Name Services are rather vulnerable to break-in attempts. From a DNS service it is usually pretty easy to find out the whole network topology and the trusted IP address space. With this data it is then easy for malicious hackers to plan and implement a large-scale intrusion attempt. [Online]
Sendmail is Unix-based emailing software that can pose a considerable risk to the security of a network. The software consists of thousands of lines of code, and flaws in it have already enabled some pretty "big" intrusions. [Online]
If the FTP service is not carefully implemented, it can be used with an anonymous id to reach all kinds of information on the computer. In 1996 hackers got into a Freenet server, and the user information of 65,000 users was nearly removed. [Online]
With the SATAN software it is possible to investigate the weaknesses of several hosts. SATAN is freely available and can be very harmful when used by a beginner. [Online]
Telnet, HTTP, SMTP, and many other protocols transfer data over the Internet in unencrypted form. It is rather easy to capture this data with suitable monitoring tools, and from it an attacker can read the contents or extract passwords, credit card numbers, and similar information. There are secure protocols, such as HTTPS, to be used on untrusted networks.
One risk posed by hackers who have broken into a system is that they find their way to confidential information. The information can then be sold to direct competitors or spread throughout the internet. Just think what would happen if a competitor of a software development company obtained detailed information about its rival's next-generation software project.
A firewall is used to protect the system by keeping hackers from logging into machines on the network. It provides a single access point at which security and auditing can be imposed, acts as an effective phone tap and tracing tool, and provides important logging and auditing functions, giving information about the nature of the traffic and the number of attempts made to break in.
There are four types of firewalls: filtering gateways, circuit gateways, application gateways, and hybrid or complex firewalls (Anon 1998).
A packet-filtering firewall filters the TCP/IP packets sent through the firewall; for instance, only packets from some specific network address get through. As we found out earlier, IP addresses are rather easy to spoof. A second step in packet filtering is to allow or disallow specific protocols.
The most risky protocols should be blocked or restricted to some specific computers only.
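The idea of restricting risky protocols to specific computers can be sketched as a per-port allow list. All the addresses and port numbers below are illustrative assumptions, not a recommended configuration:

```python
# Sketch: risky services (identified by destination port) are blocked
# except to designated hosts. All names and addresses are illustrative.

RISKY_PORTS = {23: "telnet", 21: "ftp"}           # services considered risky
ALLOWED_HOSTS = {23: {"10.0.0.5"}, 21: set()}     # permitted hosts per port

def permit(dest_ip, dest_port):
    """Allow a packet unless it targets a risky port on a non-designated host."""
    if dest_port not in RISKY_PORTS:
        return True                                # ordinary traffic passes
    return dest_ip in ALLOWED_HOSTS.get(dest_port, set())
```

Here telnet is reachable only on one designated host, while ftp is blocked entirely; everything on non-risky ports passes through untouched.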
A circuit-level firewall works so that all requests from the requesting computer are directed to a single computer acting as the firewall. The requests are then forwarded out of the network and appear to come directly from the firewall. This means that the topology and IP space of the internal network are masqueraded, so internal IP addresses are not visible outside the network. It also enables the whole network to access the Internet with a single IP number allocated from the Internet IP space.
The circuit-level firewall implementation may, however, require modifications to the computers in the network, which can turn out to be very hard or even impossible to make (Anon 1998).
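The masquerading described above amounts to keeping a translation table that maps internal (address, port) pairs onto ports of the firewall's single public address. A toy sketch, with a hypothetical public address:

```python
import itertools

# Sketch of the address masquerading a circuit-level firewall performs:
# internal (ip, port) pairs are mapped to ports on the firewall's single
# public address, so only that address is ever visible outside.

PUBLIC_IP = "203.0.113.1"            # hypothetical public address
_next_port = itertools.count(40000)  # hypothetical port pool
_nat_table = {}                      # (internal_ip, internal_port) -> public port

def translate_outgoing(internal_ip, internal_port):
    """Return the (public_ip, public_port) the connection appears to use."""
    key = (internal_ip, internal_port)
    if key not in _nat_table:
        _nat_table[key] = next(_next_port)  # allocate a fresh public port
    return PUBLIC_IP, _nat_table[key]
```

Each internal endpoint gets a stable public port, and two different internal hosts never share one, which is what lets replies be routed back to the right machine.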
Application-level firewalls are often called proxies. Proxies act much like circuit-level firewalls but provide a higher level of filtering and security capabilities. A proxy may have secondary authentication, which can be very secure, as when using SecurID, a small device that is synchronized with the proxy and continuously generates new passwords [Online].
A proxy may even provide some kind of virus protection. The limitation of proxies is that data throughput may not be high enough because of the more advanced processing. One positive side of proxies is that in many cases they provide extensive logging of the traffic.
One very important feature that almost all commercially available firewalls provide is the ability to log and monitor network traffic. Monitoring is very important, for instance, for noticing attempts to break into the system. There are different levels of monitoring systems available.
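As a small illustration of what such monitoring might do with a firewall log, the sketch below counts denied connection attempts per source address and flags sources that exceed a threshold. The log format and threshold are assumptions for the example:

```python
from collections import Counter

# Sketch of simple log monitoring: count denied connection attempts per
# source address and flag sources that exceed a threshold. The log
# format (source_ip, action) and the threshold are illustrative.

THRESHOLD = 3

def suspicious_sources(log_entries):
    """log_entries: iterable of (source_ip, action) pairs.
    Return the set of sources with more than THRESHOLD denied attempts."""
    denials = Counter(src for src, action in log_entries if action == "deny")
    return {src for src, count in denials.items() if count > THRESHOLD}
```

Real monitoring systems are of course far richer, correlating ports, times, and patterns, but the principle is the same: the log turns the firewall from a silent gate into an early-warning system.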
No firewall provides perfect security. Several problems exist which are not addressed by the current generation of firewalls.
A firewall is probably best thought of as a permeable membrane. That is, it is only useful if it allows some traffic to pass through it (if not, then the network could be physically isolated from the outside world and the firewall not needed).
Unfortunately, any traffic passing through the firewall is a potential avenue of attack. For example, most firewalls have some provision for email, but email is a common method of attack; a few of the many email attacks are described in [Cohen et al. 2001; Computer Emergency Response Team (CERT) 1999; 2000a; 2001b; 2001c; 2001d]. The serious problem of email-based attacks has resulted in demand for some part of the firewall to check email for hostile code. Products such as commercial email virus scanners have responded to this problem. However, they are only as good as the signatures for which they scan; novel attacks pass through without a problem.
A possibility of false positives exists with this scheme, but Martin et al. believe that this is less likely than the <applet> tag appearing in non-HTML files. Finally, one can block all files whose names end in .class.
This solution is weak because Java classes can come in files with other extensions; for example, packing class files in a .zip file is common.
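One plausible reading of the checks discussed here is: block files whose names end in .class, block content that begins with the Java class-file signature (0xCAFEBABE, the real magic number of compiled Java classes), and filter content containing the <applet> tag. A toy sketch combining them; everything beyond the magic number is a simplified assumption:

```python
# Toy combination of three applet-blocking heuristics. The Java
# class-file magic number (0xCAFEBABE) is real; the rest of this
# sketch is a simplified illustration, not the authors' proxy.

JAVA_MAGIC = b"\xca\xfe\xba\xbe"

def looks_like_applet(filename: str, content: bytes) -> bool:
    """Return True if any of the three heuristics fires."""
    if filename.endswith(".class"):        # check 1: file extension
        return True
    if content.startswith(JAVA_MAGIC):     # check 2: class-file signature
        return True
    if b"<applet" in content.lower():      # check 3: applet tag in content
        return True
    return False
```

The .zip weakness mentioned above is visible in this sketch too: a class file inside an archive matches none of the three checks unless the proxy also unpacks archives.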
Their suggestion is to implement all three of these, and they wrote a proxy that does everything except look inside .zip files.

Servers on the DMZ (demilitarized zone) sit on a physical or logical subnetwork that contains and exposes an organization's external services to a larger untrusted network, usually the Internet.
Because the networks inside of a firewall are often not secure, servers which must be accessible from the Internet (e.g., web and mail servers) are often placed on a screened network, called the DMZ (for demilitarized zone; for a picture of one way a DMZ may be constructed, see Figure 1, part C). Machines on the DMZ are not allowed to make connections to machines on the inside of the firewall, but machines on the inside are allowed to make connections to the DMZ machines.
The reason for this architecture is that if a server on the DMZ is compromised, the attacker cannot directly attack the other machines inside. Because a server must be accessible to be of use, current firewalls other than signature-based ones can do little against attacks through the services offered. Examples of attacks on servers include worms (Danyliw et al. 2001).
The need for firewalls has led to their ubiquity. Nearly every organization connected to the Internet has installed some sort of firewall. The result of this is that most organizations have some level of protection against threats from the outside.
Attackers still probe for vulnerabilities that are likely to apply only to machines inside of the firewall. They also target servers, especially web servers. However, these attackers are now also targeting home users (especially those with full-time Internet connections) who are less likely to be well protected. Because machines inside a firewall are often vulnerable both to attackers who breach the firewall and to hostile insiders, we will likely see increased use of the distributed firewall architecture. The beginnings of a simple form of distributed firewalls are already here, with personal firewalls being installed on individual machines.
However, many organizations will require that these individual firewalls respond to configuration directives from a central policy server. This architecture will simply serve as the next level in a sort of arms race, as the central server and the protocol(s) it uses become special targets for attackers.
Firewalls and the restrictions they commonly impose have affected how application-level protocols have evolved. Because traffic initiated by an internal machine is often not as tightly controlled, newer protocols typically begin with the client contacting the server, not the reverse as active FTP did. The restrictions imposed by firewalls have also affected the attacks that are developed. The rise of email-based attacks is one example of this change.
An even more interesting development is the expansion of HTTP and port 80 for new services. File sharing and remote procedure calls can now be accomplished using HTTP. This overloading of HTTP results in new security concerns, and as a result, more organizations are beginning to use a (possibly transparent) web proxy so they can control the remote services used by the protected machines.
The future is likely to see more of this co-evolution between protocol developers and firewall designers until the protocol designers consider security when the protocol is first developed. Even then, firewalls will still be needed to cope with bugs in the implementations of these protocols.