For network developers, security plays a key role at every stage of a network's life, from design to development, deployment, and daily use. More specifically, security attributes must be part of the network from the beginning, reducing the need for expensive retrofits later.
This page answers the question of how developers can design systems for security from the very beginning, by providing hints and examples of the best-known security methods. Besides awareness of the dangers, there are some basic considerations to take into account:
A user who has access to a local terminal can try to breach the system even without using an intermediate network. Such a user, called an intruder, can do this by impersonating a legitimate user or by gaining illegal access to the system's databases. Intruders are generally sorted into three categories:
-Masqueraders: A person who is not authorized to use the computer, but who breaks a system's access controls to exploit the account of a legitimate user.
-Misfeasors: A legitimate user who accesses data, programs, or resources without authorization, or who misuses permissions and security clearances he does have.
-Clandestine users: A person who seizes supervisory control of a system and uses it to evade auditing and access controls.
As intruders aim to access a system's data and exercise all the rights a legitimate user has, the first thing they need is a user's password. Typically, systems maintain a file with the usernames and passwords. This file has to be stored securely so as to prevent intruders from gaining access to it. There are two ways to lock this file down:
1) one-way cryptography, where passwords are encrypted before being written to the file, and the operation is not invertible; and 2) file access control, where access to the file is granted to only a few accounts, or just one.
Password cracking can be accomplished with a cracking dictionary, email spoofing, packet sniffing, or, simplest of all, by guessing.
Besides password cracking, many other techniques are frequently used to invade a system, such as social engineering (e.g. Trojan programs) or wiretapping. Other methods do not require obtaining a password at all, such as buffer overflows, remote administration programs, and cross-site scripting.
The possible dangers a system is exposed to, given the techniques intruders frequently use, are: theft of passwords, bugs and backdoors, authentication failures, protocol failures, information leakage, denial of service, and so on. Not all of them require an Internet connection, although attacks happen more frequently when systems are Internet-facing.
All these attack methods are categorized into passive attacks and active attacks, according to RFC 2828 (http://tools.ietf.org/html/rfc2828).
A passive attack can be eavesdropping on, or monitoring of, a data transmission. Passive attacks are hard to detect because the data is not altered. A passive attack on a network may not be malicious in nature; for instance, some attackers collect information only for personal use, characterizing the activity as "educational". However, once this information is collected, it may be used in a later attack on the system it is associated with. Generally, even without modifying or maliciously retransmitting the information they obtain, passive attackers violate confidentiality and the privacy of communication.
☻The most common type of passive attack on networks is sniffing. If packets are not encrypted, a sniffer can reveal all the data inside a transmitted packet. This information is usually written to log files.
An attacker using a sniffer can analyze information transmitted through a network, and eventually cause the network to crash or become corrupted. Some vulnerable protocols that are often sniffed, especially for passwords, are telnet, FTP, rlogin, IMAP, and POP.
☻One example of a sniffer is Ethereal (now known as Wireshark), a network protocol analyzer that captures data when its interface is put into promiscuous mode. Moreover, it "knows" the protocols and presents the captured packets in an appropriately structured format.
Active attacks involve the modification of data or the creation of a fake data stream, and can be sorted into four categories: masquerade, replay, modification of messages, and denial of service.
A masquerade occurs when one entity pretends to be another. It is usually combined with one of the other categories of active attack. A Trojan horse is such an entity, typically a destructive program that masquerades as a benign application.
Replay involves the passive interception of a data unit and its subsequent retransmission to produce an unauthorized effect.
Modification of messages means that messages are reordered, delayed, or otherwise altered so that an unauthorized effect is produced.
Denial of service prevents or blocks the normal use or management of communication facilities. In other words, DoS attacks deplete or misallocate resources on a target server host so that it cannot serve its clients. These attacks affect bandwidth, memory, CPU, and shared resources.
An example of denial of service is the temporary disruption of an entire network, either by disabling it or by overloading it with queued messages to degrade its performance. In a SYN flood, a client bombards the server with SYN packets that are never ACKed. IP spoofing can be used to make packets look as though they come from other sources. What is immediately affected is bandwidth. Flooding can also be achieved with ping floods, that is, floods of ICMP Echo Request packets.
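The SYN-flood mechanism can be illustrated with a toy Python model of the server's half-open connection queue; the capacity of 128 and the spoofed addresses are invented for the example, not taken from any real system:

```python
from collections import OrderedDict

class SynBacklog:
    """Toy model of a server's half-open (SYN_RCVD) connection queue."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.half_open = OrderedDict()  # (src_ip, src_port) -> state

    def on_syn(self, src_ip: str, src_port: int) -> bool:
        """Return True if the SYN is accepted, False if the backlog is full."""
        if len(self.half_open) >= self.capacity:
            return False  # new clients are now denied service
        self.half_open[(src_ip, src_port)] = "SYN_RCVD"
        return True

    def on_ack(self, src_ip: str, src_port: int) -> None:
        """A completed handshake frees the backlog slot."""
        self.half_open.pop((src_ip, src_port), None)

# An attacker floods with spoofed SYNs that are never ACKed:
backlog = SynBacklog(capacity=128)
for i in range(200):
    backlog.on_syn(f"10.0.0.{i}", 40000 + i)  # spoofed sources, never ACKed
print(backlog.on_syn("192.0.2.1", 5555))  # legitimate client is rejected: False
```

Because the spoofed handshakes are never completed, the backlog fills and legitimate clients can no longer connect.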
A form of denial of service can also be mounted with zombie networks, also known as botnets, against servers on the Internet. Zombie computers (perhaps whole networks, like the "trinoo" network) are backdoored or compromised in such a way that an attacker somewhere on the Internet can control them. By controlling the zombies, the attacker can instruct them to send spam or carry out network attacks without the activity being easily traced back to an IP address associated with him.
Generally, active attacks are easier to detect but harder to prevent, since preventing them demands constant physical protection of all communication facilities and channels.
☻A common kind of attacker that intrudes into the communication between endpoints on a network, injecting false information and intercepting the data transferred between them, is called the "man in the middle" (MITM). The various techniques a MITM uses for stealing data are classified by the following three network environment types:
-Local Area Network
-From Local to Remote (through a gateway)
-Remote
Some scenarios of Local Area Network attacks are: ARP poisoning (or spoofing), DNS spoofing, IP address spoofing, port stealing, and STP mangling.
Starting with the first, a MITM forges fake, or "spoofed", ARP messages onto an Ethernet LAN. The aim is to associate the attacker's MAC address with the IP address of another host, so that any traffic meant for that IP address is mistakenly sent to the attacker instead. The man in the middle can then modify the data before forwarding it, or he can launch a denial-of-service attack against a victim by associating a nonexistent MAC address with the IP address of the victim's default gateway.
A purely passive attacker, by contrast, would simply forward the stolen data on to the real default gateway, or save it to a log file.
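To make the forgery concrete, here is a sketch of what a spoofed ARP reply looks like on the wire. The code only assembles the 42-byte Ethernet/ARP frame as bytes; actually injecting it onto a LAN requires raw-socket access and is not shown, and the addresses used are illustrative:

```python
import struct

def build_arp_reply(attacker_mac: bytes, victim_ip: bytes,
                    target_mac: bytes, target_ip: bytes) -> bytes:
    """Forge an ARP 'is-at' reply claiming victim_ip lives at attacker_mac."""
    # Ethernet header: destination MAC, source MAC, EtherType 0x0806 (ARP)
    eth_header = target_mac + attacker_mac + b"\x08\x06"
    # ARP header: HTYPE=1 (Ethernet), PTYPE=0x0800 (IPv4), HLEN=6, PLEN=4, OPER=2 (reply)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    arp += attacker_mac + victim_ip   # sender: attacker's MAC paired with victim's IP
    arp += target_mac + target_ip     # target: the host being poisoned
    return eth_header + arp
```

The poisoned host caches the attacker's MAC for the victim's IP and starts sending its traffic to the attacker.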
Secondly, a man in the middle using DNS spoofing starts by sniffing the ID of a DNS request, and then tries to answer the target's request before the real DNS server does.
The third method is similar to the first: the attacker creates IP packets with a forged source IP address, just as he did with the MAC address, in order to hide the real address of the original sender or to impersonate another computer system.
In the same spirit, a MITM using the port stealing technique impersonates the victim host's switch port. First, he floods the network with layer-2 packets whose source address equals the victim host's MAC address and whose destination address equals his own host's MAC address. Then, when hosts send the victim's packets to the attacker's host, he generates a broadcast ARP request for the victim's IP address, and once he receives the reply he can start the attack, or continue stealing the rest of the addresses on the network.
STP (Spanning Tree Protocol) mangling refers to the method by which the attacker tries to get his host elected as the new root bridge of the spanning tree. The attacker may start either by forging BPDUs (Bridge Protocol Data Units) with high priority, claiming to be the new root, or by broadcasting STP Configuration/Topology Change Acknowledgement BPDUs. By taking over the root bridge, the attacker is able to intercept most of the traffic.
-There are also many techniques carried out through a gateway, as 'From Local to Remote' attacks. Some of them are described in detail below:
ARP poisoning and DNS spoofing are based on the same idea as in the Local Area Network attacks.
A DHCP spoofing attack starts when the attacker notices broadcast DHCP requests. If the attacker replies before the real DHCP server, he gets to assign the victim's IP address, the gateway address, and the DNS server address.
With the ICMP redirection technique, the attacker forges ICMP redirect packets in order to redirect traffic to himself. (The countermeasure is to disable acceptance of ICMP redirects on the hosts.)
A third technique is IRDP spoofing, in which the attacker forges router advertisement packets, pretending to be the router for the LAN. He can advertise the best possible preference values to make sure that hosts choose him as the preferred router. (The countermeasure is to disable IRDP on the hosts.)
The last technique described in this category is route mangling. Here, the attacker forges routing packets for the gateway, pretending to be a router with a good metric for a specified host on the Internet; he must be careful to choose a large enough netmask to win against competing routes. Having hijacked the route, the attacker forwards the packets on to their real destination through the gateway. (Countermeasures are to disable dynamic routing protocols where possible and to enable authentication on the protocols that support it.)
Route mangling is also a method in the category of remote attacks. In this case, the man in the middle aims to hijack the traffic between two victims A and B, after collecting useful information through traceroute or port scanning.
Another technique belonging to this last category is DNS poisoning. Here the attacker sends a request to the victim DNS server asking for one host, and then spoofs the reply that is expected to come from the real DNS server; the spoofed reply must contain the correct ID. The attacker can also act in a second way: he can send a "dynamic update" to the victim DNS server, and if the server accepts and processes it, the result is even worse, because the poisoned record will be treated as even more authoritative.
The analysis of these kinds of attacks shows that when weak protocols underpin the network, an intruder can break in easily. The security of a client-server connection, or of a connection between two clients, relies on proper configuration of the clients and, most significantly, on the use of cryptography across the layers of the OSI or TCP/IP protocol suite. This means that strong protocols with cryptographic suites are required, such as IPsec at the network layer, SSL/TLS at the transport layer, and PGP at the application layer.
Trojan horses, previously mentioned as an example of an active attack, can be characterized as malicious code. They are programs with a "secondary", non-obvious functionality. Two other kinds of malicious code are viruses and worms. A virus is a program that attaches itself to an executable host program and is capable of infecting other executable programs. A worm is a program that replicates itself over a network.
Generally, malicious code can contain logic bombs or time bombs. Logic bombs are programs that trigger some action when a certain condition is satisfied, while time bombs trigger some action at a certain time. Malicious software can also include a program with a trapdoor/backdoor, that is, functionality activated through some secret input.
Viruses: A virus can attach to anything that is executable (or can be made executable), such as regular executable files, document files (Word, Excel, etc.), libraries, source files, and object files.
Characteristics that make viruses appealing to their authors are listed below:
• Hard to detect.
• Not easily destroyed or deactivated.
• Spreads infection easily.
• Can reinfect its home program.
• Easy to create.
• Machine- and OS-independent.
Once such a boot-sector virus is activated, it resets the upper memory bound below itself so that it is not overwritten. It traps the disk read interrupt by pointing the interrupt vector at itself, and it intercepts reads of the boot sector so that they return the proper (original) contents. The virus is stored in six disk sectors (including the boot sector), and on every read it inspects the boot sector: if it does not find itself there, it re-infects the disk.
Worms, like viruses, are self-replicating malicious software. The difference is that while a virus needs a carrier and runs whenever the carrier is activated, a worm needs no carrier: once unleashed, it either propagates by itself or dies.
Trojan horses exist when you install software to obtain one functionality but receive another. Any software you download and install on your computer can essentially do whatever it wishes to your system (up to the level of access it has). To be protected from Trojan horses, a user should not install software from sources he does not trust. Moreover, he should check digital signatures when downloading software (especially web-based software).
Protection from viruses requires special antivirus software, which maintains a database of virus signatures and scans files against that database. To remain effective, the database must be kept up to date, and files must be monitored for modifications.
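At its core, signature scanning of this kind reduces to searching file contents for known byte patterns. A minimal Python sketch follows; the signature entries are invented for illustration and are not real virus signatures:

```python
def scan_data(data: bytes, signatures: dict[str, bytes]) -> list[str]:
    """Return the names of all signatures whose byte pattern occurs in the data."""
    return [name for name, pattern in signatures.items() if pattern in data]

# Hypothetical signature database entries, for illustration only
SIGNATURES = {
    "Demo-Virus.A": b"\xde\xad\xbe\xef\x90\x90",
    "Demo-Worm.B": b"REPLICATE_AND_MAIL",
}
```

Real scanners add wildcards, heuristics, and on-access hooks, but the lookup against an up-to-date database is the same idea.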
Finally, against worms, a user can deploy firewalls with only the necessary ports open, monitor the network and the system, and frequently patch both the operating system and any network-enabled programs.
But what is a firewall?
A firewall is a system or group of systems that enforces an access control policy between two networks. The actual means by which this is accomplished varies widely, but in principle, the firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is that it implements an access control policy. If you don't have a good idea of what kind of access you want to allow or to deny, a firewall really won't help you. It's also important to recognize that the firewall's configuration, because it is a mechanism for enforcing policy, imposes its policy on everything behind it. Administrators for firewalls managing the connectivity for a large number of hosts therefore have a heavy responsibility.
One type of firewall is the network layer firewall, or packet filter. These are generally a cheap component of a router (routers being a basic component of any network), and they are often part of a more complete gateway. They make their decisions based on the source and destination addresses and ports. While a simple router is the traditional network layer firewall, modern ones also maintain internal information about the state of connections passing through them and the contents of some of the data streams. Network layer firewalls cannot do anything fancy, but they tend to be fast and very transparent to users.
The second type is the application layer firewall. These are generally hosts running proxy servers, which permit no traffic directly between networks and which perform elaborate logging and auditing of the traffic passing through them. Since the proxy applications are software components running on the firewall, it is a good place to do extensive logging and access control. Application layer firewalls are safer but slower than network layer ones, because they demand more work per connection. They also tend to provide more detailed audit reports and to enforce more conservative security models.
There are ways beyond firewalls to defend against intruders, whether they act over the Internet or not. Two basic safety measures are i) detection and ii) prevention. Detection deals with reporting an attack, either before or after it happens. Prevention, on the other hand, is the hardest part of security, because all possible attacks must be prevented, while attackers only have to find the weakest link of the defensive chain and hit it.
Protocols for security:
Security in networks can rest both on cryptography applied to application data and on security protocols. More specifically, the use of proven protocols is essential: someone who rolls his own protocol leaves the system's assets an easy target for malicious clients. This matters because open networks and distributed systems, where protocols are implemented, are vulnerable to hostile users who try to intrude by breaking the protocol.
Cryptographic protocols are a significant part of secure communication over insecure open networks. The Secure Sockets Layer (SSL) security protocol is one such example, as are Transport Layer Security (TLS) and Secure Shell (SSH).
Security-related protocols in the lower layers of the OSI and TCP/IP models are responsible for authenticating and encrypting IP packets and for establishing sessions between hosts. The IPsec protocol suite is an example: it can be used to protect data streams between pairs of hosts. More specifically, the security that IPsec supplies at the IP layer also covers the security demands of the protocol layers above it. By applying IPsec to network data transmissions, security is provided both for applications that come with their own safety features and for applications with no security provisions of their own. This security-related suite includes three basic functions: authenticity verification, privacy assurance, and key management.
The procedures that provide authenticity verification ensure that received packets are the original packets: their origin (the source address), always present in the packet headers, is genuine, and the packets have not been modified in transit.
The procedures that provide privacy assurance ensure that the communicating hosts encrypt their messages.
And finally, the key management function deals with secure key exchange.
The main characteristic of IPsec is that it provides cryptographic capabilities and traffic protection at the IP layer. Consequently, all distributed applications, such as telnet, client/server applications, e-mail, and file transfer, can benefit from its cryptography.
The IPsec suite includes the Authentication Header (AH), Encapsulating Security Payload (ESP), and Internet Key Exchange (IKE) protocols, as well as the transforms. These protocols interact with each other and are tied together to implement the capabilities that hosts and gateways should provide. For example, the IPsec architecture requires a host to provide confidentiality using ESP, and data integrity using either AH or ESP, together with anti-replay protection. The ESP and AH protocols define the packet processing rules. Specifically, they support a transport mode of operation and a tunnel mode. Transport mode protects primarily the upper-layer protocols, so protection extends over the payload part of the IP packet, whereas tunnel mode protects the entire IP packet.
IKE generates keys for the IPsec protocols, and it is also used to negotiate keys for other protocols that need them. There are other protocols on the Internet that require security services, such as data integrity, to protect their data; one example is the OSPF (Open Shortest Path First) routing protocol. The payload format of IKE is very generic: it can be used to negotiate keys for any protocol, and it does not necessarily limit itself to IPsec key negotiation. This separation is achieved by decoupling the parameters IKE negotiates from the protocol itself; the parameters that are negotiated are documented in a separate document called the IPsec Domain of Interpretation.
Central to authentication and encryption in IPsec is the concept of the security association (SA), a one-way relationship between a sender and a receiver that provides security services for the traffic it carries. It is set up by IKE, which handles the negotiation of protocols and algorithms and generates the encryption and authentication keys used by IPsec. In particular, a security association is identified by three parameters:
☺The Security Parameters Index (SPI): a bit string that identifies this SA. The SPI is carried in the AH and ESP headers, allowing the receiving system to select the SA under which a received packet will be processed.
☺The IP destination address: the endpoint of the SA, which may be an end-user system or a network device such as a router or a firewall.
☺The security protocol identifier: indicates whether the association uses the AH or the ESP protocol of the IPsec suite.
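The receiver-side use of these three parameters can be sketched as a lookup in a toy Security Association Database keyed by (SPI, destination address, protocol). The field names and values are illustrative, not the actual kernel data structures:

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int          # Security Parameters Index, carried in the AH/ESP header
    dst_ip: str       # IP destination address of the SA
    protocol: str     # "AH" or "ESP"
    key: bytes        # keying material negotiated by IKE

# Toy Security Association Database (SAD)
sad: dict[tuple[int, str, str], SecurityAssociation] = {}

def add_sa(sa: SecurityAssociation) -> None:
    sad[(sa.spi, sa.dst_ip, sa.protocol)] = sa

def lookup_sa(spi: int, dst_ip: str, protocol: str):
    """The receiver uses the SPI from the incoming header to select the SA."""
    return sad.get((spi, dst_ip, protocol))
```

Each inbound AH or ESP packet is processed under exactly the SA its header's SPI points at.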
Another protocol, mentioned previously, is the Secure Sockets Layer (SSL), which provides security services to TCP and TCP-based applications. The protocol was originally designed to run on top of TCP and to secure end-to-end communications. The concept of an SSL session refers to the association between a client and a server. Sessions are created by the Handshake Protocol, and they define a set of cryptographic security parameters that can be shared among many connections. The concept of an SSL connection, by contrast, refers to a transfer associated with one particular type of service; connections are transient, and each is associated with a session.
Generally, SSL provides confidentiality, through encrypted communication between clients and servers, and authentication. Authentication between server and client is supported by a variety of cryptographic algorithms. SSL also uses ciphers for other operations, such as transmitting certificates and establishing session keys. Among its other functions, the SSL handshake protocol determines how the server and client negotiate which cipher suites they will use to authenticate each other, to transmit certificates, and to establish session keys.
The cryptographic algorithms involved include the following:
☺ Data Encryption Standard (DES)
☺ Digital Signature Algorithm (DSA)
☺ Key Exchange Algorithm (KEA)
☺ Message Digest algorithm (MD5)
☺ RC2 and RC4
☺ RSA: a public-key algorithm for both encryption and authentication
☺ RSA key exchange: a key-exchange algorithm for SSL based on the RSA algorithm
☺ Secure Hash Algorithm (SHA-1)
☺ Triple DES: DES applied three times
Key-exchange algorithms such as KEA and RSA govern the way in which the server and client determine the symmetric keys they will both use during an SSL session. The most commonly used SSL cipher suites use RSA key exchange.
The SSL 2.0 and SSL 3.0 protocols support overlapping sets of cipher suites. Administrators can enable or disable any of the supported cipher suites for both clients and servers. When a particular client and server exchange information during the SSL handshake, they identify the strongest enabled cipher suites they have in common and use those for the SSL session. The Handshake Protocol is the most complicated part of SSL: it allows the client and server to verify their identities and to negotiate the cipher algorithms they will use, as well as the cryptographic keys. Establishing a logical session between server and client takes four phases. In the first phase, the logical connection is initiated and security capabilities, including the key exchange method, are negotiated. Several key exchange methods are supported, such as RSA, mentioned earlier, Fixed Diffie-Hellman, Ephemeral Diffie-Hellman, and Anonymous Diffie-Hellman. After the key exchange method, the cipher specifications (CipherSpec) follow.
In the second phase, the server sends a message with its certificate, or a chain of X.509 certificates. Then a server that is not using Anonymous Diffie-Hellman may also ask the client to send a certificate; this certificate request contains the certificate_type and certificate_authorities parameters. The last message of phase 2, which is always required, is the server_done message, marking the end of the server hello and its associated messages.
In the third phase, the client sends the client_key_exchange message, carrying, for example, the public Diffie-Hellman parameters or Fortezza parameters. The client may also send the certificate_verify message, which includes a hash code signed by the client.
In the last phase, the setup of the secure connection is completed. The client sends the change_cipher_spec message and copies the pending CipherSpec into the current CipherSpec. The client then sends the finished message, protected by the new algorithms, keys, and secrets. Finally, the server sends its own change_cipher_spec message, makes the pending CipherSpec current, and sends its finished message too. At this point the handshake is complete, and application-level data exchange can start.
The successor of SSL is Transport Layer Security (TLS), an SSL revision designed to become the standard protocol for secure transactions over the World Wide Web.
A man-in-the-middle attack can also target communication between a client and a server over SSL: the rogue program intercepts the legitimate keys that are passed back and forth during the SSL handshake, substitutes its own, and makes it appear to the client that it is the server, and to the server that it is the client.
The last security-related protocol referred to earlier is the Secure Shell (SSH) protocol. SSH also follows a client-server architecture: the client machine initiates all connections to a server. The SSH protocol provides the safeguards stated below:
☺After an initial connection, the client verifies that it is connecting to the same server during subsequent sessions.
☺The client transmits its authentication information to the server, such as a username and password, in encrypted form.
☺All data sent and received during the connection is transferred using strong, 128-bit encryption, making intercepted transmissions extremely difficult to decrypt and read.
☺The client has the ability to use X11 applications launched from the shell prompt. This technique, called X11 forwarding, provides a secure means to use graphical applications over a network.
Because the SSH protocol encrypts everything it sends and receives, it can be used to secure otherwise insecure protocols. Using a technique called port forwarding, an SSH server can become a conduit for securing insecure protocols like POP, increasing overall system and data security.
Using the SSH protocol offers additional security strengths, such as the notification of a changed host key that its two-way authentication provides. Another example is that the version exchange can be used to exclude old clients as new bugs become known. The use of a regularly regenerated server key makes compromise of the host key unhelpful. A final advantage is that the client chooses the cipher suite.
On the other hand, SSH cannot protect against compromise of the client or the server. User keys are generally protected under DES, but this does not protect against root. Furthermore, authentication is subject to a bootstrapping problem.
There are many other protocols, in the upper and lower layers, with security features; each is applied to networks according to the system's demands and the administrators' goals. Finally, the use of protocols is only a small part of the whole scope of security in networks and distributed systems, and it cannot ensure full protection from intruders unless it is combined with other safety measures, such as protection at the physical layer.
Security is a very difficult topic. Everyone has a different idea of what "security" is, and of what levels of risk are acceptable. The key to building a secure network is to define what security means to your organization. Once that has been defined, everything that goes on with the network can be evaluated with respect to that policy. Projects and systems can then be broken down into their components, and it becomes much simpler to decide whether what is proposed will conflict with your security policies and practices.
Many people pay great amounts of lip service to security, but do not want to be bothered with it when it gets in their way. It is important to build systems and networks in such a way that the user is not constantly reminded of the security system around him. Users who find security policies and systems too restrictive will find ways around them. It is important to get their feedback to understand what can be improved, and it is important to let them know why things have been done the way they have, which sorts of risks are deemed unacceptable, and what has been done to minimize the organization's exposure to them.
Security is everybody's business, and only with everyone's cooperation, an intelligent policy, and consistent practices, will it be achievable.
The scope of this page has been how intruders are categorized, which methods of attacking network systems are best known, and the use of firewalls in all systems that demand security.
Since the Internet became mainstream and its usage increasingly widespread, more and more people have tried to exploit weaknesses in the security domain, to steal data or harm companies. So, nowadays, the staff and representatives of companies that depend on the network try to think like hackers and intruders, and to find the back doors of every system they create and attach to the Internet. Even if a network system is not connected to the Internet, the data that travels through it is exposed to dangers such as hackers and sniffers.
As a result, the cryptography and protocols that determine network security, as well as firewalls, have reached a high level of efficiency.