Study On Strategies Used For Page Replacement Computer Science Essay



Page replacement is where the system must decide which page in main memory should be replaced or removed in order to make room for new pages; this can be done by overwriting or modifying the memory space. List and explain in detail all the strategies used for page replacement.

Deitel et al. (2004) state that in a virtual memory system with paging, all of the page frames might be occupied when a process references a nonresident page. In this situation, the system must not only bring the new page in from secondary storage; it must also first decide which page in main memory should be replaced, that is, which resident page will be removed or overwritten to make room for the incoming page. This decision is made by a page-replacement algorithm. There are several page-replacement strategies; their main purpose is to reduce the number of page faults a process experiences as it runs from beginning to end, and thereby, it is hoped, to decrease the process's execution time.

The optimal page-replacement strategy, or OPT, is one such strategy. When a new page must be brought in and the system needs to free a frame, the operating system (OS) swaps out the page that will be referenced farthest in the future. For example, a page that will not be referenced for the next 5 seconds will be swapped out in preference to a page that is going to be referenced within 0.3 seconds. However, this strategy cannot be implemented in practice, because the OS cannot know how long it will be before a page is next referenced. That said, if a program is run on a simulator that records all page references, it is possible to apply OPT on a second run by using the page-reference data compiled during the first run; the OS can then decide exactly which pages to swap in or out. This only works, of course, for programs whose memory-reference patterns are consistent across runs.
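
The behaviour just described can be replayed in a short simulation. The sketch below (function name and structure are illustrative, not from any real OS) counts page faults under OPT for a reference string that is known in advance:

```python
def opt_page_faults(references, num_frames):
    """Count page faults under OPT for a known reference string."""
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue                      # already resident: no fault
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)           # free frame available
            continue
        # Evict the resident page referenced farthest in the future;
        # pages never referenced again rank past the end of the string.
        future = references[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future
                     else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults
```

On the classic reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames, this simulation yields 9 faults, which no other strategy can beat on that string.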

Another page-replacement strategy is random page replacement, or RAND. This strategy is easy to implement and has low overhead; overhead here means the extra bookkeeping work the system must perform beyond servicing the memory reference itself. Under RAND, any page in main memory is an equally likely candidate for replacement by the incoming page. However, RAND has the disadvantage that it may accidentally replace the very page that will be referenced next, which is the worst possible page to replace. On the other hand, the benefit of this strategy is that the OS can select a replacement page quickly and without bias.
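
RAND's simplicity shows in code: no bookkeeping beyond the frame contents is needed. This sketch seeds the random generator so runs are reproducible; the seed parameter is an illustrative convenience, not part of the strategy itself.

```python
import random

def rand_page_faults(references, num_frames, seed=0):
    """Count page faults when the victim frame is chosen at random."""
    rng = random.Random(seed)   # seeded only for reproducible demos
    frames = []
    faults = 0
    for page in references:
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            # Any resident page is an equally likely victim.
            frames[rng.randrange(num_frames)] = page
    return faults
```

Because the victim is random, the fault count varies between the compulsory minimum (one fault per distinct page) and the length of the reference string.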

Next is the First-In-First-Out, or FIFO, page-replacement strategy, in which the OS replaces the page that has been in the system the longest. Here the system keeps track of the order in which pages enter main memory. This strategy is appealing because it seems fair: the page that has been in the system the longest has had its chance, and it is time to give a new page one. However, FIFO can replace a heavily used page. For example, on large timesharing systems it is common for several users to share a program for editing or correcting programs; FIFO in such a system might decide to replace one of that program's heavily used pages. This is a bad choice, because the page would have to be brought back into main memory almost immediately, resulting in an increased page-fault rate.
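
FIFO's bookkeeping is just a queue, as the sketch below shows. It also lets one observe a well-known quirk of FIFO called Belady's anomaly: on some reference strings, adding more frames actually produces more faults.

```python
from collections import deque

def fifo_page_faults(references, num_frames):
    """Count page faults when the oldest resident page is always evicted."""
    frames = deque()
    faults = 0
    for page in references:
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()      # evict the page resident the longest
        frames.append(page)
    return faults
```

On the string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, three frames give 9 faults but four frames give 10, illustrating the anomaly.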

Deitel et al. (2004) state that FIFO's problem of replacing a heavily used page that has long been in the system can be avoided by implementing FIFO with a referenced bit for each page and replacing a page only if its referenced bit is off (0). This modified FIFO is known as second chance. The second-chance strategy inspects the referenced bit of the oldest page; if the bit is off (0), it immediately selects that page for replacement. However, if the referenced bit is on (1), the bit is cleared and the page is moved to the tail of the FIFO queue. In due course, the page that was moved to the tail works its way back to the head of the queue, and when it reaches the head, if its referenced bit is still off (0), it is selected for replacement.
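
The cycle of inspecting, clearing and requeueing can be sketched as follows; the queue holds (page, referenced-bit) pairs, oldest at the left, and the names are illustrative:

```python
from collections import deque

def second_chance_faults(references, num_frames):
    """Count page faults under FIFO with a second chance for referenced pages."""
    queue = deque()               # (page, referenced_bit), oldest at the left
    faults = 0
    for page in references:
        resident = False
        for i, (p, _) in enumerate(queue):
            if p == page:
                queue[i] = (p, 1)     # hit: set the referenced bit
                resident = True
                break
        if resident:
            continue
        faults += 1
        if len(queue) < num_frames:
            queue.append((page, 0))
            continue
        # Referenced pages at the head get a second chance: clear the
        # bit and move them to the tail, until an unreferenced head is found.
        while queue[0][1] == 1:
            p, _ = queue.popleft()
            queue.append((p, 0))
        queue.popleft()               # evict the oldest unreferenced page
        queue.append((page, 0))
    return faults
```

On the string 1, 2, 3, 1, 4, 1 with three frames, page 1's set bit saves it from eviction, so the sixth reference hits and only 4 faults occur, whereas plain FIFO would evict page 1 and fault again.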

Moreover, active pages are repeatedly sent back to the tail of the queue, because their referenced bits keep being set, and so they remain in main memory. As for a modified page, it must be written (flushed) to auxiliary storage before the system can replace it; even if its referenced bit is off (0), the page remains temporarily irreplaceable until the system completes the transfer. If a process references the page before it has been completely flushed, the page is simply reclaimed, saving an expensive page-in operation from auxiliary storage.

The clock page-replacement strategy produces essentially the same results as second chance, but arranges the pages in a circular queue instead of a linear one. Whenever a page fault occurs, a pointer moves around the circular queue the way the hand of a clock rotates. If a page's referenced bit is on (1), the bit is cleared and the pointer moves to the next element of the queue; if the bit is off (0), that page is selected as the replacement for the incoming page.
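
The circular arrangement replaces second chance's queue shuffling with a fixed array and a moving hand. A minimal sketch, assuming (as in the second-chance description above) that a newly loaded page starts with its referenced bit off:

```python
def clock_faults(references, num_frames):
    """Count page faults under the clock (circular second-chance) strategy."""
    frames = [None] * num_frames     # circular buffer of resident pages
    ref_bits = [0] * num_frames
    hand = 0
    faults = 0
    for page in references:
        if page in frames:
            ref_bits[frames.index(page)] = 1   # hit: set the referenced bit
            continue
        faults += 1
        # Sweep the hand forward, clearing set bits, until it finds an
        # unreferenced frame; replace the page held there.
        while ref_bits[hand] == 1:
            ref_bits[hand] = 0
            hand = (hand + 1) % num_frames
        frames[hand] = page
        ref_bits[hand] = 0
        hand = (hand + 1) % num_frames
    return faults
```

On the same string used above, 1, 2, 3, 1, 4, 1 with three frames, the hand skips past page 1 because its bit is set, matching second chance's 4 faults.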

Next is the Least-Recently-Used, or LRU, page-replacement strategy. This strategy uses a process's recent past behaviour as a good indicator of its behaviour in the near future. When the system needs to free a frame for an incoming page, LRU replaces the page that has been in memory the longest without being referenced. Although LRU performs better than FIFO, this advantage comes at the cost of system overhead. LRU works as follows: whenever a page frame is referenced, the system moves that page's entry to the head of a queue, indicating that the page was recently referenced, while older entries drift towards the tail. When an existing page must be replaced to make room for an incoming one, the system replaces the page at the tail of the queue. It is a good strategy, but it incurs substantial overhead, because the system must update the list every time a page is referenced.
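
The recency queue maps naturally onto an ordered dictionary, with the most recently used page kept at one end. A sketch (function name illustrative):

```python
from collections import OrderedDict

def lru_page_faults(references, num_frames):
    """Count page faults when the least recently used page is evicted."""
    frames = OrderedDict()   # insertion order tracks recency, most recent last
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)     # re-referenced: now most recent
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)   # evict the least recently used page
        frames[page] = True
    return faults
```

On the classic string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames, LRU gives 12 faults, between OPT's 9 and a typical FIFO result, which reflects the ranking described above.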

The Least-Frequently-Used, or LFU, strategy is another page-replacement strategy. It has the system decide based on how intensively each page is being used: the system replaces the page that is least frequently referenced. The strategy rests on the idea that a page which is not being referenced intensively is probably not likely to be referenced in the future. However, this strategy has drawbacks: it incurs considerable overhead, because reference counts must be updated continually, and the system can easily select the wrong page for replacement, for example a page that was only just brought in and has had no time to accumulate references.
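
A sketch of LFU's counting is below; note that how ties between equally infrequent pages are broken is left unspecified in the description above, and this sketch simply takes the first minimum found.

```python
def lfu_page_faults(references, num_frames):
    """Count page faults when the least frequently referenced page is evicted."""
    frames = {}                  # page -> reference count
    faults = 0
    for page in references:
        if page in frames:
            frames[page] += 1    # bump the count on every reference
            continue
        faults += 1
        if len(frames) == num_frames:
            victim = min(frames, key=frames.get)   # smallest count loses
            del frames[victim]
        frames[page] = 1
    return faults
```

The string 1, 1, 1, 2, 3, 2, 4 with two frames shows the strategy protecting the heavily used page 1 while the low-count pages churn, giving 5 faults.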

Another strategy used for page replacement is Not-Used-Recently, or NUR. This strategy uses two hardware bits per page entry: a referenced bit, which is 0 if the page has not been referenced and 1 if it has, and a modified bit, which is 0 if the page has not yet been modified and 1 if it has. The strategy initially sets every page's referenced bit to 0; when a page is referenced, its bit is set to 1. When the system needs to replace a page, NUR attempts to find a page whose referenced bit is still 0. Among such pages, an unmodified one is preferred, since it need not be written back to auxiliary storage. If no unreferenced page exists, a referenced page must be replaced instead.
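
The two bits divide pages into four groups, searched from best victim to worst. A sketch of the victim selection only (the names and tuple layout are illustrative):

```python
def nur_victim(pages):
    """Pick a NUR victim from (name, referenced_bit, modified_bit) tuples.

    Groups are searched in order: unreferenced-unmodified first, then
    unreferenced-modified, referenced-unmodified, referenced-modified.
    """
    for want in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for name, ref, mod in pages:
            if (ref, mod) == want:
                return name
    raise ValueError("no pages to choose from")
```

For instance, given pages a (referenced, modified), b (unreferenced, modified) and c (referenced, unmodified), page b is chosen: it has not been referenced recently, even though replacing it costs a write-back.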

The far page-replacement strategy uses a graph of a program's predictable reference patterns to decide which page in main memory can be replaced. This strategy has been shown mathematically to perform at near-optimal levels. Its replacement algorithm works in phases, similar to the clock algorithm. Far first marks all vertices in the access graph as not yet referenced. When the process accesses a page, the algorithm marks the vertex corresponding to that page as referenced. When the system needs to select a page to replace, it chooses an unreferenced page that is farthest away from any referenced page in the access graph. The intuition is that the unreferenced page farthest from any referenced page will probably be referenced farthest in the future. The drawback of this strategy is that it is complex and incurs significant execution-time overhead, and it has therefore not been implemented in a real system.

In conclusion, many different strategies exist for page replacement. Each has its own benefits and drawbacks when used, and some of the strategies are nearly impossible to implement in practice.

Question 2

Name and explain all the security measures that can be taken to protect data and information held in a computer or exchanged over a network.

Over the years, computers have evolved rapidly, almost beyond our realisation. Technology has become a necessity of everyday life, and most of us simply cannot resist the temptation of getting the latest, most advanced gadgets. On well-known social networks such as Facebook, Twitter and Tagged, we share our information and opinions with the world. However, sharing our private lives on these networks carries risks: perpetrators might use our personal information for extortion or, worse, defame our image with obscene pictures. That is where security comes in. As computers evolve rapidly, security should evolve alongside them to protect users from becoming victims of hackers.

There are several security measures that a user or an organisation can employ to protect against breaches of confidential information. Anti-virus software is one such measure; it protects users from malicious programs such as viruses, worms and Trojan horses. It is programmed to block malicious code, whether encountered while browsing the internet or opening an e-mail attachment, and it can remove or delete viruses that could harm the user's personal computer. It can also prevent and remove spyware, adware and other malware. Antivirus software uses several different identification methods.

First is the signature-based detection approach, the most common method of identifying viruses, spyware and malware. The antivirus software relies on signatures to identify the virus. This is effective, yet the antivirus cannot defend against a virus until samples of it have been obtained and signatures created. Another approach is heuristic-based detection, which is used to identify unknown or new viruses, malware and spyware. It is needed because some viruses infect files through mutation, or are modified by attackers, growing into different strains of the virus called variants. Even so, detection of a virus family is faster if a generic signature is created: virus researchers inspect the variants' signatures, find the areas they have in common, and from these build a family signature. An antivirus can also scan for rootkits, a type of malware designed to obtain administrative control over a computer system without being detected. A rootkit can change operating-system functions and tamper with the antivirus process, leaving the computer more vulnerable to malware. Rootkits are hard to remove, and in some situations their removal requires re-installation of the operating system.
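
The essence of signature-based detection can be shown in a few lines. This is a toy sketch, far simpler than a real scanning engine: each "signature" is just a byte pattern, and a file is flagged if any pattern appears in it.

```python
def scan_for_signatures(data: bytes, signatures: dict) -> list:
    """Toy signature scan: report the name of every known byte
    pattern found in the data. Real engines use far more elaborate
    matching, but the principle is the same."""
    return [name for name, pattern in signatures.items() if pattern in data]
```

A scanner built this way can only catch what is already in its signature database, which is exactly the limitation noted above: a brand-new virus matches nothing until a sample is analysed and a signature added.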

Moreover, anti-virus software acts as a shield that protects the user's confidential data from being destroyed by viruses. Some hackers write viruses with the intention of blackmailing their victims, leading to extortion. Having anti-virus software on personal and company computers is therefore standard practice in today's world.

Another type of security measure is the firewall. A firewall is important, especially if the user is connected to the internet. There are two main types of firewall: packet-filtering firewalls and application firewalls. A packet-filtering firewall mainly inspects the port each piece of information is intended for, and it may either approve the information to be sent through or prevent it from entering the computer, in which case a special program may have to process it. A corporate firewall protects a private network; it hides the network by presenting only one address, or a few addresses, to the internet while having hundreds of computers behind it. Most corporate firewalls are in fact packet-filtering firewalls: network traffic is filtered according to a set of rules defined by the organisation's network administrator, so some packets are permitted through while others are denied. An application firewall, by contrast, keeps a list of the programs that are allowed to receive and transmit information over the internet or network; a program not on the list is not permitted to communicate. Most personal firewalls work this way in order to protect their users from spyware and malware.
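
The rule-list idea behind packet filtering can be sketched as a first-match policy with a default of deny. The packet fields and rule format here are purely illustrative; real firewalls match on addresses, ports, protocols and connection state.

```python
def filter_packet(packet: dict, rules: list, default: str = "deny") -> str:
    """Toy packet filter: rules are checked in order, the first rule
    whose fields all match the packet decides, and anything unmatched
    falls through to a default-deny policy."""
    for rule in rules:
        if all(packet.get(field) == value
               for field, value in rule["match"].items()):
            return rule["action"]
    return default
```

For example, with a rule list that allows traffic to port 80 and denies port 23, a packet for port 80 is allowed, one for port 23 is denied, and a packet for any other port is denied by default, which mirrors how an administrator's rule set permits some packets while denying the rest.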

Next, biometrics is another security measure that protects the privacy of users and of an organisation's computers. Biometrics is an authentication mechanism that converts personal or human characteristics into a digital code and compares it with a digital code stored in a database. Different types of biometric scanner are used for verification: fingerprint, retina or iris, voice, facial geometry, palm print and hand geometry. Each scanner captures and identifies the corresponding human characteristic. Nowadays, laptop manufacturers build fingerprint scanners into their machines to heighten the security of the user's laptop; examples include the TravelMate 740 notebook and the Acer Aspire 6920. The fingerprint scanner checks that the fingertip bears correctly arrayed ridges, and it can also measure blood flow.

Access control is another type of security measure. Every computer system, whether a personal computer or one in an organisation, must be able to restrict which users can access which files. The administrator should set restrictions on users properly in order to fully control access to information, thus preventing unwanted users from reaching anything the administrator has restricted. Network Access Control, or NAC, is a technology that authenticates users and scans their machines before they are allowed onto the Local Area Network, or LAN. By using authentication to identify and control network access, NAC helps prevent illegitimate users from entering the LAN.

Encryption is one of the best-known security measures and protects a user's personal information, such as credit-card and bank-account details. Encoded information can be decoded only with a key, held either by a person or by a computer. Computer encryption is based on cryptography, which has been in use for a very long time; the Greeks were among the first to use ciphers to send secret messages. Today the cipher is known as an algorithm, which provides the rules for encryption. Computer encryption systems fall into two categories: symmetric-key encryption and public-key encryption.

In symmetric-key encryption, computers exchanging information must share the same key in order to decode it. Each computer uses the secret key to encrypt a packet of information before sending it over the network, and the receiving computer must have the same secret key in order to decode what it receives. The drawback of this scheme is that the two users must first exchange the key in a secure way; otherwise an attacker who obtains the key can easily read the data stream.
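
The shared-key idea can be illustrated with the simplest possible cipher. The sketch below XORs each byte with a repeating key, so the same function both encrypts and decrypts; it demonstrates symmetric keys only and is emphatically not secure for real use.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key recovers the original data.
    For illustration only -- NOT a secure cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Both sides must hold the identical key: encrypting with one key and "decrypting" with a different one yields garbage, which is exactly the shared-secret requirement described above.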

Public-key encryption was introduced to address this problem; it is also called asymmetric-key encryption. It uses two different kinds of key in combination: a private key and a public key. The public key is published by the receiving computer and given to anyone who wants to send it information, while the private key is known only to the receiver. The sender encrypts with the receiver's public key, and to decode the encrypted information the receiving computer uses its own private key. Even though the public key is available to anyone, anyone who intercepts the encrypted message cannot read it without the private key. This is because the key pair is based on prime numbers of great length, which makes the encryption extremely secure: there is essentially an infinite supply of prime numbers, and therefore a near-infinite number of possible keys. An example of a public-key encryption program is Pretty Good Privacy, or PGP.
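
The public/private split can be demonstrated with a textbook RSA key pair built from tiny primes. This is a teaching sketch only: real keys use primes hundreds of digits long, plus padding and other machinery, and the helper names below are invented for the example. (The modular inverse via `pow(e, -1, phi)` needs Python 3.8 or later.)

```python
def make_toy_rsa():
    """Build a textbook RSA key pair from tiny demo primes."""
    p, q = 61, 53
    n = p * q                   # modulus, shared by both keys
    phi = (p - 1) * (q - 1)
    e = 17                      # public exponent
    d = pow(e, -1, phi)         # private exponent (modular inverse of e)
    return (e, n), (d, n)       # (public key, private key)

def rsa_apply(message: int, key):
    """Encrypt with the public key or decrypt with the private key."""
    exponent, n = key
    return pow(message, exponent, n)
```

Anyone holding the public key can encrypt, but only the holder of the private exponent can undo the operation, which is the asymmetry described above.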

In conclusion, security measures are very important in the modern world. As technology evolves rapidly, computer security should evolve along with it, in order to protect users from hackers, viruses and other threats that could destroy confidential data in the computer or, worse, cause the computer to malfunction.