Whenever main memory becomes full of pages, it becomes difficult to make room for new pages, because the system must decide which resident pages to keep and which to erase.
Page replacement is the process by which the system decides which page in main memory should be replaced or removed in order to make room for new pages. This is done by overwriting or modifying the memory space. Various page replacement strategies are used to solve this problem.
These strategies help reduce the number of page faults and also help reduce execution time.
2.0 What is page replacement?
In a computer operating system that uses paging for virtual memory management, page replacement decides which memory pages to delete or replace when a new page needs to be brought into memory. Page replacement is the process of replacing old pages with new pages, and various strategies are used to make the choice.
3.0 Types of Page-replacement strategy.
- Random page
- First in first out (FIFO) page
- Least recently used
- Least frequently used
- Not recently used
- Far page
- Second chance
3.1 Random page
Random page replacement works by randomly selecting a page in main memory and replacing it with the new page. Because the choice is random, no time is spent deciding which page to evict, so it saves the selection time that other strategies require. However, since the victim is chosen at random, there is a risk that an important page will be replaced by mistake, which can leave the user in a worse situation.
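As a small sketch (not from the source), random replacement needs only a few lines of Python; the names `frames` and `random_replace` are invented here for illustration:

```python
import random

def random_replace(frames, new_page, rng=random):
    """Replace a randomly chosen resident page with new_page.

    frames: list of page numbers currently in memory (assumed full).
    Returns the page that was evicted.
    """
    victim_index = rng.randrange(len(frames))  # random victim slot
    evicted = frames[victim_index]
    frames[victim_index] = new_page
    return evicted
```

For example, with `frames = [3, 1, 4]`, calling `random_replace(frames, 9)` evicts one of the three resident pages, with no regard to how important that page is.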
3.2 First in first out (FIFO) page
First in first out (FIFO) page replacement is another low-overhead algorithm that requires little bookkeeping on the part of the operating system. As the name suggests, the operating system keeps all the pages in memory in a queue, with the oldest arrival at the front and the most recent arrival at the back. When a replacement is required, the oldest page is chosen. While FIFO is a cheap and intuitive replacement method, it performs relatively badly in practice.
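The queue described above can be simulated directly; this sketch (the function name and fault-counting interface are assumptions for illustration) counts page faults for a reference string:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Simulate FIFO replacement; return the number of page faults."""
    frames = deque()          # oldest page at the left, newest at the right
    faults = 0
    for page in reference_string:
        if page in frames:
            continue          # page already resident: no fault
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()  # evict the oldest arrival
        frames.append(page)
    return faults
```

On the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 this gives 9 faults with 3 frames but 10 faults with 4 frames, a known quirk of FIFO (Belady's anomaly): adding memory can make it worse.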
3.3 Least recently used
Least recently used (LRU) page replacement works on the idea that the pages which have been used in the past few instructions will probably be used in the next few instructions too. LRU provides near-optimal performance in theory, but it is expensive to implement in practice. There are several implementation methods for this strategy that reduce the cost while keeping as much of the performance as possible.
The most expensive method is the linked list method: a linked list contains all the pages in memory, with the least recently used page at the back and the most recently used page at the front. The cost of this method comes from the fact that items in the list must be moved on every memory reference, which takes a lot of time.
Because of these implementation costs, one may instead consider strategies such as those that follow, which are similar to LRU but offer cheaper implementations.
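One common way to cheapen the linked-list bookkeeping is an ordered hash map. This sketch (names are illustrative) uses Python's `OrderedDict`, which combines the list order and the lookup in one structure, to simulate LRU and count page faults:

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Simulate LRU replacement; return the number of page faults."""
    frames = OrderedDict()    # least recently used first, most recent last
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)    # refresh: now most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)  # evict the least recently used page
        frames[page] = True
    return faults
```

On the same reference string used for FIFO above (1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames), LRU incurs 10 faults.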
3.4 Least frequently used
Least frequently used (LFU) page replacement works on the idea that each page has a counter of its own, initially 0. At each clock interval, every page that was referenced within that interval has its counter incremented by 1. As a result, the counters keep track of how many times each page has been used, and the page with the lowest counter can be swapped out when necessary.
The main problem with this strategy is that it tracks the frequency of use without tracking when the page was used. In a multi-pass compiler, for example, pages that were used heavily during the first pass but are no longer needed in the second pass will be kept in preference to pages that are comparably lightly used in the second pass, because the old pages still have higher frequency counters.
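A minimal LFU simulation, assuming eviction simply picks the page with the smallest counter (tie-breaking is left arbitrary here; a real system would need an explicit rule):

```python
def lfu_page_faults(reference_string, num_frames):
    """Simulate a simple LFU policy; return the number of page faults."""
    counters = {}             # page -> reference count while resident
    faults = 0
    for page in reference_string:
        if page in counters:
            counters[page] += 1
            continue
        faults += 1
        if len(counters) == num_frames:
            victim = min(counters, key=counters.get)  # lowest counter
            del counters[victim]
        counters[page] = 1
    return faults
```

The compiler problem from the text shows up directly: in the reference string 1, 1, 1, 2, 3, 2 with 2 frames, page 1's high counter keeps it resident, so pages 2 and 3 keep evicting each other (4 faults in total).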
3.5 Not recently used
Not recently used (NRU) page replacement works on the idea that pages which have been used recently should be kept. The strategy works with the following principle: when a page is referenced, its referenced bit is set, and when a page is modified (written to), its modified bit is set. The setting of these bits is usually performed in hardware, although it can also be done at the software level.
At a fixed time interval, a clock interrupt triggers and clears the referenced bits of all pages, so only pages that are referenced within the current clock interval will have their referenced bit set. When a page needs to be replaced, the operating system separates the pages into four categories:
- Category 0 - not referenced, not modified
- Category 1 - not referenced, modified
- Category 2 - referenced, not modified
- Category 3 - referenced, modified
It may not look possible for a page to be modified but not referenced, but this can happen when a page in category 3 has its referenced bit cleared by the clock interrupt. The strategy simply picks a page at random from the lowest non-empty category.
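The four-category selection can be sketched as follows; encoding each page's state as a `(referenced, modified)` bit pair is an assumption made here for illustration:

```python
import random

def nru_choose_victim(pages, rng=random):
    """Pick a victim page under NRU.

    pages: dict mapping page number -> (referenced_bit, modified_bit).
    Pages fall into categories 0-3 = referenced*2 + modified, and a
    random page from the lowest non-empty category is chosen.
    """
    categories = {0: [], 1: [], 2: [], 3: []}
    for page, (referenced, modified) in pages.items():
        categories[2 * referenced + modified].append(page)
    for cat in range(4):      # lowest category wins
        if categories[cat]:
            return rng.choice(categories[cat])
    raise ValueError("no pages to evict")
```

For example, with pages {10: (1,1), 11: (1,0), 12: (0,1)}, page 12 (category 1, modified but not referenced) is chosen ahead of the two referenced pages.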
3.6 Far page
Far page replacement differs from the other strategies in that it exploits the predictable patterns in which functions and data are referenced. The prediction of which page to remove or replace is done using an access graph. It is not well known among users because it is very difficult to implement and involves a high execution cost.
3.7 Second chance
Second chance page replacement is a modified form of the FIFO (first in first out) strategy that performs relatively better than FIFO at little extra cost. Like FIFO, it looks at the front of the queue, but instead of immediately replacing that page it checks whether the page's referenced bit is set. If the bit is not set, the page is replaced; otherwise, the bit is cleared, the page is inserted at the back of the queue as if it were a new page, and the process is repeated. If all pages have their referenced bit set, then on the second encounter of the first page in the list that page will be replaced, since its referenced bit has by then been cleared.
Basically, as the name suggests, second chance gives every page a "second chance": an old page that has been referenced is most likely still in use, and should not be replaced ahead of a new page that has not been referenced.
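The clear-and-requeue loop described above can be sketched like this (the queue and bit-map representation are assumptions for illustration):

```python
from collections import deque

def second_chance_evict(queue, referenced):
    """Evict one page under the second-chance policy.

    queue: deque of resident pages, oldest at the left.
    referenced: dict mapping page -> referenced bit (0 or 1); mutated.
    Returns the evicted page.
    """
    while True:
        page = queue.popleft()
        if referenced[page]:
            referenced[page] = 0   # clear the bit: the second chance
            queue.append(page)     # re-insert as if newly arrived
        else:
            return page            # old and unreferenced: evict it
```

With `queue = deque([1, 2, 3])` and referenced bits {1: 1, 2: 0, 3: 1}, page 1 is given a second chance and moved to the back, and page 2 (unreferenced) is evicted. The loop always terminates: if every page has its bit set, the first page comes around again with its bit now cleared.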
3.8 Aging
The aging page replacement strategy is a descendant of the least frequently used strategy, with modifications that make it aware of the time span of use. Instead of just incrementing the counters of referenced pages, which weights all page references equally regardless of when they occurred, the reference counter of a page is first shifted right (divided by 2), and then the referenced bit is added at the left of that binary number. For example, if a page has referenced bits 1, 0, 0, 1, 1, 0 over the past 6 clock ticks, its reference counter evolves as follows: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present thus have more impact than page references long ago in the past. This ensures that pages referenced more recently, even if less frequently, take priority over pages referenced more frequently in the past. When a page needs to be replaced, the page with the lowest counter is chosen.
Aging differs from least recently used in that aging can only keep track of references in the latest 16 or 32 time intervals (the width of the counter). Two pages may therefore both have a counter of 00000000 even though one was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision about which page to swap out, so aging can offer near-optimal performance at a moderate price.
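The shift-then-set step can be reproduced in a few lines; an 8-bit counter is assumed here, as in the worked example with reference bits 1, 0, 0, 1, 1, 0:

```python
def age_counters(counters, referenced_bits, bits=8):
    """One clock tick of the aging algorithm.

    counters: dict page -> integer counter, `bits` wide.
    referenced_bits: dict page -> 0/1 for this interval.
    Each counter is shifted right by one, and the referenced bit is
    ORed into the leftmost position.
    """
    for page in counters:
        counters[page] = (counters[page] >> 1) | (
            referenced_bits[page] << (bits - 1)
        )
    return counters
```

Feeding in the bits 1, 0, 0, 1, 1, 0 one tick at a time starting from 0 produces exactly the sequence 10000000, 01000000, 00100000, 10010000, 11001000, 01100100 quoted in the text.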
Through this assignment, I have learned about various well-known page replacement strategies that are used by many computer users. Each strategy has its own way of replacing or removing old pages in favor of new ones in memory, and each has its own advantages and disadvantages in terms of implementation cost, execution time, safety of important pages, and so on.
Even though page replacement strategies make it easier to choose which pages to remove or replace with new pages, I would still advise people who have many pages that must be kept safe to be careful with replacement strategies, because a page that should not be erased can be erased by mistake.
Cybercrime is becoming a serious problem nowadays: many computer users suffer from the loss and theft of data and information. As cybercrime increases, many solutions have come up to protect data and information stored in the computer or exchanged over a network. Holistic computer security measures are primarily focused on preventing user error and malicious acts. Computer security is not complicated; it may seem that way, but the theory behind it is relatively simple. Hacking methods fall into just a few categories, and the solutions to computer security problems are actually rather straightforward.
6.0 What is security?
Security is any measure taken to prevent the loss of a user's files, data, and information. Such loss might occur because of user mistakes, acts of nature, hardware failure, or unauthorized access to the computer.
7.0 Security Attacks
There are various ways in which a computer can be attacked: by hackers, by malicious code, or by users themselves. These are a few types of security attack to be aware of:
1) Cryptanalysis
Cryptanalysis is the study of how to take encrypted data and decrypt it without using the key. It looks for weaknesses in encrypted data in order to break the code, and it is often used by hackers with bad intentions.
2) Malicious code
Malicious code is a type of Internet threat that cannot be controlled by antivirus software alone. In contrast to viruses, which require a user to execute a program in order to cause damage, malicious code includes auto-executable applications. Once inside a network or workstation, it can enter network drives and propagate. It can also overload network and mail servers by sending email messages, steal data and passwords, delete document files, email files, or passwords, and even reformat hard drives.
3) Denial of service attack
A denial of service attack is usually carried out by hackers to prevent users from using a network. Such attacks may target users or organizations in order to block outgoing connections on the network, such as access to web pages.
4) Software Exploitation
Software exploitation refers to attacks launched against applications and higher-level services. Attackers gain access to data by exploiting weaknesses in data access objects in a database or flaws in a service.
5) System penetration
System penetration is when hackers attempt to break into the operating system and software to steal data and information or to corrupt it. The value of penetration testing is that attack and defense require different mindsets: people who defend a system are normally not good at finding ways to break into it, and vice versa.
8.0 Attack Prevention and Security solutions.
8.1 Security on your computer
8.1.1 Cryptography
Cryptography is the science of writing in secret code, and it is also an ancient art. Cryptography is used not only for protecting data but also for user authentication. In general, there are two types of cryptography: secret-key cryptography and public-key cryptography.
8.1.2 Secret-key cryptography
Secret-key cryptography, sometimes called symmetric cryptography, is the more traditional form of cryptography, in which a single key is used both to encrypt and to decrypt data. Secret-key cryptography deals not only with encryption but also with authentication.
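To illustrate the single-key idea only, here is a toy repeating-key XOR cipher; it is trivially breakable and must never be used for real security, but it shows how the same key both encrypts and decrypts:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.

    Because XOR is its own inverse, applying the function twice with
    the same key recovers the original data.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

For example, `xor_cipher(xor_cipher(b"attack at dawn", b"secret"), b"secret")` returns the original message, which is the defining property of a symmetric scheme: whoever holds the key can both lock and unlock.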
8.1.3 Public-key cryptography
Public-key cryptography, sometimes called asymmetric cryptography, uses a pair of keys: a public key, published by some designated authority as an encryption key, and a mathematically related private key that is kept secret. Together they are used to encrypt messages and create digital signatures in an effective way.
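A textbook-RSA sketch with deliberately tiny primes can illustrate the public/private pairing; real systems use far larger keys plus padding, so this is an illustration only:

```python
def toy_rsa_keys():
    """Tiny textbook-RSA keypair with small fixed primes (demo only)."""
    p, q = 61, 53
    n = p * q                # modulus, shared by both keys
    phi = (p - 1) * (q - 1)
    e = 17                   # public exponent, coprime with phi
    d = pow(e, -1, phi)      # private exponent: modular inverse of e
    return (e, n), (d, n)    # (public key, private key)

def rsa_encrypt(m, public_key):
    e, n = public_key
    return pow(m, e, n)      # anyone with the public key can encrypt

def rsa_decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)      # only the private-key holder can decrypt
```

The asymmetry is the point: encryption uses the published `(e, n)`, while decryption needs the secret `d`, which cannot feasibly be computed from the public key when the primes are large.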
8.2 Authentication
Authentication is the process of identifying users, usually by means of a username and password. In security systems it is distinguished from authorization, which grants users access to system objects based on their identity. Authentication only ensures that the user is who he or she claims to be; it says nothing about the access rights of that user.
8.3 Password Salting
Passwords are usually stored in an encrypted form, often called an 'md5 hash'. Password salting is the process of making passwords more secure by adding a random string of characters to each password before its hash is calculated, which makes the hashes harder to reverse.
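A sketch of the salting step, using SHA-256 in place of the md5 mentioned above (md5 is considered broken for this purpose today; real systems should use a dedicated password hash such as PBKDF2 or similar):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    """Hash a password with a random per-user salt.

    Returns (salt, hex_digest); both must be stored to verify later.
    """
    if salt is None:
        salt = os.urandom(16)    # 16 random bytes, unique per user
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password: str, salt: bytes, digest: str) -> bool:
    """Recompute the salted hash and compare against the stored digest."""
    return hashlib.sha256(salt + password.encode()).hexdigest() == digest
```

Because each user gets a different random salt, two users with the same password store different digests, which defeats precomputed lookup tables.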
8.4 Biometrics
Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, eye retinas and irises, fingerprints, voice patterns, facial patterns, and hand measurements, for authentication purposes.
8.5 Smart cards
A smart card is a plastic card the size of a credit card, with a microchip embedded in it that can hold data. It is used for electronic cash payments, telephone calling, and other applications.
8.6 Kerberos
Kerberos is a security measure that authenticates requests for services in a computer network. Kerberos lets users request an encrypted "ticket" from the authentication process, which can then be used to request a particular service from a server. The user's password does not have to pass through the network.
8.7 Access Control
Access control is any method by which a system accepts or refuses the right to access some data or perform some action. Normally, a user first logs in to the system using some authentication mechanism; then the access control mechanism determines which operations the user may or may not perform by comparing the user ID against an access control database.
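The compare-against-a-database step can be sketched as a simple access control list (ACL) lookup; the dictionary format used here is an assumption for illustration:

```python
def check_access(acl, user, operation):
    """Return True if the ACL grants `user` the given operation.

    acl: dict mapping user id -> set of permitted operations.
    Unknown users get an empty permission set (deny by default).
    """
    return operation in acl.get(user, set())
```

For example, with `acl = {"alice": {"read", "write"}, "bob": {"read"}}`, Bob may read but not write, and a user absent from the database is denied everything, which is the safer default.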
9.0 Securities in Communications
9.1 Secure Communication.
Secure communication has five basic requirements:
- Integrity - the ability to ensure that information transmitted or received over the Internet has not been altered in any way by an unauthorized party
- Nonrepudiation - the ability to ensure that users do not deny (repudiate) their online actions
- Authenticity - the ability to verify the identity of a user on the network
- Confidentiality - the ability to ensure that messages and data are available only to those authorized to view them
- Privacy - the ability to control the use of information a user provides about himself or herself on the network
9.2 Key Agreement Protocol
A key agreement protocol is the process by which two parties exchange keys over an unsecured medium. Related mechanisms include:
Digital envelope - a digital envelope usually consists of a message encrypted using secret-key cryptography together with an encrypted copy of the secret key. While digital envelopes usually use public-key cryptography to encrypt the secret key, this is not strictly necessary.
Key management - key management deals with the secure generation, distribution, and storage of keys. Secure key management is very important: once a key is randomly generated, it must be kept secret to avoid mishaps. In practice, most attacks on public-key systems are aimed at the key management level rather than at the cryptographic algorithm itself.
Digital signatures - a digital signature is an electronic signature used to authenticate the identity of the signer of a document or the sender of a message, and to ensure that the original content of the message or document has not been changed. The ability to prove that the original signed message arrived also means that the sender cannot easily repudiate it later.
9.3 Other method
Public key infrastructure - a public key infrastructure (PKI) enables users of a basically insecure public network like the Internet to exchange data and money securely and privately through the use of a public and private cryptographic key pair that is obtained and shared through a trusted authority. The PKI provides digital certificates that can identify an individual or an organization, and directory services that can store and, when necessary, revoke the certificates. Although the components of a PKI are generally understood, a number of different vendor approaches and services are still emerging.
Digital certificate - a digital certificate is like an electronic "credit card" that establishes a user's credentials when doing business or other transactions on the Web. It is issued by a certification authority and contains the user's name, a serial number, expiration dates, a copy of the certificate holder's public key (used for encrypting messages and verifying digital signatures), and the digital signature of the certificate-issuing authority, so that a recipient can verify whether the certificate is real. Digital certificates can be kept in registries so that authenticating users can look up other users' public keys.
Certificate authority - a certificate authority (or certification authority) is an entity at the center of many public key infrastructure schemes, whose purpose is to issue digital certificates for use by other parties. Some certificate authorities charge a fee for their service, while others are free. It has also become common for governments and institutions to run their own certificate authorities.
Security problems are becoming so serious that it is now common to learn some security measures for general knowledge, as the threats from the Internet keep growing. Through this assignment I have learned about many types of security measures that do different jobs in different situations, such as security measures for a single computer and others for a computer network.
It is also important to know when to use which security measure. I prefer cryptography because it prevents the data from being understood, so even if the data is stolen, it will take a lot of time to decode it.