Threads are important in distributed systems because they are more efficient than processes, and they are useful in both client and server processes.
Technically, a thread is defined as an independent stream of instructions that can be scheduled to run as such by the operating system. To the software developer, the concept of a "procedure" that runs independently from its main program may best describe a thread. To go one step further, imagine a main program (a.out) that contains a number of procedures. Then imagine all of these procedures being able to be scheduled to run simultaneously and/or independently by the operating system. That would describe a "multi-threaded" program.
Before understanding a thread, one first needs to understand a UNIX process. A process is created by the operating system, and requires a fair amount of "overhead". Processes contain information about program resources and program execution state, including the process ID, process group ID, user ID and group ID, environment, working directory, program instructions, registers, stack, heap, file descriptors, signal actions, shared libraries, and inter-process communication tools (such as message queues, pipes, semaphores, or shared memory).
THREADS WITHIN A UNIX PROCESS
Threads use and exist within these process resources, yet are able to be scheduled by the operating system and run as independent entities, largely because they duplicate only the bare essential resources that enable them to exist as executable code. This independent flow of control is accomplished because a thread maintains its own stack pointer, registers, scheduling properties (such as policy or priority), set of pending and blocked signals, and thread-specific data.
Threads mechanism in UNIX.
UNIX is the operating system most often preferred by professionals, because of the additional features and tools it offers. Windows is the most widely used OS worldwide, but it does not see as much professional use as UNIX or Mac OS; owing to its user-friendliness, it is used more for domestic purposes than any other OS.
The UNIX threads implementation is system dependent. The UNIX threads interface does not define the implementation; however, it does provide for both multiplexed and bound threads, and both implementations can support exactly the same APIs. The relationship between user-level threads and kernel LWPs may be one-to-one (1:1), many-to-one (M:1), or many-to-many (M:N).
POSIX threads are portable among UNIX systems that are POSIX compliant; the standard is IEEE 1003.1c, commonly known as Pthreads. Other thread implementations include Light Weight Kernel Threads (LWKT) in the BSDs, the Native POSIX Thread Library (NPTL) for Linux, Win32 threads, GNU Portable Threads, Mac OS threads, Solaris threads, and Java threads.
Solaris makes use of four separate thread-related concepts:
Process: the normal UNIX process, including the user's address space, stack, and process control block.
User-level threads (ULTs): implemented through a threads library in the address space of a process, these are invisible to the operating system and are the interface for application parallelism.
Lightweight processes: a lightweight process (LWP) can be viewed as a mapping between ULTs and kernel threads. Each LWP supports one or more ULTs and maps to one kernel thread. LWPs are scheduled by the kernel independently and may execute in parallel on multiprocessors.
Kernel threads: the fundamental entities that can be scheduled and dispatched to run on one of the system processors.
Some operating systems provide only processes: every thread of execution that the kernel manages has its own memory space. Other operating systems, for example most modern UNIX systems, allow processes to contain multiple threads of execution: they provide a kernel-level notion of threads.
Threads mechanism in Windows 7
A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread's set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process. Threads can also have their own security context, which can be used for impersonating clients.
An application can use the thread pool to reduce the number of application threads and provide management of the worker threads. Applications can queue work items, associate work with waitable handles, automatically queue based on a timer, and bind with I/O. Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
Windows schedules threads, not processes. Scheduling is preemptive, priority-based, and round-robin at the highest priority level; there are 16 real-time priorities above 16 normal priorities. The scheduler tries to keep a thread on its ideal processor/node to avoid performance degradation from cache and NUMA-memory effects. Threads can specify an affinity mask to run only on certain processors. Each thread has a current and a base priority, the base priority being initialized from the process. Non-real-time threads have priority boost and decay from the base: boosts are given for GUI foreground work and for waking on an event, and priority decays, particularly if a thread is CPU bound (running at quantum end). The scheduler is state-driven by timer events, thread priority changes, thread block/exit, and so on. Priority inversions can lead to starvation, so the balance manager periodically boosts non-running runnable threads.
Critical comparison of threads mechanism in UNIX.
UNIX uses fork to create a new copy of a running process, and exec to replace a process's executable file with a new one. The fork function creates a child process that is an almost exact copy of the parent process. The fact that the child is a copy of the parent ensures that the process environment is the same for the child as it is for the parent. For a UNIX application to change the executable file run in the child process, the child process must explicitly call an exec function to overwrite the executable file with a new application. The combination of fork and exec is similar to, but not the same as, CreateProcess.
A thread is a line of execution in a process. Every process has one main thread, which may (depending on the particular platform) be allowed to start others. Multiple threads are usually more efficient than multiple processes, but synchronization and security considerations (since any thread can modify the process environment) make thread programming more difficult.
Signals can also be used with threads, but usually the target has to make a specific system call first; for example, a signal could be used to "wake up" a thread which has called sigsuspend. Operating systems which substitute the spawn call tend to use multiple threads in the daemon, rather than spawned offspring, wherever the task requires much shared data. While this requires less operating system overhead, it is also inherently less secure: all threads have the same root privileges as the main thread; any thread can change the global data or environment of the daemon, so a programming mistake can be difficult to find, since it may appear only after the process has run for some time; and signals must be used more carefully when dealing with a process which has multiple threads.
But using threads is not an option confined to non-Unix operating systems. Almost all modern versions of Unix support standard POSIX thread calls. The modern practice in Unix is to use a parent process with multiple child processes if security is the primary concern and to use multiple threads in the parent process if efficiency is the primary concern. One can say that the Unix process control constructs, specifically the fork, exec, wait, exit, SIGCHLD cycle together with the option of POSIX threads, allows maximum flexibility with regard to all aspects of process control.
Critical comparison of threads mechanism in windows 7.
User-mode scheduling (UMS) is a lightweight mechanism that applications can use to schedule their own threads. An application can switch between UMS threads in user mode without involving the system scheduler, and regain control of the processor if a UMS thread blocks in the kernel. Each UMS thread has its own thread context instead of sharing the thread context of a single thread. The ability to switch between threads in user mode makes UMS more efficient than thread pools for short-duration work items that require few system calls.
A fiber is a unit of execution that must be manually scheduled by the application. Fibers run in the context of the threads that schedule them. Each thread can schedule multiple fibers. In general, fibers do not provide advantages over a well-designed multithreaded application. However, using fibers can make it easier to port applications that were designed to schedule their own threads.
Microsoft Windows supports preemptive multitasking, which creates the effect of simultaneous execution of multiple threads from multiple processes. On a multiprocessor computer, the system can simultaneously execute as many threads as there are processors on the computer. A job object allows groups of processes to be managed as a unit. Job objects are nameable, securable, sharable objects that control attributes of the processes associated with them. Operations performed on a job object affect all processes associated with it.
User-level thread implementations are better than kernel-level thread implementations in a distributed network.
There are two distinct models of thread control: user-level threads and kernel-level threads. The thread function library used to implement user-level threads usually runs on top of the system in user mode, so these threads within a process are invisible to the operating system. User-level threads have extremely low overhead and can achieve high performance in computation. However, if a thread makes a blocking system call such as read(), the entire process blocks. Also, the scheduling control exercised by the thread runtime system may allow some threads to gain exclusive access to the CPU and prevent other threads from obtaining it. Finally, access to multiple processors is not guaranteed, since the operating system is not aware of the existence of these threads.
A system can offer both kernel-level and user-level threads; this is known as hybrid threading. User- and kernel-level threads each have their benefits and downsides. Switching between user-level threads is often faster, because it doesn't require resetting memory protections to switch to the in-kernel scheduler and again to switch back to the process. This mostly matters for massively concurrent systems that use a large number of very short-lived threads, such as some high-level languages (Erlang in particular) and their green threads. User-level threads require less kernel support, which can make the kernel simpler. Kernel-level threads allow a thread to run while another thread in the same process is blocked in a system call; processes with user-level threads must take care not to make blocking system calls, as these block all the threads of the process. Kernel-level threads can run simultaneously on multiprocessor machines, which purely user-level threads cannot achieve.
The primary advantages of user-level threads are efficiency and flexibility. Because the operating system is not involved, user-level threads can be made to use very little memory, and can be created and scheduled very quickly. User-level threads are also more flexible because the thread scheduler is in user code, which makes it much easier to schedule threads in an intelligent fashion -- for example, the application's priority structure can be directly used by the thread scheduler.
Like a kernel thread, a user-level thread includes a set of registers and a stack, and shares the entire address space with the other threads in the enclosing process. Unlike a kernel thread, however, a user-level thread is handled entirely in user code, usually by a special library that provides at least start, swap, and suspend calls. Because the OS is unaware of a user-level thread's existence, a user-level thread cannot separately receive signals or use operating system scheduling calls such as sleep(). Many implementations of user-level threads exist, including GNU Portable Threads (Pth), FreeBSD's userland threads, QuickThreads, and those developed for the Charm++ system.
A kernel thread, sometimes called an LWP (lightweight process), is created and scheduled by the kernel. Kernel threads are often more expensive to create than user threads, and the system calls to directly create kernel threads are very platform specific.
A user thread is normally created by a threading library, and scheduling is managed by the threading library itself (which runs in user mode). All user threads belong to the process that created them. The advantage of user threads is that they are portable.
The major difference can be seen on multiprocessor systems: user threads completely managed by the threading library cannot run in parallel on different CPUs, although they run fine on uniprocessor systems. Since kernel threads use the kernel scheduler, different kernel threads can run on different CPUs. Many systems implement threading differently.
In summary, in the UNIX environment a thread: exists within a process and uses the process resources; has its own independent flow of control as long as its parent process exists and the OS supports it; duplicates only the essential resources it needs to be independently schedulable; may share the process resources with other threads that act equally independently (and dependently); dies if the parent process dies (or something similar); and is "lightweight" because most of the overhead has already been accomplished through the creation of its process.
Because threads within the same process share resources: changes made by one thread to shared system resources (such as closing a file) will be seen by all other threads; two pointers having the same value point to the same data; and reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer.