Give Flexibility To Their Staff Computer Science Essay


Threads have been a boon to the computing world. Threads are a seemingly straightforward adaptation of the dominant sequential model of computation to concurrent systems [1]. Most languages today support threads with little or no syntactic change, and operating systems and hardware architectures have evolved to support them efficiently; almost all operating systems now use threads.

Multithreading is used by virtually all operating systems today and has enhanced system efficiency. Technologists favour multithreading because it allows software to take advantage of the predicted increases in parallelism in computer architectures.

I feel that threads play a significant role in the operating system, but they also have drawbacks in certain areas such as concurrency and thread pool usage. In this paper I will discuss threads and processes, their creation, and the issues and risks associated with threads.


A thread is the smallest unit of execution in an operating system. Threads are sequences of program instructions that can be managed by the operating system scheduler; a thread is essentially a lightweight process. A thread is contained inside a process. There are two types of threads:

i) User-level threads

ii) Kernel-level threads

Multithreading is the situation in which multiple threads exist within the same process and share its memory, whereas separate processes do not share resources.

Threads are typically manipulated through a system call interface.

Threads in a process can execute any of the process's code, including code currently being executed by another thread of the same process. Besides basic information about a thread, including its CPU register state, scheduling priority, and resource usage accounting, every thread has a portion of the process address space assigned to it, called a stack, which the thread can use as scratch storage as it executes program code: to pass function parameters, maintain local variables, and save function return addresses. So that the system's virtual memory is not unnecessarily wasted, only part of the stack is initially allocated, or committed, and the rest is simply reserved. Because stacks grow downward in memory, the system places guard pages beyond the committed part of the stack that trigger an automatic commitment of additional memory (called a stack expansion) when accessed [1].
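In Java, for instance, a stack-size hint can be passed when a thread is created. The following is a minimal sketch; the class and thread names are illustrative only, and the JVM and OS may round or ignore the size hint.

```java
// Sketch: creating a thread with an explicit stack-size hint.
// The fourth constructor argument is a stack size in bytes; it is
// only a hint, and its effect is platform-dependent.
public class StackSizeDemo {
    static String runOnSmallStack(Runnable task) throws InterruptedException {
        // null thread group, 256 KiB requested stack (illustrative value)
        Thread t = new Thread(null, task, "small-stack", 256 * 1024);
        t.start();
        t.join(); // wait for the worker to finish
        return t.getName();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnSmallStack(
                () -> System.out.println("hello from a thread with a small stack")));
    }
}
```

A smaller stack reduces the reserved address space per thread, which matters when an application creates very many threads.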


A process is an instance of a computer program that is being executed [2]. Depending on the operating system, a process may be made up of multiple threads of execution that execute instructions concurrently. Several processes may be associated with the same program.

When processes execute simultaneously, this is called concurrency.

When processes interfere with each other they can cause failures such as thrashing and deadlock. Hence we use IPC (Inter-Process Communication): processes communicate through IPC, which helps handle concurrency issues and faults.

The operating system holds the information about active processes in data structures called Process Control Blocks.

A context switch is the mechanism used to load a process onto the processor, and it is how process scheduling is carried out. Processes have different states: Ready, Running, Blocked, Suspended, and Waiting.

Fig. Showing Different States in a Process [2]


The usage of threads in a process varies between operating systems. Creating a thread is much easier than creating a process. Inter-thread communication (ITC) is the sharing of data between threads of the same process.

In Windows, applications create additional threads even though each process starts with a default initial thread, particularly processes with a GUI, i.e. Graphical User Interface [1].

Typically, these threads are created to execute work so that the main thread remains responsive to user input and Windows messages.

Fundamentally, using multiple threads in a process allows applications to take advantage of multiple processors for scalability, or to continue executing while some threads are tied up waiting for synchronous I/O operations to complete [2].

In the Windows world, a process is not an alternative to a thread. A context switch between threads of different processes requires the OS to remap the virtual address space and flush the TLB [1].

Consider an example: if a Shockwave plugin fails to load in Internet Explorer while running in the same process (even on a different thread), Internet Explorer itself will crash; hence the plugin is loaded into a separate process. Processes and threads in Windows each have their own limitations.

Today Microsoft supports preemptive multitasking, which creates the effect of simultaneous execution of multiple threads from multiple processes [3].

Fig- Difference between use of Single Thread and Multithreading in Process


"Thread Local Storage (TLS) is a computer programming method that uses static or global memory local to a thread [1]."

This is usually required because all threads in a process share the same address space, which is sometimes not desirable: data kept in static or global variables is normally located at the same memory address when referenced by different threads of the same process.

TLS, or Thread Local Storage, can serve as a replacement for locking strategies. Instead of taking a lock (flag) before accessing a resource, each thread (in a pool) can hold its own unique copy of the data, and when processing is complete the per-thread results are merged if required.

TLS is useful, but the more TLS Windows uses, the more expensive it becomes to create new threads, which can be a concern.
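This per-thread-copy-then-merge idea can be sketched in Java using ThreadLocal. The class name and the summing task below are illustrative only, not part of any particular API.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the TLS idea: each thread accumulates into its own
// ThreadLocal copy (no lock needed on the hot path), and the
// per-thread results are merged once at the end.
public class TlsDemo {
    // Each thread gets its own one-element accumulator array.
    static final ThreadLocal<long[]> localSum = ThreadLocal.withInitial(() -> new long[1]);
    static final AtomicLong merged = new AtomicLong();

    static long sumConcurrently(int threads, int perThread) throws InterruptedException {
        merged.set(0);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 1; j <= perThread; j++)
                    localSum.get()[0] += j;          // private copy, no lock taken
                merged.addAndGet(localSum.get()[0]); // merge step, once per thread
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return merged.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads each sum 1..100 (= 5050), merged total is 20200
        System.out.println(sumConcurrently(4, 100));
    }
}
```

The merge uses a single atomic per thread, so contention is paid once per thread rather than once per update.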




Fibers are particularly lightweight threads of execution. Fibers are similar to threads in that they share the same address space, but fibers use cooperative multitasking while threads use preemptive multitasking [1].

Fibers are sometimes used to avoid risks raised by threads. Fibers can help with safety compared to threads because they multitask cooperatively: synchronization constructs, including spinlocks and atomic operations, are unnecessary when writing fibered code [1].

A main disadvantage of fibers is that they cannot utilize multiprocessor machines without also using preemptive threads. However, an M:N threading model can be implemented, with no more preemptive threads than CPUs [2].

On Microsoft Windows, fibers are created using the ConvertThreadToFiber and CreateFiber calls; a fiber that is currently suspended may be resumed on any thread. Fiber-local storage is analogous to thread-local storage and may be used to create unique copies of variables [3].


Most server-oriented applications, such as database servers, web servers, file servers, or mail servers, process large numbers of tasks that arrive from some remote source. A single request arrives at the server in some manner, which may be through a network protocol (such as HTTP, FTP, or POP), through a JMS queue, or perhaps by polling a database [1]. Regardless of how the request arrives, it is often the case in server applications that the processing of each individual task is short-lived and the number of requests is large.

A thread pool offers a solution to both the problem of thread life-cycle overhead and the basic problem of resource thrashing. By reusing threads for multiple tasks, the thread-creation overhead is spread over many tasks. As a bonus, because the thread already exists when a request arrives, the delay introduced by thread creation is eliminated. Thus, the request can be serviced immediately, rendering the application more responsive. Furthermore, by properly tuning the number of threads in the thread pool, you can prevent resource thrashing by forcing any requests in excess of a certain threshold to wait until a thread is available to process it[1].
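A minimal sketch of this reuse in Java, using a fixed pool of worker threads; the class name and the pool and request counts below are arbitrary choices for illustration.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: many short tasks serviced by a small fixed pool, so the
// thread-creation cost is paid 4 times rather than once per request.
public class PoolDemo {
    static int handleAll(int requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 reusable workers
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> done.countDown()); // stands in for per-request work
        }
        done.await();     // block until every request has been serviced
        pool.shutdown();
        return requests;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleAll(100) + " requests handled by 4 threads");
    }
}
```

Because the workers already exist when a request arrives, no per-request thread start-up delay is incurred.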


A thread pool assigns a pooled thread to every single task, and each task takes its own time to execute. This may lead to performance degradation in cases where the submitted tasks run for a long time (which is not that usual in most servers today), because long-running tasks tie up pool threads.

An alternative technique for handling multiple tasks, given these problems with thread pools, is to have just one thread running in the background that processes tasks of a certain type from a task queue. Swing and AWT, for example, use this model. But because some user-interface tasks run long, applications may require additional threads to complete the overall work faster, since a single thread (the GUI thread, the AWT thread) would take very long to finish the execution on its own [1].
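The single-background-thread model can be approximated in Java with a single-thread executor, which drains its task queue one task at a time in submission order; the names below are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the Swing/AWT-style model: one dedicated worker thread
// drains a task queue, so all queued work runs sequentially (FIFO).
public class SingleWorkerDemo {
    static StringBuilder runInOrder(String... tasks) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        StringBuilder log = new StringBuilder(); // touched only by the worker thread
        for (String t : tasks) {
            worker.submit(() -> log.append(t));  // tasks execute one at a time
        }
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        return log;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runInOrder("a", "b", "c")); // prints abc
    }
}
```

Since only one thread ever touches the shared state, no locking is needed; that is the appeal of the model, and its serialization is also why long tasks make the UI feel frozen.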



While the thread pool is a powerful mechanism for structuring multithreaded applications, it is not without risk. Applications built with thread pools are subject to all the same concurrency risks as any other multithreaded application, such as synchronization errors and deadlock, and a few other risks specific to thread pools as well, such as pool-related deadlock, resource thrashing, and thread leakage [1].


As with any multithreaded application, there is a risk of deadlock. As we have learnt, most thread deadlocks occur when each thread is waiting for an event that only another thread in the set can cause. The simplest case of deadlock is where thread X holds an exclusive lock on object DATA(A) and is waiting for a lock on object DATA(B), while thread Y holds an exclusive lock on DATA(B) and is waiting for the lock on DATA(A). In this case the deadlocked threads will wait forever unless the lock is broken, which Java does not support.
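One standard remedy, sketched below, is to impose a global lock ordering so the circular wait can never form: every thread takes the lock on DATA(A) before the lock on DATA(B). The class name and the counter are just illustrative stand-ins for shared state.

```java
// Sketch: the X/Y deadlock above disappears if every thread acquires
// the two locks in the same global order (dataA first, then dataB).
public class LockOrderDemo {
    static final Object dataA = new Object();
    static final Object dataB = new Object();
    static int shared = 0;

    static void update() {
        synchronized (dataA) {       // always lock A first...
            synchronized (dataB) {   // ...then B: no circular wait is possible
                shared++;
            }
        }
    }

    static int run(int threads, int updatesEach) throws InterruptedException {
        shared = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < updatesEach; j++) update();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return shared;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(2, 1000)); // 2000: no updates lost, no deadlock
    }
}
```

The ordering rule must be followed by every code path that takes both locks; a single violating path reintroduces the deadlock.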

While deadlock is a risk in any multithreaded program, thread pools introduce another opportunity for deadlock, where all pool threads are executing tasks that are blocked waiting for the results of another task on the queue, but the other task cannot run because no unoccupied thread is available [1]. This often happens when thread pools are used to implement simulations involving many interacting objects: the simulated objects send queries to one another that then execute as queued tasks, and the querying object waits synchronously for the response [2].

Resource thrashing

The thread pool size must be tuned properly for this scheduling mechanism to work. Threads consume numerous resources, including memory and other system resources. Besides the memory required for the Thread object itself, each thread requires two execution call stacks, which can be large. In addition, the JVM will likely create a native thread for each Java thread, consuming additional system resources [1].

The scheduling overhead of switching between threads is small, but with many threads context switching becomes a considerable drag on a program's performance. If a thread pool is too large, the resources consumed by its threads can have a significant impact on system performance [1]. As the number of threads and switches grows, a situation of resource starvation can arise, because the pool threads consume resources that could be used more effectively by other tasks. Resources such as JDBC connections, sockets, and file handles are limited, so too many concurrent requests can cause failures.

Concurrency errors

Thread pools and queuing mechanisms often rely on the wait() and notify() methods in their work-queue implementation. If coded incorrectly, it is possible for notifications and data to be lost, resulting in threads remaining idle even though there is work in the queue to be processed.
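A minimal Java sketch of such a work queue shows the detail that is easiest to get wrong: wait() must sit inside a while loop that re-checks the condition, because a thread can wake up spuriously or after another thread has already consumed the work. The class name is illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a wait()/notify() work queue. Replacing the while loop
// around wait() with a single if-check is exactly the coding error
// that loses wakeups and strands idle threads.
public class WorkQueue {
    private final Deque<Runnable> tasks = new ArrayDeque<>();

    public synchronized void put(Runnable r) {
        tasks.addLast(r);
        notifyAll(); // wake any worker waiting for work
    }

    public synchronized Runnable take() throws InterruptedException {
        while (tasks.isEmpty()) { // re-check the condition after every wakeup
            wait();
        }
        return tasks.removeFirst();
    }

    public static void main(String[] args) throws Exception {
        WorkQueue q = new WorkQueue();
        Thread worker = new Thread(() -> {
            try {
                q.take().run(); // blocks until a task is available
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();
        q.put(() -> System.out.println("task ran"));
        worker.join();
    }
}
```

In production code the java.util.concurrent BlockingQueue implementations are usually preferable precisely because they encapsulate this fragile protocol.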

Thread leakage

The most common problem in thread pools is thread leakage. Thread leakage occurs when a thread is taken from a pool to perform a task but is not returned to the pool after the task completes. One way this happens is when the task throws an exception (such as a RuntimeException) or an Error. If the pool class does not handle these exceptions and errors, the thread simply exits and the size of the pool is reduced by one. When this happens enough times, the thread pool eventually becomes empty, and the system stalls because no threads are available to process tasks.
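One common defence is to wrap every task so that an unexpected exception is caught and logged instead of killing the worker thread. The sketch below is illustrative; note that the java.util.concurrent executors already capture task exceptions in the returned Future, so this wrapper matters most for hand-rolled pools.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: guarding tasks so a throwing task cannot leak its thread.
public class SafeTaskDemo {
    static Runnable guarded(Runnable task) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                // log and swallow: the worker thread survives
                // and returns to the pool
                System.err.println("task failed: " + t);
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(guarded(() -> { throw new RuntimeException("boom"); }));
        // The single pool thread is still alive to run the next task.
        Future<String> after = pool.submit(() -> "still alive");
        System.out.println(after.get());
        pool.shutdown();
    }
}
```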

Tasks that permanently stall, such as those that wait forever for resources that are not guaranteed to become available, or for input from users who may have gone home, can also cause the equivalent of thread leakage [1]. If a thread is permanently consumed by such a task, it has effectively been removed from the pool; tasks like these should either be given their own thread or wait only for a limited time [1].

Request overload

With an increasing number of requests, there may not be enough queue space to add every incoming request to the work queue, because the tasks queued for execution may themselves consume too many system resources and cause resource starvation. It must then be decided whether to throw the request away, relying on higher-level protocols to retry it after a period of time, or to refuse the request with a response stating that the server is temporarily busy.
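This refusal policy can be sketched in Java with a bounded work queue and a rejection handler; the pool and queue sizes below are arbitrary illustration values.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a bounded queue plus AbortPolicy makes the pool refuse
// requests beyond its capacity instead of queuing without limit.
public class OverloadDemo {
    static int submitAll(int requests) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),            // at most 2 queued requests
                new ThreadPoolExecutor.AbortPolicy());  // reject the rest
        CountDownLatch gate = new CountDownLatch(1);    // keeps the worker busy
        int rejected = 0;
        for (int i = 0; i < requests; i++) {
            try {
                pool.submit(() -> {
                    try { gate.await(); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++; // "server temporarily busy"
            }
        }
        gate.countDown(); // release the worker so queued tasks drain
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        // 1 running + 2 queued are accepted; the other 7 are refused
        System.out.println(submitAll(10) + " of 10 requests rejected");
    }
}
```

A CallerRunsPolicy is a common alternative: instead of refusing outright, it throttles the submitter by making it execute the overflow task itself.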


Thread pools can be an extremely effective way to structure a server application, as long as you follow a few simple guidelines:

Don't queue tasks that wait synchronously for results from other tasks. This can cause a deadlock in which all the threads are occupied by tasks that are in turn waiting for results from queued tasks that can't execute because all the threads are busy [1].

Be careful when using pooled threads for potentially long-lived operations. If the program must wait for a resource, such as an I/O completion, specify a maximum wait time, and then fail or requeue the task for execution at a later time. This guarantees that eventually some progress will be made by freeing the thread for a task that might complete successfully [1].
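The bounded-wait idea can be sketched using Future.get with a timeout, cancelling the stalled task when the limit is reached; the method name and timeout values below are illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: bound how long we wait on a potentially long-lived
// operation, then cancel it and fail (or requeue it for later).
public class TimedWaitDemo {
    static String fetchWithTimeout(Callable<String> op, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = pool.submit(op);
        try {
            return f.get(millis, TimeUnit.MILLISECONDS); // wait at most this long
        } catch (TimeoutException e) {
            f.cancel(true);      // interrupt the stalled task
            return "timed out";  // caller may fail or requeue
        } catch (Exception e) {
            return "failed: " + e;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchWithTimeout(() -> "fast result", 1000));
        System.out.println(fetchWithTimeout(() -> {
            Thread.sleep(5000); // simulates an operation that stalls
            return "slow";
        }, 100));
    }
}
```

Cancelling with interruption only works if the stalled operation is actually interruptible; tasks blocked in non-interruptible calls still need their own dedicated thread.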

Understand the tasks that are being queued, and what they do, in order to tune the thread pool size effectively: determine whether they are CPU-bound or memory-bound. Classes of tasks with radically different characteristics should be given separate, separately tuned thread pools.
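A common sizing heuristic, sketched below, is an assumption rather than a universal rule: size CPU-bound pools near the processor count, and size pools for blocking work larger in proportion to the time tasks spend waiting. The method names and the wait/compute ratio are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of thread-pool sizing heuristics (assumptions, not rules).
public class PoolSizing {
    static int cpuBoundPoolSize() {
        // one thread per available core for CPU-bound work
        return Runtime.getRuntime().availableProcessors();
    }

    static int ioBoundPoolSize(double waitToComputeRatio) {
        // threads = cores * (1 + wait/compute): extra threads can use
        // the CPU while others are blocked waiting
        return (int) (cpuBoundPoolSize() * (1 + waitToComputeRatio));
    }

    public static void main(String[] args) {
        ExecutorService cpuPool = Executors.newFixedThreadPool(cpuBoundPoolSize());
        System.out.println("CPU-bound pool size: " + cpuBoundPoolSize());
        System.out.println("Blocking-work pool size (wait/compute = 4): "
                + ioBoundPoolSize(4));
        cpuPool.shutdown();
    }
}
```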


The operating system defines an application hang as a UI thread that has not processed messages for at least 5 seconds. Some hangs are caused by obvious bugs: for example, a thread waiting for an event that is never signaled, or two threads each holding a lock and trying to acquire the other's. You can fix those bugs without too much effort. However, many hangs are not so clear-cut: yes, the UI thread is not retrieving messages, but it is busy doing other 'important' work and will eventually come back to processing messages.

However, the user perceives this as a bug. The design should match the user's expectations. If the application's design leads to an unresponsive application, the design will have to change. Finally, and this is important, unresponsiveness cannot be fixed like a code bug; it requires upfront work during the design phase. Trying to retrofit an application's existing code base to make the UI more responsive is often too expensive. The following design guidelines might help.

Make UI responsiveness a top-level requirement; the user should always feel in control of your application.

Ensure that users can cancel operations that take longer than one second to complete and/or that operations can complete in the background; provide appropriate progress UI if necessary.


Queue long-running or blocking operations as background tasks (this requires a well-thought out messaging mechanism to inform the UI thread when work has been completed)

Keep the code for UI threads simple; remove as many blocking API calls as possible

Show windows and dialogs only when they are ready and fully operational. If the dialog needs to display information that is too resource-intensive to calculate, show some generic information first and update it on the fly when more data becomes available.

A good example is the folder properties dialog from Windows Explorer. It needs to display the folder's total size, information that is not readily available from the file system. The dialog shows up right away and the "size" field is updated from a worker thread:


Unfortunately, there is no simple way to design and write a responsive application. Windows does not provide a simple asynchronous framework that would allow for easy scheduling of blocking or long-running operations.

10. CSRSS - Its use and Harms towards the system

"Client/Server Runtime Subsystem, or csrss.exe, is a component of the Microsoft Windows NT operating system that provides the user mode side of the Win32 subsystem and is included in Windows 2000, Windows XP, Windows 2003, Windows Vista, Windows Server 2008 and Windows 7"[1].

CSRSS is a user mode system service and is mainly responsible for GUI shutdown and handling the win32 console. When a user-mode process calls a function involving console windows, process/thread creation, or Side-by-Side support, instead of issuing a system call, the Win32 libraries (kernel32.dll, user32.dll, gdi32.dll) send an inter-process call to the CSRSS process which does most of the actual work without compromising the kernel [1].

Csrss.exe thus provides critical functions of the operating system, and its termination can result in the Blue Screen of Death being displayed. Csrss.exe controls threading and Win32 console window features. Threading is where an application splits itself into multiple simultaneously running tasks. The threads supported by csrss.exe differ from processes in that threads are contained within a process, with the various threads sharing resources of that process [2].

In some cases another instance of a file named csrss.exe exists on the system; this occurs when a Trojan has penetrated the machine. Even then, the genuine csrss.exe cannot be deleted, as it manages GUI shutdown, and terminating it may cause the Blue Screen of Death.

Basically how the Trojan or Malware is injected in the csrss process is it installs itself as a default debugger that is injected into the execution sequence of a target application. Once the threat/malware is installed as a default debugger, it will be run every time a target application is attempted to be launched - either to mimic it and hide its own presence (e.g. an open port or a running process), or simply to be activated as often as possible[19].