A Survey On Contention Management Techniques Computer Science Essay


Abstract-Parallel applications offer good performance, but because inter-thread communication occurs unpredictably, contention for shared objects arises. Even though contention management can be viewed as a task-scheduling problem, contention managers differ in many ways. In Software Transactional Memory (STM), contention management systems provide mechanisms to ensure forward progress of transactions, avoiding livelock and starvation, and to promote throughput and fairness. Priority-based approaches require that reads be visible to all transactions. Under a changing load, the operating system makes arbitrary scheduling decisions, leading to long serialization delays until a preempted thread resumes execution. Serialization and time-slice extension methods that overcome the disadvantages of spinning and blocking are also studied.

Keywords-Contention management, load control


Contention management is orthogonal to the problem of load management, so a load control mechanism is used to avoid the contention that occurs when communication between parallel processes happens unexpectedly. One solution is a load control mechanism that can be used on both lightly and heavily loaded systems without any modification at the operating-system level; it can be implemented as a library module used to extend existing code. In a multitasking system, application performance can be highly sensitive to the synchronization primitives and contention management policies in use. Synchronization primitives resolve contention by busy-waiting (spinning) until the requested resource becomes free, by rescheduling the waiting thread (blocking), or by a combination of the two. Exponential backoff and queue-based primitives ease pressure on caches and memory systems, but these techniques waste CPU time, and waiting threads compete with other requests. The problem becomes critical under high system load. Spinning, blocking, and hybrid techniques all face the challenge of balancing fast response times against descheduling threads to avoid deadlock. Load is defined as the number of runnable threads in the system; here it varies from 1 to 192 threads. Figure 1 shows state-of-the-art spinlock implementations.

Figure 1. Weaknesses in state-of-the-art synchronization primitives that use blocking and spinning.


Software transactional memory (STM) is a software implementation of a hardware-based scheme. Dynamic STM (DSTM) transactions operate on blocks of memory through initialize, access, acquire, and commit steps. Transactions that conflict frequently result in many aborts. The ability of STM to scale depends on the workload: if the number of threads exceeds the number of cores, threads can be preempted during execution, which increases the risk of conflicts and decreases transaction throughput.

Transactional memory is a paradigm for developing concurrent applications. Implemented on a multicore system, it works well when there are only a few conflicts among transactions. In contrast, when long transactions write frequently, or when many threads contend for a small number of cores, conflicts occur and performance degrades as a result. Transactional memory replaces lock-based programming in concurrent applications. When a conflict occurs, the contention manager decides how it should be resolved: transactions are designated winners or losers. When transactions conflict repeatedly, the repeated aborts become a problem, preventing useful work from being done on the CPU. In such a system, the contention manager can fall back on serialization, in which the winner thread executes while the loser waits for the winner to complete. STM allows a developer to combine sequences of concurrent operations into an atomic transaction that automatically rolls back and restarts its execution. Libraries that implement such a system include (i) TinySTM, (ii) Ennals' STM, and (iii) McRT-STM. TinySTM provides eager write-conflict detection and invisible reads; that is, write-read and write-write conflicts are checked at access time, and read-write conflicts are checked at commit time.
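The rollback-and-restart behavior described above can be sketched as a minimal versioned-cell STM. The names `TVar`, `Conflict`, and `atomic` below are illustrative, not part of TinySTM or any real library's API; a single version stamp per variable stands in for the metadata a real STM keeps:

```python
class TVar:
    """A transactional variable: a value plus a version stamp."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Conflict(Exception):
    """Raised by a transaction body that detects an inconsistent read."""

def atomic(fn, max_retries=100):
    """Re-run fn until it commits; fn fills a log of (version read, new value)."""
    for _ in range(max_retries):
        log = {}                      # TVar -> (version at read time, new value)
        try:
            result = fn(log)
        except Conflict:
            continue                  # roll back: discard the log and restart
        # Commit phase: validate that nothing we read has changed, then write.
        if all(var.version == ver for var, (ver, _) in log.items()):
            for var, (_, new) in log.items():
                var.value = new
                var.version += 1      # publish a new version
            return result
    raise RuntimeError("transaction failed after repeated aborts")
```

A transaction body reads `TVar.value`, records the version it saw, and stages its writes in the log; validation at commit time catches any concurrent update and forces a restart.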

Each transactable object is represented as a TMObject, which contains a pointer to a Locator object. Locators point to transaction descriptors. When a transaction attempts to access an object, it reads the object's locator pointer.
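The TMObject/Locator indirection can be sketched as follows. In the DSTM design a locator records the last writing transaction together with old and new object versions; which version is current depends on whether that writer committed. Field names here are illustrative:

```python
from dataclasses import dataclass

ACTIVE, COMMITTED, ABORTED = "active", "committed", "aborted"

@dataclass
class Transaction:
    status: str = ACTIVE

@dataclass
class Locator:
    transaction: Transaction   # the last transaction to write the object
    old_version: object        # value before that writer ran
    new_version: object        # value staged by that writer

class TMObject:
    """A transactable object: a single pointer to its current Locator."""
    def __init__(self, initial):
        done = Transaction(COMMITTED)
        self.locator = Locator(done, initial, initial)

    def current_value(self):
        loc = self.locator
        # If the last writer committed, its new version is current;
        # if it aborted or is still active, the old version is.
        return loc.new_version if loc.transaction.status == COMMITTED else loc.old_version
```

Installing a new locator (in a real DSTM, via compare-and-swap on the locator pointer) is what "acquiring" the object means; here a plain assignment stands in for that step.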







Figure 2. Transaction object accesses.

The contention management interface for DSTM includes notification methods, or hooks, for various transaction events:


Polite: Uses exponential backoff to resolve conflicts, waiting a random amount of time with a mean that is a function of n, where n is the number of conflicts observed for the object.

Karma: Attempts to judge the amount of work a transaction has done and bases its abort decisions on that estimate.

Eruption: Adds a blocked transaction's priority to that of the enemy transaction blocking it.

Timestamp: Records the current system timestamp at the beginning of the transaction.
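The Polite policy above can be sketched in a few lines. The function name, base delay, and cap are assumptions for illustration; only the shape of the policy (mean delay doubling with each conflict, capped) comes from the description:

```python
import random
import time

def polite_backoff(n_conflicts, base=1e-6, cap=20):
    """Polite-style backoff: wait a random interval whose upper bound
    grows as 2^n with the number of conflicts seen on the object."""
    exponent = min(n_conflicts, cap)          # cap prevents unbounded waits
    delay = random.uniform(0, base * (2 ** exponent))
    time.sleep(delay)
    return delay
```

A caller invokes `polite_backoff(n)` after the n-th conflict on an object, then retries its access.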


Serialization is a contention management strategy that prevents a thread from running loser transactions until the winner transaction has executed.


Upon start(tx):
    map tx to the current thread

Upon conflict(tx, tx'):
    abort(tx)
    wait until tx' commits

Upon commit(tx):
    release any transactions waiting on tx


The algorithm above shows the implementation of serialization. At the start of a transaction, it is mapped to the current thread. When a conflict is detected, the transaction is aborted and made to wait for the winner transaction. At commit, the transaction releases any waiting transactions. The optimal implementation depends on the number of threads of execution, the number of cores, and the level of contention. The shared-memory-based implementation of serialization has two parts: the STM level and the kernel level.

A. Implementation (user level): Serialization is implemented by making the loser transaction wait on a spinlock held by the winner. The loser transaction is thus prevented from performing any operations that would create further conflicting transactions.
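A minimal sketch of the user-level scheme, with a standard lock standing in for the winner-held spin flag (the class and method names are illustrative):

```python
import threading

class SerializationLock:
    """User-level serialization: losers spin on a flag held by the winner."""
    def __init__(self):
        self._winner = threading.Lock()

    def win(self):
        self._winner.acquire()        # winner takes the flag when a conflict arises

    def release(self):
        self._winner.release()        # winner's commit releases all spinning losers

    def lose(self):
        while self._winner.locked():  # loser busy-waits until the winner commits
            pass
```

The loser burns CPU while spinning, which is exactly the overhead the kernel-level variant described next avoids by blocking the thread instead.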

B. Implementation (kernel level): Here the conflicting thread is blocked, allowing other threads to access the CPU. This is used when parallelism is needed and there are more threads than cores. To implement it, a thread must be removed from the kernel scheduler's ready queue, which requires communication between the user and kernel levels; system calls are invoked to enable this communication. Locks are held until the winner transaction ends. Figure 3 shows the architecture, based on a memory region shared between the scheduler and the STM library. The shared memory region contains a table of MaxThreads elements, each describing the information for a given thread. During initialization, subsequent requests are linked to the next available element of the array. This is implemented at two levels: the STM level and the kernel level.

The overhead in the two blocking-based strategies above is due to the interactions needed to restore the loser transaction after the winner transaction commits.

Figure 3. Layout of the interactions between the kernel and the application.


Time-slice extension is an orthogonal approach to serialization. It commits a transaction as soon as possible, preventing a long-running transaction from conflicting with another transaction. A counter is associated with each thread, which may receive only a maximum of N extensions before it is suspended. If a thread is successful, it yields after the next commit to ensure the processor is not monopolized by a single thread. A shared-memory-based implementation sets a flag indicating that a thread is in a transaction, which prevents the thread from being preempted. On each invocation of the scheduler, at a potential preemption point, a check is made for a running thread in a transaction. If so, a second check is done on the number of extensions the transaction has received; if it is less than N, the extension counter is incremented. In an operating system such as Linux, with a priority-based scheduler, a thread is suspended when its time slice has expired.


In the priority-based contention management problem, each thread has a base priority BP relative to the other executing threads, and each thread's cumulative throughput should be proportional to its base priority: a thread with base priority 3 should complete 50% more work in any given period than a thread with base priority 2. Lower-priority threads are pushed through to guarantee progress of at least one thread, even though such guarantees are difficult to achieve. Modularity is achieved here, and control is better managed.

Contention Management Methods


Karma, Eruption, Polka contention managers: consider the ratio of priorities.

Timestamp variants: retain the transaction timestamps.

Other variants: randomize updates to a list of transactions.

Table 1. Comparison of methodologies based on prioritization.


The contention manager maintains three components of transaction state:

i) When the transaction begins, a timestamp is assigned, which is retained even if the transaction is aborted and restarted. A transaction with an earlier timestamp has higher priority.

ii) A transaction status field indicates whether the transaction is active, committed, or aborted. A compare-and-swap instruction is applied to this field to commit the transaction.

iii) A Boolean waiting field indicates whether the transaction is waiting for another transaction.

The Greedy contention manager follows two simple rules. If a transaction A discovers a conflict with transaction B:

a) If B has lower priority or is waiting, then A aborts B.

b) If B has higher priority and is not waiting, then A waits until B commits.
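The two Greedy rules can be sketched directly from the state described above (timestamp, status, waiting flag). The function and field names are illustrative; an earlier timestamp means higher priority:

```python
import itertools

_clock = itertools.count()   # monotonically increasing timestamps

class Txn:
    """Transaction state used by the Greedy manager (sketch)."""
    def __init__(self):
        self.timestamp = next(_clock)   # earlier timestamp = higher priority
        self.status = "active"          # active / committed / aborted
        self.waiting = False

def resolve_conflict(a, b):
    """Greedy rule for a conflict discovered by transaction a against b."""
    if b.timestamp > a.timestamp or b.waiting:
        b.status = "aborted"            # rule (a): abort the lower-priority or waiting txn
        return "abort_b"
    a.waiting = True                    # rule (b): wait for the higher-priority txn
    return "a_waits"
```

Because every conflict either aborts the lower-priority party or makes the discoverer wait on a strictly higher-priority one, the oldest active transaction always makes progress.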


In this type of STM we use a table of ownership records, extendable timestamps, and a hash-indexed write set. Lazy STM avoids pathologies that degrade throughput. The strategy depends on (a) lazy acquisition of ownership and (b) a way to capture both priority and conflicts. Two mechanisms are used for the latter: one based on Bloom filters, the other on visible read bits. Careful data-structure design minimizes the cost of the instrumentation needed for a transaction to read its own writes.

a) Write-Set Lookup Latency

In earlier lazy STMs, the cost of write-set lookups has been high for large transactions. They use a linear buffer to represent the write set: after W writes by a transaction, each of R subsequent reads must perform an O(W) lookup in the set, resulting in O(RW) aggregate overhead. A hash table is therefore used to map addresses to indexes in the linear write log. The hash table keeps versioned buckets and resolves collisions with linear probing.
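The effect of the hash index can be sketched as below. A Python dict stands in for the versioned, linearly probed hash table of the real design; the point is that a read-after-write lookup becomes O(1) instead of an O(W) scan of the log:

```python
class WriteSet:
    """Write log plus a hash index for O(1) read-after-write lookups (sketch)."""
    def __init__(self):
        self.log = []      # ordered (address, value) pairs, replayed at commit
        self.index = {}    # address -> position in self.log

    def write(self, addr, value):
        if addr in self.index:
            self.log[self.index[addr]] = (addr, value)   # overwrite earlier entry
        else:
            self.index[addr] = len(self.log)
            self.log.append((addr, value))

    def lookup(self, addr):
        """Return this transaction's own pending write for addr, or None."""
        pos = self.index.get(addr)
        return self.log[pos][1] if pos is not None else None
```

Keeping the ordered log alongside the index preserves write order for the commit-time replay while the index serves the transaction's own reads.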

b) Prevention of Aborts

A "passive" contention management strategy aborts transactions when they encounter a lock. The combination of extendable timestamps and lazy acquire suggests a refined strategy that reduces the incidence of self-abort: if the lock is the first instance of conflict, then, assuming extendable timestamps, the waiting transaction can continue once the lock holder commits.

Conflict resolution is important because threads that encounter contention must wait for the lock to be released. As specified above, there are two fundamental contention management approaches: spinning and blocking. Blocking has the benefit of freeing the CPU until the waiting thread can make progress again, and the scheduler can cooperate with blocking synchronization, for example by descheduling threads that wait for a preempted lock. Blocking is an expensive operation, however, because it requires two context switches (with corresponding OS scheduling decisions), adding 10-15 μs to the critical path of the system.

Figure 4: Problem scenario for contention management.

Spinning is suited to quick lock hand-offs, while blocking reduces competition among threads for processor cycles, and the scheduler can cooperate with blocking synchronization. A longer critical path increases conflicts with other threads, forming cycles of slow lock hand-offs called convoys.
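A common way to combine the two approaches is a spin-then-block lock: spin briefly for the fast hand-off case, then fall back to blocking to free the CPU. This is a sketch, not the paper's mechanism; the class name and spin budget are assumptions:

```python
import threading

SPIN_LIMIT = 1000  # assumed spin budget before falling back to blocking

class HybridLock:
    """Spin briefly for fast hand-offs, then block to free the CPU (sketch)."""
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        for _ in range(SPIN_LIMIT):
            if self._lock.acquire(blocking=False):
                return                 # fast path: got the lock while spinning
        self._lock.acquire()           # slow path: block, paying the context switches

    def release(self):
        self._lock.release()
```

Under short hold times most acquisitions succeed on the fast path; only genuinely contended acquisitions pay the blocking cost described above.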


Load control depends on three facilities of the operating system: the ability to (a) schedule periodic daemon-thread wakeups independently of the system clock tick, (b) measure load accurately, and (c) deschedule and wake threads efficiently. Dynamic tracing facilities are used to measure load; on Linux, SystemTap is used. On Solaris, the lwp_park system call removes a thread from the OS scheduler; Linux uses the futex system call to add a thread to the system wait queue.


The sleep slot buffer must support concurrent execution of threads without causing undue contention, must let spinning threads find slot availability, and must avoid race conditions. The figure below shows the communication between an application thread and the controller via the sleep slot buffer. To achieve fast response times, a circular buffer is used. The load controller maintains the sleep slot buffer with a sleep target T, together with a counter of the number of threads that have ever slept. Threads must register with the sleep slot buffer.

The load control mechanism comprises an application-visible spinlock and a dedicated controller that uses the sleep slot buffer to notify spinning threads to block in response to overload. The controller adjusts the load by adjusting the size of the sleep slot buffer: a spinning thread checks for empty slots, so when the buffer grows, threads go to sleep and the load is reduced, and when the buffer shrinks, the controller wakes threads that were slept at the time. Each thread checks whether the controller has cleared its slot before sleeping. High-resolution timers are used to wake up the threads. To deschedule threads efficiently, lightweight system-call primitives are used on Linux and Unix. A compare-and-swap operation is used to make slot updates atomic. Registering the threads makes it possible to wake slept threads when the system is underloaded.
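The slot-claiming protocol can be sketched as follows. A lock stands in for the compare-and-swap used in the real design, and the class and method names are assumptions; the claim logic (a spinner sleeps only while the occupancy is below the controller's target T) follows the description above:

```python
import threading

class SleepSlotBuffer:
    """Controller publishes T sleep slots; spinning threads claim one and block (sketch)."""
    def __init__(self, capacity=8):
        self.slots = [None] * capacity   # circular buffer of sleeping threads
        self.target = 0                  # controller's current sleep target T
        self.ever_slept = 0              # counter of threads ever put to sleep
        self._lock = threading.Lock()    # stands in for compare-and-swap

    def set_target(self, t):
        """Controller side: grow or shrink the buffer as load changes."""
        with self._lock:
            self.target = min(t, len(self.slots))

    def try_claim(self, thread_id):
        """Spinning-thread side: claim an empty slot if the target allows it."""
        with self._lock:
            occupied = sum(s is not None for s in self.slots)
            if occupied >= self.target:
                return None              # no slot available: keep spinning
            i = self.slots.index(None)
            self.slots[i] = thread_id
            self.ever_slept += 1
            return i                     # caller blocks until the controller clears slot i
```

A thread that receives a slot index blocks (e.g. on a futex) until the controller clears that slot; a `None` return means the thread should continue spinning on the application lock.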

By combining the spinning and blocking mechanisms we achieve highly responsive lock hand-offs, and bottleneck locks are also reduced.

Figure 5: Load controller with sleep slot buffer.

The controller uses existing OS services. Accounting facilities are used to monitor a sensor value, namely the number of excess runnable threads in the system. If the system is overloaded, the controller makes a scheduling decision and a thread is removed from a core to reduce the overload; if it is underloaded, threads are rescheduled to balance the load.


Various contention management systems have been studied, and the bottlenecks and requirements in implementing these systems have been examined. Contention management and scheduling are important for parallel execution, but a higher number of interactions leads to poor system performance. Load control allows improved performance. A contention manager resembles a concurrency manager in a database management system. DSTM transactions operate on blocks of memory through initialize, access, acquire, and commit steps. Serialization is used as an improvement over plain contention managers, and time-slice extension is an improvement over the serialization method. In priority-based contention management, throughput is proportional to priority.