Embedded systems have become indispensable in our lives: cars, aeroplanes, power plant control systems, and telecommunication systems all contain digital computing systems with dedicated functionality. Most of them are real-time systems, i.e. their responses must satisfy timeliness constraints. The timeliness requirement has to be met despite the unpredictable, stochastic behavior of the system. In this thesis we address stochastic task execution times.
When scheduling tasks in real-time systems, it is usually assumed that each task consumes its worst case execution time (WCET) in every invocation. This is a pessimistic assumption made in order to guarantee hard real-time performance. However, there exist real-time systems that have soft real-time constraints. The benefit of relaxing this assumption is that more systems become schedulable and, more importantly, systems with a higher load become schedulable.
The goals of the project are to:
* Implement stochastic WCETs and the least slack first scheduling algorithm in a real-time system simulator.
* Perform experiments in order to investigate experimentally the benefits, in terms of the number of schedulable systems, of using stochastic WCETs compared to using fixed WCETs.
The results show the performance advantage gained by the least slack first scheduling algorithm, which always selects the task with the smallest slack time.
Keywords: RTOS, simulation, tasks, response time, worst case response time, worst case execution time.
List of Abbreviations
PTDA – Probabilistic time demand analysis
STDA – Stochastic time demand analysis
EP – Execution profile
SN – Switching number
This chapter gives the motivation for the work done in this thesis in section 1.1, the objectives in section 1.2, and the structure of the thesis in section 1.3.
Embedded systems have become common use in our life: household appliances, cars, airplanes, power plant control systems, medical equipment, telecommunication systems, space technology, they all contain digital computing systems with dedicated functionality. Most of them, if not all, are real-time systems, i.e. their responses to stimuli have timeliness constraints. The timeliness requirement has to be met despite some unpredictable, stochastic behavior of the system.
The main objectives of this thesis are to:
1. Implement stochastic WCETs and the least slack first scheduling algorithm in a real-time system simulator.
2. Perform experiments in order to investigate experimentally the benefits, in terms of the number of schedulable systems, of using stochastic WCETs compared to using fixed WCETs.
3. Get trained in using research methodology for solving a state-of-the-art problem in an area important for the Masters program.
4. Understand how the work is expected to be documented, and practice this by writing a Masters thesis.
1.3 Thesis Outline
Chapter 2 describes the theoretical background on the stochastic behaviour of real-time systems and the least slack first scheduling algorithm.
Chapter 3 outlines the problem formulation.
Chapter 4 describes the solution.
Chapter 5 presents the evaluation of the simulation.
Chapter 6 surveys related work.
Chapter 7 draws conclusions from the results obtained.
This chapter introduces basic concepts and notation needed for understanding the remainder of the thesis. Section 2.1 presents the main concepts of real-time and embedded systems. Section 2.2 presents the least slack first algorithm and its variants.
2.1 Real-Time and Embedded Systems
A Real-Time Operating System (RTOS) is an operating system designed to be used in real-time systems.
A real-time system has been defined as:
Any information processing activity or system which has to respond to externally generated input stimuli within a finite and specified delay.
The following basic characteristics of real-time and embedded computer systems have been considered:
1. Largeness and complexity
2. Manipulation of real numbers
3. Real-time control.
4. Efficient implementation.
5. Extreme reliability and safety.
Systems, in which the correctness of their operation is defined not only in terms of functionality but also in terms of timeliness, form the class of real-time systems.
Hard real-time systems: timeliness requirements may be hard, meaning that the violation of any such requirement is not tolerated.
In a hard real-time system, if not all deadlines are guaranteed to be met, the system is said to be unschedulable.
The community has therefore focused on hard real-time systems in order to understand, design, predict, and analyze safety critical applications such as plant control and aircraft control, where breaking timeliness requirements is not tolerated.
The analysis of such a system gives a yes/no answer to the question of whether the system fulfils its timeliness requirements. Hard real-time analysis relies on building worst-case scenarios: it must assume that worst-case scenarios always happen and provision for these cases. This approach is the only one applicable to the class of safety critical embedded systems, even though it very often leads to significant under-utilization of resources.
Soft Real time systems: Systems classified as soft real-time may occasionally break a real-time requirement provided that the service quality exceeds prescribed levels.
The nature of real-time embedded system is typically heterogeneous along multiple dimensions. For example, an application may exhibit data, control and protocol processing characteristics. It may also consist of blocks exhibiting different categories of timeliness requirements, such as hard and soft.
In the case of soft real-time systems however, the analysis provides fitness estimates, such as measures of the degree to which a system is schedulable, rather than binary classifications.
Simulation is a method which can be used for the analysis of response times. In simulation, a detailed model of the system is executed; simulating a system before it is implemented helps reduce the risk of failure.
A task is a process in a real-time system, usually with a deadline and a period.
2.1.4 Response time
The time between the system receiving an input and producing the corresponding output.
2.1.5 Worst case response time
The maximum possible response time of a task.
2.1.6 Worst case execution time (WCET)
The longest possible execution time of the task.
The stochastic model is useful for two reasons:
It improves the schedulability of tasks compared to assuming that their execution times are always equal to their WCETs.
It reuses well known techniques from deterministic analysis, such as blocking on shared resources and task priority assignment.
2.2 Least Slack First Scheduling Algorithm:
Least Slack Time (LST) scheduling is a scheduling algorithm that assigns priority based on the slack time of a process. It is also known as Least Laxity First (LLF), and it is most commonly used in embedded systems, especially those with multiple processors.
2.2.1 Slack time
This scheduling algorithm first selects those processes that have the smallest "slack time". Slack time is defined as the temporal difference between the absolute deadline, the ready time and the run time.
More formally, the slack time of a process is defined as:
(d - t) - c'
where
d is the process deadline,
t is the real time since the cycle start, and
c' is the remaining computation time.
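The selection rule based on this formula can be sketched in a few lines (the class and method names below are invented for the illustration and are not part of the simulator):

```java
import java.util.List;

public class SlackDemo {
    // slack = (d - t) - c', with d the deadline, t the current time,
    // and cRemaining (c') the remaining computation time.
    static int slack(int d, int t, int cRemaining) {
        return (d - t) - cRemaining;
    }

    // Each ready process is a pair {deadline, remaining computation time};
    // returns the index of the process with the smallest slack.
    static int pickLeastSlack(List<int[]> ready, int t) {
        int best = 0;
        for (int i = 1; i < ready.size(); i++) {
            int[] p = ready.get(i);
            int[] b = ready.get(best);
            if (slack(p[0], t, p[1]) < slack(b[0], t, b[1])) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        // Two processes at time t = 5: slacks are 8 and 6 respectively.
        List<int[]> ready = List.of(new int[]{18, 5}, new int[]{14, 3});
        System.out.println(pickLeastSlack(ready, 5)); // prints 1: the second has slack 6 < 8
    }
}
```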
LST is suitable in the following situations:
LST scheduling is most useful in systems comprising mainly aperiodic tasks, because no prior assumptions are made on the events' rate of occurrence. The main weakness of LST is that it does not look ahead, and works only on the current system state. Thus, during a brief overload of system resources, LST can be sub-optimal and it will also be suboptimal when used with uninterruptible processes.
LST is optimal when:
1. Processor preemption is allowed.
2. No contention for resources.
3. Single processor.
4. Arbitrary release times.
5. Arbitrary deadlines.
2.2.2 Related works on LST
When several tasks have the same least slack time, the algorithm causes many unnecessary context switches and thus poor performance; restricting this behavior is the motivation for the variants of least slack first discussed below.
One LSF scheduling algorithm implemented with a threshold is the novel Dynamic Fuzzy Threshold Based Least Slack First (DFTLSF) scheduling algorithm.
The DFTLSF algorithm uses a linguistic (fuzzy) set to describe the period and the slack time, which contain uncertain characteristics. A threshold coefficient obtained from fuzzy rules assigns the threshold of the running task dynamically.
Any task that wants to preempt the running task must have a slack time smaller than this threshold.
The results of the simulations show that, compared to the traditional LSF algorithm, the switching number (SN) is much smaller.
2.2.2.1 DFTLSF Fuzzy Threshold:
Two characteristics are considered to judge the priority of a task in DFTLSF:
1. The slack time of the task.
2. The importance of the task to the system.
A small coefficient results in a small threshold, which makes the running task easy to preempt by other tasks. Once a task gets the CPU, its slack time is reduced to its computed preemption threshold level, and it does not return to the original value until the task is done or is preempted by another task.
The algorithm integrates the advantages of the preemptive scheduling algorithm and the non-preemptive one. It results in a dual-priority system that helps tasks execute successfully and reduces the switching number.
The method makes scheduling and preemption flexible and reasonable according to the situation each task faces. When the threshold coefficient is 0, the method reduces to plain LSF; the coefficient's largest value is 1.
In the DFTLSF scheduling algorithm, a dynamic fuzzy threshold coefficient is proposed; it improves schedulability.
To find the threshold coefficient, fuzzy rules are applied. The threshold of the running task is compared with the other tasks' slack times to decide which one runs first.
This decreases the switching number among tasks when their slack times are almost the same. As a result, it avoids thrashing (swapping) in the system and improves schedulability.
Another improvement is a critical value of slack time, introduced into the system to ensure that tasks which are nearly finished cannot be preempted by other tasks.
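The preemption rule can be sketched as follows. Note that the mapping from the threshold coefficient to a concrete threshold used here is a simple placeholder of my own, since the actual DFTLSF thresholds come from fuzzy rules not reproduced in this chapter:

```java
public class DftlsfSketch {
    // Placeholder threshold rule: scale the running task's slack by
    // (1 - coefficient). Coefficient 0 reduces the rule to plain LSF;
    // coefficient 1 makes the running task impossible to preempt.
    static double threshold(double runningSlack, double coefficient) {
        return runningSlack * (1.0 - coefficient);
    }

    // A candidate may preempt only if its slack is below the threshold.
    static boolean mayPreempt(double candidateSlack, double runningSlack, double coefficient) {
        return candidateSlack < threshold(runningSlack, coefficient);
    }

    public static void main(String[] args) {
        // With coefficient 0 (plain LSF) a slightly smaller slack preempts...
        System.out.println(mayPreempt(7.9, 8.0, 0.0)); // prints true
        // ...but with a positive coefficient, near-ties no longer cause a switch.
        System.out.println(mayPreempt(7.9, 8.0, 0.5)); // prints false
    }
}
```

This mirrors the behavior described above: near-equal slacks stop causing context switches once the threshold coefficient is raised above zero.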
2.2.2.2 Least Laxity First Scheduling:
LLF scheduling can be implemented in a coprocessor capable of running dynamic scheduling algorithms, which have until now rarely been used because of their complex computations at schedule time.
LLF is an optimal scheduling methodology that allows the detection of time constraint violations before a task's deadline is reached, but it has the disadvantage of showing poor runtime behavior in some special situations ("thrashing").
The Least-Laxity-First algorithm (LLF) is a dynamic scheduling method, i.e. it makes the decision about which task to execute next at schedule time.
Another great advantage of the Least-Laxity-First algorithm is that, apart from schedulability testing, no further analysis, e.g. for assigning fixed priorities to the tasks, has to be done at development time.
On the other hand, Least-Laxity-First shows poor performance in situations in which more than one task has the smallest slack.
2.2.2.3 Enhanced Least Laxity First Scheduling:
This algorithm preserves all advantages of LLF while improving the run time behavior by reducing the number of context switches.
The computation time of this device is a matter of time resolution rather than of the number of tasks.
This is of high importance as LLF in certain situations causes a big number of unnecessary context switches that can dramatically increase operating system overhead.
ELLF algorithm represents a passive scheduling coprocessor, i.e. the device determines the task to be executed next only after an external start signal.
The aim of this improvement is to ensure that in a situation when some tasks would normally start to thrash, they are executed consecutively without preempting each other.
This cannot be done by simply making the whole system temporarily non-preemptive. With such a non-preemptive LLF algorithm, tasks may miss their deadlines.
Advantages of Enhanced Least-Laxity-First scheduling:
1. Improved response time behavior for thrashing tasks.
2. A reduced number of context switches.
2.2.2.4 Modified Least Laxity First Scheduling:
The Modified Least-Laxity-First (MLLF) scheduling algorithm was proposed to solve the frequent context switch problem of the LLF scheduling algorithm.
The MLLF scheduling algorithm allows the laxity inversion where a task with the least laxity may not be scheduled immediately.
If a laxity tie occurs, the MLLF scheduling algorithm allows the running task to continue without preemption as long as the deadlines of the other tasks are not missed.
The laxity inversion duration at time t is the duration for which the current running task can continue running with no loss in schedulability, even if there exists a task (or tasks) whose laxity is smaller than that of the current running task.
Hence, the MLLF scheduling algorithm avoids degradation of system performance.
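A highly simplified sketch of this tie rule follows. The simplification is an assumption of mine: the laxity-inversion check is approximated by testing that no waiting task would miss its deadline while the running task finishes; the real MLLF computation of the laxity inversion duration is more involved:

```java
public class MllfSketch {
    // On a laxity tie, the running task keeps the CPU as long as every
    // waiting task can tolerate waiting for the running task to finish.
    static boolean keepRunning(int runningRemaining, int[] waitingLaxities) {
        for (int laxity : waitingLaxities) {
            // If a waiting task's laxity is smaller than the time the running
            // task still needs, letting it wait would make it miss its deadline.
            if (laxity < runningRemaining) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(keepRunning(3, new int[]{5, 4})); // prints true: nobody misses
        System.out.println(keepRunning(3, new int[]{2}));    // prints false: must preempt
    }
}
```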
Since the application domain of this thesis is embedded systems, this chapter starts in section 3.1 with a discussion of existing scheduling algorithms for real-time systems. Section 3.2 presents the need for the least slack first scheduling algorithm with stochastic WCETs.
3.1 Scheduling Algorithms in Real-time Systems
For a given set of jobs, the general scheduling problem asks for an order in which the jobs are to be executed while satisfying various constraints. Typically, a job is characterised by its execution time, ready time, deadline, and resource requirements. The execution of a job may or may not be interruptible. Over a set of jobs there may be a precedence relation which constrains the order of execution; in particular, the execution of a job cannot begin until the execution of all its predecessors is completed.
Types of Real-Time Scheduling
The system on which the jobs are to be executed is characterised, for example, by the amount of resources available [22, 59, 30, 32, 27, 12].
The following goals should be considered in scheduling a real-time system [30, 32, 27]:
* Meeting the timing constraints of the system
* Preventing simultaneous access to shared resources and devices.
* Attaining a high degree of utilization while satisfying the timing constraints of the system.
* Reducing the cost of context switches caused by preemption.
* Reducing the communication cost in real-time distributed systems.
In addition, the following items are desired in advanced real-time systems:
* Considering a combination of hard and soft real-time system activities, which implies the possibility of applying dynamic scheduling policies that respect the optimality criteria.
* Covering reliability, security, and safety.
Basically, the scheduling problem is to determine a schedule for the execution of the jobs so that they are all completed before the overall deadline [22, 59, 30, 32, 27, 12].
Given a real-time system, the appropriate scheduling approach should be designed based on the properties of the system and the tasks occurring in it. These properties are as follows [22, 59, 30, 32]:
_ Soft/Hard/Firm real-time tasks
The real-time tasks are classified as hard, soft and firm real-time tasks.
Periodic tasks are real-time tasks which are activated (released) regularly at fixed rates (periods). Normally, periodic tasks have a constraint which indicates that instances of them must execute once per period.
Aperiodic tasks are real-time tasks which are activated irregularly at some unknown and possibly unbounded rate. The time constraint is usually a deadline.
Sporadic tasks are real-time tasks which are activated irregularly with some known
bounded rate. The bounded rate is characterized by a minimum inter-arrival period, that is, a minimum interval of time between two successive activations. The time constraint is usually a deadline.
An aperiodic task has a deadline by which it must start or finish, or it may have a constraint on both start and finish times.
In the case of a periodic task, the time constraint means execution once per period, i.e. successive instances exactly one period apart.
A majority of sensory processing is periodic in nature.
For example, a radar that tracks flights produces data at a fixed rate [32, 29, 27, 12].
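The sporadic task constraint described above, a minimum inter-arrival period between successive activations, can be checked with a small function (the class and method names are invented for the illustration):

```java
public class SporadicCheck {
    // A sporadic task's activations must be at least minInterArrival apart.
    static boolean validActivations(int[] activationTimes, int minInterArrival) {
        for (int i = 1; i < activationTimes.length; i++) {
            if (activationTimes[i] - activationTimes[i - 1] < minInterArrival) {
                return false; // two activations arrived too close together
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(validActivations(new int[]{0, 12, 25, 40}, 10)); // prints true
        System.out.println(validActivations(new int[]{0, 12, 18}, 10));     // prints false
    }
}
```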
_ Preemptive/Non-preemptive tasks
In some real-time scheduling algorithms, a task can be preempted if another task of
higher priority becomes ready. In contrast, the execution of a non-preemptive task should be completed without interruption, once it is started [32, 30, 27, 12].
_ Multiprocessor/Single processor systems
The number of available processors is one of the main factors in deciding how to schedule a real-time system.
In multiprocessor real-time systems, the scheduling algorithms should prevent simultaneous access to shared resources and devices. Additionally, the best strategy to reduce the communication cost should be provided [32, 27].
_ Fixed/Dynamic priority tasks
In priority driven scheduling, a priority is assigned to each task. Assigning the priorities can be done statically or dynamically while the system is running [22, 59, 30, 32, 12].
For scheduling a real-time system, we need to have enough information, such as deadline, minimum delay, maximum delay, run-time, and worst case execution time of each task.
A majority of systems assume that much of this information is available a priori and,
hence, are based on static design. However, some of the real-time systems are designed to be dynamic and flexible [22, 59, 30, 32, 12].
_ Independent/Dependent tasks
Given a real-time system, a task that is going to start execution may need to receive information provided by another task of the system. The execution of such a task can therefore only start after the execution of the other task has finished. This is the concept of dependency.
3.2 Implementing Least Slack First with Stochastic Behavior:
The laxity of a process is defined as its deadline minus the current time minus the remaining computation time. In other words, the laxity of a job is the maximal amount of time that the job can wait and still meet its deadline. The algorithm gives the highest priority to the active job with the smallest laxity, and the job with the highest priority is executed. While a process is executing, it can be preempted by another whose laxity has decreased below that of the running process.
A problem arises with this scheme when two processes have similar laxities: one process runs for a short while, gets preempted by the other, and vice versa, so many context switches occur during the lifetime of the processes. The least laxity first algorithm is an optimal scheduling algorithm for systems with periodic real-time tasks. Each time a new task becomes ready, it is inserted into a queue of ready tasks sorted by their laxities. In this case, the worst case time complexity of the LLF algorithm grows with the total number of requests in each hyper-period of the periodic tasks in the system and with the number of aperiodic tasks.

The execution time of a task depends on application dependent, platform dependent, and environment dependent factors. The amount of input data to be processed in each task instantiation, as well as its type (pattern, configuration), are application dependent factors. The type of processing unit that executes a task is a platform dependent factor influencing the task execution time. If the time needed for communication with the environment is considered part of the execution time, then network load is an example of an environmental factor influencing the task execution time.
Execution time probability density function
The execution time probability density function of such a task shows that an approach based on a worst case execution time model would implement the task on an expensive system which guarantees the imposed deadline for the worst case situation. This situation, however, occurs with a very small probability. If the nature of the system is such that a certain percentage of deadline misses is affordable, a cheaper system, which still fulfills the imposed quality of service, can be designed.
For example, such a cheaper system would be one that guarantees the deadlines provided that the execution time of the task does not exceed a time moment t. The probability density shows that there is only a low probability that the task execution time exceeds t; therefore, missing a deadline is a rare event, leading to an acceptable service quality.
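The argument can be made concrete with a small calculation over a discretized execution time distribution (the numbers below are illustrative only, not measurements from the simulator):

```java
public class MissRatioDemo {
    // Probability that the execution time exceeds the guaranteed bound t,
    // given a discretized probability mass function over execution times.
    static double missProbability(int[] times, double[] pmf, int t) {
        double p = 0.0;
        for (int i = 0; i < times.length; i++) {
            if (times[i] > t) p += pmf[i];
        }
        return p;
    }

    public static void main(String[] args) {
        int[] times  = {2, 4, 6, 8, 20};                // possible execution times
        double[] pmf = {0.30, 0.40, 0.20, 0.08, 0.02};  // their probabilities
        // Provisioning for the WCET (20) guarantees every deadline, but a
        // cheaper system guaranteeing only t = 8 misses with probability 0.02.
        System.out.println(missProbability(times, pmf, 8));
    }
}
```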
Design and Implementation
This chapter presents the design and implementation of stochastic WCETs and the LSF scheduling algorithm in section 4.1.
4.1 Design of Least Slack First Algorithm:
Hard real-time scheduling can be thought of as an issue for embedded systems where the amount of time to complete each burst is subject to two parameters:
the amount of work (W) and the amount of slack time (S).
Assume that these numbers are specified in terms of processor ticks (timer interrupts). The deadline (D) is the sum W + S, i.e., the slack time precisely represents the amount of time for which the process can be preempted while still completing its burst by the deadline. When a number of processes are attempting to meet their deadlines, the following computation takes place at each tick:
--W; // for the current running process
--S; // for all processes on the ready queue
Namely, the running process has completed another tick of work towards its deadline and the others have one less tick of slack time available.
The scheduling algorithms we imagine for such a system would not be time-sharing, but priority-based, where priority is measured by some sense of urgency towards completing the deadlines.
Least Slack First (LSF): when a process completes a burst or a new one becomes ready, schedule the process whose value of S is smallest. Alternatively, the scheduler can focus on the overall deadline, scheduling the process whose deadline D is nearest.
Both represent reasonable notions of satisfying process urgency. Here is a simple example which illustrates the differing behavior:
Process idle time burst
------- --------- -----
A 0 (W=10, S=8)
B 3 (W=3, S=11)
C 5 (W=3, S=6)
Using the LSF algorithm, we would complete these bursts as follows:
Time run ready
---- --- -----
0 A (10,8) ()
3 A (7,8) (B(3,11) )
5 C (3,6) ( A(5,8), B(3,9) )
8 A (5,5) ( B(3,6) )
13 B (3,1) ()
The trace can be explained as follows; each pair shows the remaining work W and remaining slack S of a process.

At time 0: A is the only ready process and starts running with (W=10, S=8).

At time 3: A has executed 3 ticks of work, so its pair is (7, 8). B arrives with (3, 11). A's slack (8) is smaller than B's (11), so A keeps the CPU and B waits in the ready queue.

At time 5: A has executed 2 more ticks, giving (5, 8), while B has waited 2 ticks, so its slack drops to 9. C arrives with (3, 6); its slack is the smallest, so C preempts A. The ready queue now holds A(5, 8) and B(3, 9).

At time 8: C has completed its 3 ticks of work. While C ran, A's slack dropped from 8 to 5 and B's from 9 to 6. A now has the smallest slack and runs with (5, 5); this is a context switch back to A.

At time 13: A has completed its remaining 5 ticks of work. B's slack dropped from 6 to 1 in the meantime, and B now runs with (3, 1), finishing at time 16 and meeting its deadline.

Based on this example, the simulator code has been written.
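The tick rule above can be turned into a complete, self-contained simulation that reproduces the trace (a sketch only: the Proc class and simulate method are invented for this illustration and are not the simulator's actual API):

```java
import java.util.ArrayList;
import java.util.List;

public class LsfTrace {
    static class Proc {
        final String name; final int arrival; int w, s;
        Proc(String name, int arrival, int w, int s) {
            this.name = name; this.arrival = arrival; this.w = w; this.s = s;
        }
    }

    // Runs the LSF tick rule and returns the scheduling decisions
    // as "time:name" entries, one per start or context switch.
    static List<String> simulate(List<Proc> procs) {
        List<String> decisions = new ArrayList<>();
        List<Proc> ready = new ArrayList<>();
        Proc running = null;
        for (int t = 0; t <= 100; t++) {
            for (Proc p : procs) if (p.arrival == t) ready.add(p);
            // Pick the process with the least slack; ties keep the running one.
            Proc best = running;
            for (Proc p : ready) if (best == null || p.s < best.s) best = p;
            if (best != null && best != running) {
                if (running != null) ready.add(running); // preempted back to ready
                ready.remove(best);
                running = best;
                decisions.add(t + ":" + running.name);
            }
            if (running == null && ready.isEmpty() && t > 0) break; // all done
            if (running != null) {
                --running.w;                 // running process does one tick of work
                for (Proc p : ready) --p.s;  // everyone else loses one tick of slack
                if (running.w == 0) running = null;
            }
        }
        return decisions;
    }

    public static void main(String[] args) {
        List<Proc> procs = List.of(
            new Proc("A", 0, 10, 8),
            new Proc("B", 3, 3, 11),
            new Proc("C", 5, 3, 6));
        System.out.println(simulate(procs)); // prints [0:A, 5:C, 8:A, 13:B]
    }
}
```

The output matches the trace in the example: A starts at 0, C preempts at 5, A resumes at 8, and B runs at 13.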
4.1.1 Comparing Slack Tasks:
Code has been written to compare the slack of tasks in LSF under different conditions.
4.1.2 Implementation of Execution Times:
The execution block consumes an actual execution time, while the scheduler uses a "guessed" execution time in its scheduling decisions. In the execute function, the class Computation needs to use the actual execution time, and in the LSF comparator we must make sure the "guessed" execution time is used.
Let us denote the actual execution time C_to_be_executed_time; it is a data member of the class Computation:
this.C_to_be_executed_time = distr.sample(); // time that will be consumed by the execution block
Let us denote the "guessed" execution time C; it is also a data member of the class Computation:
this.C = distr.sample(); // assumed WCET to be used by the LSF scheduler
We must now ensure that the execute method consumes C_to_be_executed_time time units and that the LSF comparator uses C. Further, the execution times assigned in the constructor of the Computation class must lie in the range between 0 and some positive upper bound.
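A simplified stand-in for this part of the Computation class might look as follows. The names, the Gaussian sampling, and the clamping helper are assumptions for the sketch; the real simulator samples from its own distribution objects:

```java
import java.util.Random;

public class ComputationSketch {
    final double cToBeExecuted; // actual time consumed by the execution block
    final double c;             // "guessed" WCET used by the LSF comparator

    ComputationSketch(Random distr, double upperBound) {
        // Both times are sampled independently and clamped to [0, upperBound],
        // as required for the values assigned in the constructor.
        this.cToBeExecuted = clamp(distr.nextGaussian() * 5 + 10, upperBound);
        this.c = clamp(distr.nextGaussian() * 5 + 10, upperBound);
    }

    static double clamp(double sample, double upperBound) {
        return Math.max(0.0, Math.min(sample, upperBound));
    }

    // The LSF comparator must use the guessed time only...
    double schedulingEstimate() { return c; }

    // ...while the execute step consumes the actual time only.
    double executionTime() { return cToBeExecuted; }

    public static void main(String[] args) {
        ComputationSketch cs = new ComputationSketch(new Random(42), 20.0);
        System.out.println(cs.schedulingEstimate() + " guessed vs actual " + cs.executionTime());
    }
}
```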
4.1.3 Implementation for Scheduling Periodic Tasks and Workload:
The code below implements a periodic task:
Periodic p1 = new Periodic(0, 31, 0, "T1");
p1.installConditionedComputation(new Computation(new Normal(10, 5), p1));
The workload can be calculated as:
workload = max execution time / period.
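As a small illustration of the workload formula (the maximum execution time of 15 used here is an arbitrary bound chosen for the example, not a value taken from the simulator):

```java
public class WorkloadDemo {
    // Workload (utilization) of a periodic task: max execution time / period.
    static double workload(double maxExecutionTime, double period) {
        return maxExecutionTime / period;
    }

    public static void main(String[] args) {
        // If T1 has period 31 and we budget a maximum execution time of 15
        // for its Normal(10, 5) profile, the task loads the CPU by about 48%.
        System.out.println(workload(15.0, 31.0)); // roughly 0.48
    }
}
```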
Evaluation of Simulation
This chapter describes the performance evaluation of the simulator. Section 5.1 presents the simulator foundation, while section 5.2 presents the simulator setup.
5.1 Eclipse and the Eclipse Foundation
Eclipse is an open source community; projects are focused on building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle.
The Eclipse Foundation is a not-for-profit, member supported corporation that hosts the Eclipse projects and helps cultivate both an open source community and an ecosystem of complementary products and services.
The Eclipse Project was originally created by IBM in November 2001 and supported by a consortium of software vendors. The Eclipse Foundation was created in January 2004 as an independent not-for-profit corporation to act as the steward of the Eclipse community. The independent not-for-profit corporation was created to allow a vendor neutral and open, transparent community to be established around Eclipse. Today, the Eclipse community consists of individuals and organizations from a cross section of the software industry.
In general, the Eclipse Foundation provides four services to the Eclipse community:
1) IT Infrastructure.
2) IP Management.
3) Development Process and,
4) Ecosystem Development.
Full-time staff are associated with each of these areas and work with the greater Eclipse community to assist in meeting the needs of the stakeholders.
Eclipse - an open development platform
A large and vibrant ecosystem of major technology vendors, innovative start-ups, universities, research institutions and individuals extend, complement and support the Eclipse platform.
1. Enterprise development.
2. Mobile and device development.
3. Application frameworks and language IDEs.
Eclipse is a collection of open source projects built on the Equinox OSGi run-time.
Eclipse started as a Java IDE, but has since grown to be much, much more.
Eclipse projects now cover static and dynamic languages; thick-client, thin-client, and server-side frameworks; modeling and business reporting; embedded and mobile systems.
5.2 Simulator Setup:
On a high level, the simulator simulates a computer system by using objects that encapsulate different functionality and decide on parameters on the objects.
The following classes are important:
2. CPU: A CPU that is bound to the computer system.
DataDependencyGraph g = new DataDependencyGraph();
g.insertData(1, 0, 0);
g.insertData(2, 0, 0);
g.insertData(3, 0, 0);
A data dependency graph describes data items in the system and their relationships. Think of the relationships as edges between nodes in a directed acyclic graph. The code above constructs a data dependency graph of three data items; these data items have no relationships.
3. ConditionedExecution: At least one conditioned execution that is bound to a task.
Periodic p1 = new Periodic(0, 100, 0, "T1");
p1.installConditionedComputation(new IfTime(1, 10, 1, p1, g));
p1.installConditionedComputation(new Computation(10, p1));
Periodic p2 = new Periodic(0, 200, 0, "T2");
p2.installConditionedComputation(new IfTime(1, 10, 1, p2, g));
p2.installConditionedComputation(new Computation(10, p2));
Periodic p3 = new Periodic(0, 103, 0, "T3");
p3.installConditionedComputation(new IfTime(2, 10, 1, p3, g));
p3.installConditionedComputation(new Computation(10, p3));
Constructs three periodic tasks. Each task has two conditioned executions that execute in the order they are bound to the task.
4. Tasks: At least one task that is bound to a CPU.
Vector<CPU> c = new Vector<CPU>();
CPU cpu = new CPU(new WinOverSlack());
Instantiates a CPU and binds the tasks to the CPU.
5. Events: At least one data item that may be used by an execution.
ComputerSystem cs = new ComputerSystem(c, trace);
Constructs a computer system and binds the array of CPUs to it. The method eventLoop starts the simulation, and the simulation finishes when it reaches time point 10000.
Trace trace = new Trace(new OutputStreamWriter(System.out));
Instantiates a trace where the output of the simulation is written. This trace writes to standard output, which makes the output appear in the console in Eclipse.
In the simulation, I used stochastic execution times in a kind of conditioned execution that is bound to a task. This means that every time the conditioned execution executes, it consumes a different amount of time. Thus, when an instance of a task starts, we can only guess how much time it will consume. The system uses LSF with the guessed execution time.
This chapter focuses on alternative approaches and related research work, namely on stochastic task execution times. Hence, the sections below discuss related work on stochastic worst case execution times.
Some of the related work on stochastic task execution times:
Burns et al. [BPSW99] address the problem of a system breaking its timeliness requirements due to transient faults. In this case, the execution time variability stems from task re-executions, and the shortest interval between two fault occurrences such that no task exceeds its deadline is determined by sensitivity analysis.
The probability that the system exceeds its deadline is given by the probability that faults occur at a faster rate than the tolerated one.
Broster et al. [BBRN02] determine the response time of a task that re-executes k ∈ N times due to faults. In order to obtain the probability distribution of the response time, they compute the probability of the event that k faults occur. In both of the cited works, the fault occurrence process is assumed to be a Poisson process.
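Under the Poisson fault assumption used in both works, the probability of more fault arrivals in an interval than the schedule can tolerate has a simple closed form. The sketch below is illustrative only; the rate and interval length are chosen arbitrarily, not taken from the cited analyses.

```java
public class PoissonFaults {
    // P(exactly k fault arrivals in an interval of length t) for a Poisson
    // process with rate lambda: e^(-lambda*t) * (lambda*t)^k / k!
    static double poissonPmf(int k, double lambda, double t) {
        double mean = lambda * t, p = Math.exp(-mean);
        for (int i = 1; i <= k; i++) p *= mean / i;   // builds (mean^k)/k! incrementally
        return p;
    }

    // P(more than k faults): if the schedule tolerates at most k re-executions,
    // this is the probability that faults occur at a faster rate than tolerated.
    static double pOverrun(int k, double lambda, double t) {
        double sum = 0.0;
        for (int i = 0; i <= k; i++) sum += poissonPmf(i, lambda, t);
        return 1.0 - sum;
    }

    public static void main(String[] args) {
        // One expected fault per interval (lambda*t = 1), schedule tolerating
        // up to 2 re-executions:
        System.out.printf("%.4f%n", pOverrun(2, 1.0, 1.0)); // prints 0.0803
    }
}
```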
Burns et al. later extended Broster's approach by adding statistical dependencies among execution times. Both approaches are applicable to systems with sporadic tasks, but they are unsuited for determining deadline miss probabilities of tasks with generalized execution time probability distributions, and they are confined to sets of independent tasks running on monoprocessor systems.
Bernat et al. [BCP02] address a different problem: determining the frequency with which a single task executes for a particular amount of time, called its execution time profile. The profile is computed from the execution time profiles of the basic blocks of the task. The strength of this approach is that it considers statistical dependencies among the execution time profiles of the basic blocks. However, it would be difficult to extend to the deadline miss ratio analysis of multi-task systems because of the complex interleaving of task executions in such environments.
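For contrast, if the basic blocks were statistically independent (the simplification Bernat et al. improve upon), the execution time profile of two blocks in sequence would simply be the convolution of their individual profiles. The discrete profiles below are made up for illustration:

```java
import java.util.*;

public class ProfileConvolution {
    // Convolve two discrete execution time profiles (time -> probability).
    // Valid only under the assumption that the blocks are independent.
    static Map<Integer, Double> convolve(Map<Integer, Double> a, Map<Integer, Double> b) {
        Map<Integer, Double> out = new TreeMap<>();
        for (Map.Entry<Integer, Double> ea : a.entrySet())
            for (Map.Entry<Integer, Double> eb : b.entrySet())
                out.merge(ea.getKey() + eb.getKey(),
                          ea.getValue() * eb.getValue(), Double::sum);
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, Double> block1 = Map.of(2, 0.5, 3, 0.5); // runs 2 or 3 cycles
        Map<Integer, Double> block2 = Map.of(1, 0.8, 4, 0.2); // fast path vs. slow path
        System.out.println(convolve(block1, block2)); // prints {3=0.4, 4=0.4, 6=0.1, 7=0.1}
    }
}
```

When blocks are dependent (e.g. a cache miss in the first block makes one in the second more likely), this product of marginal probabilities is exactly what goes wrong, which is why their dependency-aware profiles matter.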
Atlas and Bestavros [AB98] extend the classical rate-monotonic scheduling policy with an admittance controller in order to handle tasks with stochastic execution times. They analyse the quality of service of the resulting schedule and its dependence on the admittance control parameters. The approach is limited to monoprocessor systems and rate-monotonic analysis, and it assumes the presence of an admission controller at run-time.
Abeni and Buttazzo's [AB99] work addresses both scheduling and performance analysis of tasks with stochastic parameters. It focuses on how to schedule both hard and soft real-time tasks on the same processor, in such a way that the hard ones are not disturbed by ill-behaved soft tasks.
Tia et al. [TDS95] assume a task model composed of independent tasks. They propose two methods for performance analysis: the first gives only an estimate and is demonstrated to be overly optimistic; in the second, a soft task is transformed into a deterministic task and a sporadic one, where the sporadic tasks are handled by a server policy. The analysis is carried out on this particular model.
Gardner et al. [GAR99, GL99], in their stochastic time demand analysis, introduce worst-case scenarios with respect to task release times in order to compute a lower bound on the probability that a job meets its deadline. The analysis does not consider data dependencies among tasks, nor applications implemented on multiprocessors.
Zhou et al. and Hu et al. [ZHS99, HZS01] root their work in Tia's. They do not intend to give per-task guarantees, but rather characterize the fitness of the entire task set. Because they consider all possible combinations of execution times of all requests up to a time moment, the analysis can be applied only to small task sets for complexity reasons.
De Veciana et al. [DJG00] address a different type of problem. Given a task graph and an imposed deadline, their goal is to determine the path that has the highest probability of violating the deadline. The problem is reduced to a non-linear optimization problem by using an approximation of the convolution of the probability densities.
Díaz et al. derive the expected deadline miss ratio from the probability distribution function of the response time of a task. The response time is computed based on the system-level backlog at the beginning of each hyperperiod, i.e. the residual execution times of the jobs at those time moments. The stochastic process of the system-level backlog is Markovian and its stationary solution can be computed. The approach considers sets of independent tasks whose execution times may assume values only over discrete sets. Complexity is mastered by trimming the transition probability matrix of the underlying Markov chain or by deploying iterative methods, both at the expense of result accuracy.
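For a finite backlog process, the stationary solution mentioned above can be obtained by simple power iteration on the transition matrix. The 3-state matrix below is a made-up stand-in for a backlog chain, not data from the cited work:

```java
public class BacklogChain {
    // Power iteration toward the stationary distribution pi satisfying pi = pi * P.
    static double[] stationary(double[][] P, int iterations) {
        double[] pi = new double[P.length];
        pi[0] = 1.0; // start with all probability mass in state 0 (empty backlog)
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[P.length];
            for (int i = 0; i < P.length; i++)
                for (int j = 0; j < P.length; j++)
                    next[j] += pi[i] * P[i][j];   // one step of the chain
            pi = next;
        }
        return pi;
    }

    public static void main(String[] args) {
        // States: residual backlog of 0, 1 or 2 time units at a hyperperiod start.
        double[][] P = {
            {0.50, 0.50, 0.00},
            {0.25, 0.50, 0.25},
            {0.00, 0.50, 0.50}};
        double[] pi = stationary(P, 200);
        System.out.printf("%.3f %.3f %.3f%n", pi[0], pi[1], pi[2]); // prints 0.250 0.500 0.250
    }
}
```

Trimming the matrix, as Díaz et al. do, shrinks P at the cost of accuracy; the iteration itself is unchanged.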
Kalavade and Moghe [KM98] consider task graphs where the task execution times are arbitrarily distributed over discrete sets. Their analysis is based on Markovian stochastic processes too. Each state in the process is characterized by the executed time and lead-time. The analysis is performed by solving a system of linear equations. Because the execution time is allowed to take only a finite (most likely small) number of values, such a set of equations is small.
Kim and Shin [KS96] consider applications that are implemented on multiprocessors and model them as queuing networks. They restrict the task execution times to exponentially distributed ones, which reduces the complexity of the analysis. The tasks are assumed to be scheduled according to a particular policy, namely first-come-first-served (FCFS).
Conclusions and Future Work
This chapter gives conclusions in Section 7.1 and discusses issues for future work in Section 7.2.
Nowadays, systems controlled by embedded computers have become indispensable in our lives and can be found in many applications. The area of embedded real-time systems introduces aspects of stochastic behaviour. In this thesis I dealt with platform-specific stochastic task execution times.
With the rapid growth of embedded systems, the tasks in a real-time system interact in increasingly complex ways, and it is usually assumed that each task consumes its WCET in every invocation. This is a pessimistic assumption, made in order to guarantee hard real-time performance. For systems with soft real-time constraints, however, the pessimistic assumption can be relaxed.
In this thesis I worked on relaxing this assumption, so that more systems are schedulable and, more importantly, systems with a higher load are schedulable. The price is that some task instances may then miss their deadlines.
7.2 Future work
Based on this thesis work, the simulator could be improved further by implementing the Modified Least-Laxity-First scheduling algorithm. This algorithm minimises the number of context switches; by minimising them, deadline misses can be reduced and a system with a higher workload can achieve close to full utilisation.
 Least slack time scheduling. http://www.answers.com/topic/least-slack-time-scheduling, article licensed under the GNU Free Documentation License. Last updated: Oct 17, 2007.
 Ba Wei and Zhang Dabo. A Novel Least Slack First Scheduling Algorithm Optimized by Threshold. China, July 26-31, 2007.
 Jens Hildebrandt, Frank Golatowski, and Dirk Timmermann. Scheduling Coprocessor for Enhanced Least-Laxity-First Scheduling in Hard Real-Time Systems. Germany.
 Sung-Heun Oh and Seung-Min Yang. A Modified Least-Laxity-First Scheduling Algorithm for Real-Time Tasks. Korea.
 Using Components to Facilitate Stochastic Schedulability Analysis. Mälardalen University.
 Yue Lu. Using Iterative Simulation for Timing Analysis of Complex Real-Time Systems.
 Sorin Manolache. Analysis and Optimization of Real-Time Systems with Stochastic Behavior.
 [AB98] A. Atlas and A. Bestavros. Statistical rate monotonic scheduling. In Proceedings of the 19th IEEE Real-Time Systems Symposium, pages 123-132, 1998.
 [AB99] L. Abeni and G. Buttazzo. QoS guarantee using probabilistic deadlines. In Proceedings of the 11th Euromicro Conference on Real-Time Systems, pages 242-249, 1999.
 [BBRN02] I. Broster, A. Burns, and G. Rodríguez-Navas. Probabilistic analysis of CAN with faults. In Proceedings of the 23rd Real-Time Systems Symposium, 2002.
 [BCP02] G. Bernat, A. Colin, and S. Petters. WCET analysis of probabilistic hard real-time systems. In Proceedings of the 23rd IEEE Real-Time Systems Symposium, pages 279-288, 2002.
 [BPSW99] A. Burns, S. Punnekkat, L. Strigini, and D. R. Wright. Probabilistic scheduling guarantees for fault-tolerant real-time systems. In Proceedings of the 7th International Working Conference on Dependable Computing for Critical Applications, pages 339-356, 1999.
 [DJG00] G. de Veciana, M. Jacome, and J.-H. Guo. Assessing probabilistic timing constraints on system performance. Design Automation for Embedded Systems, 5(1):61-81, February 2000.
 [GAR99] M. K. Gardner. Probabilistic Analysis and Scheduling of Critical Soft Real-Time Systems. PhD thesis, University of Illinois at Urbana-Champaign, 1999.
 [GL99] M. K. Gardner and J. W. S. Liu. Analysing Stochastic Fixed Priority Real-Time Systems, pages 44-58. Springer, 1999.
 [HZS01] X. S. Hu, T. Zhou, and E. H.-M. Sha. Estimating probabilistic timing performance for real-time embedded systems. IEEE Transactions on Very Large Scale Integration Systems, 9(6):833-844, December 2001.
 [KM98] A. Kalavade and P. Moghe. A tool for performance estimation of networked embedded end-systems. In Proceedings of the 35th Design Automation Conference, pages 257-262, 1998.
 [KS96] J. Kim and K. G. Shin. Execution time analysis of communicating tasks in distributed systems. IEEE Transactions on Computers, 45(5):572-579, May 1996.
 [TDS95] T. S. Tia, Z. Deng, M. Shankar, M. Storch, J. Sun, L.-C. Wu, and J. W. S. Liu. Probabilistic performance guarantee for real-time tasks with varying computation times. In Proceedings of the IEEE Real-Time Technology and Applications Symposium, pages 164-173, May 1995.
 [ZHS99] T. Zhou, X. S. Hu, and E. H.-M. Sha. A probabilistic performance metric for real-time system design. In Proceedings of the 7th International Workshop on Hardware-Software Co-Design, pages 90-94, 1999.
In this chapter we present the timing diagrams of the schedules produced by some real-time scheduling algorithms, namely the earliest deadline first, rate-monotonic and least laxity first algorithms, on given task sets.
The timing diagram of task t1 before scheduling
The timing diagram of task t2 before scheduling
The timing diagram of task t3 before scheduling
Consider a system consisting of three preemptive tasks whose repetition periods, computation times, first invocation times and deadlines are defined in the table above.
Earliest Deadline First Algorithm
As presented in the figure below, the uniprocessor real-time system consisting of the task set defined in Table 3 is not EDF-schedulable: a new invocation of task t2 arrives while the execution of its first invocation has not yet finished. In other words, an overrun condition occurs.
Rate Monotonic Algorithm
As shown in the figure below, the uniprocessor real-time system consisting of the task set defined in the table above is not RM-schedulable. The reason is that the deadline of the first invocation of task t3 is missed: its execution must finish by time 6, but the schedule cannot achieve this.
Least Laxity First Algorithm
The figure below presents a portion of the timing diagram of the schedule produced by the least laxity first algorithm on the task set defined in the table above. As shown in the figure, the deadline of the third invocation of task t1 cannot be met. We conclude that the uniprocessor real-time system consisting of this task set is not LLF-schedulable.
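Schedulability checks of the kind illustrated by these timing diagrams can be reproduced with a small discrete-time simulator. The sketch below simulates EDF on one CPU; the task parameters are hypothetical and do not correspond to the table used in this appendix.

```java
import java.util.*;

public class EdfDemo {
    static class Task {
        final int period, wcet, deadline;  // deadline is relative to release
        Task(int period, int wcet, int deadline) {
            this.period = period; this.wcet = wcet; this.deadline = deadline;
        }
    }
    static class Job {
        final int absDeadline; int remaining;
        Job(int absDeadline, int wcet) { this.absDeadline = absDeadline; this.remaining = wcet; }
    }

    // Simulate EDF on a single CPU up to `horizon` time units;
    // return the time of the first deadline miss, or -1 if none occurs.
    static int firstMiss(List<Task> tasks, int horizon) {
        List<Job> ready = new ArrayList<>();
        for (int t = 0; t < horizon; t++) {
            for (Task task : tasks)                        // release new invocations
                if (t % task.period == 0) ready.add(new Job(t + task.deadline, task.wcet));
            for (Job j : ready)                            // any unfinished job past its deadline?
                if (t >= j.absDeadline) return t;
            if (!ready.isEmpty()) {                        // run the earliest-deadline job one unit
                Job j = Collections.min(ready, Comparator.comparingInt((Job x) -> x.absDeadline));
                if (--j.remaining == 0) ready.remove(j);
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Hypothetical overloaded set: utilization 2/4 + 3/6 + 2/8 = 1.25 > 1, so a miss is inevitable.
        List<Task> overloaded = Arrays.asList(new Task(4, 2, 4), new Task(6, 3, 6), new Task(8, 2, 8));
        System.out.println(firstMiss(overloaded, 100)); // time of the first deadline miss

        // Utilization 2/4 + 1/6 + 1/8 ~= 0.79 <= 1 with deadlines equal to periods:
        // EDF is optimal on a uniprocessor, so no miss occurs.
        List<Task> feasible = Arrays.asList(new Task(4, 2, 4), new Task(6, 1, 6), new Task(8, 1, 8));
        System.out.println(firstMiss(feasible, 100));   // prints -1
    }
}
```

Swapping the `Comparator` for a least-slack or rate-monotonic priority rule turns the same loop into an LLF or RM simulation, which is how the three diagrams in this appendix differ from each other.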