Resource Constrained Schedule Generation Schemes


In many real-life situations, delays in the execution of certain activities occur when the resources they require are not available in sufficient quantities during the time interval in which they are scheduled to take place. This problem is known in the literature as the "Resource Constrained Project Scheduling Problem" (RCPSP). It can be described as follows: a project consists of n+2 activities, where activities 0 and n+1 are dummies with zero duration that represent the start and the end of the project; every other activity has to be processed in order to complete the project. Let J = {0, 1, . . . , n, n+1} denote the set of all activities to be scheduled and K = {1, . . . , k} the set of resources. The activities are interrelated by precedence and resource constraints: precedence constraints force each activity j to be scheduled after all of its predecessor activities Pj are completed, and activities require resources with limited capacities.

While being processed, activity j requires rj,k units of resource type k ∈ K during every time instant of its non-preemptable duration dj. At any point in time, resource type k has a limited capacity of Rk. The parameters dj, rj,k, and Rk are assumed to be integer, deterministic, and non-negative. For the project start and end activities, we have durations d0 = dn+1 = 0 and resource requirements r0,k = rn+1,k = 0 for all k ∈ K. The problem consists in finding a schedule of the activities that respects the resource and precedence constraints and minimizes the makespan Cmax.


The RCPSP is an important and challenging problem that has been widely studied over the past few decades. Surveys are provided by Herroelen et al. (1998), Brucker et al. (1999), Kolisch and Padman (2001) and Montoya-Torres (2009), and in the books on project scheduling by Weglarz (1999) and Demeulemeester and Herroelen (2002). As a generalisation of the job shop scheduling problem, the RCPSP is NP-hard in the strong sense (see Blazewicz et al., 1983), which justifies the use of heuristics when solving large problem instances. Only small-sized problem instances with up to 60 activities can be solved exactly in a satisfactory manner, at least for the KSD instance set (Kolisch et al., 1995). Therefore, heuristic solution procedures remain the only feasible way of handling practical resource-constrained project scheduling problems. Recent overviews of heuristic approaches for the RCPSP can be found in Kolisch and Hartmann (1999), Hartmann and Kolisch (2000), Valls et al. (2004, 2005), and Kolisch and Hartmann (2006).

This paper classifies the multitude of heuristic procedures for the RCPSP with respect to their building blocks, such as the schedule generation scheme (SGS), the metaheuristic strategy, and the solution representation. This line of research has been continued in [26] with new methods and a rigorous computational study that compares the heuristics, assesses the significance of the building blocks, and evaluates the impact of problem characteristics such as the scarcity of resources. The paper pursues several goals: first, to summarize new heuristics for the RCPSP and to point out promising approaches that promote progress in the field; finally, to provide a critical discussion of the test design and its usage by other authors. To restrict the scope of this survey, we consider only heuristics developed for the classical RCPSP. Nevertheless, we have included papers on generalizations of the RCPSP if computational results for standard test instances of the classical RCPSP are given.

The remainder of the paper is organized as follows: Section 2 presents the classical metaheuristics, Section 3 describes the non-standard metaheuristics, and Section 4 covers other heuristics.

Schedule generation schemes

Schedule generation schemes (SGS) build a feasible schedule from scratch by stepwise extension of a partial schedule. A partial schedule is a schedule in which only a subset of the n+2 activities has been scheduled. Two different SGS are available, which can be distinguished with respect to activity and time incrementation: the serial SGS performs activity incrementation, whereas the so-called parallel SGS performs time incrementation. For details, cf. [19].

Serial SGS: The serial scheduling scheme dates back to a paper by Kelley (1963). It sequentially adds activities to the schedule until a feasible complete schedule is obtained. In each iteration, the next activity in the priority list is chosen, and that activity is assigned the earliest possible start time such that no precedence or resource constraint is violated.
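The serial SGS can be stated compactly in code. The following Python sketch is a minimal illustration, assuming a precedence-feasible priority list order, integer durations dur[j], per-period requirements req[j][k], capacities cap[k], predecessor sets preds[j], and a planning horizon at least as long as the resulting schedule; all identifiers are illustrative, not taken from the cited papers.

def serial_sgs(order, dur, req, cap, preds, horizon):
    """Serial SGS sketch: place each activity of the precedence-feasible
    list 'order' at its earliest precedence- and resource-feasible start."""
    K = len(cap)
    # remaining capacity of every resource type in every time period
    avail = [[cap[k] for k in range(K)] for _ in range(horizon)]
    start, finish = {}, {}

    def fits(j, t):
        # True if activity j can run in [t, t + dur[j]) without a conflict
        return all(avail[t + s][k] >= req[j][k]
                   for s in range(dur[j]) for k in range(K))

    for j in order:
        # earliest precedence-feasible start: after all predecessors finish
        t = max((finish[p] for p in preds[j]), default=0)
        while not fits(j, t):          # delay until also resource-feasible
            t += 1
        start[j], finish[j] = t, t + dur[j]
        for s in range(dur[j]):        # book the consumed capacity
            for k in range(K):
                avail[t + s][k] -= req[j][k]
    return start, max(finish.values(), default=0)

The dummy activities 0 and n+1 are handled implicitly, since their zero duration and zero requirements make the feasibility test trivially true.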


For a given priority list, the application of the serial scheduling scheme requires O(n²·K) time (Pinson et al., 1994). Kolisch (1996b) proved that any schedule generated by the serial scheduling scheme belongs to the set of active schedules, which contains at least one optimal solution. Hence, the serial SGS does not exclude optimal schedules a priori.

A variant of the serial SGS is list scheduling. In list scheduling, the project activities are first ordered in a list λ=[j1,j2,…,jn], where jg denotes the activity at position g in the list. This list has to be precedence-feasible, i.e., each activity has all of its network predecessors as list predecessors (cf. [11]). For a given λ, the activities are scheduled in the order of the list at the earliest precedence- and resource-feasible start time. As a special case of the serial SGS, list scheduling has the same properties as the serial SGS; in particular, it generates active schedules. Therefore, there is always a list λ* for which list scheduling will produce an optimal schedule when a regular measure of performance is considered.

Parallel SGS. The parallel SGS performs time incrementation. Contrary to the serial scheduling scheme, the parallel scheduling scheme (Brooks and White (1965)) iterates over the decision points at which activities can be added to the schedule. These decision points correspond to the completion times of already scheduled activities, and thus at most n decision points need to be considered. At each decision point, the unscheduled activities whose predecessors have completed are considered in the order of the priority list and are scheduled on the condition that no resource conflict originates at that time instant. The time complexity of the parallel SGS is O(n²·K) (cf. [19]). The parallel SGS generates (precedence- and resource-feasible) non-delay schedules, which are optimal for the resource-unconstrained case (cf. [17]). A non-delay schedule is a schedule where, even if activity preemption is allowed, none of the activities can be started earlier without delaying some other activity (cf. [35]). The set of non-delay schedules is a subset of the set of active schedules and thus has, on average, a smaller cardinality. But it has the severe drawback that it might not contain an optimal schedule for a regular performance measure. Hence, the parallel SGS might exclude all optimal solutions a priori.
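A corresponding sketch of the parallel SGS, under the same data conventions as the serial sketch above; priority[j] denotes the priority value of activity j (smaller values are scheduled first), and the names are again illustrative. Eligible activities that do not fit at the current decision point are simply reconsidered at later decision points.

def parallel_sgs(priority, dur, req, cap, preds):
    """Parallel SGS sketch: at each decision point t, start the eligible
    activities in priority order as long as no resource conflict arises."""
    K, n_all = len(cap), len(dur)
    start, finish = {}, {}
    t = 0
    while len(finish) < n_all:
        def free(k):                   # capacity of resource k left at time t
            return cap[k] - sum(req[j][k] for j in start
                                if start[j] <= t < finish[j])
        # eligible: unscheduled activities whose predecessors have finished
        eligible = [j for j in range(n_all) if j not in start
                    and all(finish.get(p, float('inf')) <= t for p in preds[j])]
        for j in sorted(eligible, key=lambda j: priority[j]):
            if all(free(k) >= req[j][k] for k in range(K)):
                start[j], finish[j] = t, t + dur[j]
        # next decision point: the earliest completion time after t
        future = [f for f in finish.values() if f > t]
        t = min(future) if future else t + 1   # fallback to avoid stalling
    return start, max(finish.values())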

2.2. Priority Rule

X-pass methods

X-pass methods (priority rule based heuristics) utilize one or both SGS in order to construct one or more schedules. Depending on the number of schedules generated, we distinguish single pass methods (X = 1) and multi-pass methods (X > 1). Each time a schedule is generated, X-pass methods start from scratch without using any knowledge from previously generated solutions. In order to select, at each stage of the generation procedure, one activity to be scheduled, a priority rule is employed. The priority rule is a mapping that assigns each activity j in the eligible set a value v(j), together with an objective stating whether an activity with a large or a small v(j)-value is preferred.

Single pass methods. Single pass heuristics select in each iteration the activity which maximizes or minimizes the v(j)-value. If a tie occurs when determining the priority list, it is broken by the smallest activity number for a forward priority list and by the largest activity number for a backward priority list. An overwhelming amount of research on priority rules for the RCPSP has been done; an extensive survey is given in [19]. These priority rules can be classified into five broad categories (Lawrence (1985)), based on the type of information that is required to calculate the priority list. These five categories are:

1) Activity based priority rules, which consider information related to the activity itself and not to the rest of the project;
2) Network based priority rules, which only consider the information contained in the network (no information about resource usage may be used by the rules in this category);
3) Critical path based priority rules, which are based on forward and backward critical path calculations;
4) Resource based priority rules, which consider the resource usage of the different activities;
5) Composite priority rules, which are obtained as a weighted sum of the priority values produced by rules from the previous categories.

Two priority rules that have shown favourable results in the experimental studies of Alvarez-Valdés and Tamarit [2], Davis and Patterson [10] and Kolisch [15, 17] are frequently used in computational studies: LFT (latest finish time) and WCS (worst case slack). LFT is a well-known priority rule. WCS has been introduced by Kolisch [16] for the parallel scheduling scheme only; the rule calculates, for an activity j, the slack which remains in the worst case if j is not selected in the current iteration.
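As an illustration of a critical path based rule, the following sketch derives an LFT priority list from the precedence network alone; succs[j] is the set of immediate successors of activity j and horizon an upper bound on the project length, both being assumed names for this example.

def lft_priority_list(dur, succs, horizon):
    """LFT priority rule sketch: a backward pass over the precedence
    network yields latest finish times; activities are then ordered by
    non-decreasing LFT, ties broken by the smaller activity number."""
    n_all = len(dur)
    lft = [horizon] * n_all
    # assumes the usual RCPSP numbering in which an activity's index is
    # always smaller than the indices of its successors
    for j in reversed(range(n_all)):
        if succs[j]:
            lft[j] = min(lft[s] - dur[s] for s in succs[j])
    return sorted(range(n_all), key=lambda j: (lft[j], j))

Under that topological numbering the resulting list is precedence-feasible and can be fed directly to the serial SGS sketched earlier.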


Multi-pass methods. SGS and priority rules can be combined into a multi-pass method in many possible ways. According to Kolisch and Hartmann [19], the most common ones are multi-priority rule methods (cf. [4], [37], [39]), forward-backward scheduling methods (cf. [26], [30]), and sampling methods (cf. [1], [9], [17]).

Multi-priority rule methods employ the SGS several times. Each time a different priority rule is used. Forward-backward scheduling methods employ an SGS in order to iteratively schedule the project by alternating between forward and backward scheduling.

Sampling methods typically make use of one SGS (usually with forward scheduling) and one priority rule. In order to avoid the generation of identical schedules, the selection of the next activity to schedule is determined probabilistically, based on the priority values of the different activities; thus, in addition to a priority value v(j), a selection probability p(j) is computed. Depending on how the probabilities are computed, one distinguishes between random sampling, biased random sampling, and regret based biased random sampling (cf. [15]). In random sampling the probabilities for all activities in the schedulable set are identical, namely 1 over the number of activities in the schedulable set. As a result, random sampling is completely independent of the priority rule.

Biased random sampling calculates the probabilities as a function of the priority values of the activities in the schedulable set. If the objective of the priority rule is to select the activity with the highest priority value vi, then the probability pi of choosing activity i from the schedulable set S equals

pi = vi / Σj∈S vj.

If the activity with the smallest priority value vi has to be chosen, the probabilities can be calculated (amongst other choices) as

pi = (1/vi) / Σj∈S (1/vj).

Biased random sampling methods were presented by Alvarez-Valdés and Tamarit (1989), Cooper (1976) and Schirmer and Riesenberg (1997).

Regret based biased random sampling (RBRS) uses the priority values indirectly via regret values; i.e., if, again, the objective is to select the activity with the largest priority value, the regret value r(j) is the absolute difference between the priority value v(j) of the activity under consideration and the worst priority value of all activities in the eligible set. Before calculating the selection probabilities from the regret values, the latter are modified by adding ε > 0. This ensures that each activity in the eligible set has a selection probability greater than 0, and thus that every schedule can be generated. Schirmer and Riesenberg [34] propose a variant of RBRS in which ε is determined dynamically.
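A minimal sketch of the RBRS selection step: eligible is a list of schedulable activities, v[j] their priority values and eps the constant described above; the exponent alpha is an additional bias parameter sometimes used with regret based sampling and, like the function name, is an assumption of this example.

import random

def rbrs_select(eligible, v, eps=1.0, alpha=1.0, maximize=True):
    """Regret based biased random sampling: draw one activity from the
    eligible set with probability proportional to (regret + eps)**alpha."""
    worst = (min if maximize else max)(v[j] for j in eligible)
    regret = {j: abs(v[j] - worst) for j in eligible}        # regret values
    weights = [(regret[j] + eps) ** alpha for j in eligible]
    return random.choices(eligible, weights=weights, k=1)[0]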

So-called adaptive RBRS has been proposed by Kolisch and Drexl [18] as well as Schirmer [33]. The essence of adaptive sampling is to select the SGS, the priority rule, and the way the selection probabilities are calculated based on characteristics of the problem instance at hand. The method of Kolisch and Drexl [18] applies the serial SGS with the LFT priority rule and the parallel SGS with the WCS priority rule while employing deterministic and regret based sampling for activity selection. The decision on the specific method is based on an analysis of the problem at hand and the number of iterations already performed. Schirmer [33] extended this approach by employing both schedule generation schemes together with four different priority rules and two different sampling schemes (RBRS and a variant of RBRS).

Classical Metaheuristics

Over the last two decades, a growing number of metaheuristic procedures have been presented to solve hard optimisation problems. This section highlights the metaheuristic approaches which have been proposed in the literature, i.e. simulated annealing (Slowinski et al., 1994; Boctor, 1996; Bouleimen and Lecocq, 2003), tabu search (Pinson et al., 1994; Thomas and Salhi, 1998), local search oriented approaches (Fleszar and Hindi, 2004; Palpant et al., 2004; Leon and Ramamoorthy, 1995), and genetic algorithms (Leon and Ramamoorthy, 1995; Lee and Kim, 1996; Hartmann, 1998, 2002; Kohlmorgen et al., 1999; Kim et al., 2003; Valls et al., 2008; Mendes et al., 2009).

Genetic Algorithms (GAs)

A genetic algorithm is a problem solving technique based on the mechanisms of biological evolution and natural genetics. The method was introduced by Holland (1975) and tries to implement the idea of survival of the fittest in the field of combinatorial optimisation. The major difference of GAs with respect to other metaheuristics (like tabu search (TS) or simulated annealing (SA)) is that a GA maintains a population of solutions rather than a single current solution. Solutions are coded as finite-length strings called chromosomes, and a measure of their adaptation (the fitness) is computed for each of them. New solutions are created by combining two existing solutions (crossover, a binary neighbourhood operator) and by applying a mutation to a newly created solution. A mutation typically consists of a unary neighbourhood operator that is applied with a small probability. The main advantage of GAs is their intrinsic parallelism, which allows the exploration of a larger solution space (Sevaux and Dauzère-Pérès, 2003). A well-designed GA allows for the efficient and effective exploration and exploitation of the problem's search space of feasible solutions in an effort to identify a global, or near-optimal, solution to difficult problems. Early applications of GAs to complex combinatorial optimization problems are found in the literature (see Back, 1995; Fogel, 1995).

Leon and Balakrishnan (1995) describe a genetic algorithm that is based on a version of the random key representation scheme and uses a modification of the parallel scheduling scheme as a decoding function. The change operator of Storer et al. (1992) is used as a unary neighbourhood operator, while the standard one-point crossover is used as a binary one.

Lee and Kim (1996) also present a genetic algorithm approach that was based on the random key representation scheme. They used the parallel scheduling scheme as a decoding function and the standard one-point crossover operator.

Lee and Kim (1996) compare genetic algorithms with tabu search and simulated annealing for the classical RCPSP (with makespan minimization). They encode a solution as a string of numbers representing the priorities of the activities. The initial population is generated randomly, and three operators are applied to produce a next generation with better chromosomes. Reproduction is done by randomly selecting and duplicating a chromosome with a probability proportional to its fitness value. Crossover is performed by randomly selecting two chromosomes and swapping the substrings after the crossover point (the process is repeated until all chromosomes in the current population are mated). For mutation, two chromosomes are randomly selected and their values are exchanged with a given probability. The fitness value of solution i is computed as exp(-h·vi), where vi is the makespan of solution i and h = 0.004 is a parameter chosen to keep the fitness value within the range [0, 1]. Finally, execution terminates once a certain number of generations fail to improve the current solution.

Hartmann (1998) presents three genetic algorithm approaches for the RCPSP: one based on the priority list representation scheme, one based on the priority rule representation scheme, and one based on the random key representation scheme. All three approaches use the serial scheduling scheme as a decoding procedure and the two-point crossover operator in order to define the neighbourhood.
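To illustrate how a crossover can recombine activity lists without destroying precedence feasibility, the following sketch follows the two-point crossover idea of Hartmann (1998); the function name and the convention that the crossover points q1 <= q2 are list positions are assumptions of this example.

def two_point_crossover(mother, father, q1, q2):
    """Two-point crossover for activity lists in the spirit of Hartmann
    (1998): copy positions 0..q1-1 from the mother, fill positions
    q1..q2-1 with the father's activities (in the father's order) that
    are not yet in the child, and complete the list with the mother's
    remaining activities (in the mother's order)."""
    child = list(mother[:q1])
    taken = set(child)
    middle = [a for a in father if a not in taken][:q2 - q1]
    child.extend(middle)
    taken.update(middle)
    child.extend(a for a in mother if a not in taken)
    return child

If both parents are precedence-feasible activity lists, the child is precedence-feasible as well, because every activity still appears after all of its predecessors.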

Kohlmorgen et al. (1999) also presented a genetic algorithm approach that uses the random key representation scheme, the serial scheduling scheme and the two-point crossover operator. Their approach is tested on a massively parallel computer.

Kim et al. (2003) propose a hybrid genetic algorithm in which the genetic operators are designed based on a fuzzy logic controller and the population is initialized by a serial method. Their computational tests also experiment with various genetic operators such as compounded partially mapped crossover, position-based crossover, swap mutation, and local search based mutation. Another hybrid metaheuristic, named ANGEL, is proposed by Tseng and Chen (2006); it combines ant colony optimization (ACO), a genetic algorithm (GA) and a local search strategy. They start with ACO to search the solution space and generate activity lists that provide the initial population for the genetic algorithm. Next, the GA is executed, and the pheromone set of the ACO is updated whenever the GA obtains a better solution. After the GA terminates, the ACO uses the new pheromone set for another search. In this way, ACO and GA search the solution space alternately and cooperatively, and a local search procedure is also incorporated to improve the solutions. A final search procedure is applied upon the termination of ACO and GA.

The hybrid genetic algorithm proposed by Valls et al. (2008) introduces several changes to the GA paradigm: a crossover operator specific to the RCPSP, a local improvement operator that is applied to all generated schedules, a new way of selecting the parents to be combined, and a two-phase strategy in which the second phase re-starts the evolution from a population in the neighbourhood of the best schedule found in the first phase.

Mendes et al. (2009) combine a genetic algorithm with a schedule generator procedure that produces parameterized active schedules. They also introduce a measure that computes a modified makespan as the fitness function of their GA. The chromosomes represent the priorities of the activities and delay times. For each chromosome, two phases are applied: the first phase transforms the chromosome supplied by the genetic algorithm into the priorities of the activities and the delay times; the second phase uses the priorities and delay times defined in the first phase to construct parameterized active schedules.

Tabu Search (TS)

The steepest descent approach has been extended by Glover (1989, 1990) to tabu search, which basically boils down to a steepest descent/mildest ascent approach. As in steepest descent, all solutions in the neighbourhood of the current solution are evaluated and the solution with the smallest project makespan is chosen. However, tabu search continues the search even if the new solution does not improve on the previous one. This creates the possibility that the series of solutions contains cycles, meaning that a subseries of solutions is repeated again and again. A mechanism therefore has to be put in place in order to avoid this cycling. A tabu list specifies those neighbourhood moves that might lead to cycling, namely those moves that might undo recently performed moves. Any neighbourhood move that belongs to the tabu list is therefore forbidden, unless that move would lead to a new overall (or local) best solution; this exception is called an aspiration criterion.
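The mechanics can be summarized in a generic skeleton. Here neighbours(current) is assumed to yield (move, solution) pairs and makespan(solution) to decode and evaluate a solution, for instance via the serial SGS sketched earlier; the interface and parameter names are illustrative rather than those of any particular paper.

from collections import deque

def tabu_search(initial, neighbours, makespan, max_iters=1000, tenure=10):
    """Tabu search skeleton (steepest descent / mildest ascent): move to
    the best non-tabu neighbour; a tabu move is allowed only if it beats
    the best makespan found so far (aspiration criterion)."""
    current = initial
    best, best_val = initial, makespan(initial)
    tabu = deque(maxlen=tenure)              # short-term memory of moves
    for _ in range(max_iters):
        candidates = []
        for move, sol in neighbours(current):
            val = makespan(sol)
            if move not in tabu or val < best_val:   # aspiration
                candidates.append((val, move, sol))
        if not candidates:
            break
        val, move, current = min(candidates, key=lambda c: c[0])
        tabu.append(move)                    # forbid undoing this move
        if val < best_val:
            best, best_val = current, val
    return best, best_val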

Pinson et al. (1994) present three tabu search heuristics that are based on the priority list representation scheme and that use the serial scheduling scheme as a decoding function. The three heuristics differ in the neighbourhood operator that is used: one is based on the pairwise interchange operator, a second is based on the (general) interchange operator and a third is based on the shift operator.

Lee and Kim (1996) present a tabu search procedure that was based on the random key representation scheme. They used the parallel scheduling scheme as a decoding function and a restricted version of the pairwise interchange operator.

Baar et al. (1998) develop two tabu search heuristics. The first is based on the priority list representation scheme and uses the serial scheduling scheme as a decoding procedure; three types of shift operators are introduced in order to define the neighbourhood. The second tabu search heuristic is based on the schedule scheme representation and uses a corresponding decoding procedure. Four types of neighbourhood moves are defined, which are based on the transformation of a flexibility relation into a parallelity relation and vice versa.

Simulated annealing

The steepest descent approach has been extended by Kirkpatrick et al. (1983) to simulated annealing. The ideas of this approach date back to a paper by Metropolis et al. (1953), who describe an algorithm to simulate the cooling of material in a heat bath, a process known as annealing. Essentially, their algorithm simulates the change in energy of the system when subjected to a cooling process, until it converges to a steady 'frozen' state. Simulated annealing thus starts out with an initial solution, obtained by some constructive heuristic. A new solution is created in the neighbourhood of the current solution, and it is accepted if it is better than the current solution. If the new solution does not improve upon the current solution, it is accepted with a probability that depends on the magnitude of the deterioration and on a parameter called the temperature. This temperature typically starts at a relatively large value and is reduced during the procedure in order to decrease the probability that non-improving solutions are accepted. This process is repeated until time runs out, a given number of solutions has been created, or no new solution has been accepted for a certain time. Sampson and Weiss (1993) describe a procedure that can be seen as a variant of simulated annealing; it is based on the shift vector representation scheme and uses the change operator.
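The acceptance rule can be sketched as follows, using the same callback conventions as the tabu search skeleton above; the geometric cooling schedule and the parameter values are one common choice among several and are assumptions of this example.

import math
import random

def simulated_annealing(initial, neighbour, makespan,
                        t0=100.0, cooling=0.95, steps=10000):
    """Simulated annealing skeleton: accept every improvement, accept a
    deterioration of size delta with probability exp(-delta / temp), and
    cool the temperature geometrically."""
    current, cur_val = initial, makespan(initial)
    best, best_val = current, cur_val
    temp = t0
    for _ in range(steps):
        cand = neighbour(current)            # e.g. shift one activity
        val = makespan(cand)
        delta = val - cur_val
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current, cur_val = cand, val
            if cur_val < best_val:
                best, best_val = current, cur_val
        temp *= cooling                      # geometric cooling schedule
    return best, best_val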

Boctor (1996b) proposes an adaptation of simulated annealing that is based on the priority list representation scheme. The serial scheduling scheme is used as a decoding function and neighbours are obtained by using the shift operator.

Lee and Kim (1996) present a simulated annealing procedure that was based on the random key representation scheme. As for their tabu search procedure, they used the parallel scheduling scheme as a decoding function and a restricted version of the pairwise interchange operator. Cho and Kim (1997) modify this procedure by extending the random key representation in order to allow the delay of schedulable activities within an adapted version of the parallel scheduling scheme.

Bouleimen and Lecocq (1998) describe a simulated annealing approach that is based on the priority list representation scheme and that uses the serial scheduling scheme as a decoding function.

Bouleimen and Lecocq (2003) later propose a new efficient simulated annealing algorithm for the resource-constrained project scheduling problem and for its multiple mode version (European Journal of Operational Research, 2003).

A hybrid neural approach to the resource-constrained project scheduling problem is described by Colak, Agarwal and Erenguc in Weglarz and Jozefowska (eds.), Perspectives in Modern Project Scheduling (2006); it is summarized in the next subsection.

Neural Network-Based Approach

The neural network based approach was first proposed by Agarwal et al. [53]. Colak et al. [2] applied that approach to the RCPSP. In this approach, a chosen priority rule (such as LFT or RWK) and a chosen SGS (i.e. serial or parallel) are applied multiple times. The best solution after a certain number of trials is saved as the final solution. In each iteration, the activity list L is different. How is the new activity list generated? Using a weight vector W = (w0, w1, …, wn+1). Suppose Q represents the vector of the parameter used in the chosen priority rule. For example, if the chosen priority rule is "latest finish time", then let qi represent the LFT for activity i, so that Q = (q0, q1, …, qn+1) is the vector of latest finish times of all activities. If the chosen heuristic is, say, remaining work, then qi in Q represents the remaining work for activity i. Let Qw represent the vector of weighted parameters (w0·q0, w1·q1, …, wn+1·qn+1). If we assume a unit vector W, then Qw = Q. For the first iteration we obtain L using Q. For subsequent iterations, we use Qw to obtain a different L. After each iteration, W is updated using a weight-update strategy to give a new Qw, which in turn generates a new L, which produces a new solution. This NN-based approach is basically a local search approach, because the perturbed vector Qw produces a perturbed activity list in the local neighborhood of the original activity list. The approach is called NN-based because of its similarity with traditional neural networks, in which a weight vector is used as the perturbation mechanism. If a good priority rule and a good SGS are used to produce the initial solution, the local search around this original solution produces very competitive results, as shown in Colak et al. [2].
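A rough sketch of this mechanism is given below, assuming q holds the priority-rule parameters (e.g. the LFT values) and w the weight vector; the concrete weight-update rule shown is only a placeholder, not the published strategy of Colak et al., and the reordered list may have to be repaired to stay precedence-feasible before it is decoded by an SGS.

import random

def weighted_activity_list(q, w):
    """Order activities by the weighted parameter vector
    Qw = (w0*q0, w1*q1, ..., wn+1*qn+1); with a unit weight vector this
    reduces to the plain priority-rule ordering (smaller value first)."""
    qw = [wi * qi for wi, qi in zip(w, q)]
    return sorted(range(len(q)), key=lambda j: (qw[j], j))

def perturb_weights(w, magnitude=0.05):
    """Illustrative weight-update strategy (an assumption, not the
    published rule): nudge every weight by a small random factor so the
    next activity list lies in the neighbourhood of the current one."""
    return [wi * (1 + random.uniform(-magnitude, magnitude)) for wi in w]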

Other methods

This section summarizes those heuristics that can neither be classified as X-pass construction methods nor as metaheuristics. Previously considered methods of this category are those of Alvarez-Valdes and Tamarit [3], Bell and Han [6], Mausser and Lawrence [45], Oguz and Bala [51], Pollack-Johnson [60], Shaffer et al. [66], and Sprecher [67].

2.4.1. Forward-backward improvement (FBI)

Tormos and Lova [73] develop a heuristic which applies forward-backward improvement to schedules computed by sampling. In each iteration, either the serial or the parallel SGS is used to generate a schedule by regret-based sampling with the latest finish time (LFT) priority rule. The resulting schedule is then improved by a backward-forward pass. In the backward pass, the activities are considered from right to left and scheduled at the latest feasible time (i.e., they are shifted to the right). Subsequently, in the forward pass, they are considered from left to right and scheduled at the earliest feasible time (i.e., they are shifted back to the left).
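The core of such a pass can be sketched as follows, reusing the data conventions of the SGS sketches earlier in the paper (start maps activities to start times, horizon is at least the current makespan). The code is a simplified illustration of right and left justification, not the exact procedure of Tormos and Lova.

def double_justification(start, dur, req, cap, preds, succs, horizon):
    """FBI sketch: right-justify all activities (latest feasible starts,
    processed by non-increasing finish time), then left-justify them
    again (earliest feasible starts); the makespan never increases."""
    K = len(cap)

    def profile(sched):
        avail = [[cap[k] for k in range(K)] for _ in range(horizon)]
        for j, t in sched.items():
            for s in range(dur[j]):
                for k in range(K):
                    avail[t + s][k] -= req[j][k]
        return avail

    def fits(avail, j, t):
        return all(avail[t + s][k] >= req[j][k]
                   for s in range(dur[j]) for k in range(K))

    def shift(sched, order, bound, step):
        avail, new = profile(sched), dict(sched)
        for j in order:
            for s in range(dur[j]):            # take j out of the profile
                for k in range(K):
                    avail[new[j] + s][k] += req[j][k]
            t = bound(new, j)                  # precedence limit
            while not fits(avail, j, t):       # back off until feasible
                t += step
            new[j] = t
            for s in range(dur[j]):            # re-insert j at its new start
                for k in range(K):
                    avail[t + s][k] -= req[j][k]
        return new

    cmax = max(start[j] + dur[j] for j in start)
    right = shift(start,
                  sorted(start, key=lambda j: start[j] + dur[j], reverse=True),
                  lambda s, j: min((s[q] for q in succs[j]), default=cmax) - dur[j],
                  step=-1)
    return shift(right,
                 sorted(right, key=lambda j: right[j]),
                 lambda s, j: max((s[p] + dur[p] for p in preds[j]), default=0),
                 step=+1)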

Tormos and Lova [74] enhance this approach. A so-called selective mechanism executes backward-forward improvement passes only if the schedule constructed by sampling is better than the average of the solutions generated by sampling so far.

Tormos and Lova [75] present a few refinements of [74]. In addition to backward-forward improvement passes, forward-backward improvement passes can also be executed. The number of passes applied to a schedule is selected on the basis of the quality of the schedule.

Valls et al. [78] employ a so-called double justification procedure to improve schedules found by other heuristics. It shifts all activities within a schedule to the right and subsequently to the left in order to obtain a better schedule. In order to demonstrate its power and general applicability, it is tested by adding it to various sampling methods and metaheuristics from the literature as well as to new approaches (cf. Sections 2.2 and 2.3). The best results are reported for the extension of the activity list-based genetic algorithm of Hartmann [24].

Due to the apparently strong similarity of the justification procedure of Valls et al. [78] and the backward-forward pass of Tormos and Lova [73], we refer to both approaches by the notion of forward-backward improvement (FBI).

Test instance generators

When testing exact or heuristic methods for project scheduling, test instances are necessary. In recent years, several parameter-driven instance generators for the RCPSP and many of its extensions have been developed. For the classical RCPSP as well as its multi-mode extension, Kolisch et al. [113] develop a generator called ProGen. It has been applied to generate several sets of test instances for these problem classes. These instances have been used by many researchers; they are available from the project scheduling library PSPLIB on the internet (Kolisch and Sprecher; Kolisch et al.) and have been used in a huge number of studies. To cover further extensions of the RCPSP, ProGen has been modified: Schwindt [161] develops a version called ProGen/max which includes minimal and maximal time lags; ProGen/max can also produce activities with multiple modes as well as instances for the resource investment and the resource levelling problem. Drexl et al. [64] present another version called ProGen/px which deals with partially renewable resources, mode identity, mode-dependent and sequence-dependent setup times (called changeover times there), mode-dependent minimum time lags, and forbidden periods for activities.

Agrawal et al. [3] suggest a generator called DAGEN for activity-on-arc project networks. Demeulemeester et al. [54] develop RanGen, a generator for single-mode and multi-mode project instances which is based on different control parameters than ProGen. Vanhoucke et al. [186] provide RanGen2, which extends RanGen by incorporating further topological network measures. Browning and Yassine [28] propose a generator for problems consisting of multiple projects where the activities are associated with due dates.