Economic Design of the Variable Sample Size and Control Limit T2 Control Chart


The T2 control chart is one of the most widely used multivariate control charts for monitoring processes with more than one correlated quality characteristic. It is simple to implement but is slow in detecting small or moderate shifts in the process mean vector. Recent studies have shown that the T2 control chart with variable sample size and control limits and fixed sampling intervals (VSSC) eliminates this defect. In this paper, we develop an economic design of the VSSC T2 control chart when the mean vector and variance-covariance matrix of the process quality characteristics are unknown. A Markov chain approach is used to facilitate developing the cost model, and genetic algorithms are used to find the optimal design parameters.

Keywords: VSSC control chart, Multivariate control charts, Economic design, Markov chain approach, Genetic algorithms.

1. Literature review

Control charts, proposed by Shewhart for the first time in 1924, are used to monitor processes and detect any change that may decrease process quality. Process quality is characterized by random variables called quality characteristics. In many applications the quality of a process is characterized by a single random variable, but in other cases the process is characterized by more than one quality characteristic, and these characteristics are usually correlated and jointly distributed. Controlling each of these quality characteristics independently with a separate univariate control chart leads to a wrong solution (Chou, Chen, Liu & Huang (2003)). Accordingly, multivariate control charts have been proposed to address this issue. There are a number of multivariate procedures for monitoring the mean vector of a process. Among these procedures, Hotelling's T2 chart, proposed by Hotelling (1947), is probably the best known. The T2 control chart is simple to implement but is slow in detecting small or moderate process shifts.

The sample size, the sampling interval, and the action limit(s) are three design parameters that must be determined for every control chart. Duncan (1956) proposed the first economic model to determine the three design parameters of the Shewhart control chart. Duncan's model is composed of cost elements such as the sampling and inspection cost, the false alarm cost, the cost of identifying and correcting the assignable cause, and the production cost, in order to cover the major costs that occur during the controlling period. The design parameters are then determined so as to minimize the cost function.

In order to improve the performance of the original T2 control chart, many researchers proposed the use of variable design parameters. Aparisi extended the work of Reynolds et al. (1988), Tagaras (1998), Prabhu et al. (1994) and Costa (1997) on adaptive design parameters to the T2 control chart and developed statistical designs of the variable sample size (VSS), variable sampling interval (VSI), and variable sample size and sampling interval (VSSI) T2 charts (see Aparisi (1996), Aparisi and Haro (2001), Aparisi and Haro (2003)). They indicated that these charts are faster than the traditional T2 chart in detecting small or moderate changes in the process mean vector. Chen and Hsieh (2007) proposed T2 charts with variable sample size and control limits (VSSC), in which the waiting times between successive samples are fixed. They showed that the VSSC control chart achieves a large and consistent improvement over fixed-sampling-rate T2 charts and performs excellently in detecting very small mean shifts compared with competing adaptive T2 control charts.

In the case of the economic design of T2 control charts, Chou et al. (2006) developed the economic design of the T2 control chart and showed that this procedure identifies most process shifts faster than the conventional charts. Chen (2007) used a Markov chain approach to economically design adaptive T2 control charts and concluded that they can be more efficient than the FSR control scheme in terms of the expected loss. Reviewing the literature, we could not find an economic design of the T2 chart with variable sample size and control limits (VSSC).

In this paper, we develop an economic design of the VSSC T2 control chart based on a Markov chain approach and present formulas to calculate its warning limits. Via the genetic algorithm search technique, the optimal design parameters of this model can be found. In the next section, the VSSC control chart is described completely. The cost model is developed in Section 3. A numerical example with the solution procedure and a sensitivity analysis are presented in Sections 4 and 5, respectively.

2. Introduction to the VSSC control chart

Suppose that \(\bar{X}_1, \bar{X}_2, \ldots\) are random vectors, each representing the sample mean vector of p correlated quality characteristics that follow a p-variate normal distribution with mean vector \(\mu_0\) and variance-covariance matrix \(\Sigma_0\). When the i-th sample of size n is taken at a sampling point, we calculate the statistic

\[ T_i^2 = n\,(\bar{X}_i - \mu_0)'\,\Sigma_0^{-1}\,(\bar{X}_i - \mu_0), \]

and compare it with the upper control limit (or action limit), denoted by k, which can be specified as the \(1-\alpha\) percentile point of a chi-square distribution with p degrees of freedom. However, in most cases the values of \(\mu_0\) and \(\Sigma_0\) are unknown and must be estimated by the sample mean vector \(\bar{\bar{X}}\) and the sample variance-covariance matrix S computed from m initial random samples taken prior to on-line process monitoring. In this case the statistic, still denoted by \(T_i^2\), is estimated by

\[ T_i^2 = n\,(\bar{X}_i - \bar{\bar{X}})'\,S^{-1}\,(\bar{X}_i - \bar{\bar{X}}), \]

and the action limit used to monitor future random vectors is given by Alt (1984) as

\[ k \;=\; \frac{p\,(m+1)(n-1)}{mn - m - p + 1}\; F_{1-\alpha;\,p,\;mn-m-p+1}, \]

where \(F_{1-\alpha;\,p,\;mn-m-p+1}\) is the \(1-\alpha\) percentile point of the F distribution with p and \(mn-m-p+1\) degrees of freedom. \(\bar{\bar{X}}\) and S are calculated by

\[ \bar{\bar{X}} = \frac{1}{m}\sum_{i=1}^{m}\bar{X}_i, \qquad S = \frac{1}{m}\sum_{i=1}^{m} S_i . \]
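To make the Phase-II computation concrete, the following is a minimal Python sketch, not code from the paper: it estimates the mean vector and covariance matrix from m initial samples, evaluates the T2 statistic for a new sample, and computes Alt's action limit. The values of p, n, m, alpha and the simulated data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): Phase-II Hotelling T^2
# statistic and Alt's (1984) action limit with estimated parameters.
import numpy as np
from scipy import stats

p, n, m, alpha = 2, 5, 25, 0.005           # characteristics, sample size, initial samples, false-alarm rate

rng = np.random.default_rng(0)
initial = rng.normal(size=(m, n, p))        # m initial in-control samples of size n

xbar_i = initial.mean(axis=1)               # sample mean vectors, shape (m, p)
S_i = np.array([np.cov(s, rowvar=False) for s in initial])
xbarbar = xbar_i.mean(axis=0)               # grand mean vector
S = S_i.mean(axis=0)                        # pooled covariance estimate
S_inv = np.linalg.inv(S)

def t2(sample_mean):
    """T^2_i = n (xbar_i - xbarbar)' S^{-1} (xbar_i - xbarbar)."""
    d = sample_mean - xbarbar
    return n * d @ S_inv @ d

# Alt's limit for future samples: k = p(m+1)(n-1)/(mn-m-p+1) * F_{1-alpha; p, mn-m-p+1}
nu = m * n - m - p + 1
k = p * (m + 1) * (n - 1) / nu * stats.f.ppf(1 - alpha, p, nu)

new_sample = rng.normal(size=(n, p))
print(t2(new_sample.mean(axis=0)), k)
```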

The traditional Hotelling T2 chart works with a fixed sample of size n0 drawn every h hours from the process, and the T2 statistic is plotted on a control chart with k as the action limit. The VSSC chart is a modification of the traditional chart. Let (n1, w1, k1) denote the minimum sample size and the large warning and action limits, and let (n2, w2, k2) denote the maximum sample size and the small warning and action limits, respectively, such that n1 < n2, w2 ≤ w1 and k2 ≤ k1, while the sampling interval is kept fixed at h. The warning and action limits divide the chart into three regions, as shown in Table 1.

The decision to use the maximum or minimum sample size for the i-th sample depends on the position of the (i-1)-th sample point on the control chart: if the previous point falls in the safe region, the small sample size n1 with limits (w1, k1) is used next, and if it falls in the warning region, the large sample size n2 with limits (w2, k2) is used next.

As seen in Chen (2007), during the in-control period it is assumed that the sample size is chosen at random between the two values when the process starts or after a false alarm. The small sample size n1 is selected with probability p0, whereas the large sample size n2 is selected with probability 1 - p0, where p0 is the conditional probability of a sample point falling in the safe region, given that it did not fall in the action region, and is calculated as follows,
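As an illustration of the switching rule and the random start described above, here is a minimal Python sketch. The values of (n1, w1, k1), (n2, w2, k2) and p0 are assumed for illustration only, since the paper's formula for p0 and its numerical values are not reproduced here.

```python
# Minimal sketch (illustrative, not the paper's code) of the VSSC switching rule:
# the position of the previous T^2 point selects the next sample size and limits.
import random

n1, w1, k1 = 3, 9.0, 12.0      # minimum sample size, large warning / action limits (assumed)
n2, w2, k2 = 12, 7.0, 10.5     # maximum sample size, small warning / action limits (assumed)
p0 = 0.8                        # P(safe region | not in action region) when in control (assumed)

def next_design(prev_t2, prev_limits):
    """Return (sample size, warning limit, action limit) for the next sample."""
    w_prev, k_prev = prev_limits
    if prev_t2 >= k_prev:                 # action region: signal, then restart at random
        return (n1, w1, k1) if random.random() < p0 else (n2, w2, k2)
    if prev_t2 < w_prev:                  # safe region: small sample, wide limits
        return (n1, w1, k1)
    return (n2, w2, k2)                   # warning region: large sample, narrow limits

print(next_design(5.0, (w1, k1)))   # safe point  -> (n1, w1, k1)
print(next_design(9.5, (w1, k1)))   # warning point -> (n2, w2, k2)
```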

3. Development of cost model

To simplify the mathematical calculations in developing the cost function, we first make a number of assumptions.

3.1. Model assumptions

The p quality characteristics monitored by the VSSC control chart are jointly distributed according to a multivariate normal distribution with a given mean vector and covariance matrix.

The mean vector and variance-covariance matrix of the process are unknown and are estimated from sampling data.

The process starts in an in-control state, but after a random time that follows an exponential distribution with a mean of 1/λ hours, it is disturbed by an assignable cause that produces a fixed shift in the process mean vector.

The process after the shift remains out-of-control until the assignable cause is eliminated.

The process is stopped when the chart produces a signal (i.e., when the plotted T2 statistic exceeds the action limit), and a search then starts to find the assignable cause and adjust the process.

3.2. The cost function

To design the VSSC control chart economically, we define a cost function and then search for the optimal design parameters that minimize the cost function over a production cycle. As shown in Fig. 1, the production cycle length is composed of four time intervals: the in-control period, the searching period due to false alarms, the out-of-control period, and the time period for detecting and repairing the assignable cause. Once the expected cycle length is determined, the cost over the production cycle can be converted to the index "long-run expected cost per hour" (Ross 1970). Here, we describe these four time components.

In Fig. 1, the first interval is the average length of the in-control period and, as assumed before, is equal to 1/λ. The second interval is the sum of the expected amounts of time lost searching for the assignable cause after false alarms. The third interval is the expected length of the out-of-control period, that is, the duration from the time the process mean shifts until the time the chart signals. In statistical process control this average time is called the adjusted average time to signal (AATS), and it is a measure used to compare the efficiency of different adaptive control charts. We apply a basic formula of the Markov chain approach proposed by Cinlar (1975) to calculate the second and third intervals.

Let M represent the average time from the cycle start to the time the chart produces a signal after the process shift. Then,

In Eq. (8), G is the average time needed to take a sample, analyze the data, and plot the statistic on the chart, and the average sample size when the process operates in the out-of-control state is given by,

where, according to Chen (2007), the first quantity is the average number of sample points plotted in the safe region when the process is out of control and the current sample point belongs to the safe region; then,

the second quantity is the average number of sample points plotted in the warning region when the process is out of control and the current sample point belongs to the warning region; then,

the third quantity is the average number of sample points plotted in the warning region when the process is out of control and the current sample point belongs to the safe region; then,

the fourth quantity is the average number of sample points plotted in the safe region when the process is out of control and the current sample point belongs to the warning region; then,

the fifth quantity is the average total number of sample points plotted on the chart from the time the process mean shifts to the time the chart signals, given that the first sample point after the mean shift belongs to the safe region; then,

the sixth quantity is the average total number of sample points plotted on the chart from the time the process mean shifts to the time the chart signals, given that the first sample point after the mean shift belongs to the warning region; then,

Where,

In the above probabilities, the warning and action limits for the two sample sizes are obtained from Eq. (4), and the non-central F distribution with p and the corresponding denominator degrees of freedom has a non-centrality parameter calculated from the sample size in use and the Mahalanobis distance, which is a measure of the change in the process mean vector.
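The formula itself is not reproduced in this copy; the standard relationship consistent with the text, with d the Mahalanobis distance of the mean shift and n_i the sample size in use (symbols assumed), is:

\[
  d = \sqrt{(\mu_1 - \mu_0)'\,\Sigma^{-1}\,(\mu_1 - \mu_0)}, \qquad \lambda_i = n_i\, d^{\,2} .
\]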

At each sampling point during the cycle, depending on the status of the process (in or out of control) and the position of the sample point on the chart, one of the five transient states listed in Table 2 occurs.

The transition probability matrix is given by,

where p_ij is the transition probability from prior state i to current state j. These probabilities can be expressed mathematically as follows:

where,

Here, one function denotes the (central) F distribution function with p and the corresponding denominator degrees of freedom, and the other denotes the non-central F distribution function with the same degrees of freedom and the associated non-centrality parameter.

In the preceding probabilities, the action limits are calculated by Eq. (3). Here, we try to find a way to calculate the warning limits. Aparisi and Haro (2003) indicated that to compare the FSR T2 control chart with a variable-sampling T2 scheme, we must fulfill the requirement that the average sample size when the process is in control equals the fixed sample size of the FSR chart. In this way we guarantee that both charts are equivalent when the process is in control. Eq. (17) shows this,
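Eq. (17) itself is missing from this copy; the matching condition it expresses is presumably of the form below, with n_1, n_2 and p_0 as introduced in Section 2 and n_0 the fixed sample size of the FSR chart (an assumption consistent with the text):

\[
  n_1\, p_0 + n_2\,(1 - p_0) \;=\; n_0 .
\]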

by extending this equation we obtain,

replacing Eqs. (6) and (7) in Eq. (18) leads to,

and, because of the preceding relation,

and finally, solving for one of the warning limits results in,

Extending Eq. (6) leads to the following formula,

and, using the same relation and solving for the other warning limit, we conclude that,

Once the transition probability matrix is identified, then, according to the formula proposed by Cinlar (1975), the average number of transitions in each transient state before the chart produces a true signal is equivalent to b'(I - Q)^{-1}, where b is the vector of starting probabilities; I is the identity matrix of order 5; and Q is the transition matrix with the elements associated with the absorbing state deleted. Finally, M is the product of the average number of transitions in each transient state and the corresponding sampling intervals. Thus,

where t is the vector of the sampling intervals corresponding to the five transient states used for the next sampling. Here we set the starting vector so that the process is assumed to be in control at the start of the production cycle and the small or large sample size is used at random.
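To make the Markov chain computation concrete, here is a minimal numerical sketch assuming a 5-state transient structure as in Table 2. The entries of Q, the starting vector b and the interval vector t are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (placeholder values): expected visits to each transient state and
# the expected time M before a true signal, via the fundamental matrix (Cinlar, 1975).
import numpy as np

Q = np.array([                              # assumed transition probabilities among the 5 transient states
    [0.55, 0.25, 0.05, 0.02, 0.00],
    [0.30, 0.45, 0.03, 0.05, 0.00],
    [0.00, 0.00, 0.40, 0.30, 0.00],
    [0.00, 0.00, 0.25, 0.35, 0.00],
    [0.10, 0.10, 0.20, 0.20, 0.00],
])                                          # rows sum to less than 1: the rest goes to absorption (true signal)
b = np.array([0.8, 0.2, 0.0, 0.0, 0.0])     # start in control, small/large sample chosen at random
t = np.full(5, 1.0)                         # fixed sampling interval h assigned to every state

visits = b @ np.linalg.inv(np.eye(5) - Q)   # b'(I - Q)^{-1}: expected number of transitions per state
M = visits @ t                              # expected time accumulated before the true signal
print(visits, M)
```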

If one defines the average amount of time wasted searching for the assignable cause when the process is in control, and the expected number of false alarms per cycle, then,

Hence, the expected length of the searching period due to false alarms is given by

Finally, let the time to discover and remove the assignable cause after a true signal of the chart be given; this is the fourth time component of the cycle.

Aggregating the foregoing four time intervals, the expected length of a production cycle can be expressed as
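The corresponding equation is not shown in this copy; one plausible form, with 1/λ the mean in-control time, T_0 the search time per false alarm, FA the expected number of false alarms, and T_r the repair time (symbols assumed), is:

\[
  \mathrm{E}(T) \;=\; \frac{1}{\lambda} \;+\; T_0\,\mathrm{FA} \;+\; \mathrm{AATS} \;+\; T_r .
\]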

Also, if one defines the average search cost incurred due to a false alarm; the average cost to find and remove the assignable cause; the hourly cost incurred when the process is operating in the out-of-control state; the fixed cost of sampling per sample; and the variable cost of sampling per unit sampled, then the expected cost during a production cycle is given by,

where the last two quantities are the average numbers of samples drawn during the in-control and out-of-control periods, respectively, and they are given by,

Finally, the expected cost per unit time, ECT, is given by,
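Eq. (29) is not reproduced here; as a renewal-reward ratio (Ross 1970), it presumably takes the form below, where E(C) denotes the expected cost per production cycle and E(T) the expected cycle length (symbols assumed):

\[
  \mathrm{ECT} \;=\; \frac{\mathrm{E}(C)}{\mathrm{E}(T)} .
\]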

4. A numerical example and solution procedure

The numerical example used in this section is a modification of the one in Lin et al. (2008). Suppose that a production process is monitored by the VSSC control chart. The cost and process parameters are as follows,

The cost model given by Eq. (29) has the following characteristics:

It is a nonlinear model and is a function of mixed continuous-discrete decision variables.

The solution space is discontinuous and non-convex.

Thus, using nonlinear programming techniques to optimize this model would be time-consuming and inefficient. Hence, we use genetic algorithms, implemented in MATLAB, to obtain the optimal values of the design parameters that minimize ECT.

The genetic algorithm (GA), introduced by Holland (1975), is a random optimization search technique based on the concept of natural genetics. Some advantages of the GA are as follows:

1. The GA uses a fitness function and stochastic concepts (not deterministic rules) to search for the optimal solution. Therefore, the GA can be applied to many kinds of optimization problems.

2. The mutation and crossover operators in the GA help it avoid becoming trapped in local optima.

3. The GA is able to search many possible solutions at the same time. Hence, it can obtain the global optimal solution efficiently.

We apply the solution procedure used in Lin et al. (2008) to our example as follows:

Step 1. Initialization. Thirty initial solutions that satisfy the constraint condition of each design parameter are randomly generated. The constraint condition for each design parameter is set as follows:

Step 2. Evaluation. The fitness of each solution is evaluated by calculating the value of the fitness function. The fitness function for our example is the cost function in Eq. (29).

Step 3. Selection. The survivors (i.e., 30 solutions) are selected for the next generation according to the fitness of the chromosomes. (In the first generation, the chromosome with the highest cost is replaced by the chromosome with the lowest cost.)

Step 4. Crossover. Pairs of survivors (from the 30 solutions) are selected randomly as parents for the crossover operations that produce new chromosomes (children) for the next generation. In this example, we apply the arithmetical crossover method with crossover rate 0.3 as follows,

where the first and second new chromosomes are arithmetic combinations of the parent chromosomes R and M. If 30 parents are randomly selected, then 60 children will be produced. Thus, the population size increases to 90 (i.e., 30 parents + 60 children) in this step.

Step 5. Mutation. Suppose that the mutation rate is 0.1. In this example, we use the non-uniform method to carry out the mutation operation. Since we have 90 solutions, we randomly select 9 chromosomes (i.e., 90 × 0.1 = 9) and mutate some of their parameters (genes).

Step 6. Repeat Step 2 to Step 5 until the stopping criterion is met. In this example, we use "50 generations" as our stopping criterion. A compact sketch of this procedure is given below.
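The following is a minimal, self-contained sketch of the procedure above in Python (the paper itself uses MATLAB). The parameter bounds and the ect() objective are placeholders, not the paper's constraint conditions or cost model, and the mutation operator is simplified to a uniform reset of one gene rather than the non-uniform method.

```python
# Minimal GA sketch following Steps 1-6 above: population 30, arithmetical
# crossover with rate r = 0.3, mutation rate 0.1, 50 generations.
import numpy as np

rng = np.random.default_rng(1)

# Assumed search ranges for the seven VSSC design parameters (placeholders).
LOW  = np.array([ 2.0,  5.0, 0.5,  5.0,  2.0, 10.0,  8.0])
HIGH = np.array([10.0, 30.0, 8.0, 15.0, 12.0, 20.0, 18.0])

def ect(x):
    """Placeholder objective standing in for the cost function of Eq. (29)."""
    return float(np.sum((x - (LOW + HIGH) / 2.0) ** 2))

pop = rng.uniform(LOW, HIGH, size=(30, 7))            # Step 1: 30 random feasible solutions
r = 0.3                                               # arithmetical crossover rate

for _ in range(50):                                   # Step 6: stop after 50 generations
    fitness = np.array([ect(x) for x in pop])         # Step 2: evaluation
    pop = pop[np.argsort(fitness)][:30]               # Step 3: keep the 30 fittest survivors

    children = []                                     # Step 4: arithmetical crossover
    for _ in range(30):                               # 30 random parent pairs -> 60 children
        R, M = pop[rng.choice(30, size=2, replace=False)]
        children.append(r * R + (1 - r) * M)
        children.append((1 - r) * R + r * M)
    pop = np.vstack([pop, children])                  # 30 parents + 60 children = 90

    for i in rng.choice(len(pop), size=9, replace=False):   # Step 5: mutate 9 chromosomes
        g = rng.integers(7)                           # uniform reset of one gene
        pop[i, g] = rng.uniform(LOW[g], HIGH[g])      # (the paper uses non-uniform mutation)

best = min(pop, key=ect)
print(best, ect(best))
```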

By running the MATLAB program for different values of the process mean shift, we obtain the optimal solutions for this example, as shown in Tables 4 and 5.

5. Sensitivity analysis

In this section, a sensitivity analysis is conducted to study the effect of the model parameters on the solution of the economic design of the VSSC chart. The sensitivity analysis is carried out using an orthogonal-array experimental design and multiple regression, in which the model parameters are treated as the independent variables, and the seven test parameters as well as the expected cost per time (ECT), the adjusted average time to signal (AATS), the number of false alarms (FA) and the average number of samples drawn from the process in the out-of-control period are treated as the dependent variables.

Eleven independent variables (i.e., the process, time, and cost parameters) are considered in the sensitivity analysis, and their corresponding level planning is shown in Table 6. The L27 orthogonal array is employed, and the eleven independent variables are assigned to the columns of the L27 array, as shown in Table 7. In the L27 orthogonal-array experiment there are 27 trials (i.e., 27 different level combinations of the independent variables). For each trial, the GA is applied to produce the optimal solution of the economic design. The output of the GA for each trial is also recorded in Table 8.

To study the effect of cost and process parameters on the solution of economic design of VSSC chart, based on the data in Table 8, the statistical software Minitab is used to run the regression analysis for each dependent variable. The output of Minitab includes an ANOVA table for regression and a table of regression coefficients, showing the corresponding information about statistical hypothesis testing.

The ANOVA tables in Figs. 2 and 3 show that at least one process or cost parameter significantly affects the values of the small and large sample sizes. From the tables of regression coefficients, we find that the magnitude of the process mean shift significantly affects the values of the small and large sample sizes. The sign of the coefficient of δ is positive in Fig. 2 and negative in Fig. 3, indicating that a larger process mean shift generally increases the small sample size and decreases the large sample size.

Fig. 4 is the Minitab output for the sampling interval. From the table of coefficients, it may be seen that the sampling interval is influenced by the average cost of searching for the assignable cause after a false alarm, the hourly cost incurred when the process operates in the out-of-control state, the fixed cost of sampling per sample, the variable cost of sampling per unit sampled, the average length of the in-control period, and the magnitude of the process mean shift. Larger values of some of these parameters increase the sampling interval, while larger values of the others reduce it.

Figs. 5 and 6 are the Minitab output for the small and large warning limits, respectively. Based on the tables of coefficients, it is noted that a higher variable cost per unit sampled reduces the warning limit. On the other hand, if the number of quality characteristics increases, the magnitudes of both warning limits increase.

Fig. 7 is the Minitab output for the small action limit. It can be ignored because the R-squared value is not large enough to support this regression analysis.

Fig. 8 is the Minitab output for the large action limit. As shown in the coefficient table, the variable cost of sampling per unit sampled and the number of quality characteristics (p) influence the large action limit. A larger p results in a wider limit, and an increase in the variable sampling cost decreases its value.

Fig. 9 is the Minitab output for the optimal value of the cost function (ECT). According to the table of coefficients, the value of ECT is significantly affected by two cost parameters and three process parameters. A larger shift magnitude in the process mean and a longer T2 duration result in a lower value of ECT, whereas increases in the values of the two cost parameters lead to an increase in ECT.

Fig. 10 is the Minitab output for the adjusted average time to signal, AATS. Based on the table of coefficients, it is noted that a higher hourly cost of operating the process in the out-of-control state and a larger magnitude of the mean shift lead to a lower AATS.

Fig. 11 is the Minitab output for the average number of false alarms during a production cycle, FA. From the table of coefficients, we find that if the cost of searching due to a false alarm, the fixed cost of sampling per sample, the average length of the in-control period, and the magnitude of the mean shift increase, then the average number of false alarms will decrease.

Fig. 12 is the Minitab output for the number of samples drawn while the process operates in the out-of-control state. Examining the coefficients table, we find that the shift magnitude of the process mean significantly affects this quantity. The sign of the corresponding coefficient is negative, indicating that a larger shift in the process mean generally reduces the number of samples drawn in the out-of-control period.
