A new methodology is proposed in this paper both to monitor an overall mean shift and to classify the states of a multivariate quality control system. Based on the Bayes rule, the belief that each quality characteristic is in an out-of-control state is first updated in an iterative approach, and a proof of its convergence is given. Next, the decision-making process of detecting and classifying the process mean shift is modeled. Numerical examples by simulation are provided in order to illustrate the proposed methodology and to evaluate its performance. Moreover, the in-control and out-of-control average run lengths of the proposed method are compared with the ones from the well-known Multivariate Cumulative Sum (MCUSUM), Multivariate Exponentially Weighted Moving Average (MEWMA), and Hotelling methods in different scenarios of mean shifts. The results of the simulation study show that the proposed methodology performs better than the other methods for all shifts of the process mean. Additionally, the estimated probabilities of making correct classifications by the proposed approach are encouraging.
A systematic approach to monitoring and improving quality is Statistical Process Control (SPC), which aims at quality improvement through the reduction of variation. One of the primary methods of SPC is the control chart technique. First introduced by Walter A. Shewhart in the 1920s, it was W. Edwards Deming who extended his ideas to a quality improvement strategy that is applicable not only in a manufacturing environment, but in all areas of an organization, from administration to sales.
Shewhart developed the simplest form of the control chart to detect special causes of variation. It is based on the assumption that if the process is in a state of statistical control, its outcomes are predictable. Based on previous observations, it is possible to determine a set of limits within which future observations will fall with a given probability. As these limits are simply a prediction of the variation that will occur due to common causes, a point plotted outside them may be due to special causes of variation, which a control chart should be sensitive enough to detect.
In many quality control settings, the product (process) under examination may have two or more correlated quality characteristics (variables); hence, an appropriate approach is needed to monitor all these characteristics simultaneously. This leads to the multivariate quality control problem, which is the subject of research by many quality control experts. Jackson and Morris mentioned that multivariate control charts should possess three important properties: they should answer (1) whether the process is in control, (2) whether the specified probability of type I error has been maintained, and (3) whether the relationships between the variables have been taken into account.
As the objective of performing multivariate statistical process control is to monitor the process over time in order to detect any unusual events that allow quality and process improvement, it is essential to track the cause of an out-of-control signal. However, as opposed to univariate control charts, the complexity of multivariate control charts and the cross-correlation among variables make the analysis of assignable causes of out-of-control signals difficult. This has been the basis for extensive research performed in the field of multivariate control charts since the 1940s, when Hotelling recognized that the quality of a product might depend on several correlated characteristics. However, because of computational complexity, researchers and practitioners did not pursue multivariate quality control at that time. Today, the development of high-speed computers, the technological advances in industrial control procedures, and the availability of modern data-acquisition equipment have alleviated this problem. Thus, many researchers have proposed several multivariate control charts, each with its own advantages and disadvantages.
One may classify multivariate quality control procedures into two broad categories: first, when the quality test includes several parameters; second, when a production line includes several serial stages as a sequential system. In the sequential case, as in the chemical industry, production lines have several stages, and the quality of the product in each stage depends not only on the process of the current stage, but also on the quality of the input, which is the output of the previous stage. In the non-sequential case, where there are several correlated characteristics, the purpose of each test is to detect whether the process has gone out of control and which of the variable(s) is/are the cause of deterioration. This research is focused on the non-sequential case.
2. Literature Survey
Multivariate statistical process control methods are applicable when several process variables are simultaneously monitored. These methods use the relationships between variables to generate powerful control algorithms that are sensitive to assignable causes poorly detected by univariate control charts on individual observations. One can classify the substantial amount of research in non-sequential multivariate quality control into four broad categories, namely: (1) Multivariate Shewhart charts [1-21], (2) Multivariate Cumulative Sum (MCUSUM) charts [3, 13, 18, 21-27], (3) Multivariate Exponentially Weighted Moving Average (MEWMA) charts [3, 13, 18, 21, 28-34], and (4) Multivariate charts based on Artificial Neural Networks (ANN) [35-48].
Early research on multivariate Shewhart charts goes back to Hotelling, who introduced the problem of correlation between the quality characteristics of a process and came up with the well-known T2 statistic to identify whether the whole process is out of control. A major advantage of Hotelling's statistic is that it is the optimal test statistic for detecting a general shift in the process mean vector for an individual multivariate observation. However, the technique has several practical drawbacks. The most important one is that when the statistic indicates that a process is out of control, it does not provide information on which variable or subset of variables is out of control. Moreover, it is difficult to distinguish location (mean) shifts from scale (variance) shifts, since the statistic is sensitive to both types of process changes.
One way of interpreting out-of-control signals is to utilize the corresponding univariate charts of a multivariate process to determine which quality characteristic is the assignable cause of variation. Although this is a simple and plausible way of out-of-control analysis, there are some concerns associated with the applicability of the technique. First, when many variables are being measured, this technique tends to get tedious, since there would be many univariate charts to interpret. Second, in multivariate quality control, an out-of-control signal is usually not caused by one variable, but is rather a function of several correlated variables due to their interdependent behavior. Therefore, in many circumstances, the respective univariate charts may show no signs of being out of control while, on the contrary, the multivariate chart gives out-of-control signals. The user should understand that other effective interpretation methods can be combined with this technique for a better analysis of out-of-control signals, and should not rely on it solely because it is an obvious and uncomplicated approach to interpretation.
Bersimis et al. discussed the basic procedures for the implementation of multivariate statistical process control via control charting and described the most significant methods for the interpretation of an out-of-control signal. Murphy proposed a method to identify the out-of-control variables based on discriminant analysis. This quality control method can be viewed as trying to discriminate between the process being in control and out of control. He divided the complete set of variables into two subsets and then tried to determine which one caused the out-of-control signal. While in Murphy's method the population parameters were assumed to be known, extensions of Murphy's work include Niaki and Moeinzadeh and Niaki et al., who developed a statistic and an algorithm for the cause-selecting problem in which the population parameters are not known and are to be estimated.
Principal Component Analysis (PCA) is another reliable technique for interpreting out-of-control signals. The basic concept of this technique is to break up the statistic into a sum of its principal components, which are linear combinations of the original variables. These components can then be examined to understand why the process is out of control. The newest way to accomplish this is to express the statistic in terms of the normalized principal component scores of the multi-normal variables. When an out-of-control signal is received, the component scores with abnormally high values are detected, and plots of those variables are made to determine exactly what occurred in the original data that contributed to the signal in the multivariate set of statistics. Jackson gave a detailed description of principal components and their possible use as a multivariate quality control tool. The problem with principal components is that in many cases they are not easily interpretable and do not have a one-to-one relation with the original variables. Nevertheless, in some cases, depending on the context, they can be very useful.
Doganaksoy et al. proposed the use of the univariate t-statistic for ranking the variables most likely to have changed. Then, to further strengthen the belief that a certain variable has changed, they applied Bonferroni-type intervals. The obvious drawback of this method is that it only indicates which variable is most likely to have shifted, which is not conclusive. Moreover, this method does not allow the user to study trends.
Mason et al. proposed a cause-selecting procedure using the decomposition of the T2 statistic. By decomposing the T2 statistic, the user can see the contribution of each variable. This decomposition also allows the user to detect which variable(s) with significant contributions is (are) the cause of deviation. While the drawbacks of this method are its extensive computation and its sensitivity to the number of variables, Mason et al. presented an appropriate computing scheme that can greatly reduce the required computational effort.
Golnabi and Houshmand proposed a methodology in which, with known probabilities of type-one error for controlling each quality characteristic, one can transform the ellipsoidal control region into a box-shaped control region and monitor the whole process easily. They named their method the Multivariate Shewhart (MS) chart, because it is simply an extension of the univariate Shewhart chart. It has the advantage of directing investigators to the possible cause(s) of an out-of-control signal. This method is easy to implement, and its performance is as good as that of Hotelling's T2 or other competing methods.
The MEWMA control charts use all the observations since the detection of the last special event, as opposed to only the last observation vector employed in the Shewhart-type charts. Their advantage over the latter is that their average run length is smaller for small shifts in the process mean. In the MEWMA category, several researchers proposed different procedures; see, for example, Alwan, Lowry et al., and Prabhu and Runger. Lowry et al. presented a multivariate extension of the exponentially weighted moving average (EWMA) control chart. They compared their chart to a MCUSUM control chart based on average run length (ARL) performance. They concluded that their chart was similar to the CUSUM chart in detecting a shift in the mean vector of a multivariate normal distribution, and that the ARL performance of the MEWMA chart, as well as that of the Hotelling's and MCUSUM charts, depended on the underlying mean vector and covariance matrix only through the value of the non-centrality parameter. They stated that, in order to avoid potential inertia problems, one should always use the MEWMA and MCUSUM charts in conjunction with the Hotelling's chart. In order to improve the detection of small shifts in multivariate statistical process control, Prabhu and Runger provided some recommendations on the selection of the parameters of a MEWMA control chart. Reynolds and Cho proposed combinations of MEWMA control charts based on the sample mean and on the sum of squared deviations from the target to monitor both the mean vector and the covariance matrix.
The properties of MCUSUM control charts are quite similar to those of the MEWMA charts. In this category, Woodall and Ncube proposed methods to approximate the distribution parameters of the minimum run length of univariate CUSUM charts. For bivariate normal distributions, they showed that their MCUSUM method works better than Hotelling's procedure. Healy discussed the natural applications of CUSUM procedures to the multivariate normal distribution. Crosier presented the design procedures and the average run lengths for two MCUSUM quality control procedures. The first MCUSUM procedure reduced each multivariate observation to a scalar and then formed a CUSUM of the scalars. The second MCUSUM method formed a CUSUM vector directly from the observations. These two procedures were then compared to a multivariate Shewhart chart, and the robustness of the procedures was discussed. Pignatiello and Runger considered several approaches for controlling the mean of a multivariate normal process. They compared the performance of these approaches, as well as that of their two newly proposed charts, based on the estimated ARL, and reported the results.
Noorossana et al. were the first to present an ANN model to detect and classify non-random disturbances in auto-correlated processes. The preliminary results of their method indicated that ANN modeling is an effective approach for cause-selecting problems. Moreover, Niaki and Abbasi employed a multi-layer perceptron neural network to model a multivariate quality control system and, through simulation studies, showed that neural-network-based modeling can have promising results.
Today, information technology is widely used to collect observations for Shewhart-type control charts. Sequential analysis is an important subject in information technology that greatly improves the applications of data analysis. In this type of analysis, the number of required observations is not fixed in advance, but is a stochastic variable that depends upon the values of the gathered observations. At any stage of the data-gathering process, the data at hand are analyzed, and the number of observations needed in the next stage is determined with respect to the obtained results. With this approach, the data-gathering process is cheaper, and the information is used more effectively. In other words, the data-gathering process in sequential analysis, in contrast to frequency analysis, is on-line. This idea has triggered new research in various statistical areas. One of them is a new approach to fitting a probability distribution to given statistical data, which Eshragh-J and Modarres named Decision On Belief (DOB), where a sequential analysis approach is employed to find the best underlying probability distribution of the observed data. Moreover, Eshragh-J and Eshragh-J and Niaki applied the DOB concept as a decision-making tool in Response Surface Methodology, and Fallah-Nezhad and Niaki used this concept in univariate statistical quality control environments.
This paper proposes a new approach to control and classify mean shifts of quality characteristics in multivariate environments. In section three, a measure of the probability that a quality characteristic is in an out-of-control state is defined, and the multivariate SPC problem is modeled. In this section, the approach by which one can improve the measure is clarified. A proof of the convergence property of the proposed method is given in section four. While the proposed method works based on a single observation of the process at any stage of the data-gathering process, to make it even more efficient, an extension is proposed in this section in which one may collect more than one observation. In section five, the decision-making process of detection and classification in multivariate quality control environments is explained. To better understand the proposed method and evaluate its performance in terms of in-control and out-of-control average run lengths, some simulation studies are performed in section six. Finally, the conclusion and recommendations for future research come in section seven.
3. A New Modeling of SPC Problems
In a multivariate quality control environment, the collected observations on the quality characteristics of a product contain much information on the production process, such that if one applies only a control charting method, most of this information goes unused. In fact, the main aim of a control charting method is to rapidly detect the occurrence of undesired variation in the process. However, applying both the sequential analysis concept and the Bayes rule, at iterations where observations on the quality characteristics are available, the probability of the process being out of control is calculated. A decision interval on these probabilities may be found, and when the calculated probability in any iteration is not within this interval, an out-of-control signal is observed. This interval is determined based on the probability of type-one error in a standard Shewhart control chart.
The idea behind the new approach is similar to the decision-making process of human beings. In real-life problems, one makes a decision by first dividing the entire probable solution space into smaller subspaces, the final decision being one of them. Then, a probability is assigned to every subspace; finally, the probabilities are updated and the decision is made. In a multivariate SPC problem, a similar decision-making process exists. First, the decision space can be divided into two subspaces, an in-control and an out-of-control production process, where the solution of the problem is one of these subspaces. Then, a measure is assigned to each subspace showing the probability of the process being in control or out of control. Based upon the updated measures, the decision on whether the process is out of control is made.
For the sake of simplicity, only one observation (n = 1) on the quality characteristics of interest is assumed in each iteration of the data-gathering process. For other values of n, the same conclusions can be reached. Furthermore, in a multivariate situation, when a shift occurs in the mean of one variable, since the other variables are correlated with it, their means will shift too. However, the objective of this research is to detect, at any stage of the data-gathering process, the variable with the biggest mean shift first.
Let x_{i,k} be the observation of the ith quality characteristic (variable), i = 1, 2, ..., m, in iteration k, k = 1, 2, ... . Then, in iteration k of the data-gathering process, the observation vector X_k = (x_{1,k}, x_{2,k}, ..., x_{m,k}) and the observation matrix O_{k-1} = [X_1, X_2, ..., X_{k-1}] are defined. The decision-making process at any iteration takes place in a stochastic space, such that one can never say with certainty that the production process is in an out-of-control state. After taking a new observation vector X_k, the probability that variable i is in an out-of-control state is defined as B_i(k). Let us call this probability measure the belief that variable i is in an out-of-control condition given the observation matrix up to iteration k-1 and the observation vector at iteration k. At this iteration, an improvement of the belief of being in an out-of-control state based on the observation matrix O_{k-1} and the new observation vector X_k is desired. Note that the previous and the updated beliefs of variable i are B_i(k-1) and B_i(k), respectively. Assuming that the observations in each iteration are taken independently, one will have
With this feature, using the Bayes rule, the updated belief is given by equation (1).
In an iteration of the data-gathering process, since the goal is to detect the variable with the maximum mean shift, only one quality characteristic can be considered to be out of control. In this way, there are m - 1 remaining subspaces in which the quality characteristics are in control. Hence, the subspaces are mutually exclusive and collectively exhaustive. Accordingly, using the Bayes theorem, one can write equation (1) as equation (2).
For an in-control system, assume the quality characteristics of interest follow a multi-normal distribution with a known mean vector and covariance matrix, which remain constant. In different iterations, equation (2) can be used to update the beliefs of shifts occurring in the process mean, which in turn requires the conditional probability terms to be evaluated. To do this, each such term is defined to be a logistic function in equation (3), such that not only are the mathematical computations and derivations simple and the thresholds for out-of-control beliefs easily derived later, but the beliefs are also logically and easily interpreted. If the shape of the distribution or its parameters are not known, one may use a non-parametric approach (see, for example, the cited references).
In equation (3), the sample mean of the observations on the jth variable up to iteration k appears. Note that the beliefs defined in equation (3) are evaluated equally for the same magnitude of positive and negative shifts around the process mean. Moreover, from equation (3) it is obvious that when a shift occurs in the mean of the ith variable, the corresponding logistic term will increase, and hence the belief defined in equation (2) will also increase. This is the basis on which the convergence of the proposed method is proven in the following section.
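Since equations (1)-(3) are not reproduced here, the following sketch only illustrates the mechanism described above: a logistic likelihood that grows with the standardized deviation of a variable's sample mean, plugged into a Bayes-style update over the mutually exclusive out-of-control subspaces. The function name and the exact form of the likelihood are assumptions, not the paper's equations.

```python
import numpy as np

def update_beliefs(beliefs, sample_means, mu0, sigma):
    """One Bayes-style update of the out-of-control beliefs over the m
    mutually exclusive subspaces.  The logistic likelihood is an assumed
    stand-in for equation (3): it only reproduces the behavior stated in
    the text (it grows with the standardized deviation of the sample
    mean and treats +/- shifts of equal magnitude identically)."""
    z = np.abs(sample_means - mu0) / sigma       # standardized deviations
    likelihood = 1.0 / (1.0 + np.exp(-z))        # logistic in |z|: symmetric in shift sign
    posterior = likelihood * beliefs             # Bayes numerator
    return posterior / posterior.sum()           # subspaces are collectively exhaustive

mu0, sigma = np.zeros(3), np.ones(3)
prior = np.full(3, 1.0 / 3.0)

up = update_beliefs(prior, np.array([+1.5, 0.0, 0.0]), mu0, sigma)
down = update_beliefs(prior, np.array([-1.5, 0.0, 0.0]), mu0, sigma)
print(up.argmax(), np.allclose(up, down))  # shifted variable dominates; sign does not matter
```

The symmetric treatment of positive and negative shifts follows directly from using the absolute standardized deviation inside the likelihood.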
4. The Convergence Property of the Proposed Method
To show that the proposed methodology converges to detect shifts in the mean of the process we prove the following theorem.
If a shift occurs in the mean of variable i, then the belief of variable i being out of control converges to one as the number of iterations goes to infinity.
When a shift occurs in the mean of variable i, the maximum of the limits on the beliefs, as the number of iterations goes to infinity, is:
To show this, suppose the magnitude of the shift in the mean of variable i is defined in terms of a multiple of its standard deviation. In other words, the variable follows a normal distribution with the shifted mean. If a vector is defined to contain all but the ith variable, then by conditioning the expectation on the ith variable we get
where the remaining terms are the conditional parameters and the expectation vector of the other variables.
Since probability is a continuous function, using the strong law of large numbers, equation (4) may be written as equation (5).
In order to prove the theorem, the following two lemmas are first proven.
Define the recursive sequence of equation (6), where the coefficients are distinct positive constants. Then, if the sequence values sum to one, there exists at most one non-zero limit.
Suppose there is more than one non-zero limit as n goes to infinity. Taking the limit as n goes to infinity, we have
Now, since the coefficients are distinct, equation (7) can be written as
In other words, two of the distinct coefficients would have to be equal, which is a contradiction.
The sequence converges to 1 for i = g and converges to 0 for i ≠ g, where g is the index of the maximum coefficient.
From equation (7) and Lemma 1, the limit is non-zero for only one index i. We will show that this index being different from g leads to a contradiction. By equation (5), since the coefficient of index g is the maximum, we will have
This is a contradiction because
The next step is to prove the convergence property of the proposed method. Taking the limit on both sides of equation (2), we will have
Then, by Lemmas 1 and 2, the convergence of the belief of the shifted variable to one is concluded.
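The core of Lemmas 1 and 2, as read from the text, is that a sequence of beliefs that is repeatedly scaled by fixed, distinct positive constants and renormalized must concentrate all of its mass on the index with the largest constant. A minimal numerical illustration (the constants are arbitrary assumed values):

```python
import numpy as np

# Numerical illustration of the Lemma 1/2 mechanism: beliefs are scaled
# by fixed, distinct positive constants a_i and renormalized each step,
# so the index with the largest constant absorbs all of the mass.
a = np.array([1.1, 1.7, 0.8, 1.4])   # distinct positive constants (assumed values)
x = np.full(4, 0.25)                  # initial beliefs summing to one

for n in range(200):
    x = a * x
    x /= x.sum()

print(np.round(x, 6))  # mass concentrates on index 1, the maximum a_i
```

After n steps, x is proportional to a**n times the initial beliefs, so the normalized sequence tends to the indicator of the maximum constant, mirroring the limits stated in the lemmas.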
In order to better estimate the beliefs at any given stage of the decision-making process, one may collect more than one observation (n > 1) on the production process. In this way, a more accurate estimator of the process mean is reached, such that better results are obtained. The equations for n > 1 are the same as for n = 1, except that an average of the n observations in each iteration is used for estimating the beliefs. More precisely, if the sth observation of the ith quality characteristic in iteration k is defined, then equation (3) becomes
5. The Decision-making Process
In the proposed approach, at iteration k of the data-gathering process, the values of the beliefs are used as the basis for the decision-making process. Different strategies may be employed at this point to decide whether the process is out of control.
When the minimum belief is less than a threshold, an out-of-control signal is observed. The value of the threshold is determined such that the probabilities of type-one error in the proposed and the other control charts are identical.
When the maximum belief is more than a decision threshold, the process is out of control.
When another statistic defined on the beliefs is more than a threshold, an out-of-control signal occurs.
In this research, the first decision-making strategy is adopted, and simulation is used to determine the value of the threshold.
When an out-of-control signal is observed, in order to determine which other variable(s) is (are) causing the deterioration, the variable with the maximum belief is omitted and the process is continued on the other variable(s). However, the means and the variances of the other variable(s) first need to be updated; otherwise, one may jump to the wrong conclusions and identify the wrong variable. To see this, consider a production process involving three quality characteristics that follow a trivariate normal distribution with a given mean vector and covariance matrix. Assume a shift has occurred in the mean of variable one, a shift has occurred in the mean of variable two, and no shift has occurred in the mean of variable three. Applying the proposed approach, one first detects that the mean of variable one is out of control. If variable one is omitted, then, using the conditional distribution, the means of variables two and three are obtained. If the means of the second and the third variables are not updated, i.e., both means remain as they initially were (in this case zeros), then using equation (5) one will have:
Note that if the belief of variable three exceeds that of variable two, then variable three would be falsely detected in the next stage of the algorithm. However, after omitting variable one, if the means and standard deviations of variables two and three are updated according to the conditional distribution, one will have:
which indicates that variable two is detected in the next stage of the algorithm.
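The updating step above is the standard conditional distribution of a multivariate normal: after fixing the omitted variable at its observed (shifted) level, the remaining means and covariances are recomputed. A sketch with a hypothetical covariance matrix (the paper's actual parameters are not reproduced here):

```python
import numpy as np

def condition_mvn(mu, cov, idx, value):
    """Mean vector and covariance matrix of the remaining variables of a
    multivariate normal after conditioning on variable `idx` = `value`.
    Standard result: mu_r + S_ro (value - mu_o) / S_oo  and
    S_rr - S_ro S_or / S_oo."""
    keep = [i for i in range(len(mu)) if i != idx]
    S_oo = cov[idx, idx]
    S_ro = cov[np.ix_(keep, [idx])]                      # column vector
    mu_r = mu[keep] + (S_ro[:, 0] / S_oo) * (value - mu[idx])
    cov_r = cov[np.ix_(keep, keep)] - S_ro @ S_ro.T / S_oo
    return mu_r, cov_r

# hypothetical trivariate in-control parameters (illustration only)
mu = np.zeros(3)
cov = np.array([[1.0, 0.8, 0.5],
                [0.8, 1.0, 0.3],
                [0.5, 0.3, 1.0]])
# condition on variable one being observed at a shifted level of 1.5
mu_new, cov_new = condition_mvn(mu, cov, 0, 1.5)
print(mu_new)  # updated means of variables two and three: [1.2, 0.75]
```

With these numbers, the mean of variable two moves to 0.8 * 1.5 = 1.2 and that of variable three to 0.5 * 1.5 = 0.75, which is exactly the kind of update that prevents the false detection described above.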
Hence, the detection and classification of mean shifts in a multivariate quality control environment at stage k can be achieved by employing the following algorithm:
Calculate the beliefs using equations (2) and (3).
Determine the maximum and minimum beliefs.
If the minimum belief is less than the threshold, then conclude that the variable corresponding to the maximum belief is out of control.
Omit the out-of-control variable and update the means and variances of other process variables.
Continue the process with remaining variables.
Go to step one.
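Putting steps one through three together for a toy bivariate process (independent variables, unit variances, an assumed logistic likelihood, and an arbitrary threshold; the real method also handles correlation and performs the omit-and-update step of steps four and five):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy run of steps 1-3: variable 0 starts with a 2-sigma mean shift, so
# its belief should grow until the minimum belief crosses the threshold.
# The logistic likelihood and the threshold 0.05 are illustrative
# assumptions, not the paper's calibrated values.
m, threshold = 2, 0.05
beliefs = np.full(m, 1.0 / m)
sums = np.zeros(m)
signal_at = None

for k in range(1, 401):
    x = rng.standard_normal(m) + np.array([2.0, 0.0])  # new observation vector
    sums += x
    z = np.abs(sums / k)                               # |running sample mean|
    like = 1.0 / (1.0 + np.exp(-z))                    # step 1: assumed likelihood
    beliefs = like * beliefs / (like * beliefs).sum()  # step 1: Bayes update
    if beliefs.min() < threshold:                      # steps 2-3: signal rule
        signal_at = k
        break

print(signal_at, beliefs.argmax())  # signals quickly and points to variable 0
```

In a full implementation, the loop would then omit variable 0, update the remaining means and variances via the conditional distribution, and continue monitoring the rest.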
6. Performance Evaluation
In this section, simulation studies are performed to evaluate the performance of the proposed methodology in terms of the in-control average run length (ARL0) criterion and to compare its out-of-control average run lengths (ARL1) with the ones from the MCUSUM, MEWMA, and Hotelling's methods. In addition, in all simulation cases, the probability of correct classification by the proposed method is estimated. In the first series of simulation studies, the quality characteristics are considered to follow a bivariate normal distribution, and in the second, a trivariate normal distribution is assumed.
6.1 Bivariate Normal Case
The threshold value is a decision limit that is necessary for the decision-making process in the proposed methodology. To determine it, pairs of independent uniform random variates are first generated and used to generate standard normal observations. Let the quality characteristics be the random variables X and Y with a known coefficient of correlation. At stage k of the data-gathering process, observations of X are generated with mean zero and variance one, and the corresponding observations of Y are obtained using the conditional distribution of Y given X.
Then, the sample means of the two variables are calculated. In the next step, using equations (2) and (3), the beliefs are updated. When the minimum belief is less than the threshold, an out-of-control signal is observed, and the variable corresponding to the maximum belief is the cause of the deterioration.
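The conditional construction just described can be sketched directly: with a standard normal X and independent noise Z, setting Y = ρX + sqrt(1 - ρ²)Z yields a standard normal Y whose correlation with X is exactly ρ. The correlation value below is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.6                      # assumed correlation for illustration
n = 200_000

x = rng.standard_normal(n)     # first characteristic, N(0, 1)
z = rng.standard_normal(n)     # independent noise
y = rho * x + np.sqrt(1 - rho**2) * z   # second characteristic, N(0, 1), corr(x, y) = rho

print(round(float(np.corrcoef(x, y)[0, 1]), 2))  # close to 0.6
```

This is the bivariate special case of sampling a correlated normal vector; the same idea extends to more variables by conditioning sequentially.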
The usual practice in comparing the performances of different multivariate quality control methods is to set their threshold values such that they have almost identical in-control average run lengths (ARL0). Essentially, the ARL is the average number of points that must be plotted before a point indicates an out-of-control condition. Then, under different scenarios of process-mean shifts, the method with the smallest out-of-control average run length (ARL1) is the best. In other words, one first fixes the type-I error of the method; then, the method with the smallest type-II error performs the best.
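The ARL0 idea can be made concrete with a univariate 3-sigma Shewhart chart: each in-control point signals with probability p = P(|Z| > 3) ≈ 0.0027, so the run length is geometric with mean 1/p ≈ 370. A quick Monte-Carlo check:

```python
import math
import numpy as np

# Monte-Carlo estimate of the in-control ARL of a univariate 3-sigma
# Shewhart chart.  Each in-control point falls outside the limits with
# probability p = P(|Z| > 3), so run lengths are geometric with mean 1/p.
p = math.erfc(3 / math.sqrt(2))          # two-sided tail probability, ~0.0027
rng = np.random.default_rng(0)
run_lengths = rng.geometric(p, size=20_000)

arl0 = run_lengths.mean()
print(round(arl0, 1))   # close to 1/p = 370.4
```

Fixing the thresholds of all competing charts so that their estimated ARL0 values agree, as done in the study below, is the multivariate analogue of fixing p here.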
In 20000 independent replications, for an intended ARL0 of 320, the threshold values of the MCUSUM, MEWMA, and Hotelling methods are calculated as 7.86, 2.34, and 11.69, with estimated ARL0 values of 323.01, 314.84, and 330.69, respectively. The threshold of the proposed method is selected such that its ARL0 becomes 320. Applying a trial-and-error approach gives an ARL0 of 318.30.
For the comparison study, the ARL1 values of the proposed method, as well as those of the MEWMA, MCUSUM, and Hotelling methods, are estimated using 20000 independent replications in each of the different scenarios of mean shifts. The shifts are given as multiples of the process standard deviations and are shown in the first column of Table (1). The second to fifth columns of Table (1) show the ARL1 values of the methods under consideration. Note that the numbers in the first parentheses of each cell in Table (1) are the standard deviations of the run length, and the numbers in the second parentheses are the estimated probabilities of detecting the correct variable causing the shift.
Although the MEWMA and MCUSUM have been the most powerful methods for detecting small shifts of the process means, the results of Table (1) show that the proposed method performs considerably better for all shifts of the process mean. Since the other methods are not able to classify the variables causing the out-of-control signals, the second parentheses do not apply to them. The results of Table (1) also show that the proposed method performs reasonably well in detecting the real cause of the process deterioration. In some instances, where the shifts in the means of both quality characteristics are large, the method does not work very well. This may be due to the correlation between the pairs of quality characteristics. Although these instances do not occur very often in practice, this may be a drawback of the proposed detection procedure and requires more research.
To make the proposed method even more efficient, the extension introduced in section four is suggested. The ARLs and the probabilities of correct classification obtained from a simulation study in which five observations (instead of one) are used for different mean-shift scenarios are shown in the second column of Table (2). In this study, the threshold is selected such that the ARL0 becomes 312.98. The results indicate that although the ARL1 values are better than those obtained from using a single observation, there is no improvement in instances where the probability of choosing the real cause of the mean shifts is low. Nevertheless, most of the time the proposed method detects the correct mean shift with a probability of one.
Table (1): The results of ARL1 study (bivariate normal)
In-control (ARL0) and Out-of-control Average Run Lengths (ARL1)
Table (2): The performance of the proposed method for n=5
6.2 Trivariate Normal Case
In this section, the proposed method is applied to a production process with three correlated quality characteristics following a trivariate normal distribution with a given mean vector and covariance matrix. To generate the observations, pairs of independent uniform random variates are first generated and transformed into standard normal variates. The three normal variables are then generated sequentially: the first variable is drawn from a normal distribution with mean 0 and variance 1; the second is generated by conditioning on the first, i.e., from a normal distribution with the corresponding conditional mean and variance; and the third is generated by conditioning on the first two in the same manner. Then, the sample means of the three quality characteristics are calculated. In the next step, the beliefs of the process being out-of-control are updated by equations (2) and (3). When the updated belief crosses the control threshold, an out-of-control signal is generated, and the variable corresponding to the maximum belief is identified as the cause of the deterioration.
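The conditional-generation scheme can be sketched as follows. For concreteness, it assumes the equicorrelated covariance matrix with all pairwise correlations equal to 0.5 mentioned in the conclusion, for which X2 | X1 = x1 is N(0.5·x1, 0.75) and X3 | X1 = x1, X2 = x2 is N((x1 + x2)/3, 2/3); the exact parameters of the paper's example may differ.

```python
import numpy as np

RHO = 0.5  # assumed common correlation between the three characteristics

def draw_trivariate(size, rng):
    """Draw trivariate standard-normal vectors by sequential conditioning."""
    x1 = rng.standard_normal(size)
    # X2 | X1 = x1 ~ N(rho * x1, 1 - rho^2)
    x2 = RHO * x1 + np.sqrt(1 - RHO**2) * rng.standard_normal(size)
    # X3 | X1, X2 ~ N((x1 + x2)/3, 2/3) for the equicorrelated case rho = 0.5
    x3 = (x1 + x2) / 3 + np.sqrt(2 / 3) * rng.standard_normal(size)
    return np.column_stack([x1, x2, x3])

X = draw_trivariate(100_000, np.random.default_rng(0))
print(np.corrcoef(X.T).round(2))  # all pairwise correlations close to 0.5
```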
In 20000 independent replications, for an intended ARL0 of 320, the threshold values of the MCUSUM, MEWMA, and Hotelling methods are estimated as 2.75, 5.95, and 12.65, yielding ARL0 values of 327.86, 318.73, and 307.67, respectively. The threshold of the proposed method is picked such that its ARL0 becomes 320; a trial-and-error search gives an ARL0 of 320.50.
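The trial-and-error search for a control limit achieving a target ARL0 can be automated by bisection. The sketch below does this for an illustrative chi-square chart with three characteristics, using the fact that for a Shewhart-type chart with independent samples the in-control run length is geometric with success probability P(chi²₃ > UCL); the chart, target, and bisection bounds are assumptions for illustration, not the paper's actual calibration.

```python
import numpy as np
from scipy.stats import chi2

def estimate_arl0(ucl, reps=20_000, seed=0):
    """Simulated in-control ARL of a chi^2_3 chart: the run length is
    geometric with success probability p = P(chi2_3 > ucl)."""
    p = chi2.sf(ucl, df=3)
    return np.random.default_rng(seed).geometric(p, size=reps).mean()

# Bisect on the control limit until the estimated ARL0 reaches the target.
lo, hi, target = 5.0, 25.0, 320.0
for _ in range(30):
    mid = (lo + hi) / 2
    if estimate_arl0(mid) < target:
        lo = mid
    else:
        hi = mid
ucl = (lo + hi) / 2
print(round(ucl, 2), round(estimate_arl0(ucl), 1))  # limit and achieved ARL0
```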
For the comparison study, the ARL1 values of the proposed method as well as those of the MEWMA, MCUSUM, and Hotelling methods are estimated using 20000 independent replications for each scenario of mean shifts. The shifts are given as multiples of the process standard deviations and are shown in the first column of Table (3). The second through fifth columns of Table (3) report the performance measures of the proposed procedure and the other methods. In this table, the numbers in the first row of each cell show the ARL1, the standard deviation of the run length, and the probability of detecting the correct variable, respectively (the correct-classification probability does not apply to the other methods). The numbers in the second row show the same three measures after omitting the first variable whose mean has shifted (in cases with two shifts). Finally, the numbers in the third row show the same measures after omitting the first two variables whose means have shifted (in cases with three shifts).
According to Table (3), the proposed method performs better than the available methods for all experimental shifts of the process means. In addition, although the performance of the proposed method in terms of the probability of selecting the correct variable is encouraging, more research remains to be conducted to improve the cause-detection procedure in some instances of process mean shifts.
7. Conclusion and Recommendations for Future Research
In this paper, a new approach to monitor and classify mean shifts in multivariate quality control environments is proposed. To this end, first, the advantages and disadvantages of different multivariate statistical quality control techniques were studied. Then, using Bayes' rule, a probability measure for the out-of-control state was developed and the corresponding convergence proof was given. Next, the extension to multiple variables was developed and a decision-making rule was presented. Furthermore, bivariate and trivariate examples were presented to demonstrate that the proposed methodology improves both the detection of out-of-control variables and the out-of-control average run lengths.
While the simulated examples assume a correlation of 0.5 between variables, future research may consider cases with different correlation values. In addition, more mean vectors and covariance matrices should be examined for further performance analysis. Moreover, other functions may be considered for updating the beliefs.
Table (3): The comparison of the proposed approach and the MEWMA, MCUSUM, and Hotelling methods for three variables