Investigating Statistical Techniques Used in Process Control


Statistical Process Control (SPC) is unique in that its power lies in the ability to monitor both the process centre and the variation about that centre, which is done by collecting data from samples at various points within the process. A process may contain variations that affect the quality of the end product or service; these variations can be detected and corrected, reducing waste as well as the likelihood that problems will be passed on to the end user or customer. SPC emphasises early detection and prevention of problems, which gives it a distinct advantage over other quality methods, such as inspection, that apply resources to detecting and correcting problems in the end product or service.

As well as reducing waste, SPC can help reduce the time required to produce the product or service from end to end. This is partly due to a diminished likelihood that the final product will have to be reworked, but it may also result from using SPC data to identify bottlenecks, wait times, and other sources of delay within the process. Process cycle-time reductions, coupled with improvements in yield, have made SPC a valuable tool from both a cost-reduction and a customer-satisfaction standpoint.

Common Cause Variation.

There are two significant kinds of variation in manufacturing processes, and both have a bearing on the variation of the final product. The first is known as common (natural) cause variation and may arise from small variations in temperature, the properties of raw materials, the strength of an electrical current, and so on. This variation is small, the observed values generally lying quite close to the average value, and its pattern of variation forms the bell-shaped normal distribution curve.


A breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams and some slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear), this distribution can change: as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap. By observing the sequence of events in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept into the process and correct the problem. [2] 

SPC indicates when manufacturing processes show normal or abnormal variation, verifying what action should be taken in a process; it also indicates when no action should be taken. An example is a person who would like to maintain a constant body weight and takes weight measurements weekly. A person who does not understand SPC concepts might start dieting every time his or her weight increased, or eat more every time his or her weight decreased. This type of action could be harmful and possibly generate even more variation in body weight. SPC would account for normal weight variation and better indicate when the person is in fact gaining or losing weight. [3] 


Initially, one starts with an amount of data from a manufacturing process for a specific metric, e.g. the mass, length, or surface energy of a widget. One example is a nanoparticle manufacturing process in which two parameters are key: particle mean diameter and surface area. With the existing data, one calculates the sample mean and sample standard deviation. The upper control limit of the process is set to the mean plus three standard deviations, and the lower control limit to the mean minus three standard deviations. The action taken depends on the statistic and where each run lands on the SPC chart, in order to control but not tamper with the process. The sensitivity of the monitoring can be defined by the Westinghouse (Western Electric) rules used. The only way to reduce natural variation is through improvement to the process technology. [4] 
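The steps above can be sketched in a few lines of Python. The diameter readings below are made-up illustrative values, not data from this essay:

```python
import statistics

# Hypothetical particle mean-diameter readings (nm) from an existing
# process run -- assumed values for illustration only.
diameters = [49.8, 50.2, 50.1, 49.9, 50.0, 50.3, 49.7, 50.1, 49.9, 50.0]

mean = statistics.mean(diameters)
sigma = statistics.stdev(diameters)   # sample standard deviation

ucl = mean + 3 * sigma  # upper control limit: mean plus three sigma
lcl = mean - 3 * sigma  # lower control limit: mean minus three sigma

print(f"mean={mean:.3f}  sigma={sigma:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
```

Future samples falling outside [LCL, UCL], or forming runs flagged by the Western Electric rules, would prompt investigation; points inside the limits call for no action.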

If a product fails or a process performs inadequately, it is desirable to attempt to discover what is wrong so that the situation may be rectified. By reacting to the problem, however, one is allowing that process or product failure to control the person's behaviour. It is more effective to understand and control the process rather than allowing the process to control the person. [5] 


These graphs illustrate how data can be tabulated in a misleading way. Using a histogram, one can see how the measurements stack up relative to the specification limits (USL and LSL denote the upper and lower specification limits, respectively). What can one conclude about the process just by looking at this graph? While it is tempting to conclude that things are normal, and that the process is approximately centred between the specification limits, the data are not viewed over time, so one cannot draw conclusions about the process. The second graph depicts exactly the same data as the histogram above, but collected over time.

Viewing the data over time provides a more accurate description than the histogram. The histogram misleads one into believing the information comes from a single distribution. This process has been shifting over time, however, so no single distribution actually describes the data; the distribution continues to change as time passes.

That is where Statistical Process Control steps in: SPC is a set of techniques that provides a superior understanding of how a process behaves. In order to implement SPC, data must be collected over time.

When monitoring data, you expect variation even when nothing out of the ordinary occurs. Common-cause sources of variation represent the inherent, natural level of variation affecting a process. When data fluctuate constantly, knowledge of SPC becomes an added value, and the use of SPC answers most questions.

To understand whether the information varies in an expected or unexpected way, one must understand the expected degree of system variation. Once one knows the expected level of variability, one can identify whether the observations exceed the expected amount.

With SPC, the idea is to view a stable process long enough to understand the level of inherent variation. Using that information, one can calculate limits of expected variation, otherwise known as control limits. To be valuable, these control limits must be calculated from data originating from a stable process.

When a statistic exceeds a control limit or unexpected patterns are observed, then there is evidence that a special-cause source of variation has entered the system. These special-cause sources of variation, otherwise known as assignable causes, result in unexpected changes in the process. This observation doesn't imply that the parts under production have exceeded specification limits--only that something unusual has entered the system.


Until a product exceeds specification limits, it is difficult to notice differences, if they are noticed at all. Unfortunately, by then it is sometimes too late and bad products have already been produced. The company has invested time and resources producing an inferior product, further eroding profitability. Moreover, discovering the root causes is more difficult at that point.

SPC not only provides the opportunity to identify unusual behaviour before unacceptable products are produced, it allows one to determine when something unexpected occurred. Sadly, most companies don't take advantage of SPC's benefits.

Many manufacturers don't require evidence of process stability from their suppliers (or sometimes from themselves). Without that information, we not only discover the problem too late, but it's more difficult to determine the cause.

When the process distribution changes, most likely no one, including the manufacturer and the customer, knows what to expect. That is not to say that every change is bad. Some unexpected changes might represent an improvement, but unless we appreciate that an improvement was made, it is difficult to sustain that progress.

Many personnel who work in manufacturing still don't believe that processes change. That is a dangerous attitude, because processes can change due to a variety of factors, including changes in supplied parts, temperature, humidity, worn tools, or changes in the personnel themselves.

X-bar and R charts are used to monitor change. The X-bar chart helps to detect changes in the process average, while the R chart is designed to determine changes in process variability. When properly used, these charts can be effective indicators of process behaviour, as well as a tool to predict quality improvement or decline. Unfortunately, most American manufacturers don't use these charts correctly.

To understand, one can look at the mechanics behind the X-bar chart. It is commonly written as an X with a bar over it--a symbol that denotes an average or mean.

Typically, a production operator will take a few measurements over a period of time. The operator averages those measurements and places the results on an X-bar chart. Then the range--the maximum data point minus the minimum data point--is computed and placed on the R chart.

Control limits are computed. These describe the expected amount of variation among the averages (on the X-bar chart) and ranges (on the R chart) as long as the process remains in control. By design, control limits should capture about 99.7% of the dots on the chart when the process is stable.
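A minimal sketch of these mechanics, assuming subgroups of five and the standard Shewhart constants for n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the subgroup measurements are invented for illustration:

```python
# Invented subgroups of five measurements each (e.g. a critical dimension).
subgroups = [
    [80.1, 79.8, 80.4, 79.9, 80.2],
    [80.0, 80.3, 79.7, 80.1, 79.9],
    [79.9, 80.2, 80.0, 80.1, 79.8],
]

# Standard control-chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(g) / len(g) for g in subgroups]    # subgroup averages (X-bar chart)
ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges (R chart)

xbarbar = sum(xbars) / len(xbars)               # grand mean (centre line, X-bar)
rbar = sum(ranges) / len(ranges)                # average range (centre line, R)

# 3-sigma control limits: these bracket ~99.7% of points when stable.
ucl_x = xbarbar + A2 * rbar
lcl_x = xbarbar - A2 * rbar
ucl_r = D4 * rbar
lcl_r = D3 * rbar

print(f"X-bar limits: ({lcl_x:.3f}, {ucl_x:.3f})  R limits: ({lcl_r:.3f}, {ucl_r:.3f})")
```

In practice many more than three subgroups would be collected before trusting the limits; three are used here only to keep the sketch short.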

While some quality professionals believe that control charts indicate the ability to meet specifications (process capability), that's completely untrue. Control charts were invented to serve one purpose--to identify process changes as quickly as possible after the change occurs. They do nothing more and nothing less.

This graphic uses an X to illustrate the individual measurements for a few of the averages. The six individual measurements that create the average are widely scattered. Averages always possess less variation than individual measurements.

So why look at averages if they are so misleading that they can't indicate whether parts conform to specification limits? There are two compelling reasons to do so.

First, and most importantly, averages detect process shifts more quickly than individual measurements--which is the reason for implementing SPC. (This assumes the appropriate sample size has been determined.) Second, averages from a stable process tend to follow a normal distribution, so it's easy to estimate control limits for averages. Contrary to popular belief, individual measurements typically do not follow a normal distribution--even when a process is stable.

Specification limits do not belong on control charts. Processes that are in control do not necessarily produce parts within specification. Moreover, production of "acceptable" parts doesn't imply that processes are stable.

There are numerous fundamental errors typically made in applying SPC, and this article has touched on only a few. All the mistakes result in misjudging reality.

Additional common mistakes include improper sampling methods. The method employed to take physical samples is critical. Furthermore, the type of control chart used depends on the type of sampling scheme used. There are instances where rational sampling (explained below) is required and instances where we can violate that method as long as appropriate SPC methods accompany the sampling plan.

A rational sample is a group of measurements that come from a single distribution. An example of a violation of rational sampling might be measurements taken from several cavities from an injection moulding machine, when at least one cavity differs from the rest. The measurements then do not come from one distribution. There are efficient techniques available to handle situations where rational samples aren't possible, but traditional X-bar and R charts are not effective.

Inappropriate sample sizes are nearly always used. While a sample size of five may detect some process changes, it will not detect others very quickly. The most appropriate sample size depends on the application and the amount of change deemed critical to detect. The consequence of using an inappropriate sample size in SPC is the inability to detect important process shifts. While a sample size (subgroup size) of three detects large shifts, a sample size of more than three is necessary to detect smaller shifts quickly.

These are sample X-bar and R charts. The average of averages and the average range are solid bold lines, while control limits (UCL, LCL) are indicated by dashed lines. The top graph shows that averages are expected to randomly fluctuate between 77.7 and 82.3 about 99.7% of the time. When the process is stable, ranges should randomly fluctuate between 0 and 8.5.

Individual measurements should only be used for control charting in certain situations, and when using individual measurements, several issues such as the chart's ability to detect important changes must be evaluated. Often, charts such as Cumulative Sum (CUSUM) and Exponentially Weighted Moving Averages (EWMA) are effective on individual measurements because they don't depend heavily on the distribution of individuals, and they can be designed with varying degrees of sensitivity to detect important changes. CUSUM and EWMA charts are most valuable when the sample size is restricted to one or two. Due to an inadequate sample size, important shifts cannot be detected with traditional X-bar and R charts. But CUSUM and EWMA charts transform the individual readings so that the required process shifts are detected.
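A minimal EWMA chart sketch on individual readings; the smoothing weight, target, sigma, and readings below are all assumed values for illustration:

```python
# EWMA (Exponentially Weighted Moving Average) chart for sample size 1.
lam = 0.2          # smoothing weight (a typical choice; an assumption here)
target = 10.0      # assumed process target
sigma = 0.1        # assumed process standard deviation

# Steady-state 3-sigma EWMA control limits.
half_width = 3 * sigma * (lam / (2 - lam)) ** 0.5
ucl, lcl = target + half_width, target - half_width

# Invented individual readings; the last few drift upward.
readings = [10.02, 9.98, 10.05, 10.01, 10.22, 10.31, 10.28]

z = target  # the EWMA statistic starts at the target
signals = []
for i, x in enumerate(readings):
    z = lam * x + (1 - lam) * z   # weight new reading against history
    if not (lcl <= z <= ucl):
        signals.append(i)          # record out-of-control signals

print(f"limits=({lcl:.3f}, {ucl:.3f})  signals at indices {signals}")
```

Because each plotted point pools information from all prior readings, the small sustained shift at the end triggers a signal that a plain individuals chart with 3-sigma limits would likely miss.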

Taking measurements over time helps manufacturers better understand whether their processes are stable, or in control. But what exactly does that mean? Company ABC produces spring clips. Knowing that the radius of a spring clip is an important dimension, the manufacturer measures the radius of spring clips produced over a few minutes. If the radius data were collected on another day, you shouldn't expect to see identical radius values, but the distribution of the data should remain nearly the same. When you spot the same distribution repeatedly over time, the process is said to be in control. The radius data above represent distribution #1; when monitoring the process at future points in time, the same distribution pattern emerges, so this process is stable, or "in control."

It's risky to assume that these radius dimensions will always follow the same distribution. Numerous factors such as material properties, machine settings, and environmental conditions will affect dimensions. Yet many manufacturers believe that being in control isn't important as long as the product meets its specification limits. That belief has negative consequences for American businesses striving to achieve quality and efficiency.

Here's what happens when a product characteristic meets specifications, but isn't in control. The top graph represents a product that meets both Upper Specification Limit (USL) and Lower Specification Limit (LSL) but is not stable. Once customers receive units that follow a particular distribution, they expect to see the same distribution again. Customers like consistency. If they suddenly receive a different distribution, they are usually disappointed with the perceived lack of quality. At least, they do not expect the change. The problem becomes magnified if these distributions represent an important dimension for pieces that will be mated together. Units from one distribution will fit together differently than units from a different distribution, so you can't expect products to perform consistently. The varying performance means less-predictable failure times and less-predictable customer responses. Compare distribution #3 to distribution #2. Which do you think customers would prefer? Knowing they prefer consistency, #3 is the clear choice, even though both depict products that fall within specification limits.

In many applications, variation should be decomposed so that we understand the variation within a sample (range within charts) and the variation between samples (range between charts). Essentially, there are at least two significantly different sources of variation in the system. Many common production methods introduce multiple sources of variation, and traditional X-bar and R charts are misleading in these cases.

Examine this X-bar chart. Both the control and specification limits are shown. If the process remains stable, would you expect that the characteristic being plotted will be within the specification limits most of the time? Many would say yes, but this is an erroneous conclusion, and the consequences may be severe. Remember, you're not looking at individual measurements being plotted on the chart. You are looking at averages. Control limits suggest that if the process is stable the averages will remain within those limits 99.7% of the time. But don't forget that specification limits apply to individual measurements--not averages.

Proper application of SPC aids in our understanding of system variation and indicates when that variation increases or decreases. This knowledge puts you--not your system--in control [6] .


Sampling is often used for quality evaluation of large lots. Statistical sampling methods that minimize subjective elements are best. There are two types of statistical sampling methods: attribute and variable.

Attribute Sampling.

Attribute plans test the sample, rank each part as GOOD or BAD, and decide whether to accept or reject the lot based upon the number of BAD units. Attribute plans are simple and easy to execute, but do not detect marginal results and are not predictive, ignoring the history of past lots.

Attribute sampling is usually based on Poisson statistics, where the sample size and acceptance number (the maximum number of BAD units in the sample) are specified. This generates an Operating Characteristic (OC) curve: a plot of the probability of lot acceptance (or confidence level) versus the percent defective in the total lot from which the sample was randomly selected.

Acceptable Quality Level (AQL) sampling fixes the probability of lot acceptance at 95%, automatically giving the AQL percent defective from the OC plot. For example, AQL=2% means that, on average, 95 of 100 lots containing 2% defective parts will be accepted, while only 5 of the 100 will be rejected. AQL sampling was implemented shortly after World War I.
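The relationship between sample size, acceptance number, and the OC curve can be sketched numerically with the binomial model; the plan below (n = 50, c = 2) is a hypothetical example, not one taken from this essay:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of accepting the lot: P(defectives in sample <= c),
    using the binomial model with lot fraction defective p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical single-sampling plan: inspect n=50 parts, accept if at most c=2 are bad.
n, c = 50, 2
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"lot {p:.0%} defective -> P(accept) = {prob_accept(n, c, p):.3f}")
```

Sweeping p and plotting P(accept) traces the OC curve; the AQL is the p at which the curve crosses 0.95, and the LTPD the p at which it crosses 0.10.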

AQL sampling results in a high probability that a bad lot will be accepted, being favourable to manufacturers but not to end users. Consequently, Lot Tolerance Percent Defective (LTPD) sampling was introduced later for high reliability military and aerospace applications. LTPD sampling uses the same OC curve, but fixes the probability of lot acceptance at 10%. For example, LTPD=5% means that, on average, 10 of 100 lots containing 5% defective will be accepted while 90 of the 100 will be rejected. Since this is more favourable to end users, LTPD sampling is generally preferred where reliability is critical.

Its Advantages are-

There is no condition on the mathematical law of distribution of the variable inspected.

There is greater simplicity of processing the results of the sample.


Its Disadvantages are-

It is less effective than variables plans for the same sample size of n increments (the LQ, or limiting quality, is higher).

It is more costly than variables plans, because the sample collected requires more increments than a variables plan would require for the same efficacy.

A graphical method exists for designing attribute acceptance sampling plans.

It can be shown visually that the exact probability of lot acceptance determined by the hypergeometric distribution can be approximated by the binomial distribution. [7] 
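This approximation is easy to check numerically; the lot size, defective count, and sampling plan below are assumed values:

```python
from math import comb

def hypergeom_accept(N: int, D: int, n: int, c: int) -> float:
    """Exact P(at most c defectives in a sample of n drawn without
    replacement from a lot of N units containing D defectives)."""
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

def binom_accept(n: int, c: int, p: float) -> float:
    """Binomial approximation, with p = D / N."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Assumed numbers: lot of 1000 units, 2% defective; sample 50, accept on <= 2 bad.
N, D, n, c = 1000, 20, 50, 2
exact = hypergeom_accept(N, D, n, c)
approx = binom_accept(n, c, D / N)
print(f"hypergeometric={exact:.4f}  binomial={approx:.4f}")
```

The two values agree closely here because the sample is a small fraction of the lot; the approximation degrades as n approaches N.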

Variable Sampling.

Variables methods select and evaluate small samples, perhaps only 3 or 5 parts, on a regular predetermined schedule. Using Statistical Process Control (SPC) methods, their quality characteristics are evaluated, and the values of each sample are graphed using both the average value and range (variation) for each sample. After a given number of samples have been tested and the results plotted as a control chart, upper and lower control limits are calculated and used to determine whether the process is "in control" and predictable, and also the statistical probability that a part will be out of spec.

Attribute plans can be likened to ranking weather only as hot or cold, while variables methods examine the exact temperature and trends. The application of SPC methods decades ago allowed the semiconductor and automotive industries, which had used obsolete attribute methods, to survive the challenges of Japanese manufacturers who used SPC to deliver superior products.

SPC is best implemented by the manufacturer of mass produced parts, since it requires regular sampling and tracking of each individual production line. This is uneconomical for most end users who would have to station a knowledgeable representative at each plant. Media Sciences recommends hybrid methods whereby test results are ranked as acceptable, minor defects, major defects, and critical defects, each subject to a different LTPD. This approach is contained in specifications and test plans that Media Sciences can prepare for its clients.

When conducting tests, Media Sciences also provides a summary, or overview of test results, containing a list of major and minor positive variances as well as critical, major, and minor defects. Such product evaluations, when properly used, provide our customers with highly useful information that allows them to clearly differentiate between potential suppliers and to avoid unnecessary field failures when quality is regularly monitored.


Its Advantages are-

Variables plans are more effective than attributes plans for the same sample size of n increments (the LQ is lower); for the same AQL they are less expensive than attributes plans, because the sample collected requires fewer increments.


Its Disadvantages are-

They cannot be used in all cases, because to validate the calculation formulas the distribution of the inspected variable must follow, at least approximately, a normal law.


Sigma is a measure of variation.

De Feo, Joseph A, Barnard, William (2005)


The goal-post mentality holds that some parts are clearly made within specification while others are outside specification.

There is no relationship called out for Y = f(x), but the question that should be asked here is: what is the "real" difference between parts if one is just inside the spec and another is just outside the spec?

Misconceptions conveyed to the customer.

Put together, these two parts are very close and will probably function equally well, or poorly, when used by the customer. That is one of the reasons that people who use this traditional model will tend to ship the part at the spec limit (even if just outside the spec limits), because they think they can get a few more sales to the customer.


The best example is that of a garage and a car. The garage defines the specification limits and the car defines the output of the process. If the car is only a little bit smaller than the garage, you have to park it right in the middle of the garage if you want the car to fit. If the car is wider than the garage it will not fit. If the car is a lot smaller than the garage, it will fit and you have plenty of room on either side. If you have a process that is in control and with little variation, you should be able to park the car easily within the garage and thus meet customer requirements. Cpk tells you the relationship between the size of the car, the size of the garage and how far away from the middle of the garage you parked the car. [8] 


Three groups of quality gurus are known from the period 1945-1989:

the 'early Americans' who took the quality message to the Japanese

the Japanese who developed new concepts in response to the Americans' messages - simple tools, mass education, teamwork

the 'new wave of Western Gurus' who, following Japanese industrial success, increased quality awareness in the West

The early Americans

This group was effectively responsible for the amazing turn-around of Japanese industry after 1945, and for putting Japan on the path to quality leadership. Much of this change was because of the introduction of statistical quality control to Japan by the US Army from 1946 to 1950, as well as visits by three key US quality figures in the early 1950s - W Edwards Deming, Joseph M Juran and Armand V Feigenbaum.

The Japanese

The Japanese adopted, developed and changed the methodologies introduced by the Americans, and by the late 1950s had begun to develop clearly distinctive approaches suited to their own culture. The Japanese gurus featured in the 'Managing into the 90s' booklet emphasised mass education, the use of basic tools and teamwork, and had backgrounds in educational roles. The three Japanese gurus included were Dr Kaoru Ishikawa, Dr Genichi Taguchi and Shigeo Shingo.

The 'Western wave'

From a 1980s perspective, most of the increased awareness of quality in the West in recent years was connected to a new wave of gurus. The three gurus were Philip Crosby, Tom Peters and Claus Møller. [9] 

Statistical Techniques / Process Capability

W Edwards Deming (October 14, 1900 - December 20, 1993) was an American statistician, professor, author, lecturer, and consultant. Deming is widely credited with improving production in the United States during the Cold War, although he is perhaps best known for his work in Japan. There, from 1950 onward, he taught top management how to improve design (and thus service), product quality, testing and sales (the last through global markets) [1] through various methods, including the application of statistical methods.

Deming made a significant contribution to Japan's later reputation for innovative high-quality products and its economic power. He is regarded as having had more impact upon Japanese manufacturing and business than any other individual not of Japanese heritage. Despite being considered something of a hero in Japan, he was only just beginning to win widespread recognition in the U.S. at the time of his death.[2]

Deming's advocacy of the Plan-Do-Check-Act cycle [10] , his 14 Points, and the Seven Deadly Diseases have had tremendous influence outside of manufacturing and have been applied in other arenas, such as the relatively new field of sales process engineering.[25]


Deming offered fourteen key principles for management for transforming business effectiveness. The points were first presented in his book Out of the Crisis. (p. 23-24) [22]

Create constancy of purpose toward improvement of product and service, with the aim to become competitive and stay in business, and to provide jobs.

Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change.

Cease dependence on inspection to achieve quality. Eliminate the need for massive inspection by building quality into the product in the first place.

End the practice of awarding business on the basis of price tag. Instead, minimize total cost. Move towards a single supplier for any one item, on a long-term relationship of loyalty and trust.

Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.

Institute training on the job.

Institute leadership (see Point 12 and Ch. 8 of "Out of the Crisis"). The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers.

Drive out fear, so that everyone may work effectively for the company. (See Ch. 3 of "Out of the Crisis")

Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.

Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force.

a. Eliminate work standards (quotas) on the factory floor. Substitute leadership.

b. Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.

a. Remove barriers that rob the hourly worker of his right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality.

b. Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective (See Ch. 3 of "Out of the Crisis").

Institute a vigorous program of education and self-improvement.

Put everybody in the company to work to accomplish the transformation. The transformation is everybody's job.


Deming also described the Seven Deadly Diseases of management:

Lack of constancy of purpose

Emphasis on short-term profits

Evaluation by performance, merit rating, or annual review of performance

Mobility of management

Running a company on visible figures alone

Excessive medical costs

Excessive costs of warranty, fuelled by lawyers who work for contingency [11] 


[Table: for each sample number, the sample mean (X bar) and sample range (R), together with the grand mean (X bar bar) and the average range (R bar); the numerical entries were lost in extraction.]
X bar bar = 280.48, R bar = 4.95

To calculate the UCL/LCL for the mean and range charts (A2 = 0.577 and D4 = 2.114 for a sample size of 5):

UCL X = X bar bar + A2 R bar = 280.48 + (0.577 x 4.95) = 283.34

LCL X = X bar bar - A2 R bar = 280.48 - (0.577 x 4.95) = 277.62

UCL R = D4 R bar = 2.114 x 4.95 = 10.46

Charts for the sample means and ranges, showing control limits, appear below.




Sigma = R bar / d2 = 4.95 / 2.326 = 2.128

CP = customer tolerance / process tolerance = 10 / (6 x 2.128) = 0.783

CPK is the lesser of:

(285 - 280.48) / (3 x 2.128) = 0.708

(280.48 - 275) / (3 x 2.128) = 0.858

so CPK = 0.708.
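The arithmetic above can be checked in a few lines (constants for subgroup size 5; USL = 285 and LSL = 275, as used in the Cpk step):

```python
# Values taken from the worked example in the text.
xbarbar, rbar = 280.48, 4.95
A2, D4, d2 = 0.577, 2.114, 2.326   # standard constants for subgroup size 5
usl, lsl = 285.0, 275.0

ucl_x = xbarbar + A2 * rbar        # X-bar chart upper control limit
lcl_x = xbarbar - A2 * rbar        # X-bar chart lower control limit
ucl_r = D4 * rbar                  # R chart upper control limit

sigma = rbar / d2                  # process sigma estimated from R bar
cp = (usl - lsl) / (6 * sigma)     # inherent capability
cpk = min((usl - xbarbar) / (3 * sigma),
          (xbarbar - lsl) / (3 * sigma))   # worst-case capability

print(round(ucl_x, 2), round(lcl_x, 2), round(ucl_r, 2),
      round(sigma, 3), round(cp, 3), round(cpk, 3))
```

The results reproduce the figures in the text: UCL_X = 283.34, LCL_X = 277.62, UCL_R = 10.46, sigma = 2.128, Cp = 0.783 and Cpk = 0.708.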


Cp and Cpk are measurements of process capabilities. They are used in studies such as process capability and can be used to monitor a process similar to how X-Bar and Range charts are used.

Cp - Inherent Process Capability.

This is the ratio of the Upper Specification Limit minus the Lower Specification Limit to six sigma. It is denoted by the symbol Cp. A process must be designed with a Cp value that is higher than the required Cpk value. This compensates for the process not being perfectly centred within the specification limits.



Cp = (Upper Spec Limit - Lower Spec Limit) / (6 sigma actual)

Cpk - Process Capability.

This is the capability of the process expressed in relation to a worst case scenario view of the data. It is denoted by the symbol Cpk.

Use it to determine whether a process or service is within normal variation and is capable of meeting specifications. [12] 

Cpk = the lesser of:

(Upper Spec Limit - Mean) / (3 sigma actual)

(Mean - Lower Spec Limit) / (3 sigma actual)


[Table: second data set, listing the measurements for each sample number, with the range computed as highest minus lowest; the numerical entries were lost in extraction.]
(a) Graphs showing the mean averages and normality.

The graph below shows a normal pattern because of the shape of the curve. It is important to show that the data being plotted are normal; a rational assumption is that the information follows a normal curve, and there is a large degree of doubt if there is abnormality in the curve. Therefore, with such a normal resulting curve, the graph can be used, even with a small sample, to distinguish between information from a normal and an abnormal distribution.

Mean (X) and range (R) charts and comments.

UCL X = X bar bar + A2 R bar        UCL R = D4 R bar

LCL X = X bar bar - A2 R bar        LCL R = D3 R bar

[Table of control chart constants by sample size n; the entries were lost in extraction. For n = 5, the values used in this document are A2 = 0.577, D3 = 0, D4 = 2.114 and d2 = 2.326.]

UCL X = 10.125 + (0.577 x 0.537) = 10.43

LCL X = 10.125 - (0.577 x 0.537) = 9.82

(A2 = 0.577 for a sample size of 5, as in the first example.)

Calculate and comment on the process capability indices Cp and Cpk. What actions do you think you need to take before full production starts? (P3.1)

Sigma = R bar / d2 = 0.537 / 2.326 = 0.230

CP = customer tolerance / process tolerance = 1 / (6 x 0.230) = 0.725

CPK is the lesser of:

(10.5 - 10.125) / (3 x 0.230) = 0.543

(10.125 - 9.5) / (3 x 0.230) = 0.905

so CPK = 0.543.
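As a cross-check, recomputing with an unrounded sigma (the text rounds sigma to 0.230 before the Cp and Cpk steps) shifts the indices slightly:

```python
# Values taken from the second worked example in the text:
# X bar bar = 10.125, R bar = 0.537, d2 = 2.326, spec limits 9.5 and 10.5.
xbarbar, rbar, d2 = 10.125, 0.537, 2.326
usl, lsl = 10.5, 9.5

sigma = rbar / d2                          # unrounded sigma estimate
cp = (usl - lsl) / (6 * sigma)
cpk = min((usl - xbarbar) / (3 * sigma),
          (xbarbar - lsl) / (3 * sigma))   # lesser of the two one-sided ratios

print(round(sigma, 3), round(cp, 3), round(cpk, 3))
```

With full precision, sigma = 0.231, Cp = 0.722 and Cpk = 0.541; the conclusions (Cp and Cpk both well below 1) are unchanged.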

In this case the CPK is quite low, which is not good: the process is not very capable, and it is not centred very well. There might be a need to adjust the more deviating side by shifting the average; it is always easier to shift the mean than it is to reduce the standard deviation.



In this assignment I feel that I have achieved what I was set in the task. I have analysed several measurements provided within a set of samples. I have understood the terms sigma, Cp and Cpk and the impact they can have on a manufacturing process, especially on a large scale. These tools give the leaders of the production and manufacturing world the vision to see any anomalies in their processes or components.

Common cause variation arises from factors such as variation in temperature, while special cause variation arises from factors such as the wear and tear of production line machines.

I also learned about attribute sampling which provides greater simplicity of processing the results of the sample and variable sampling which is most suitable for evaluating small samples.

With the selected data I have been able to construct the necessary control charts and make a deduction as to the nature of the variables, good or bad. With these variables one can then establish the best way to use SPC. I have discovered the importance of the assignable causes of variation.

I explored the "goalpost mentality" and the misconception it conveys from suppliers to customers.

I then described in detail the three waves, one of the most influential being the Americans, their main focus and the gurus associated with them, such as Deming and Juran. Their contribution to quality is highly notable. In conclusion, I have equipped readers with the knowledge to apply relevant statistical techniques used in process quality control, and to evaluate a process against a given specification.


De Feo, Joseph A, Barnard, William (2005). JURAN Institute's Six Sigma Breakthrough and Beyond -Quality Performance Breakthrough Methods. New York, NY: McGraw-Hill Professional.

Gill.K, 2010, classroom notes, Henley College.

Manufacturing Engineering March 2005 Vol. 134 No. 3

Allise Wachs, President, Integral Concepts Inc. West Bloomfield, MI

Ng, Steve H. K. (2002). Acceptance Sampling Plans. America: NSDL.

Tennant, Geoff (2001). SIX SIGMA:SPC and TQM in Manufacturing and Services. Aldershot, UK: Gower Publishing,

Six Sigma, Energy Forum for Process Excellence, May 24-27, Texas: SSA & Co.
