Strategy of Experimentation


Introduction


Experiments are performed today in almost all fields of research and in many manufacturing industries to increase understanding and knowledge of various manufacturing processes. In manufacturing companies, experiments are often conducted as a series of trials or tests that produce measurable outcomes. For continuous improvement in product or process quality, it is very important to understand process behaviour, the amount of variability, and its effect on the process. In an engineering environment, experiments are frequently conducted to explore, estimate, or confirm. Exploration means understanding the data from the process; estimation means determining the effects of process variables or factors on the output performance characteristic; and confirmation implies verifying the results predicted from the experiment.

Experimentation plays a fundamental role in product realization activities such as new product design and formulation, manufacturing process development, and process improvement. Thus, the objective in many cases is to develop a process that is minimally affected by external sources of variability.

In manufacturing processes it is required to explore the relationship between the input process variables and the output performance characteristics. For example, in a metal cutting operation, cutting speed, feed rate, type of coolant, depth of cut, etc. can be treated as input variables and surface finish of the finished part can be considered as an output performance characteristic.

An experiment deliberately imposes a treatment on a group of objects in order to observe the response. This differs from an observational study, which consists of gathering and analyzing data without altering existing conditions. The validity of an experiment is directly affected by its construction and execution, and thus attention to experimental design is extremely important.

As an example of an experiment, suppose that a metallurgical engineer is interested in studying the effect of two different hardening processes, oil quenching and saltwater quenching, on an aluminum alloy. The objective here is to determine which quenching solution produces the maximum hardness for this particular alloy. In this case, a well-designed experiment is important because the results and conclusions that can be drawn from the experiment depend to a large extent on the manner in which the data were collected.

In general, experiments are used to study the performance of processes and systems.

A process can be visualized as a combination of operations, machines, methods, people, and other resources that transforms some input into an output with one or more observable response variables.

Some of the process variables and material properties are controllable whereas other variables are uncontrollable.

The objectives of the experiment may include the following:

1) To determine which variables have the most influence on the response y.

2) To determine where to set the significant x's so that y is almost always near the desired value.

3) To determine where to set the significant x's so that variability in y is small and the effects of uncontrollable variables are minimized.

This general approach to planning and conducting the experiment is called strategy of experimentation.

Experimental Design Fundamentals

Experimental design is the process of planning and conducting experiments and analysing the resulting data so that valid and objective conclusions are obtained. An experiment basically involves inducing a response to determine the effect of certain controlled parameters, often called factors: for example, the cutting tool, the temperature setting of an oven, or the amount of a certain additive. Factors may be quantitative or qualitative. For quantitative factors, the range of values, their measurement techniques, and the levels at which they are to be controlled must be decided.

In the terminology of experimental design, a treatment is a particular combination of factor levels whose effect on the response variable is of interest. Treatments are applied to experimental units by level, where level means amount or magnitude. An experimental unit is the quantity of material or the set of objects to which one trial of a single treatment is applied. A sampling unit, on the other hand, is the part of the experimental unit on which the treatment effect is measured. Experimental error is the variation among experimental units that are exposed to the same treatment; this variability is due to uncontrollable factors, or noise factors.

The specific questions that the experiment is intended to answer should be clearly identified before carrying out the experiment. The known or likely sources of variability in the experimental units must also be identified, since one of the main aims of a designed experiment is to reduce the effect of these sources of variability on the answers to the questions of interest. That is, an experiment is designed to improve the precision of our answers.

Frequently, more than one factor is considered in experimental analysis. In that case, experiments should not involve changing one variable at a time: one-variable-at-a-time experiments are not cost effective, and they cannot detect interactions between two or more factors. Interactions exist when the nature of the relationship between the response variable and a certain factor is influenced by the level of some other factor.

Features Of Experimentation

The basic features of experimentation include replication, randomization and blocking.

Replication is the repetition of one or more test points under the same conditions, i.e. repeating some of the test points in the test design. This repeated testing builds confidence in the test results and permits us to assess their statistical significance. Replication serves another essential function: if the sample mean is used to estimate the effect of a factor, replication increases precision by reducing the standard deviation of the mean.
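As a rough illustration of this precision gain, the following sketch (assuming numpy is available; the population mean and standard deviation are invented for illustration) simulates how the standard deviation of the sample mean shrinks as roughly 1/sqrt(n) when replicates are added:

```python
import numpy as np

# Simulate 10,000 experiments, each averaging n replicates drawn from a
# process with true standard deviation 4. The spread of the mean falls
# as roughly 1/sqrt(n), which is the precision gain from replication.
rng = np.random.default_rng(42)
true_sd = 4.0
for n in (1, 3, 10):
    means = rng.normal(loc=50.0, scale=true_sd, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:2d}: sd of mean = {means.std():.2f} "
          f"(theory: {true_sd / np.sqrt(n):.2f})")
```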

Randomization means running the test points in a random order, as opposed to running them systematically. Running tests in random order avoids the confounding of effects that can take place when tests are run in a regular order. For example, if pressure is one of the controlled design variables, it is best not to run all the tests at a given pressure at the same time: if all test points at a given pressure are run together, the effects of time can be confounded, or mixed up, with the effects of pressure. Randomization averages out the effects of such uncontrolled factors.

The third feature of experimentation is blocking. Blocking is the deliberate screening out of the effects of variables that are considered to influence the test results. Blocking allows treatments to be compared under nearly equal conditions because the blocks consist of uniform units.

Randomized, Block And Latin Square Designs

Completely Randomized Design

A completely randomized design is the simplest and least restrictive design. In a completely randomized design, the objects are randomly assigned to groups, with each unit having an equal chance of being assigned to any treatment. A standard method for assigning subjects to treatment groups is to label each subject and then use a table of random numbers to select from the labeled subjects.

This design has many advantages. First, any number of treatments or replications can be used, and different treatments need not be replicated the same number of times, making the design flexible. When all treatments are replicated an equal number of times the design is balanced; otherwise it is unbalanced. Another advantage is that it provides the largest possible number of degrees of freedom for the experimental error, which ensures a more precise estimate of the experimental error.

One disadvantage of this design is that its precision may be low if the experimental units are not uniform. This problem is addressed by blocking, i.e. grouping similar, homogeneous units.

The layout of an experiment is the actual placement of the treatments on the experimental units, which may pertain to time, space, or type of material. Suppose we have k treatments and the experimental material is divided into n experimental units. We then assign the k treatments at random to the n experimental units in such a way that treatment τi (i = 1, 2, …, k) is applied ri times, with Σ ri = n. When each treatment is applied the same number of times, r1 = r2 = … = rk = r and Σ ri = rk = n. Usually, each treatment is replicated an equal number of times.

An example of the experimental layout for a completely randomized (CR) design using four treatments A, B, C, and D, each replicated three times, is given below:

D  A  B  C
D  B  D  A
A  C  C  B
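A layout like this can be generated programmatically. The sketch below (a minimal illustration assuming numpy; the seed and treatment names are arbitrary) assigns k = 4 treatments, each replicated r = 3 times, to the 12 experimental units in random order:

```python
import numpy as np

# Each of the 4 treatments appears r = 3 times; the 12 labels are then
# assigned to the experimental units in a single random permutation.
treatments, r = ["A", "B", "C", "D"], 3
units = np.repeat(treatments, r)
layout = np.random.default_rng(7).permutation(units)
print(layout.reshape(3, 4))   # one possible completely randomized layout
```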

Randomized Block Design

Blocking involves grouping experimental units that have similar effects on the response variable. In this type of design, the experimental subjects are divided into homogeneous blocks before the treatments are randomly assigned within each block.

For example, if we have p treatments and r blocks in a randomized complete block design, each of the p treatments is applied once in each block, resulting in a total of rp experimental units. Thus r blocks, each consisting of p units, are chosen such that the units within blocks are homogeneous. Next, the p treatments are randomly assigned to the units within each block, with each treatment occurring exactly once per block. The blocks serve as replications for the treatments, and the interaction between treatments and blocks is assumed to be negligible.

This design has several advantages. First, it increases precision by removing a source of variation from the experimental error, and it can accommodate any number of treatments and blocks. One disadvantage is that missing data can require complex calculations. Also, the number of degrees of freedom for the experimental error is not as large as in a completely randomized design: with r blocks, there are r − 1 fewer degrees of freedom for error.

The layout below shows an example with four treatments, A, B, C, and D, and four blocks, I, II, III, and IV; hence p = 4 and r = 4. Within each block, the four treatments are randomly assigned to the experimental units.

Block I:   A  C  B  D
Block II:  C  D  B  A
Block III: D  B  A  C
Block IV:  C  A  D  B
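Randomization within blocks can be sketched the same way. The snippet below (again assuming numpy; the block labels and seed are illustrative) gives each block one independent random permutation of the four treatments:

```python
import numpy as np

# Each block receives every treatment exactly once, in an order
# randomized independently of the other blocks.
treatments = ["A", "B", "C", "D"]
rng = np.random.default_rng(1)
for block in ("I", "II", "III", "IV"):
    print(f"Block {block:>3}:", rng.permutation(treatments))
```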

Latin Square Design

The Latin square design controls two sources of variation by blocking on two variables. The number of groups or blocks for each of the two blocking variables is equal to the number of treatments. The treatments are arranged in rows and columns such that each row and each column contains every treatment exactly once.

The advantages of this design are that variation can be controlled in two directions and that efficiency increases compared with the randomized complete block design (RCBD). Latin squares also handle situations with several nuisance factors that either cannot be combined into a single factor or should be kept separate. In addition, this design permits experiments with a comparatively small number of runs.

The drawbacks of this design are that the number of treatments must equal the number of replicates, and that the experimental error is expected to increase with the size of the square. Moreover, small squares provide few degrees of freedom for the experimental error, and the interactions between rows and columns, rows and treatments, and columns and treatments cannot be evaluated. The design for four treatments is shown below.

B  A  C  D
A  C  D  B
D  B  A  C
C  D  B  A
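One simple way to construct such a square is to start from a cyclic arrangement and then shuffle its rows and columns, which preserves the Latin property. A minimal sketch, assuming numpy:

```python
import numpy as np

# Build a cyclic Latin square, then permute its rows and columns; the
# result still contains every treatment once per row and per column.
t = np.array(["A", "B", "C", "D"])
n = len(t)
square = np.array([np.roll(t, -i) for i in range(n)])
rng = np.random.default_rng(3)
square = square[rng.permutation(n)][:, rng.permutation(n)]
print(square)
```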

The Analysis Of Variance

Terminology Used In ANOVA

The analysis of variance (ANOVA) is an important technique for analyzing the effect of categorical factors on a response. An ANOVA decomposes the variability in the response variable among the different factors. Depending on the type of analysis, it may be important to determine which factors have a significant effect on the response and/or how much of the variability in the response is attributable to each factor.

The various terminologies used for the analysis of variance are as stated below.

Sample - the set of observations measured at a level of X. If X is continuous, the sample consists of all the measurements of Y on X. Here, Y is the response or dependent variable and X is an independent variable, i.e. a treatment, factor, or effect.

Degrees of freedom - an estimate of the number of independent pieces of information in a particular statistical test or experiment. Degrees of freedom for a sample are defined as df = k − 1, where k is the number of scores in the sample.

Variance - in a normally distributed population, variance is expressed as the average of N squared deviations from the mean. When estimated from a sample, it is computed as the sum of squares divided by k − 1 rather than by k.

Sum of squares - the squared distance between each data point and the sample mean, summed over all N data points. The squared deviations express variation in a form that can be separated into distinct components which add up to the total variation, for example a component of variation between samples and a component of variation within samples.

Mean sum of squares - obtained by dividing the corresponding sum of squares by the appropriate number of degrees of freedom, which is one less than the number of observations in each source of variation.

Error - the amount by which an observed variate differs from the value predicted by the model. Errors, or residuals, are the portions of scores not accounted for by the analysis. In the analysis of variance, errors are assumed to be independent of each other and normally distributed about the sample means; they are also assumed to be equally distributed for each sample.

F-statistic - the ratio of the mean squares for treatment to the mean squares for error. It is calculated in the analysis of variance and reflects the significance of the hypothesis that Y depends on X. The F-ratio is therefore always presented with two degrees of freedom: one used to compute the numerator MS[X], and one the denominator MS[Error]. It tells us how much more of the variation in Y is explained by X than is due to random, unexplained variation, where a large ratio indicates a significant effect of X. The observed F-ratio is linked by a rather complex equation to the exact probability of a true null hypothesis, i.e. that the ratio equals unity, but standard tables can be used to determine whether the observed F-ratio indicates a significant relationship.

Significance - the probability of rejecting a null hypothesis that is in reality true. A significance value of P = 0.05 is usually taken to mark a tolerable boundary. A larger F-ratio corresponds to a smaller probability that the null hypothesis is true.

For example, F(3, 25) = 3.12, P < 0.05 means that the variation between samples is 3.12 times larger than the variation within samples, and we can have greater than 95% confidence in an effect of the factor on the response. For a regression, the fitted line takes the form y = B0 + B1·x, and 95% confidence intervals for the slope can be obtained.
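The tail probability behind such a statement can be checked numerically. The sketch below (assuming scipy is available) evaluates the upper-tail probability of the quoted F(3, 25) = 3.12 statistic:

```python
from scipy import stats

# Upper-tail probability of F = 3.12 with 3 and 25 degrees of freedom;
# it falls just below the conventional 0.05 boundary.
p = stats.f.sf(3.12, dfn=3, dfd=25)
print(f"P = {p:.3f}")   # about 0.044, i.e. P < 0.05
```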

One Factor ANOVA

One-factor ANOVA is of the type Y = X. It is an analysis of variance that tests the hypothesis that variation in the response variable Y can be separated among the different levels of a single explanatory variable X. If X is a continuous variable, the analysis is equivalent to a linear regression.
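A minimal one-factor ANOVA can be run as sketched below (assuming scipy; the three samples are invented for illustration):

```python
from scipy import stats

# Response Y measured at three levels of a single factor X; the F-test
# asks whether the level means differ more than within-level variation.
y_level1 = [12.1, 11.8, 12.5, 12.0]
y_level2 = [13.0, 13.4, 12.9, 13.2]
y_level3 = [11.5, 11.9, 11.2, 11.6]
f_stat, p_value = stats.f_oneway(y_level1, y_level2, y_level3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```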

The cases involving two or more factors are discussed in the next chapter.

Introduction To Factorial Designs

Definitions And Principles

The study of the effects of two or more factors in an experimental design is of great importance. A factorial design means that in each complete trial or replication of the experiment, all possible combinations of the levels of the factors are investigated. For example, if there are a levels of factor A and b levels of factor B, each replicate includes all ab treatment combinations.

Factorial experiments offer the ability to estimate the interaction effects between factors which is not possible with the one variable at a time approach.

Simple effect - A simple effect expresses the differences among the means of one independent variable at a fixed level of the other independent variable.

Main effect - the main effect expresses the difference among the means for an individual independent variable averaged over the levels of the other independent variables making up the factorial design. Main effects are most easily interpreted when interaction is absent.

Interaction effects - a comparison among the simple effects of the component experiments, i.e. of the differences in the simple effects, is called the analysis of interaction effects.

Interaction is present when the simple effects of one independent variable differ across the levels of the other independent variable; if the outcomes of the different component experiments within each set are the same, interaction is absent.

The Advantages And Disadvantages

Factorial experiments play an important role in exploratory studies to determine which factors are important. By conducting experiments for a factor at several levels of other factors, the inferences from a factorial experiment are valid over a wide range of conditions. Also, a factorial design is necessary when interactions are present to avoid ambiguous conclusions.

A disadvantage of factorial experiments is the exponential increase in experiment size as the number of factors and/or their number of levels increases; the cost of conducting the experiments can become unreasonable. Also, higher-order interactions can be difficult to interpret.

ANOVA For Two Factor Factorial Experiment

This is of the type Y = X1 + X2 + X1·X2. It is a test of the hypothesis that variation in Y can be explained by one or both of the variables X1 and X2. If X1 and X2 are categorical and Y has been measured only once in each combination of their levels, the interaction effect X1·X2 cannot be estimated. A significant interaction term implies that the effect of X1 is modulated by X2. If one of the explanatory variables is continuous, the analysis is equivalent to a linear regression with one line for each level of the categorical variable: different intercepts signify a significant effect of the categorical variable, whereas different slopes signify a significant interaction effect with the continuous variable.

The ANOVA table for a two-factor factorial design (not reproduced here) partitions the total variation into the main effects A and B, the interaction effect AB, and error.
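As an illustration, a two-factor ANOVA of this form can be computed as sketched below (assuming pandas and statsmodels; the factor levels and responses are invented). The output table lists the two main effects and their interaction:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Two categorical factors, three replicates per level combination, so
# the X1:X2 interaction is estimable.
df = pd.DataFrame({
    "X1": ["lo", "lo", "hi", "hi"] * 3,
    "X2": ["lo", "hi", "lo", "hi"] * 3,
    "Y":  [10, 14, 12, 22, 11, 15, 13, 21, 9, 14, 12, 23],
})
model = ols("Y ~ C(X1) * C(X2)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```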

The 2^k Factorial Design

Need For 2^k Factorial Experiments

Experimentation first involves determining the more important factors (in terms of their impact on the response variables) while taking into account the number of factors that can feasibly be handled. Next, the desirable levels of the selected factors are identified. Finally, the relationships between the factor levels, the corresponding responses, and the physical and economic constraints that are imposed may be of interest. Although different types of designs may be used at each of these stages, multifactor experiments are usually employed. The 2^k factorial design is one such multifactor experiment: it involves k factors, each at two levels, so the total number of treatments is 2^k. Thus, with two factors each at two levels, there are 2^2 = 4 treatments.

The level of each factor is thought of as either low or high. The notation (1) is used to indicate the treatment combination for which all factors are at the low level. In many cases, the low level signifies the absence of the factor and the high level its presence.

Role Of Contrasts

In factorial experiments involving multiple treatments, one often needs to test a hypothesis about a linear combination of the treatment means, or to partition the total sum of squares of all treatments into component sums of squares for certain treatment comparisons. A contrast is helpful in such situations. A contrast of treatment means is a linear combination of the means such that the sum of the coefficients of the linear combination is equal to zero.
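For illustration, the sketch below (assuming numpy; the treatment means and coefficients are invented) checks the zero-sum condition and evaluates a contrast:

```python
import numpy as np

# A linear combination of treatment means is a contrast only if its
# coefficients sum to zero.
means = np.array([40.0, 46.0, 44.0, 50.0])   # treatment means
coeffs = np.array([-1, 1, -1, 1])            # coefficients sum to zero
assert coeffs.sum() == 0
print(coeffs @ means)                        # estimated contrast value: 12.0
```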

An Example

An engineer is interested in the effects of cutting speed (A), tool geometry (B), and cutting angle (C) on the life (in hours) of a machine tool. Two levels of each factor are chosen and three replicates of a 2^3 factorial design are run.

(a) Estimate the factor effects. Which effects appear to be large?

(b) Perform an analysis of variance.

(c) Write down a regression model for predicting tool life (in hours).

(d) Based on the analysis of the main effects and interactions, which levels of A, B, and C would you recommend?

Data: the response is tool life in hours. (The data table is not reproduced here.)

Solution: Number of treatments = 8

Number of replications for each treatment, n = 3

Main effect A = (contrast A)/(4n) = 4/12 = 0.33

Main effect B = (contrast B)/(4n) = 136/12 = 11.33

Main effect C = (contrast C)/(4n) = 82/12 = 6.83

Two-factor interaction effect AB = (contrast AB)/(4n) = -20/12 = -1.67

Two-factor interaction effect BC = (contrast BC)/(4n) = -34/12 = -2.83

Two-factor interaction effect AC = (contrast AC)/(4n) = -106/12 = -8.83

Three-factor interaction effect ABC = (contrast ABC)/(4n) = -26/12 = -2.17

Now, sum of squares = (contrast)^2/(8n), where 8n = 24.

Thus SS_A = 4^2/24 = 0.67

SS_B = 136^2/24 = 770.67

SS_C = 82^2/24 = 280.17

SS_AB = (-20)^2/24 = 16.67

SS_BC = (-34)^2/24 = 48.17

SS_AC = (-106)^2/24 = 468.17

SS_ABC = (-26)^2/24 = 28.17

SS_T = 2095.33 (computed from the raw data)

SS_Error = SS_T - (SS_A + SS_B + SS_C + SS_AB + SS_BC + SS_AC + SS_ABC) = 482.67

SS_Model = 1612.67
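The arithmetic above can be reproduced directly from the quoted contrasts. A minimal plain-Python sketch, using effect = contrast/(2^(k-1)·n) and SS = contrast^2/(2^k·n) with k = 3 and n = 3:

```python
# Effects and sums of squares from the quoted contrasts for the 2^3
# design with n = 3 replicates: 2^(k-1)*n = 12 and 2^k*n = 24.
n, k = 3, 3
contrasts = {"A": 4, "B": 136, "C": 82,
             "AB": -20, "BC": -34, "AC": -106, "ABC": -26}
for name, c in contrasts.items():
    effect = c / (2 ** (k - 1) * n)
    ss = c ** 2 / (2 ** k * n)
    print(f"{name:>3}: effect = {effect:6.2f}, SS = {ss:7.2f}")
```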

Analysis of variance (ANOVA) table for the selected factorial model:

Source    Sum of Squares    DoF    Mean SS    F Value    Prob > F
Model          1612.67        7     230.38      7.64      0.0040
A                 0.67        1       0.67      0.022     0.8837
B               770.67        1     770.67     25.55      0.0010
C               280.17        1     280.17      9.29      0.0077
AB               16.67        1      16.67      0.55      0.4681
AC              468.17        1     468.17     15.52      0.0012
BC               48.17        1      48.17      1.60      0.2245
ABC              28.17        1      28.17      0.93      0.3483
Error           482.67       16      30.17
Total          2095.33       23

Result: The F-test for the "Model" source tests the significance of the overall model; that is, whether any of the effects A, B, C, or their interactions, alone or in combination, is important.

Thus we see that the significant effects are:

1) Main effect B

2) Main effect C

3) Two-factor interaction effect AC

The model as a whole is also significant, since there is only a negligible chance that an F-value this large would occur due to noise.

Regression analysis:

B0 = grand average = 40.8333

B1 = (factor effect of A)/2 = 0.33/2 = 0.165

B2 = (factor effect of B)/2 = 11.33/2 = 5.665

B3 = (factor effect of C)/2 = 6.83/2 = 3.4167

B13 = (factor effect of AC)/2 = -8.83/2 = -4.4167

Thus, in coded units, y = 40.8333 + 0.165 x1 + 5.665 x2 + 3.4167 x3 - 4.4167 x1 x3
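The fitted model can be used to predict tool life at any coded setting. In the sketch below, the recommended setting (A low, B high, C high, which exploits the negative AC interaction) is an inference from the significant effects rather than a value taken from the original data:

```python
# Fitted regression model in coded units (-1 = low level, +1 = high).
def tool_life(x1, x2, x3):
    return 40.8333 + 0.165 * x1 + 5.665 * x2 + 3.4167 * x3 - 4.4167 * x1 * x3

# B and C high; A low exploits the negative AC interaction.
print(f"{tool_life(-1, +1, +1):.2f} hours")   # about 54.17
```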

Confounding

The experimental error variance increases with block size. For instance, for 6 factors with 2 levels each, the number of treatments is 2^6 = 64, so a complete replication of the treatments in one block requires 64 experimental units, and it is difficult to obtain that many homogeneous units per block. To increase the precision of the experiment by blocking, the block size must be kept as small as possible. Confounding is therefore a design technique for arranging a complete factorial experiment in blocks, where the block size is smaller than the number of treatment combinations in one replicate.

The Taguchi Method

Taguchi Philosophy

Taguchi's approach to the design of experiments is based on two principles. The first is that a limited set of experiments is performed to determine the sensitivity of the quality measures to various controllable and uncontrollable parameters; the second is that the product and process are designed so that the sensitivity of the quality measures to noise is minimized.

As discussed earlier, a product or process is said to be robust when it exhibits the ability to perform on target, i.e. with minimum output performance variation, in the presence of varying input or noise. Taguchi's techniques reduce variation by reducing the sensitivity of the engineering design to the sources of variation rather than by controlling those sources, and are very cost-effective for improving product quality. Orthogonal arrays are adopted as an experimental design tool to greatly reduce the number of experiments needed. With these orthogonal arrays, we can determine the influence of each variable under study on both the mean result and the variation from it, and the signal-to-noise ratio is used to represent the variation impacts.

Three Phases Of Design Process

Taguchi divides the design process into three phases, which determine the target values and tolerances for the relevant parameters of the product and the process: system design, parameter design, and tolerance design.

System design: it consists of creating a prototype of the product that meets the functional requirements, and of the process that will build it, using scientific and engineering principles and experience.

Parameter design: it focuses on finding the optimal settings of the product and process parameters in order to minimize performance variability. Off-line quality control takes place during parameter design, in which the parameters are set so that the design is insensitive to variation.

Tolerance design: this step sets a range of admissible values, known as tolerances, around the target values of the control parameters determined in parameter design. On-line quality control focuses on keeping the production process within the tolerance design and on reducing variation. Tolerances that are too tight increase manufacturing costs, while tolerances that are too wide increase performance variation, which in turn increases the customer's loss. The main objective is thus to find the optimal trade-off between these two costs.

Signal To Noise Ratio (S/N)

A production process includes variation that comes from two kinds of factors:

1. Control factors: factors over which designers have control, for example parameter set points.

2. Noise factors: factors over which designers have no control.

Thus, to determine the effectiveness of a design, the impact of the design parameters on the output quality characteristic must be evaluated. The term signal is defined as the average value of the characteristic; it represents the desirable component, which should be close to a specified target value. The term noise represents the undesirable component and is a measure of the variability of the output characteristic, which should be as small as possible. A combination of these two performance measures is known as the signal-to-noise ratio (S/N). It is given by S/N = -10 log10(V), where V is the variance within each run. A performance measure should have the property that when it is maximised, the expected cost is minimised. This form of the S/N ratio represents a nominal-is-best situation and evaluates only the variance of the performance. The S/N ratio consolidates several repetitions of an experiment into one value which reflects the amount of variation present.
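For a single experimental run with repeated readings, this variance-only S/N ratio can be computed as sketched below (assuming numpy; the readings are invented):

```python
import numpy as np

# Variance-only (nominal-is-best) signal-to-noise ratio for the
# repeated readings of one run: S/N = -10 * log10(V).
run = np.array([40.2, 41.1, 39.8, 40.5])   # repetitions of a single run
v = run.var(ddof=1)                        # sample variance V
sn = -10 * np.log10(v)
print(f"V = {v:.3f}, S/N = {sn:.2f} dB")   # smaller V gives a larger S/N
```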

Experimental Design Using Taguchi Method

The following steps are used in this approach for the development of robust design guidelines:

1. Selection of controllable design parameters.

2. Selection of uncontrollable (noise) parameters.

3. Selection of quality (output) parameter.

4. Design of experiments.

5. Simulation of experiments.

6. Statistical analysis of results.

7. Development of robust design guidelines.

In this method, the matrix that specifies the settings of the controllable factors (design parameters) for each run is called the inner array, and the matrix that specifies the settings of the uncontrollable (noise) factors is called the outer array; these are the design and noise matrices, respectively. After selecting the design and noise factors, a series of experiments is run to determine the optimal settings of the design parameters with the help of an orthogonal array, a matrix of numbers in which each row represents the levels or states of the chosen factors and each column represents a specific factor whose effect on the response variable is of interest.

In the L9(3^4) array, the columns are mutually orthogonal: for any pair of columns, all combinations of factor levels occur, and they occur an equal number of times. Here there are four parameters, A, B, C, and D, each at three levels. This is called an L9 design, the 9 indicating the nine rows, configurations, or prototypes to be tested; the specific test settings for each experimental evaluation are given in the corresponding row of the array. Thus L9 means that nine experiments are carried out to study four variables at three levels. The number of columns of an array is the maximum number of parameters that can be studied with it. Note that this design reduces 81 (3^4) possible configurations to 9 experimental evaluations. The savings are greater for larger arrays: for example, with an L27 array, 13 parameters can be studied at 3 levels by running only 27 experiments instead of 1,594,323 (3^13).
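The orthogonality property quoted above can be verified directly. The sketch below hard-codes the standard L9(3^4) array and checks that every pair of columns contains each of the nine level combinations exactly once (assuming numpy):

```python
from itertools import combinations
import numpy as np

# The standard L9(3^4) array: 9 runs (rows) for four factors A-D
# (columns), each at levels 1..3.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
for i, j in combinations(range(4), 2):
    pairs = set(zip(L9[:, i], L9[:, j]))
    assert len(pairs) == 9     # all 3x3 level combinations occur once
print("every pair of columns is orthogonal")
```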

To decide which orthogonal array to use, the first step is to determine the degrees of freedom needed to estimate all the main effects and the important interaction effects. After the orthogonal array has been chosen, the remaining experimental procedure is the same as for factorial design and analysis.

Advantages And Disadvantages

The Taguchi method can thus reduce research and development costs by improving the efficiency of generating information needed to design systems that are insensitive to usage conditions, manufacturing variation, and deterioration of parts. As a result, development time can be shortened significantly; and important design parameters affecting operation, performance, and cost can be identified. It allows for the identification of key parameters that have the most effect on the performance characteristic value so that further experimentation on these parameters can be performed and the parameters that have little effect can be ignored. Furthermore, the optimum choice of parameters can result in wider tolerances so that low cost components and production processes can be used. Thus, manufacturing and operations costs can also be greatly reduced.

The main disadvantage of the Taguchi method is that the results obtained are only relative and do not exactly indicate what parameter has the highest effect on the performance characteristic value.

Statistical Modeling

The Objective

Some of the main objectives of statistical modeling in design of experiments for industries are:

  • To determine the 'why' of product rankings
  • To analyze variable trade-offs
  • To find the optimum product
  • To interpret the significance of test outcomes
  • To reduce experimental variation

We may need to know the best, or optimum, product in a designed test. Often this best product is not one that was actually included in the test. Designing experiments using statistical modelling permits us to extrapolate the data and look for the best likely product within the test variable ranges.

Along with finding the best product or best operating conditions, we also wish to identify why this product was the best. The model permits us to plot graphs that show how the variables are correlated and what levels of the variables constitute the optimum product.

Trade-offs between levels of variables, especially measured responses, may be present; for example, a better-performing formulation may cost more because of an expensive constituent. The interrelationships between these variables can be determined using models.

Characteristics Of A Design Experiment

The main characteristics of a designed experiment include planned testing, in which the data analysis approach is determined before the test and the factors are varied simultaneously rather than one at a time. It also has a very scientific approach: a statistically designed test emphasizes proper planning before the test is conducted, the objectives must be clearly stated, and the data analysis should be outlined before the test starts.

Cost Savings Using DOE

The application of DOE reduces the amount of testing compared with testing without DOE, and thus saves money. These savings have been documented from a sample of tests and varied from Rs. 5 lacs to Rs. 28 lacs per test; the amount saved depends, of course, on the cost of testing.

Advantages Of Statistical Modelling

  • Analysis of variable trade-offs
  • Finding the optimum product
  • Assessing the significance of test results
  • Reducing experimental variation

Steps In Using DOE

The various steps in the design of experiments and statistical modelling are:

1. Planning the test for DOE.

2. Choosing the design variables and the test range of each.

3. Deciding which variables to hold constant.

4. Selecting the measured response variables to include.

5. Selecting a statistical test design.

6. Running the test.

7. Analyzing data by computer to develop models.

8. Interpreting the computer results.

9. Making recommendations.

10. Building verification products.

Advantages Of DOE

1. DOE removes the confounding of effects, where the design variable effects are mixed up. Confounding implies that product changes cannot be correlated with product characteristics.

2. DOE helps in handling experimental inaccuracy. Any data point may contain bad data, i.e. experimental error: the effects of variation in raw materials, test instruments, machine operators, etc.

3. It helps us find the important variables that need to be controlled.

4. It also helps us find the unimportant variables that do not need to be controlled.

5. DOE helps us to measure interactions, which is very significant.

Design Element

Introduction

This chapter deals with the modelling, simulation, and optimization of an industrial problem. In a pipe manufacturing industry, the effect of different factors on the impact strength of the pipe material is to be found. Using design of experiments, we must identify the significant factors and the factor settings that will maximize the impact strength.

The Need For Impact Testing

Impact strength is defined as the resistance of a material to rapidly applied loads. The purpose of impact testing in pipe industries is to measure an object's ability to resist high-rate loading; impact resistance can be one of the most difficult properties to quantify. In pipe manufacturing industries, impact testing is usually done to assess numerically the fundamental mechanical properties of ductility, malleability, toughness, etc.; to obtain force-deformation data on which the engineer can base design specifications; to detect surface or sub-surface defects in raw materials or processed parts; to check chemical composition; or to determine the suitability of a material for a particular application. The ability to quantify this property is a great advantage in product liability and safety. In addition to providing information not available from any other simple mechanical test, these tests are quick and inexpensive. The main objective of the impact test is to predict the likelihood of brittle fracture of a given material under impact loading; the data obtained are then employed for further engineering purposes.

The Process

The impact testing process involves a hammering action on the pipe material to determine how much mechanical energy is required for its failure. Impact is usually thought of in terms of two objects striking each other at high relative speed. The test involves measuring the energy consumed in breaking a notched specimen struck by a swinging pendulum. Pendulum impact machines consist of a base, a pendulum of either single-arm or "sectorial" design, and a striker rod (also called a hammer), whose geometry varies in accordance with the testing standard. The mass and the drop height determine the potential energy of the hammer. The difference between the height to which the hammer rises after breaking the specimen and the height to which it would have risen had no specimen been present is a measure of the energy required to break the specimen. This energy absorbed in fracture, expressed in joules (i.e. N·m), is the impact value of the specimen; a high impact value indicates a better ability to withstand shock than a low one.
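The energy balance described above amounts to the drop in the hammer's potential energy. A minimal sketch (the mass matches one of the hammer levels used later; the heights are invented for illustration):

```python
G = 9.81   # gravitational acceleration, m/s^2

def impact_energy(mass_kg, drop_m, rise_m):
    # Energy absorbed by the specimen = loss in potential energy of the
    # hammer between its release height and the height it rises to.
    return mass_kg * G * (drop_m - rise_m)   # joules (N*m)

# A 15.5 kg hammer dropped from 1.10 m that rises back to 0.65 m:
print(f"{impact_energy(15.5, 1.10, 0.65):.1f} J absorbed")   # ~68.4 J
```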

Selection Of Design Parameters

1) Specimen material: a material's ability to resist impact is often one of the determining factors in the service life of a part, or in the suitability of a designated material for a particular application. Impact testing of full-sized parts or structures is very difficult because of the magnitude of the force required to produce failure. An impact test measures the toughness of a material, that is, its ability to absorb energy (per unit volume) during plastic deformation. The specimen material is therefore selected as one of the design parameters.

2) Weight of the hammer (kg): the impact test involves measuring the energy consumed in breaking the specimen when struck by a swinging pendulum. The mass and the drop height determine the potential energy of the hammer, and each pendulum unit has provisions for adding extra weight. Weight is thus a major deciding factor for the impact values.

3) Height of hammer (inch): all forms of the impact test depend on the swinging pendulum. The height from which it drops determines its energy at the lowest point, where it collides with the specimen, breaking the latter and continuing onward in its swing. The height to which the pendulum rises depends on the energy remaining after the specimen is broken. Height is therefore the third design parameter chosen.

Selection Of Levels

Three levels are selected for each of the design parameters mentioned above.

1) Specimen material: in the proposed work, EN-8, EN-24, and EN-31 steel alloys are selected as specimen materials. These alloy steels are cheap and widely available. EN-8 is used for moderately stressed pipes; EN-24 is preferred for heat-treated components having large sections and subject to exacting requirements, such as aircraft and heavy-vehicle crankshafts, connecting rods, and gear shafts; EN-31 is highly resistant to wear and can be used for components subjected to severe abrasion, wear, or high surface loading, such as ball and roller bearings, punches, and dies.

2) Weight and height of hammer: three levels, 14.5, 15.5, and 16.5 kg, are selected for the weight of the hammer, and 46.9, 43.5, and 40.8 inches for the height, based on the required quality objectives.

Methodology

After selecting the design parameters and the corresponding levels, the choice of experimental design method is very important. The methodology involves establishing a series of experiments to find the combination of parameter settings that has the greatest influence on performance and the least variation from the design target. This depends on the degrees of freedom, an important value because it determines the minimum number of treatment conditions. The degrees of freedom for each factor are given by df = k − 1, where k is the number of levels; a three-level parameter therefore has 2 degrees of freedom, and the total for the three main effects (work material A, weight of hammer B, and height of hammer C) is 6. Three interactions between these factors are also to be studied. The minimum required degrees of freedom for the experiment is the sum of all factor and interaction degrees of freedom.

The choice of orthogonal array depends on the number of factors and interactions of interest and on the number of levels of the factors. Each two-factor interaction of three-level factors contributes (3 − 1) × (3 − 1) = 4 degrees of freedom, so the total DOF for the three interactions is 12 and the total DOF for the experiment is 6 + 12 = 18, as shown in the sketch below. Since the selected orthogonal array must provide at least 18 degrees of freedom, the most suitable array for this experiment is the L27. Using an L27 array, a maximum of 13 parameters can be studied at 3 levels by running only 27 experiments instead of 1,594,323 (3^13).
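The degree-of-freedom bookkeeping can be written out explicitly (a plain-Python sketch of the counts quoted above):

```python
# Degree-of-freedom bookkeeping for three 3-level factors and their
# three two-factor interactions.
levels, n_factors, n_interactions = 3, 3, 3
df_main = n_factors * (levels - 1)                       # 3 * 2 = 6
df_inter = n_interactions * (levels - 1) * (levels - 1)  # 3 * 4 = 12
df_required = df_main + df_inter                         # 18
print(df_main, df_inter, df_required)   # L27 offers 27 - 1 = 26 DOF >= 18
```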

In this experiment, the assignment of factors is carried out using the DOE++ software. The standard L27 orthogonal array suggested by DOE++ using Taguchi linear graphs is used for the experiment. (The array itself is not reproduced here.)

The Taguchi design-of-experiments approach is used to optimize the process. The various input parameters are taken under experimental investigation and a model is prepared. (The table of factor properties for the experiment is not reproduced here.)

The output impact strength value is selected as the response parameter for this work. The effect of variation in the input process parameters on the response is studied, and the experimental data are analyzed by the Taguchi method to find the optimum condition and the significance of each factor.

Experimental Analysis And Simulation

The factors (material, weight, and height) are varied at three levels for the impact test. The measured response is the impact energy absorbed by the material. Analysis of the results is carried out analytically as well as graphically. All statistical calculations and plots were generated by the DOE++ software. ANOVA plots of the experimental data were created to assess the significance of each factor for each response; α = 0.05 was used for all statistical calculations.

The ANOVA results for the impact test (table not reproduced here) give F values for all the main effects and their interactions. Material is the most significant factor, and weight (F = 21.59) and height (F = 41.72) are also significant; all the interactions are found to be insignificant.

Result

The effect probability plot for the impact value distinguishes the significant from the non-significant factors. Factor A is the most significant factor, and factors B and C are found to affect the response at the 0.05 significance level.

The main effect plot and the term and interaction effect plots (not reproduced here) show the maximum impact value for EN-24 and the minimum for EN-31. The maximum impact value was also observed at the lower height levels.

Conclusion

An experiment deliberately imposes a treatment on a group of objects in order to observe the response, which differs from an observational study, consisting of collecting and analyzing data without changing the existing conditions. Attention to experimental design is extremely important, since the validity of an experiment is directly affected by its construction and execution.

The study of the effects of two or more factors in an experimental design is of great importance. Factorial experiments offer the ability to estimate the interaction effects between factors which is not possible with the one variable at a time approach.

Also, designing of experiments by using statistical modelling allows the experimenter to extrapolate the data and look for the best likely product within the test variable ranges. A statistically designed test gives emphasis to planning before the test is run.

Design of experiments also removes the confounding of effects, where the effects of design variables are mixed up; confounding means that product changes cannot be correlated with product characteristics. DOE helps find the important variables that need to be controlled, and also the unimportant variables that do not need to be controlled. It also helps to measure interactions, which is very important.

Finally, DOE helps the experimenter handle experimental error. This is a very important aspect of the testing of manufactured industrial components and of quality control. An experiment carried out with proper design and statistical analysis helps reduce manufacturing costs, leading to profits, and also improves the quality of the products, thereby increasing their market demand.

