The Sensitivity of the Machine Variables


This paper describes a machine-operation optimization strategy using a simulation-based parameter design (PD) approach under stochastic running conditions. The PD problem is given a Taguchi-based definition in which the control factors include operating patterns, machine operating hours, maintenance level, scheduled shutdowns, and product changeovers. The random factors are the machine random variables (RVs): cycle time (CT), time to repair (TTR), defect rate (DR), and time between failures (TBF). Machine performance is a complicated function of the control and random factors, and it is defined in terms of net productivity (NP), which is based on three key performance evaluators: reliability rate (RR), quality rate (QR), and gross throughput (GT). The definition of the resulting problem presents both optimization and modeling difficulties.

Limited mathematical modeling and the experimental design requirement complicate optimization, while the sensitivity of the machine random variables (RVs) to different settings of the machine operating parameters complicates modeling. To overcome these difficulties, combined empirical modeling and Monte Carlo simulation (MCS) are used to model the sensitive factors and to estimate NP under stochastic running conditions.


Optimization is the science of selecting the best of many possible decisions in a complex real-life environment [1]. Monte Carlo simulation (MCS) is one such method: a technique that uses pseudorandom or random numbers to solve a model. It is used for analyzing uncertainty propagation, where the main goal is to determine how lack of knowledge, random variation, or errors affect the performance and reliability of a system. In MCS the inputs are randomly generated from probability distributions to simulate the process of sampling from the actual population. The generated data can be represented as probability distributions, reliability predictions, tolerance zones, and confidence intervals.
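As a minimal sketch of the uncertainty-propagation idea described above (the model y = a · b and the two input distributions are illustrative assumptions, not taken from this paper):

```python
import random

def propagate(n_trials, seed=0):
    """Monte Carlo uncertainty propagation: sample the uncertain inputs,
    push each sample through the model y = a * b, and collect outputs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        a = rng.gauss(10.0, 1.0)    # input A ~ Normal(10, 1)
        b = rng.uniform(0.8, 1.2)   # input B ~ Uniform(0.8, 1.2)
        samples.append(a * b)
    mean = sum(samples) / n_trials
    sd = (sum((s - mean) ** 2 for s in samples) / (n_trials - 1)) ** 0.5
    return mean, sd

mean, sd = propagate(10000)
```

The sample mean and standard deviation summarize the output distribution induced by the random inputs, which is exactly the information (tolerance zones, confidence intervals) mentioned above.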

Fig. a. The principle of stochastic uncertainty propagation.

The term "Monte Carlo" was introduced by von Neumann and Ulam during World War II as a code word for secret work at Los Alamos; it was suggested by the gambling casino in the city of Monte Carlo, Monaco. The method was first applied to atomic-bomb-related problems [1,2]. Later, MCS was used to evaluate and solve complex multidimensional integrals, and it is also used to solve both stochastic and deterministic problems.

Monte Carlo simulation can be considered one of the most powerful techniques for analyzing and solving complex problems, and it appears in many fields, from river-basin modeling to radiation transport. As the complexity of the problems and the required computational effort have grown, the range of applications has broadened.

In this paper the Monte Carlo method is used to design a machine operation strategy. At present, safety margins in many businesses, especially in industry, are shrinking while marketplace competition increases, so companies focus on improving their production systems while keeping operational costs as low as possible and product quality high. The production system in this paper consists of a group of machines organized logically to facilitate the production of a specified product or a mix of products. Metal-removal machines such as drilling, milling, and turning machines play an important role in the performance of the production system [2]. The operation strategy influences machine performance through the selected operating parameters, i.e., operating patterns, machine operating hours, maintenance level, scheduled shutdowns, and product changeovers. As macro-level operational decisions, the operating parameters are complementary to the micro-level machining parameters, i.e., cutting speed, depth of cut, and feed rate [3]. The two levels are interdependent, and both are critical to the machine's performance indicators, i.e., reliability, throughput, and production quality. To improve such metrics, the parameters are set depending on a complex combination of process capability, the operation schedule, and the product's specifications [A].

Problem Definition

This section defines the problem structure: designing a machine operating strategy guided by Taguchi's PD approach [22], in which the factors influencing machine performance (Y) are categorized. Fig. 1 shows this approach.


Figure 1. P-diagram (machine M with production response Y).

The influencing factors can be classified into two types, control factors (C-factors) and noise (random) factors (X-factors), as shown in Fig. 2.

Figure 2. Influencing Factors

X-factors are random variables that vary over time, so NP, the response (Y) of the P-diagram, behaves stochastically. Many other factors and sub-factors contribute to the randomness of the X-factors. Both mathematical and empirical modeling can be used to model their impact on the machine's performance. For example, the randomness in the CT factor is affected by the operator's variability and by changes in the processing conditions.

In a machining operation, the CT depends on the machining time (t_m), loading time (t_l), and unloading time (t_u) for each part during the machining process:

CT = t_l + t_m + t_u ………… (1)

where t_l and t_u depend on the operator's variability, and t_m depends on the machining parameters of the turning process on a lathe machine, such as depth of cut (d), feed rate (f), cutting speed (N), part length (L), initial diameter (D_o), and final diameter (D_f):

t_m = L (D_o − D_f) / (2 d f N) ………… (2)

Therefore the variability of CT is related to the variation in the operator's efficiency and in the other running parameters, in terms of N, f, and d.
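The cycle-time relationship of Eq. (1) can be sketched as follows; the pass-based turning-time estimate and all numeric values are illustrative assumptions, not the paper's data:

```python
import math

def machining_time(part_len, d_init, d_final, depth, feed, speed_rpm):
    """Turning time t_m (minutes): passes needed to reduce d_init to d_final
    at depth-of-cut `depth`, each pass taking part_len / (feed * speed_rpm)."""
    passes = math.ceil((d_init - d_final) / (2.0 * depth))
    return passes * part_len / (feed * speed_rpm)

def cycle_time(t_l, t_m, t_u):
    """Eq. (1): CT = t_l + t_m + t_u."""
    return t_l + t_m + t_u

# illustrative part: L = 200 mm, Do = 50 mm, Df = 46 mm,
# d = 1 mm, f = 0.2 mm/rev, N = 500 rpm
t_m = machining_time(200.0, 50.0, 46.0, 1.0, 0.2, 500.0)
ct = cycle_time(t_l=0.5, t_m=t_m, t_u=0.5)
```

Here two roughing passes are needed, so the machining time dominates the cycle time, while loading and unloading contribute the operator-dependent portion.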

Machine failures, tool wear, tool breakage, and tool changes cause operation interruptions and stochastic failures, which lead to randomness in TBF and TTR, while the randomness in DR is related to the physical and chemical properties of the product and to the machining conditions.

We assume that the RVs are independent of each other over time. A Markov chain (X = X_n; n = 0, 1, …) represents a discrete stochastic process, while a Markov process represents a continuous one. Thus the X-factors are continuous RVs that can be modeled using Markov stochastic processes.

C-factors impact machine performance by influencing the occurrence of the events related to the machine RVs, as shown in Fig. 3.

Fig. 3. Influence diagram of the PD problem: the C-factors drive ∆X = ƒ(C), and the X-factors drive Y = ƒ(X).

The relationship between the C- and X-factors is defined by an empirical model, ∆X = ƒ(C), which estimates how sensitive the X-factors are to changes in the C-factors. Likewise, Y = ƒ(X) is used to propagate the randomness in the X-factors into the machine performance in terms of NP. The randomness in the X-factors affects the accuracy of Y = ƒ(X) and the sensitivity error (ε) in the empirical model ∆X = ƒ(C).

Solving the PD problem requires determining the optimum levels of the machine operating parameters so that the process response Y is maximized despite the randomness in the X-factors. This requires a ∆X = ƒ(C) sensitivity model, a stochastic model of the Y = ƒ(X) relationship giving machine performance in terms of NP, and a combinatorial optimization engine for searching the complex and large domain of the PD problem.

Solution Approach

To develop the empirical regression model of ∆X = ƒ(C), available data on the defined C- and X-factors are used. The regression model predicts changes in the X-factors in terms of ∆X, as shown in Fig. 4. A Monte Carlo simulation (MCS) model then uses the ∆X values to determine the machine performance (Y) under stochastic changes in the X-factors, and the MCS-based estimates of machine performance guide the algorithm's search toward the optimal operating PD. The solution approach can be summarized in the following steps:

Step 1: Develop a P-diagram defining the X-factors, the C-factors, and the response Y in terms of NP.

Step 2: Develop an empirical model of ∆X = ƒ (C) sensitivity relationship.

Step 3: Develop a Monte Carlo simulation (MCS) model of the Y = ƒ(X) stochastic relationship.

Step 4: Set the algorithm parameters.

Step 5: Run the simulation to obtain the optimal solution.


Fig. 4. Prototype of the solution approach: the empirical regression model ∆X = ƒ(C) supplies the X-factors to the MCS model Y = ƒ(X); the estimated performance Y feeds the optimizing algorithm, which returns the optimum C-factors.

2.1 Machine Performance as Net productivity (NP)

The machine performance combines several values: the reliability rate (RR), the machine gross throughput (GT), and the quality rate (QR). GT depends on the CT values and varies because a fixed CT cannot be sustained in practice under actual operating conditions, so the mean CT (MCT), in seconds, is used to determine the average machine GT in units per hour (UPH):

GT = 3600 / MCT ………… (3)

The machine performance is also influenced by varying factors such as machine failures and repair times, so the machine reliability rate (RR) is given by:

RR = MTBF / (MTBF + MTTR) ………… (4)

where MTBF is the mean time between failures and MTTR is the mean time to repair.

Finally, the machine performance is a function of the production quality, which can be expressed by the quality rate (QR); using the machine's mean defect rate (MDR), expressed as a fraction of output, QR can be determined through:

QR = 1 − MDR ………… (5)

Thus the machine's net productivity combines RR, GT, and QR, and is represented as below:

NP = GT × RR × QR ………… (6)

The NP is not obtainable analytically because of the stochastic machine operating conditions; hence a combination of MCS and the empirical model is used to determine a realistic NP.
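A minimal Monte Carlo sketch of the NP estimation, assuming the standard definitions GT = 3600/CT, RR = TBF/(TBF + TTR), and QR = 1 − DR; the sampling distributions and their parameters are illustrative assumptions, not the paper's data:

```python
import random

def estimate_np(n_trials, seed=1):
    """Estimate net productivity NP = GT * RR * QR by sampling the
    machine RVs from assumed distributions at every trial."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        ct = rng.gauss(100.0, 5.0)          # cycle time, seconds
        tbf = rng.expovariate(1 / 250.0)    # time between failures, hours
        ttr = rng.expovariate(1 / 15.0)     # time to repair, hours
        dr = rng.uniform(0.02, 0.06)        # defect rate, fraction
        gt = 3600.0 / ct                    # gross throughput, units/hour
        rr = tbf / (tbf + ttr)              # reliability rate
        qr = 1.0 - dr                       # quality rate
        total += gt * rr * qr
    return total / n_trials

np_est = estimate_np(20000)
```

Averaging over many trials propagates the randomness of the four RVs into a single realistic NP figure, rather than plugging in the means directly.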

2.2 Empirical Model for ∆X = ƒ(C)

Changing the C-factors perturbs the random X-factors, and this effect is captured by modeling the sensitivity amount (∆X). We assume that historical data are available for building the regression model, which is based on three C-factor levels (Low L; Medium M; High H), with the model response measured at each level combination in terms of the means of the X-factors (MCT, MTBF, MTTR, and MDR). To predict the changes ∆X, a separate linear regression model is developed for each response as a function of the C-factors, represented by the generic approximation:

MX = β0 + β1 C1 + β2 C2 + β3 C3 + β4 C4 + β5 C5 + ε ………… (7)

where MX is the mean of the X-factor being modeled.


The model above can be extended with nonlinear approximation terms (both polynomial and quadratic) and factor interactions. To check model adequacy when adding nonlinear or interaction terms, the coefficient of determination R2 and the adjusted coefficient of determination R2adj are used: R2 increases whenever a new term is added to the regression model, whereas R2adj increases only when the added term reduces the mean square error [27].
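The adequacy measures R2 and R2adj described above can be computed as follows (a small illustrative sketch; the `y` and `y_hat` values stand for any model's observations and fitted values):

```python
def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n, p):
    """R2adj penalizes added terms: n observations, p predictors
    (excluding the intercept)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]   # fitted values from some model
r2 = r_squared(y, y_hat)
r2_adj = adjusted_r_squared(r2, n=len(y), p=1)
```

Because the (n − p − 1) denominator shrinks as predictors are added, R2adj rises only when a new term genuinely reduces the residual error, which is exactly the adequacy criterion used here.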

2.3 Monte Carlo Simulation (MCS) Model for Y = ƒ(X)

Based on the previous formulas, Y can be determined using the means of the X-factors:

Y = NP = (3600 / MCT) × (MTBF / (MTBF + MTTR)) × (1 − MDR) ………… (8)

The main objective of MCS is to turn this deterministic model into a stochastic one by modeling randomness (uncertainty propagation); Fig. 5 shows the effect of random propagation with Monte Carlo simulation (MCS).



Fig. 5. Randomness propagation with MCS: random X-factor inputs pass through the deterministic model ƒ(X) to produce the stochastic response (Y).

Monte Carlo simulation is applicable to many types of problems in which an approximate solution is found by statistical sampling. MCS depends on random number generation (RNG): the inputs are randomly generated from probability distributions to simulate sampling from the actual ones [31]. A congruential method is used to generate random numbers between 0 and 1, while the inverse method is used to generate independent random samples from a theoretical probability distribution. Finally, the collection of randomly chosen points provides information about the behavior of the model response.
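A minimal sketch of the two sampling ingredients named above: a linear congruential generator (LCG) for uniforms on [0, 1), and inverse-transform sampling from an exponential distribution. The LCG constants and the exponential mean are illustrative choices, not the paper's:

```python
import math

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding uniforms in [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

def exp_inverse(u, mean):
    """Inverse-transform sample from Exponential(mean):
    F^-1(u) = -mean * ln(1 - u)."""
    return -mean * math.log(1.0 - u)

gen = lcg(seed=42)
samples = [exp_inverse(next(gen), mean=250.0) for _ in range(5000)]
avg = sum(samples) / len(samples)
```

The sample average should approach the distribution's mean as the number of draws grows, which is how the sampling accuracy can be sanity-checked.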

Solution Algorithm

An algorithm (simulated annealing) is employed to optimize the problem solution; Fig. 6 shows its mechanism. It starts by setting the control parameters, which consist of the initial temperature T, the number of T decrement steps S, the cooling parameter (α), and the maximum number of iterations (n) at each T step. An initial solution is found randomly in the feasible space and then compared with a candidate solution from the initial solution's neighborhood. If the candidate performs better, it replaces the initial solution, and the process is repeated many times before the temperature is reduced and the termination condition checked. T must start high enough to allow the search to move to almost any neighboring state, preventing trapping in local optima; on the other hand, the search converges toward a local optimum near the end of the calculation, when T is almost zero.

The cooling parameter controls the reduction of the temperature T; values between 0.7 and 0.99 produce better results through slow cooling. Longer temperature steps, i.e., a larger number of iterations n, lead to a slower cooling rate at a given α by allowing the system to stabilize at that temperature step. The combination of the cooling rate α and the cooling time, in terms of n, establishes the algorithm's cooling schedule, which influences the solution quality.


Fig. 6. The algorithm: set the parameters (Ts, α, n, S); initialize a solution; generate a candidate solution; estimate the objective values of the current and candidate solutions with MCS (observe ∆E); update the solution; after n iterations reduce T (using α); terminate after S steps and return the final solution.

After setting the algorithm parameters, an initial solution is generated and used as the current solution. MCS-based estimation of NP evaluates the function values of the current and candidate solutions, where NP is analogous to the energy (E) value in thermodynamics (higher NP means lower E, implying a state closer to thermal equilibrium). Thus the change in energy (∆E = new E − current E) is measured at each evaluated solution, and the candidate solution is accepted if ∆E < 0 (it results in lower E). Otherwise, a random value R is generated and compared with the Boltzmann probability of acceptance p(∆E); the candidate solution is accepted if R ≤ p(∆E), and another solution is generated otherwise. With this rule it can be shown that, for a sufficiently large ensemble, the average E over the ensemble equals the Boltzmann average of E as determined by the Boltzmann distribution law [40].

For the current temperature T, the Boltzmann probability of acceptance is

p(∆E) = exp(−∆E / T) ………… (9)

In this loop the search continues until the maximum iteration count n is reached, at which point T is decremented according to the cooling parameter α. Reducing T reduces the Boltzmann probability of accepting worse solutions (those with higher energy E).

Different methods can be used for reducing T in the cooling process [41].

A commonly used simple approach is the linear method, where:

T_{i+1} = α T_i ………… (10)

As the iterations proceed, the temperature drops while the process keeps finding neighborhood solutions; this continues until the acceptance criterion is met and the search terminates. The algorithm is set to terminate either at a subzero final T, after running a set number of T steps (S), or when no improvement occurs within one T step.
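The annealing loop described in this section can be sketched as follows (a generic maximization skeleton; the toy objective and the ±1 neighborhood function are illustrative assumptions, not the paper's PD problem):

```python
import math
import random

def simulated_annealing(objective, initial, neighbor,
                        t0=100.0, alpha=0.91, n_iter=100, steps=50, seed=0):
    """Maximize `objective` by simulated annealing: treat -objective as
    the energy E, accept worse candidates with Boltzmann probability
    exp(-dE/T), and cool T geometrically (T <- alpha * T) after each
    block of n_iter iterations."""
    rng = random.Random(seed)
    current = initial
    best = current
    temp = t0
    for _ in range(steps):
        for _ in range(n_iter):
            candidate = neighbor(current, rng)
            d_e = objective(current) - objective(candidate)  # energy change
            if d_e < 0 or rng.random() <= math.exp(-d_e / temp):
                current = candidate
            if objective(current) > objective(best):
                best = current
        temp *= alpha  # Eq. (10): T_{i+1} = alpha * T_i
    return best

# toy usage: maximize -(x - 3)^2 over the integers
best = simulated_annealing(lambda x: -(x - 3) ** 2,
                           initial=0,
                           neighbor=lambda x, rng: x + rng.choice([-1, 1]))
```

At high T nearly every move is accepted and the search roams freely; as T shrinks, worsening moves are rejected more often and the search settles near an optimum.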

Results and Discussions

In this section the operating parameters of a lathe machine are designed using the five-step procedure described previously. First the PD problem is defined in terms of the C-factors, the X-factors, and the response Y, as shown in the P-diagram in Fig. 1. The response of the model is then assessed using the NP definition in equation (6). As explained before, the problem definition includes five C-factors (C1: operating pattern in terms of working hours between breaks; C2: operating hours per shift; C3: changeovers per shift; C4: scheduled shutdowns per year; and C5: maintenance level) and four X-factors (X1: CT, X2: TBF, X3: TTR, and X4: DR). The levels of the defined C-factors are given in Table 1.

C1 level   C2 level   C3 level   C4 level   C5 level
2.0        4          1          3          1
2.5        5          2          4          2
3.0        6          3          5          3
3.5        7          4          6          4
4.0        8          5          7          5
4.5        9          6          8
5.0        10                    9
           11                    10
           12                    11
                                 12

Table 1. Levels of the C-factors.

The sensitivity of the X-factors, in terms of ∆X, to changes in the levels of the C-factors is determined by applying the linear regression (empirical) models. At this stage, historical machine performance data (MCT, MTTR, MTBF, and MDR) at different levels of the C-factors are collected, and a fractional factorial design is used for the formulation. The defined C-factors yield 18,900 potential combinations, which makes a full factorial design over all factor levels extremely difficult, time-consuming, and impractical [42].
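The stated count of 18,900 combinations is consistent with a full factorial over the factor levels of Table 1, assuming C4 spans 10 yearly-shutdown levels (an assumption made here so that the product matches the stated count):

```python
from math import prod

# number of levels per C-factor counted from Table 1
# (C4 = 10 is an assumption; the others are counted directly)
levels = {"C1": 7, "C2": 9, "C3": 6, "C4": 10, "C5": 5}
combinations = prod(levels.values())  # full factorial size
```

This size is what motivates the fractional (L27) design used next instead of a full factorial.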

Table 2 shows the three-level fractional factorial design data that were used. Based on [27,43], an L27 fractional factorial design is developed (a Taguchi orthogonal array of k^(n−p) designs; in this case 3^(5−2) = 27 designs).

Parameter   L: Low (0)        M: Medium (1)     H: High (2)
C1          2.0 h             3.5 h             5.0 h
C2          4 h               8 h               12 h
C3          1 changeover      3 changeovers     6 changeovers
C4          3 shutdowns       6 shutdowns       12 shutdowns
C5          Repair method 1   Repair method 3   Repair method 5

Table 2. The three-level fractional factorial design.

Base Regression Model:

MTBF = 275 - 0.667 C1 - 0.222 C2 - 0.325 C3 - 4.63 C4 - 0.139 C5

MTTR = 13.5 + 0.333 C1 - 0.097 C2 + 0.307 C3 - 0.111 C4 + 6.03 C5

MCT = 193 - 0.73 C1 - 7.26 C2 + 0.162 C3 + 0.076 C4 - 0.752 C5

MDR = 4.60 + 1.67 C1 + 0.0139 C2 - 0.0058 C3 + 0.0582 C4 + 0.0833 C5

Enhanced Regression Model

MDR = 4.97 + 1.73 C1 - 0.061 C2 - 0.006 C3 + 0.058 C4 - 0.117 C5 + 0.025 C2C5

MTTR = 17.9 - 1.18 C1 - 0.758 C2 + 0.307 C3 - 0.111 C4 + 6.31 C5 + 0.189 C1C2

MCT = 212 - 8.47 C1 - 6.29 C2 + 0.162 C3 + 0.076 C4 - 9.79 C5 + 2.58 C1C5

Fig. 7. Base and enhanced regression models.
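The fitted models above can be evaluated directly; for example (assuming the factor values are entered in the physical units of Table 2, which the paper does not state explicitly):

```python
def mtbf_base(c1, c2, c3, c4, c5):
    """Base regression model for MTBF from Fig. 7."""
    return 275 - 0.667 * c1 - 0.222 * c2 - 0.325 * c3 - 4.63 * c4 - 0.139 * c5

def mdr_enhanced(c1, c2, c3, c4, c5):
    """Enhanced MDR model from Fig. 7, including the C2*C5 interaction."""
    return (4.97 + 1.73 * c1 - 0.061 * c2 - 0.006 * c3
            + 0.058 * c4 - 0.117 * c5 + 0.025 * c2 * c5)

# low levels of every factor (2.0 h, 4 h, 1 changeover, 3 shutdowns, method 1)
mtbf = mtbf_base(2.0, 4, 1, 3, 1)
mdr = mdr_enhanced(2.0, 4, 1, 3, 1)
```

The coefficient signs show, for instance, that more yearly shutdowns (C4) reduce MTBF while longer working periods between breaks (C1) increase the defect rate.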

Table A1 summarizes the results of the L27 experimental design. The experimental design results are used in the regression analysis, and in each model nonlinearity and interactions have been checked. Main effect plots are used to determine the factors with the largest influence on the X-factors. According to the main effect plots (shown in Fig. 7), C1 has the largest main effect on MDR, C2 on MCT, C4 on MTBF, and C5 on MTTR. Nonlinearity was checked using quadratic approximations of the factor with the largest main effect; it was found to be negligible, so no nonlinear term was added to the regression models.

Some interactions exist among the C-factors, as shown in the interaction plots: a C1-C2 interaction in MTTR, a C2-C5 interaction in MDR, and a C1-C5 interaction in MCT, while no interaction was noticed in MTBF. The models were therefore enhanced by adding the main interaction terms. Fig. 8 shows the base and enhanced regression models.


The coefficient of determination R2 and the adjusted coefficient of determination R2adj are used to check model adequacy. Both R2 and R2adj are very close to unity and improved when the models were enhanced with interaction terms, as shown in Table 3, which also lists each model's main effect and main interaction.

Next, an MCS model is used to estimate the machine NP at each C-factor combination based on random propagation, as shown in Fig. 5: a large number of trials are sampled from the corresponding probability distributions at each response evaluation to propagate the randomness encompassed in the X-factors into the model response (Y). For RNG a congruential method is used, and the inverse method is used for sampling from the corresponding probability distributions. Different random stream generators were used to ensure sampling independence for 1000 data points (sample size), and the MCS was run five replications at each point. Table 4 gives the probability distributions of the X-factors in the MCS model; statistical data analyses were used to determine the probability distributions of the RVs.

Means of     Base models          Main     Main          Enhanced models
X-factor     R2 (%)   R2adj (%)   effect   interaction   R2 (%)   R2adj (%)
MTTR         96.1     95.2        C5       C1-C2         96.5     95.4
MTBF         96.5     95.7        C4       None          N/A      N/A
MDR          93.7     92.2        C1       C2-C5         93.9     92.3
MCT          92.0     90.1        C2       C1-C5         94.7     93.2

Table 3. Adequacy check of the regression models.

A plot of the X-factor distributions and a chi-square test were used to verify the accuracy of the MCS sampling; the accuracy was 95%, which was accepted. Finally, to search the PD problem domain based on the MCS estimates, the algorithm is configured and adapted to the defined PD problem: the starting temperature (T) is set to 100 and the cooling rate α to 0.91, with 100 iterations per T step and a final subzero temperature after 50 T steps (5000 total iterations). The best 100 of the 5000 evaluated iterations are shown in Fig. 8. The best solutions for machine NP were found in the range of 27.5 to 32.28 UPH (units per hour).

The final solution for the machine operating PD is: 3.5 h between breaks, 6 h of daily operation, 7 yearly scheduled shutdowns, three daily changeovers, and maintenance performed with the level-2 repair method. This parameter setting resulted in an average overall machine NP of 32.38 UPH. The run was then repeated with a changed terminating condition, so that the program terminates upon achieving no NP improvement within the 100 iterations run at any one T step. The program then terminated earlier, after running 10 T steps (a total of 1000 iterations); the same solution (PD) was obtained, but with a shorter running time (2 min and 14 s) on the same computer.