Shrinkage Prediction of Injection Molded HDPE Parts Using Artificial Neural Networks

10427 words (42 pages) | 06/06/19


Abstract

Injection molding is classified as one of the most flexible and economical manufacturing processes for high-volume production of plastic parts. However, a large number of factors lead to variation in this complex process and thus to quality issues in the final products. A common quality problem in finished products is shrinkage. Part shrinkage is largely affected by molding conditions, and consequently it often leads to part warpage. The main objective of this paper is to predict the shrinkage of injection molded parts under different processing parameters. The second objective is to facilitate the setup of the injection molding machine and reduce the need for trial and error. To meet these objectives, an artificial neural network (ANN) model is presented in this study to predict and control part shrinkage from the optimal molding parameters. The molding parameters studied are metering stroke, holding time, and cooling time. A study was conducted based on the Taguchi methodology to find the optimized processing parameters that lead to the minimum shrinkage in the length and width directions of the injection moldings. An L27 orthogonal array (OA) was applied in the Taguchi experimental design, with three controllable factors and one non-controllable noise factor. The feed-forward neural network model, trained with backpropagation, was validated by comparing the predicted shrinkage with the actual shrinkage obtained from the Taguchi-based experimental results. The comparison demonstrates that the ANN model has a high prediction accuracy.

Keywords: Injection Molding, Shrinkage, Artificial Neural Network, Taguchi Methodology, Polyethylene

1. INTRODUCTION

Injection molding is a versatile process used for forming both thermoplastic and thermoset materials into molded products of intricate shapes. The technology is particularly suitable for mass production because of its short cycle time, which offsets its high tooling cost. The quality of injection moldings is affected by many factors, such as plastic part design, mold design, materials, and a variety of molding parameters. One common quality issue is part shrinkage and its associated warpage. Part shrinkage is inevitable as the plastic material changes from its molten state to its solid state to form the desired shape in the mold cavity. Large non-uniform shrinkage, due to poor part design and improper molding conditions, often leads to more severe problems (i.e., part warpage). Therefore, it is highly desirable to predict and minimize the shrinkage of plastic injection moldings by optimizing processing conditions.

Polyethylene is one of the most widely used raw materials in the injection molding process worldwide. Its popularity stems from its properties: it has a relatively low melting point, it is sturdy compared with other plastic materials, and it becomes harder and stiffer as the temperature drops without turning brittle like other plastics. Compared with amorphous plastics such as PS and PVC, semi-crystalline HDPE has a higher shrinkage value and an associated greater likelihood of warpage. A prediction method for injection molding of HDPE is therefore much needed.

The mold tool is the key component in the injection molding of plastic. It provides a passageway for molten plastic to travel from the injection cylinder (barrel) to the mold cavity, and it allows the air trapped inside to escape when the mold closes. If the air could not escape (be vented), the molded component would contain voids (air bubbles) and have a poor surface finish. The mold also cools the molding until it sets; the mold temperature is controlled because the part must cool at the correct rate to avoid distortion and stress.

Shrinkage is inherent in the injection molding process; it occurs because the density of the polymer changes between the processing temperature and the ambient temperature. During injection molding, the variation in shrinkage, both globally and through the cross-section of a part, creates internal (residual) stresses that act on the part with effects similar to externally applied stresses. Crystalline and semi-crystalline materials are particularly prone to thermal shrinkage. Excessive shrinkage can be caused by factors such as low injection pressure, improper cooling time, high melt temperature, improper metering stroke, and high mold temperatures. Part warpage results from molded-in residual stresses, which in turn are caused by differential shrinkage of material in the molded part. In this study we optimized the three most significant factors, i.e., metering stroke, holding time, and cooling time, using the Taguchi approach. However, achieving low and uniform shrinkage is a complicated task due to the presence and interaction of many factors such as molecular and fiber orientations, mold cooling, part and mold designs, and process conditions. It is therefore necessary to develop an ANN model to predict the shrinkage of HDPE injection molded parts.

Artificial neural networks (ANNs) are among the most powerful computer modeling techniques, based on a statistical approach. The prediction of the shrinkage of injection molded parts has been reported in the literature. Lotti et al. [1] used complex models to solve the conservation equations (mass, momentum, and energy) in order to predict polymer flow and cooling within the mold. Shen et al. [2] used a combined ANN/GA approach to optimize the injection molding process based on CAE analyses, with the GA optimizing the process conditions using a fitness function based on an ANN model. Halimin et al. [3] used an ANN for prediction of warpage in injection molding. Lee et al. [4] trained a neural network on numerical flow analyses to predict the shrinkage of injection molded parts. Spina et al. [5] coupled a Particle Swarm Optimization approach to artificial neural networks to optimize the process parameters of thermoplastic injection molding. Chiang et al. [6] used response surface methodology to analyze shrinkage and warpage in an injection molded part with a thin-shell feature.

From the above literature review, we observed that no work has investigated the prediction accuracy of an ANN for the shrinkage of HDPE injection molded parts in comparison with the Taguchi approach. In this study, a feed-forward backpropagation ANN is used as the modeling technique for predicting the minimum shrinkage under the optimal injection molding parameters that affect part shrinkage. The parameters studied are metering stroke, holding time, and cooling time. The optimal parameters are derived from the Taguchi approach, with results obtained under conditions determined by an orthogonal array (L27).

2. MODEL DESCRIPTION

 

2.1 Artificial Neural Networks

Artificial neural networks (ANNs) are currently being used in a variety of applications with great success. Backpropagation (BP) is one of the best-known training methods and uses the learning principle of minimizing the error in the network output. ANNs are information-processing systems whose design is inspired by biological neural networks; they are used to estimate or approximate functions that can depend on a large number of inputs that are generally unknown. An artificial neural network operates by creating connections between many different processing elements, each analogous to a single neuron in a biological brain [7]. Because a neural network can be trained to learn arbitrary nonlinear input/output relationships from corresponding data, the acquired knowledge has led to their use in areas such as pattern recognition.

Fig. 1 Typical Structure of an Artificial Neural Network (from Tadiou, Koné Mamadou, and John Mc Carthy, 2009 [8])

2.2 Back Propagation – Learning in Feed Forward network training

We consider again the learning problem for neural networks. Since we want to minimize the error function E, which depends on the network weights, we have to deal with all weights in the network one at a time. The feed-forward step is computed in the usual way, but now we also store the output of each unit on its right side. We perform the backpropagation step in the extended network that computes the error function, and we then fix our attention on one of the weights, say wij, whose associated edge points from the i-th to the j-th node in the network. This weight can be treated as an input channel into the subnetwork made of all paths starting at wij and ending in the single output unit of the network. The information fed into the subnetwork in the feed-forward step was oi wij, where oi is the stored output of unit i. The backpropagation step computes the gradient of E with respect to this input, i.e.

∂E/∂(oi wij). Since in the backpropagation step oi is treated as a constant, we finally have [9]

∂E/∂wij = oi · ∂E/∂(oi wij)                                         (1)

Summarizing, the backpropagation step is performed in the usual way. All subnetworks defined by each weight of the network can be handled simultaneously, but we now additionally store at each node i:

• The output oi of the node in the feed-forward step.

• The cumulative result of the backward computation in the backpropagation step up to this node.

We call this quantity the back propagated error. [9]

If we denote the back propagated error at the j-th node by δj, we can then express the partial derivative of E with respect to wij as:

∂E/∂wij = oi δj                                         (2)

Once all partial derivatives have been computed, we can perform gradient descent by adding to each weight wij the increment

∆wij = −γ oi δj                                         (3)

This correction step is needed to transform the backpropagation algorithm into a learning method for neural networks. This graphical proof of the backpropagation algorithm applies to arbitrary feed-forward topologies, and the graphical approach also immediately suggests hardware implementation techniques for backpropagation [9].
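The update rule in Eq. (3) is simple enough to sketch directly. The fragment below (plain Python, with illustrative values rather than data from this study) forms the increment ∆wij = −γ oi δj for every weight of one layer:

```python
def weight_increments(outputs, deltas, gamma):
    """One gradient-descent increment per Eq. (3): dw_ij = -gamma * o_i * delta_j,
    where o_i is the stored feed-forward output of node i and delta_j is the
    backpropagated error at node j."""
    return [[-gamma * o_i * d_j for d_j in deltas] for o_i in outputs]

# Illustrative values: two source nodes, two destination nodes
dw = weight_increments(outputs=[0.5, 1.0], deltas=[0.2, -0.1], gamma=0.1)
```

Each increment is just an outer product of stored outputs and backpropagated errors scaled by the learning rate γ, which is why all subnetworks can be handled simultaneously.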

 

2.3 Extended network

We will consider a network with n input sites, k hidden units, and m output units. The weight between input site i and hidden unit j will be called w(1)ij. The weight between hidden unit i and output unit j will be called w(2)ij. The bias −θ of each unit is implemented as the weight of an additional edge. Input vectors are thus extended with a 1 component, and the same is done with the output vector from the hidden layer. Figure 2 shows how this is done. The weight between the constant 1 and the hidden unit j is called w(1)n+1,j, and the weight between the constant 1 and the output unit j is denoted by w(2)k+1,j [9].

Fig. 2 Notation for the three-layered network

There are (n + 1) × k weights between input sites and hidden units and (k + 1) × m between hidden and output units. Let W1 denote the (n + 1) × k matrix with component w(1)ij at the i-th row and the j-th column. Similarly, let W2 denote the (k + 1) × m matrix with components w(2)ij. We use an overlined notation to emphasize that the last row of both matrices corresponds to the biases of the computing units. The matrix of weights without this last row will be needed in the backpropagation step. The n-dimensional input vector o = (o1, . . . , on) is extended, transforming it to ô = (o1, . . . , on, 1). The excitation netj of the j-th hidden unit is given by

netj = ∑i=1…n+1 w(1)ij ôi                                         (4)

The activation function is a sigmoid, and the output o(1)j of this unit is thus

o(1)j = s(∑i=1…n+1 w(1)ij ôi)                                         (5)

The excitation of all units in the hidden layer can be computed with the vector–matrix multiplication ôW1. The vector o(1) whose components are the outputs of the hidden units is given by [9]

o(1) = s(ôW1)                                         (6)

using the convention of applying the sigmoid to each component of the argument vector. The excitation of the units in the output layer is computed using the extended vector ô(1) = (o(1)1, . . . , o(1)k, 1). The output of the network is the m-dimensional vector o(2), where

o(2) = s(ô(1)W2)                                         (7)

These formulas can be generalized for any number of layers and allow direct computation of the flow of information in the network with simple matrix operations [9].
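Equations (4)–(7) translate directly into code. The sketch below (plain Python, with made-up weights, not values from this study) extends the input with a bias component of 1 and applies the sigmoid layer by layer, exactly as in Eqs. (6) and (7):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(vec, mat):
    # Row vector times matrix: result_j = sum_i vec[i] * mat[i][j]
    return [sum(vec[i] * mat[i][j] for i in range(len(vec)))
            for j in range(len(mat[0]))]

def forward(o, W1, W2):
    o_hat = list(o) + [1.0]                          # extended input vector
    o1 = [sigmoid(x) for x in matvec(o_hat, W1)]     # hidden outputs, Eq. (6)
    o1_hat = o1 + [1.0]                              # extended hidden vector
    return [sigmoid(x) for x in matvec(o1_hat, W2)]  # network output, Eq. (7)

# n = 2 inputs, k = 2 hidden, m = 1 output: W1 is (n+1) x k, W2 is (k+1) x m;
# the last row of each matrix holds the biases.
W1 = [[0.5, -0.5], [0.3, 0.8], [0.1, -0.1]]
W2 = [[1.0], [-1.0], [0.2]]
y = forward([1.0, 0.0], W1, W2)
```

Because the biases live in the last matrix row, the whole forward pass reduces to two matrix products and two component-wise sigmoids, matching the text.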

 

2.4 Backpropagation ANN Model

 

For the feed-forward model adopted in this study, there are three neurons in the input layer: the metering stroke, the holding time, and the cooling time. The output layer has two neurons: the length shrinkage and the width shrinkage [10].

The effects of the number of training pairs, the number of hidden-layer nodes, the hidden-layer transfer function, the increase and decrease ratios of the learning rate, and the momentum on network training quality are discussed in this study. Under each test condition, training of the feed-forward ANN is carried out with ten different sets of initial weights and biases produced randomly by computer. The training process is stopped when either of two convergence criteria is reached:

1) the sum-squared error (SSE) is smaller than 0.0001, or

2) the iteration count reaches 10,000. The training quality is judged by the root-mean-squared error. The definitions of the sum-squared error and the root-mean-squared error are [10]

SSE = ∑j=1…m ∑i=1…n (Tij − Pij)²                                         (8)

RMSq = √(SSEq / (m × n))                                         (9)
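Equations (8) and (9) can be checked with a few lines of Python; the target/prediction values below are illustrative, not taken from the experiment:

```python
import math

def sse(T, P):
    # Eq. (8): sum of squared errors over all m samples and n outputs
    return sum((t - p) ** 2
               for row_t, row_p in zip(T, P)
               for t, p in zip(row_t, row_p))

def rms(T, P):
    # Eq. (9): root-mean-squared error, SSE normalised by m x n
    m, n = len(T), len(T[0])
    return math.sqrt(sse(T, P) / (m * n))

T = [[2.66, 3.13], [2.56, 2.65]]   # targets (length %, width %)
P = [[2.60, 3.10], [2.58, 2.45]]   # hypothetical network predictions
```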

Steps in the backpropagation-trained feed-forward neural network workflow:

  1. Collect the experimental data set of inputs and outputs (in Excel sheets).
  2. Provide these input and output pools to the software tool (MATLAB, MathWorks).
  3. Tag the data in the software as inputs, outputs, training, testing, and production.
  4. Create a custom network of the required structure and save it (check for the network with the least RMSE).
  5. Train and test the network. The data tagged as training are used for training; the other operations are performed similarly. This generates the required model.
  6. Select the optimal parameters: once the ANN system is developed, it can predict the output values that lead to the minimum shrinkage in injection molding.
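The stopping rules from the convergence criteria above (SSE below 0.0001, or 10,000 iterations) can be sketched as a generic training loop; `fake_step` is a hypothetical stand-in for one backpropagation pass, not the study's actual MATLAB routine:

```python
def train(step, max_epochs=10000, sse_goal=1e-4):
    # Run training steps until SSE < goal or the epoch limit is reached,
    # mirroring the two convergence criteria used in this study.
    sse = float("inf")
    for epoch in range(1, max_epochs + 1):
        sse = step()            # one weight update; returns current SSE
        if sse < sse_goal:
            return epoch, sse
    return max_epochs, sse

# Hypothetical stand-in for a backprop pass whose SSE halves each epoch
state = {"sse": 1.0}
def fake_step():
    state["sse"] *= 0.5
    return state["sse"]

epochs, final_sse = train(fake_step)
```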

 

3. EXPERIMENTAL WORK

3.1 Machine parameters

Injection molding is a cyclic process in which heat is used to soften the material, which is then injected into a closed mold to obtain the desired shape. When running the process, it is essential to know the parameters that most strongly influence the manufactured specimen. The parameters are shown in Table 1.

The injection molding machine used is an Engel E-victory 30. It has a 30-ton clamping force, a screw diameter of 22 mm, and an L/D ratio of 30.

Table 1 Parameters and their limits.

Machine Parameter Value assigned
Injection Speed (cm3/sec) 10
Injection Pressure (bar) 1000
Cooling time (sec) 24
Shot Volume(cc) 24
Holding Pressure(bar) 150
Holding Time(sec) 14
Melt temperature(°F) 410
Metering stroke (mm) 2.75

 

Parameters were chosen and confirmed after checking that the obtained specimen was completely filled. Several experiments were conducted by varying the parameters, and a setting was considered advisable only after confirmation through a gate-seal study. In an initial test the parts obtained were only partially filled, which was considered a failure; the parameters were then varied until a completely filled specimen was achieved.

Holding time was monitored until the obtained parts had similar weights. If the weight varied, the holding time was increased until similar weights were attained. The holding time best suited for making parts was thus confirmed once parts with similar weights were obtained.

3.2 Mold Design

Injection molding can be used for making a wide variety of parts, from tiny components to large ones. After deciding on a component to be made, a mold is prepared to form the features of the desired part.

The mold for this experiment is designed according to the dimensions described by ASTM D638 Type IV. The specimen dimensions are shown in Figure 3. The D638 Type IV design was chosen for the experiment because it is used for direct comparisons between materials in different cases.

Fig. 3 Specimen dimensions (All Dimensions in mm)

The mold is designed such that its runner and gate resemble a fan-gate geometry. The idea of a fan gate is to achieve uniform flow of material through the mold; it also helps minimize backfilling and part warpage, and provides a constant cross-sectional area. The mold is made from a steel-insert mold base from DME (Model 08/09 U-style frame), cut on a CNC machining center.

Original Length of mold cavity = 116.56 mm

Original Width of mold cavity = 19.25 mm

Nominal Length of the designed part = 115 mm

Nominal Width of the designed part = 19 mm

Table 2: Baseline Data (Length Shrinkage %)

Sample # 1 2 3 4 5 6 7 8 9
Length shrinkage (%) 2.67 2.80 2.55 2.65 2.66 2.45 2.58 2.72 2.66

Table 3: Baseline Data (Width Shrinkage %)

Sample # 1 2 3 4 5 6 7 8 9
Width shrinkage (%) 3.67 3.56 3.47 3.36 3.32 3.76 4.28 4.17 2.43

Length shrinkage % Nominal (Xbar) = 2.642

Width shrinkage % Nominal (Xbar) = 3.562

3.3 Materials

The material is polyethylene (PROLINE2053 HDPE NATURAL), a nucleated homopolymer resin with anti-static properties. The supplier of this product is Shannon Industrial Corporation.

 

4. RESULTS AND DISCUSSION

 

4.1 Taguchi methodology

The Taguchi method was applied to determine the effect of the process parameters on shrinkage. Taguchi's method uses orthogonal arrays to schedule the experiments and to obtain statistical information with fewer runs than full-factorial experiments. One of its advantages is this reduction in the number of experiments; the results, however, are obtained only under the conditions determined by the orthogonal array, and the optimal value of each process parameter must therefore be determined from those results [5]. The measured shrinkage values and the signal-to-noise results are given in Table 5. The signal-to-noise ratio is a simple quality indicator that researchers and designers can use to evaluate the effect of changing a particular design parameter on the performance of the product. In this study, the "smaller the better" quality characteristic was selected when calculating the S/N ratios.
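For reference, the standard "smaller the better" formula is S/N = −10·log10((1/n)·Σ yi²); the sketch below implements it for a pair of replicates (the exact normalization behind the tabulated S/N values in this study may differ, so treat this as illustrative):

```python
import math

def sn_smaller_the_better(values):
    # Taguchi "smaller the better": S/N = -10 * log10(mean of y_i^2)
    msd = sum(y * y for y in values) / len(values)
    return -10.0 * math.log10(msd)

# Example: two shrinkage replicates (without / with ventilation) for one trial
sn = sn_smaller_the_better([2.66, 2.27])
```

Smaller responses give a higher (less negative) S/N ratio, which is why the optimal level is read off at the maximum S/N.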

Optimization settings

Table 4. Main and noise factors

Designation   Variable          Unit   Level 1   Level 2   Level 3
A             Metering stroke   mm     2.7       2.75      2.8
B             Holding Time      sec    12        15        18
C             Cooling Time      sec    18        24        30

Non-controllable factors
Noise factor: Room temperature without ventilation (70 °F) and with ventilation (65 °F)
Output variables: Length & width shrinkage

The selection of an appropriate orthogonal array (OA) depends on the total degrees of freedom of the parameters. Degrees of freedom are the number of comparisons between process parameters that must be made to determine which level is better and by how much. In this study, each of the three parameters has three levels and thus contributes two degrees of freedom; with one degree of freedom for the overall mean, the total is 7. The degrees of freedom of the OA should be greater than or at least equal to those of the process parameters, so an L27 orthogonal array was selected, with 3 control factors each set at 3 levels (Level 1, Level 2, Level 3). Table 4 shows the Taguchi parameter design for linear shrinkage, where room temperature is considered the noise factor, an uncontrollable effect on the quality of the product.
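The degrees-of-freedom count can be verified directly: each three-level factor contributes (3 − 1) = 2 degrees of freedom, plus one for the overall mean, and an L27 array provides far more than the required 7:

```python
def taguchi_dof(n_factors, n_levels):
    # Main-effect DOF: (levels - 1) per factor, plus 1 for the overall mean
    return n_factors * (n_levels - 1) + 1

dof = taguchi_dof(3, 3)      # three 3-level factors
l27_dof = 27 - 1             # an L27 array provides 26 degrees of freedom
```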

Each row of this table represents an experiment with a different combination of parameters and levels. The sequence in which these experiments are carried out is randomized.

Table 5. Experimental layout using standard L27 orthogonal array – Length shrinkage

Test No   Metering stroke (mm)   Holding time (sec)   Cooling time (sec)   Shrinkage % (without ventilation)   Shrinkage % (with ventilation)   Y bar   S/N ratio   Variance   Yi^2
T-101 2.7(1) 12(1) 18(1) 2.66 2.27 2.47 -21.95 0.08 12.23
T-102 2.7(1) 12(1) 24(2) 2.56 2.47 2.52 -35.48 0 12.67
T-103 2.7(1) 12(1) 30(3) 2.7 2.02 2.36 -16.92 0.23 11.4
T-104 2.7(1) 15(2) 18(1) 3.14 2.69 2.92 -22.31 0.1 17.11
T-105 2.7(1) 15(2) 24(2) 2.78 2.55 2.66 -27.51 0.03 14.21
T-106 2.7(1) 15(2) 30(3) 2.69 3.07 2.88 -23.78 0.07 16.64
T-107 2.7(1) 18(3) 18(1) 2.55 2.59 2.57 -42.68 0 13.21
T-108 2.7(1) 18(3) 24(2) 2.65 2.93 2.79 -26.22 0.04 15.61
T-109 2.7(1) 18(3) 30(3) 2.67 2.45 2.56 -27.51 0.02 13.12
T-110 2.75(2) 12(1) 18(1) 3.07 2.29 2.68 -16.87 0.3 14.64
T-111 2.75(2) 12(1) 24(2) 2.59 2.93 2.76 -24.1 0.06 15.31
T-112 2.75(2) 12(1) 30(3) 2.71 2.23 2.47 -20.32 0.11 12.31
T-113 2.75(2) 15(2) 18(1) 2.67 2.71 2.69 -41.79 0 14.49
T-114 2.75(2) 15(2) 24(2) 2.51 2.02 2.27 -19.47 0.12 10.38
T-115 2.75(2) 15(2) 30(3) 2.93 2.69 2.81 -27.34 0.03 15.85
T-116 2.75(2) 18(3) 18(1) 2.75 2.55 2.65 -28.33 0.02 14.09
T-117 2.75(2) 18(3) 24(2) 3.14 2.12 2.63 -14.43 0.52 14.36
T-118 2.75(2) 18(3) 30(3) 2.6 2.23 2.42 -22.41 0.07 11.74
T-119 2.8(3) 12(1) 18(1) 2.69 2.55 2.62 -31.33 0.01 13.76
T-120 2.8(3) 12(1) 24(2) 2.66 2.65 2.65 -67.73 0 14.09
T-121 2.8(3) 12(1) 30(3) 2.65 2.67 2.66 -48.99 0 14.13
T-122 2.8(3) 15(2) 18(1) 2.76 2.68 2.72 -37.67 0 14.8
T-123 2.8(3) 15(2) 24(2) 2.79 2.17 2.48 -18.14 0.19 12.47
T-124 2.8(3) 15(2) 30(3) 2.66 2.64 2.65 -50.13 0 14.02
T-125 2.8(3) 18(3) 18(1) 2.65 2.58 2.62 -38.26 0 13.69
T-126 2.8(3) 18(3) 24(2) 2.76 3.12 2.94 -24.28 0.06 17.3
T-127 2.8(3) 18(3) 30(3) 2.79 2.26 2.52 -19.7 0.14 12.88

 

Analysis for Noise factor and S/N ratio – Length shrinkage

 

The response tables of the mean and S/N ratio for the factors with the most significant effects on length shrinkage are shown in Tables 6 and 7. Fig. 4 gives a graphical representation of the effect of the control parameters on length shrinkage and the S/N ratio.

The optimal processing parameter settings from this Taguchi parameter design experiment can be determined from Figure 4. From each plot, the level with the lowest mean length shrinkage and the highest S/N ratio is selected as the optimal level. The optimal combination of factors is the second level of metering stroke (A2), the first level of holding time (B1), and the third level of cooling time (C3).
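The level selection can be reproduced from the response-table means for length shrinkage (Table 6); the helper below simply picks, for each factor, the level with the lowest mean shrinkage (smaller the better):

```python
def optimal_levels(mean_response):
    # "Smaller the better": per factor, choose the level (1-based) whose
    # mean shrinkage is lowest.
    return {factor: min(range(len(means)), key=means.__getitem__) + 1
            for factor, means in mean_response.items()}

# Level means for length shrinkage (Table 6)
means = {
    "A": [2.6358, 2.5975, 2.6507],   # metering stroke
    "B": [2.5765, 2.6747, 2.6406],   # holding time
    "C": [2.6591, 2.6328, 2.5922],   # cooling time
}
best = optimal_levels(means)
```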

Table 6. Response of controllable factors to length shrinkage       Table 7. Response of controllable factors to S/N ratio

Length shrinkage   Signal to noise Ratio
Level A(Metering stroke) B(Holding time) C(Cooling time) Level A(Metering stroke) B(Holding time) C(Cooling time)
1 2.6358 2.5765 2.6591 1 -27.15 -31.52 -31.24
2 2.5975 2.6747 2.6328 2 -23.9 -29.79 -28.6
3 2.6507 2.6406 2.5922 3 -37.36 -27.09 -28.57

Predicted Length shrinkage % = 2.5083                              Predicted Length shrinkage % = 2.5725

Main plots: Fig 4. Response graphs for length shrinkage (Factors A, B, and C)

 

4.2   Hypothesis testing

(t-test for the noise factor using a 99% confidence interval)

Hypothesis:   H0: μ(room temperature without ventilation) = μ(room temperature with ventilation)
              H1: μ(room temperature without ventilation) ≠ μ(room temperature with ventilation)

Average – room temperature without ventilation:   2.73
Average – room temperature with ventilation:      2.52
Variance – room temperature without ventilation:  0.026699
Variance – room temperature with ventilation:     0.088832
T value:              1.83
Degrees of freedom:   79
T-critical:           2.37
Alpha:                0.01

Conclusion: We fail to reject the null hypothesis, since the T value obtained is less than the T-critical value; linear shrinkage is not significantly affected by temperature.
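The comparison can be sketched with a pooled-variance two-sample t statistic. The per-condition sample size used below (27 runs each) is an assumption, and the paper's reported df = 79 suggests its grouping differed, so this is illustrative only:

```python
import math

def two_sample_t(mean1, var1, n1, mean2, var2, n2):
    # Pooled-variance t statistic for H0: mu1 == mu2
    sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# Summary statistics from the table above; n = 27 per condition is assumed
t = two_sample_t(2.73, 0.026699, 27, 2.52, 0.088832, 27)
```

The null hypothesis is rejected only when |t| exceeds the critical value at the chosen alpha.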

Table 8 shows the Taguchi parameters design for Width Shrinkage where the addition of room temperature is considered as the noise factor, which is an uncontrollable effect on the quality of the product.

Table 8 Experimental layout using standard L27 orthogonal array – Width shrinkage

Test No   Metering stroke (mm)   Holding time (sec)   Cooling time (sec)   Shrinkage % (without ventilation)   Shrinkage % (with ventilation)   Y bar   S/N ratio   Variance   Yi^2
T-101 2.7(1) 12(1) 18(1) 3.13 2.44 2.79 -18.13 0.24 15.76
T-102 2.7(1) 12(1) 24(2) 2.65 2.92 2.78 -26.5 0.03 15.53
T-103 2.7(1) 12(1) 30(3) 3.07 3.35 3.21 -27.31 0.04 20.68
T-104 2.7(1) 15(2) 18(1) 3.02 3.14 3.08 -34.4 0.01 18.98
T-105 2.7(1) 15(2) 24(2) 2.97 3.11 3.04 -32.45 0.01 18.5
T-106 2.7(1) 15(2) 30(3) 3.87 1.9 2.89 -9.8 1.95 18.62
T-107 2.7(1) 18(3) 18(1) 3.01 2.64 2.83 -23.74 0.07 16.06
T-108 2.7(1) 18(3) 24(2) 3.36 2.56 2.96 -17.48 0.32 17.82
T-109 2.7(1) 18(3) 30(3) 2.97 2.85 2.91 -33.81 0.01 16.93
T-110 2.75(2) 12(1) 18(1) 3.42 4.04 3.73 -21.7 0.19 28
T-111 2.75(2) 12(1) 24(2) 3.57 2.63 3.1 -16.46 0.44 19.61
T-112 2.75(2) 12(1) 30(3) 4.39 4.42 4.4 -49.13 0 38.79
T-113 2.75(2) 15(2) 18(1) 4.07 2.85 3.46 -15.21 0.74 24.68
T-114 2.75(2) 15(2) 24(2) 2.34 4.04 3.19 -11.77 1.45 21.75
T-115 2.75(2) 15(2) 30(3) 3.06 4.38 3.72 -15.16 0.87 28.55
T-116 2.75(2) 18(3) 18(1) 4.07 3.1 3.59 -17.46 0.47 26.18
T-117 2.75(2) 18(3) 24(2) 3.49 4.28 3.89 -19.89 0.31 30.52
T-118 2.75(2) 18(3) 30(3) 3.83 1.6 2.71 -8.4 2.49 17.19
T-119 2.8(3) 12(1) 18(1) 2.96 2.56 2.76 -23.03 0.08 15.31
T-120 2.8(3) 12(1) 24(2) 4.04 2.97 3.51 -16.39 0.58 25.16
T-121 2.8(3) 12(1) 30(3) 2.56 4.01 3.28 -13.32 1.05 22.63
T-122 2.8(3) 15(2) 18(1) 2.85 4.04 3.44 -15.4 0.7 24.42
T-123 2.8(3) 15(2) 24(2) 4.04 3.33 3.68 -20.43 0.25 27.4
T-124 2.8(3) 15(2) 30(3) 2.63 3.18 2.9 -20.46 0.15 16.99
T-125 2.8(3) 18(3) 18(1) 4.42 2.92 3.67 -13.99 1.12 28.08
T-126 2.8(3) 18(3) 24(2) 2.85 5.72 4.28 -9.96 4.12 40.84
T-127 2.8(3) 18(3) 30(3) 4.04 3.09 3.56 -17.58 0.45 25.83

 

Analysis for Noise factor and S/N ratio – Width shrinkage

 

The response tables of the mean and S/N ratio for the factors with the most significant effects on width shrinkage are shown in Tables 9 and 10. Fig. 5 gives a graphical representation of the effect of the control parameters on width shrinkage and the S/N ratio.

The optimal processing parameter settings from this Taguchi parameter design experiment can be determined from Figure 5. From each plot, the level with the lowest mean width shrinkage and the highest S/N ratio is selected as the optimal level. The optimal combination of factors is the first level of metering stroke (A1), the second level of holding time (B2), and the first level of cooling time (C1).

Table 9. Response of controllable factors to width shrinkage       Table 10. Response of controllable factors to S/N ratio

Width Shrinkage Signal to noise Ratio
Level A(Metering stroke) B(Holding time) C(Cooling time) Level A(Metering stroke) B(Holding time) C(Cooling time)
1 2.9427 3.2846 3.2602 1 -24.85 -23.55 -20.34
2 3.5309 3.2671 3.3808 2 -19.46 -19.45 -19.04
3 3.4554 3.446 3.2881 3 -16.73 -18.03 -21.66

Prediction of Width shrinkage % = 2.8353                    Prediction of Width shrinkage % = 3.6476

Main effect plot – Fig 5. Response graphs for width shrinkage (Factors A, B, and C)

4.3 Hypothesis testing

 

(t-test for the noise factor using a 99% confidence interval)

Hypothesis:   H0: μ(room temperature without ventilation) = μ(room temperature with ventilation)
              H1: μ(room temperature without ventilation) ≠ μ(room temperature with ventilation)

Average – room temperature without ventilation:   3.36
Average – room temperature with ventilation:      3.26
Variance – room temperature without ventilation:  0.357715
Variance – room temperature with ventilation:     0.764972
T value:              0.27
Degrees of freedom:   79
T-critical:           2.37
Alpha:                0.01

Conclusion: We fail to reject the null hypothesis, since the T value obtained is less than the T-critical value; width shrinkage is not significantly affected by temperature.

 

4.4 Training and Testing Artificial Neural Networks

 

After the original data were organized, a new data set with 27 samples was obtained, each sample being the average of 5 sub-levels; with this data set the ANN system could be developed.

The three input factors used for developing the ANN system were:

  1. Metering stroke (mm)
  2. Holding time (sec)
  3. Cooling time (sec)

The output factors were the length shrinkage and the width shrinkage.

Table 11. Optimal combination for the ANN technique

Network type Feed-Forward Backprop
Training function TRAINGDX
Adaption learning function LEARNGD
Performance function MSE
Transfer function LOGSIG

 

Table 12. Training parameters for predicting shrinkage

 

Training parameters Data
Epochs 1000
Time Inf
Goal 0
Min Gradient 1e-05
Max fail 1000
Learning rate 0.01
Learning inc 1.05
Learning dec 0.7
Max perf inc 1.04
Mc 0.9

Table 13. Sample set used for the training and testing data

Test No   Metering stroke (mm)   Holding time (sec)   Cooling time (sec)   Length Shrinkage %   Width Shrinkage %

T-101 2.7(1) 12(1) 18(1) 2.6629 3.1332
T-102 2.7(1) 12(1) 24(2) 2.5592 2.6517
T-103 2.7(1) 12(1) 30(3) 2.7036 3.0739
T-104 2.7(1) 15(2) 18(1) 3.1401 3.0211
T-105 2.7(1) 15(2) 24(2) 2.7755 2.9683
T-106 2.7(1) 15(2) 30(3) 2.6919 3.8742
T-107 2.7(1) 18(3) 18(1) 2.551 3.0123
T-108 2.7(1) 18(3) 24(2) 2.6534 3.3575
T-109 2.7(1) 18(3) 30(3) 2.6672 2.9683
T-110 2.75(2) 12(1) 18(1) 3.0653 3.4213
T-111 2.75(2) 12(1) 24(2) 2.5888 3.5664
T-112 2.75(2) 12(1) 30(3) 2.7087 4.3887
T-113 2.75(2) 15(2) 18(1) 2.6694 4.0699
T-114 2.75(2) 15(2) 24(2) 2.5074 2.3351
T-115 2.75(2) 15(2) 30(3) 2.9338 3.0607
T-116 2.75(2) 18(3) 18(1) 2.7544 4.0699
T-117 2.75(2) 18(3) 24(2) 3.1394 3.4908
T-118 2.75(2) 18(3) 30(3) 2.5997 3.8259
T-119 2.8(3) 12(1) 18(1) 2.6934 2.9551
T-120 2.8(3) 12(1) 24(2) 2.6556 4.0435
T-121 2.8(3) 12(1) 30(3) 2.6484 2.5594
T-122 2.8(3) 15(2) 18(1) 2.7559 2.8496
T-123 2.8(3) 15(2) 24(2) 2.7871 4.0369
T-124 2.8(3) 15(2) 30(3) 2.6556 2.6253
T-125 2.8(3) 18(3) 18(1) 2.6484 4.4195
T-126 2.8(3) 18(3) 24(2) 2.7559 2.8496
T-127 2.8(3) 18(3) 30(3) 2.7871 4.0369

 

Network architecture

 

To forecast a parameter accurately with an artificial neural network, an optimum number of hidden-layer neurons must be found. Starting with a small number of neurons and increasing it gradually is considered the best approach for seeking the optimum number of hidden neurons. At the start of the process, 9 neurons were selected for the hidden layer; the number of neurons was then increased in steps of 3 until a considerable improvement was observed. The hidden-layer structure of the network was selected by trial and error [11].

The 3-9-2-2 network structure had the lowest RMSE among the candidate structures and was therefore selected.

Fig. 6 ANN network

A data set of 135 samples per output, obtained from the experimental studies, was used for the training, testing, and validation phases of the network models. The available data were divided into training, testing, and validation subsets of 85%, 10%, and 5%, with samples assigned to each subset at random. The network models were run using the MATLAB neural network toolbox.
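A random 85/10/5 partition of 135 samples can be reproduced as follows; this is a sketch of the splitting step, not the toolbox's internal routine:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 135                                  # total samples, as in the study
idx = rng.permutation(n)                 # shuffle indices once
n_train = int(round(0.85 * n))           # 115 samples for training
n_test = int(round(0.10 * n))            # 14 samples for testing
train_idx = idx[:n_train]
test_idx = idx[n_train:n_train + n_test]
val_idx = idx[n_train + n_test:]         # remaining ~5% for validation
```

Every sample lands in exactly one subset, so the validation set stays independent of training.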

 

5. Optimal Parameters

 


Table 14 shows the shrinkage values predicted by the trained network.

Test No   Metering stroke (mm)   Holding time (sec)   Cooling time (sec)   Predicted Length Shrinkage (%)   Predicted Width Shrinkage (%)
T-101 2.7(1) 12(1) 18(1) 3.0647 3.0804
T-102 2.7(1) 12(1) 24(2) 2.5794 2.4524
T-103 2.7(1) 12(1) 30(3) 2.6709 3.0779
T-104 2.7(1) 15(2) 18(1) 3.0497 2.8155
T-105 2.7(1) 15(2) 24(2) 2.7587 2.4335
T-106 2.7(1) 15(2) 30(3) 2.6781 2.8065
T-107 2.7(1) 18(3) 18(1) 2.838 2.939
T-108 2.7(1) 18(3) 24(2) 2.7775 3.2817
T-109 2.7(1) 18(3) 30(3) 2.7843 3.7707
T-110 2.75(2) 12(1) 18(1) 2.5705 3.3813
T-111 2.75(2) 12(1) 24(2) 2.6946 3.5731
T-112 2.75(2) 12(1) 30(3) 2.5062 3.1665
T-113 2.75(2) 15(2) 18(1) 2.8518 4.2369
T-114 2.75(2) 15(2) 24(2) 2.6479 2.5973
T-115 2.75(2) 15(2) 30(3) 2.6115 2.8121
T-116 2.75(2) 18(3) 18(1) 2.7104 3.945
T-117 2.75(2) 18(3) 24(2) 2.8202 3.5435
T-118 2.75(2) 18(3) 30(3) 2.7024 3.762
T-119 2.8(3) 12(1) 18(1) 2.6767 2.9367
T-120 2.8(3) 12(1) 24(2) 2.7455 3.9519
T-121 2.8(3) 12(1) 30(3) 2.555 2.7642
T-122 2.8(3) 15(2) 18(1) 2.5503 3.0049
T-123 2.8(3) 15(2) 24(2) 2.6262 3.8148
T-124 2.8(3) 15(2) 30(3) 2.5485 2.6156
T-125 2.8(3) 18(3) 18(1) 2.5859 4.2458
T-126 2.8(3) 18(3) 24(2) 2.5525 2.9437
T-127 2.8(3) 18(3) 30(3) 2.5783 3.5551

 

Regression analysis

 

As shown in the graph below, the shrinkage outputs achieved high R-values across the training, validation, and testing sets. This indicates that the selected network (3-9-2-2) models the shrinkage parameters more reliably than the other candidate networks. When a network is trained accurately, the function relating the input variables to the output variables is modelled well.

Fig. 7 R-values for the training, validation, and testing data
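The R-value reported in such regression plots is the Pearson correlation between network outputs and experimental targets. A minimal sketch, with illustrative numbers rather than the study's data:

```python
import numpy as np

def r_value(targets, predictions):
    """Pearson correlation coefficient between targets and predictions."""
    t = np.asarray(targets, float) - np.mean(targets)
    p = np.asarray(predictions, float) - np.mean(predictions)
    return float(np.sum(t * p) / np.sqrt(np.sum(t ** 2) * np.sum(p ** 2)))

# Illustrative actual/predicted shrinkage pairs (not the dissertation's data)
actual = [2.66, 2.56, 2.70, 3.14, 2.78]
predicted = [2.68, 2.58, 2.67, 3.05, 2.76]
r = r_value(actual, predicted)
```

An R-value near 1 means the fitted regression line between outputs and targets is close to the ideal identity line.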

The validation set is used for tuning the model parameters; this procedure was used in the present work, as shown in Fig. 8. To avoid overfitting the network and to fine-tune the model, the validation set is presented to the network after each training cycle and the error is checked against an acceptable range. When the error on the independent validation set begins to increase, the training process is stopped. The test set is used for performance evaluation [11].

 

 

Fig. 8 Schematic representation of validation performance
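The early-stopping rule described above — halt once the validation error has failed to improve for a run of consecutive checks (MATLAB's max_fail; the table above used 1000) — can be sketched with a simulated validation curve:

```python
def early_stop_epoch(val_errors, max_fail=6):
    """Return the epoch at which training would stop: the point where the
    validation error has not improved for max_fail consecutive checks."""
    best = float("inf")
    fails = 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, fails = err, 0         # new best: reset the failure counter
        else:
            fails += 1                   # no improvement this check
            if fails >= max_fail:
                return epoch             # stop here
    return len(val_errors) - 1           # ran to completion

# Simulated validation curve: improves, then begins to rise (overfitting)
curve = [1.0, 0.6, 0.4, 0.3, 0.31, 0.33, 0.36, 0.4, 0.45, 0.5, 0.6]
stop = early_stop_epoch(curve, max_fail=3)
```

With max_fail = 3, training stops three checks after the minimum at epoch 3, i.e. at epoch 6.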

The training was performed for 1000 cycles, with the momentum and learning rate set to 0.9 and 0.01, respectively. After 1000 cycles, the network was able to predict the outputs with a mean squared error (MSE) of 0.0067, a final learning rate of 1.7948, and a gradient of 0.0020313, as shown in Fig. 9.

 

Fig. 9 Gradient and validation-check representation

 

 

 

 

 

Comparison of Prediction performance of ANN and Taguchi Methodology in determining minimal shrinkage

 

To compare the prediction performance of the ANN and Taguchi's method in determining minimal shrinkage, this study calculates the length and width shrinkage under the optimal process conditions determined by Taguchi's method. The model is capable of predicting the shrinkage of injection molded plastic parts from variable process parameters [12]. Optimization can also be performed using a parametric sampling evaluation (PSE) strategy, an infilling sampling criterion; even when the design of experiments is small, this criterion can take relatively unexplored regions of the design space into consideration to improve the accuracy of the ANN model. In this study, however, Taguchi analysis was used for optimization [13].

Researchers have used Taguchi and ANOVA methods to optimize injection molding parameters for minimum shrinkage [14–18]. In this study, a feed-forward backpropagation neural network is compared with the Taguchi method for reducing shrinkage of the injection molded part.

Table 15. Comparison of the length shrinkage at the optimal conditions: Taguchi experimental result versus backpropagation feed-forward neural network prediction

                       Experimental (Taguchi, HDPE) A2B1C3   Neural network A2B1C3   Error %
Length Shrinkage (%)   2.5083                                2.5062                  0.08


Fig. 10 Length shrinkage (%) plotted against test pair number

Table 16. Comparison of the width shrinkage at the optimal conditions: Taguchi experimental result versus backpropagation feed-forward neural network prediction

                      Experimental (Taguchi, HDPE) A1B2C1   Neural network A1B2C1   Error %
Width Shrinkage (%)   2.8353                                2.8155                  0.69

Fig. 11 Width shrinkage (%) plotted against test pair number
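The error percentages reported in Tables 15 and 16 follow from the relative difference between the Taguchi experimental optimum and the ANN prediction:

```python
# Relative error between the experimental optimum and the ANN prediction
def pct_error(experimental, predicted):
    return abs(experimental - predicted) / experimental * 100.0

err_length = pct_error(2.5083, 2.5062)   # length shrinkage at A2B1C3, ~0.08%
err_width = pct_error(2.8353, 2.8155)    # width shrinkage at A1B2C1, ~0.70%
```

The length figure reproduces the tabulated 0.08%; the width figure evaluates to about 0.698%, consistent with the tabulated 0.69% allowing for truncation.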

6. Conclusion

 

Taguchi and ANN methods were used to investigate the effects of metering stroke, holding time, and cooling time on the shrinkage of HDPE injection moldings. In the Taguchi method, analysis of the controllable factors for length shrinkage was used to determine the optimal set of process parameters: a 2.75 mm metering stroke, 12 sec holding time, and 30 sec cooling time. These settings reduced the length shrinkage of the HDPE injection molded part by 5.06% relative to the nominal length of parts produced with the baseline machine settings. For the most influential process parameters, the neural network predicted a length shrinkage of 2.5062%, in good agreement with the optimal experimental result, with an error of only 0.08%.

In the Taguchi method, analysis of the controllable factors for width shrinkage was likewise used to determine the optimal set of process parameters: a 2.70 mm metering stroke, 15 sec holding time, and 18 sec cooling time, which reduced the width shrinkage by 20.4% (HDPE) relative to the nominal width. The neural network predicted a width shrinkage of 2.8155% for these parameters, in good agreement with the optimal experimental result, with an error of 0.69%. The proposed ANN modelling technique has thus been shown to be an effective model for predicting the shrinkage of HDPE injection molded parts. The overall R-squared value for the 3-9-2-2 network is 0.91, indicating that the predictive model is accurate. The ANN modelling and optimization method proposed here can directly benefit injection molders in producing precise injection molded parts, and such methods are widely used in industry to overcome quality issues.

For future work, this study recommends exploring other techniques, such as genetic algorithms, fuzzy logic, KNN, and SOM neural networks, to find the parameter settings that minimize shrinkage. It also recommends combined modelling techniques that address more than one defect, such as delamination, flow marks, and warpage, in order to further improve product quality.

 

References

 

1. Lotti, C., M. M. Ueki, and R. E. S. Bretas. “Prediction of the shrinkage of injection molded iPP plaques using artificial neural networks.” Journal of Injection Molding Technology 6.3 (2002): 157.

2. Shen, Changyu, Lixia Wang, and Qian Li. "Optimization of injection molding process parameters using combination of artificial neural network and genetic algorithm method." Journal of Materials Processing Technology 183.2 (2007): 412-418.

3. Halimin, Nur Asyikin Mohamad, Azlan Mohd Zain, and Muhammad Firdaus Azman. “Warpage Prediction in Injection Molding Using Artificial Neural Network.” Journal of Soft Computing and Decision Support Systems 2.5 (2015): 7-9.

4. Lee, Sang Chan, and Jae Ryoun Youn. “Shrinkage analysis of molded parts using neural network.” Journal of reinforced plastics and composites 18.2 (1999): 186-195.

5. Spina, R. “Optimisation of injection moulded parts by using ANN-PSO approach.” Journal of Achievements in Materials and Manufacturing Engineering 15.1-2 (2006): 146-152.

6. Chiang, Ko-Ta, and Fu-Ping Chang. “Analysis of shrinkage and warpage in an injection-molded part with a thin shell feature using the response surface methodology.” The International Journal of Advanced Manufacturing Technology 35.5-6 (2007): 468-479.

7. Russell, Stuart, and Peter Norvig. "Artificial intelligence, a modern approach. Series in AI, chapter 20.8, Genetic algorithms and evolutionary programming." (1995): 619-621.

8. Tadiou, Koné Mamadou, and John McCarthy. "Introduction to artificial intelligence."

9. Rojas, Raúl. Neural networks: a systematic introduction. Springer Science & Business Media, 2013.

10. Liao, S. J., et al. "Shrinkage and warpage prediction of injection-molded thin-wall parts using artificial neural networks." Polymer Engineering & Science 44.11 (2004): 2029-2040.

11. Rajabia, Javad, et al. “Prediction of powder injection molding process parameters using artificial neural networks.” Jurnal Teknologi (Sciences and Engineering) 59.SUPPL. 2 (2012): 183-186.

12. Taghizadeh, S., A. Özdemir, and O. Uluer. “Warpage prediction in plastic injection molded part using artificial neural network.” Iranian Journal of Science and Technology. Transactions of Mechanical Engineering 37.M2 (2013): 149.

13. Shi, Huizhuo, Suming Xie, and Xicheng Wang. “A warpage optimization method for injection molding using artificial neural network with parametric sampling evaluation strategy.” The International Journal of Advanced Manufacturing Technology 65.1-4 (2013): 343-353.

14. Altan, Mirigul. “Reducing shrinkage in injection moldings via the Taguchi, ANOVA and neural network methods.” Materials & Design 31.1 (2010): 599-604.

15. Oktem, Hasan, Tuncay Erzurumlu, and Ibrahim Uzman. “Application of Taguchi optimization technique in determining plastic injection molding process parameters for a thin-shell part.” Materials & design 28.4 (2007): 1271-1278.

16. Väätäinen, O., et al. “The effect of processing parameters on the quality of injection moulded parts by using the Taguchi parameter design method.” Plastics rubber and composites processing and applications 21.4 (1994): 211-217.

17. Chen, J., et al. “Effects of process conditions on shrinkage of the injection-molded part.” ANTEC-CONFERENCE PROCEEDINGS-. Vol. 2. 2005.

18. Gong Guan, Joseph C. Chen, and Gangjian Guo. “Enhancing tensile strength of injection molded fiber reinforced composites using the Taguchi-based six sigma approach.” The International Journal of Advanced Manufacturing Technology (2017): 1-9.
